A computer scientist explains how to keep AI bias in check – instead of entirely eliminating it
Writing for The Conversation, Emilio Ferrara, Professor of Computer Science and Communication at the University of Southern California, described an example of an AI system producing a stereotypical, offensive output.
When he once asked OpenAI’s ChatGPT for a joke about Sicilians, the joke implied that Sicilians are stinky. Born and raised in Sicily, Professor Ferrara said he reacted to the joke with disgust.
But at the same time, his computer scientist brain began turning over a simple question: should artificial intelligence systems be allowed to be biased? While the instinctive response might be “Of course not!”, some researchers argue the opposite.
AI systems like ChatGPT should indeed be biased, Ferrara argued, though not in the way most people might think. He believes that trying to eliminate bias from AI can have unintended consequences, and he proposes controlling it instead.
As AI is increasingly integrated into everyday technology, more and more people across the globe stress the need to address bias in these systems. The offensive outputs they sometimes generate can stem from a number of sources, including problematic training data.
It’s unclear whether bias can – or even should – be entirely removed from AI systems. If AI engineers notice their model producing stereotypical responses, the obvious fix might seem to be removing the questionable examples from the training data.
But while this sort of “AI neurosurgery” is well-intentioned, its outcomes can be unpredictable and possibly negative. An AI system trained to avoid certain sensitive topics, for instance, may produce incomplete or misleading responses.
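As a rough illustration (not taken from Ferrara’s article), the kind of training-data pruning described above might look something like the sketch below. The data format and the list of “questionable” terms are assumptions made purely for this example.

```python
# Illustrative sketch only: naive keyword-based filtering of training examples.
# The flagged terms and data format are hypothetical, chosen for the example.

FLAGGED_TERMS = {"stinky", "lazy"}  # terms an engineer might decide to flag


def is_questionable(example: str) -> bool:
    """Return True if the training example contains any flagged term."""
    text = example.lower()
    return any(term in text for term in FLAGGED_TERMS)


def filter_training_data(examples: list[str]) -> list[str]:
    """Drop flagged examples -- the blunt 'AI neurosurgery' the article warns about."""
    return [ex for ex in examples if not is_questionable(ex)]


if __name__ == "__main__":
    data = [
        "Sicily is an island in the Mediterranean.",
        "Sicilians are stinky.",  # stereotypical example an engineer might remove
    ]
    print(filter_training_data(data))
    # Dropping examples wholesale can also strip legitimate context,
    # which is one way such edits lead to incomplete or misleading models.
```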