Built-in bias is a serious challenge in emerging artificial intelligence (AI) technologies, according to a Toronto-based university professor who presented at Thompson Rivers University’s recent privacy and security conference.
Toronto Metropolitan University professor Ebrahim Bagheri spoke at the conference on Jan. 26.
Bagheri called AI bias “a vicious cycle” that stems from flaws in how the data AI relies on is collected.
“It’s a vicious cycle that we are now facing and we need to systematically think about how we can address biases not only in the real world, but also when we’re doing data collection and using that data to build AI systems,” he said in his virtual presentation at the university.
Bagheri said human biases are being codified in data collected from social media and used in machine learning applications, the underlying mechanism of AI systems.
“This whole process of designing, collecting data, sampling data, measurement: there is a high likelihood of introducing various forms of bias into an AI system,” he said.
Bagheri gave several examples, including a simple one to illustrate his point. He showed a video of a soap dispenser that would not function for those with darker skin tones.
“There are just some sensors there, but clearly what happens is that it only dispenses soap when the hand is white enough, kind of indicating that the designers didn’t think about it,” he said.
Bagheri said AI tools are often used to support decision-making, such as whether a mortgage application should be approved or rejected. He explained that machines are trained to make these decisions based on historical data, and that is where the biases are introduced.
If, for example, applications from certain postal codes were rejected more often in the past, the AI is likely to learn that pattern and deny mortgages to people living in those areas, and postal code can be correlated with race.
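The mechanism Bagheri described can be illustrated with a minimal sketch, which is not taken from his talk: synthetic “historical” mortgage decisions in which one postal-code group was rejected more often are fed to an off-the-shelf classifier (scikit-learn’s logistic regression, chosen here purely for illustration; all feature names and numbers are invented).

```python
# Illustrative sketch only (not from Bagheri's talk): a toy model trained on
# synthetic historical mortgage decisions learns postal code as a proxy.
import math
import random

from sklearn.linear_model import LogisticRegression

random.seed(0)

def make_history(n=2000):
    """Synthetic past decisions: postal-code group B was rejected far more
    often than group A at the same income levels, mimicking a biased past."""
    X, y = [], []
    for _ in range(n):
        income = random.uniform(30, 120)            # thousands per year
        in_group_b = random.random() < 0.5          # postal-code proxy
        base = 1 / (1 + math.exp(-(income - 70) / 15))
        approve_prob = base * (0.4 if in_group_b else 1.0)  # historical bias
        X.append([income, 1.0 if in_group_b else 0.0])
        y.append(1 if random.random() < approve_prob else 0)
    return X, y

X, y = make_history()
model = LogisticRegression().fit(X, y)

# Two applicants identical in every modelled respect except postal code:
prob_a = model.predict_proba([[80.0, 0.0]])[0][1]  # group A
prob_b = model.predict_proba([[80.0, 1.0]])[0][1]  # group B
print(f"approval probability, group A: {prob_a:.2f}")
print(f"approval probability, group B: {prob_b:.2f}")
# The model reproduces the historical gap even though income is identical.
```

Because the model only ever sees past outcomes, the postal-code feature carries the old bias into new decisions, which is the “vicious cycle” Bagheri described.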
The professor said many of the decisions made using AI tools can be “problematic,” carrying over human biases.
Bagheri used the example of Amazon, which at one point was receiving more job applications than it could manage.
“So, they created an AI algorithm to review job applicants’ resumes and filter out the ones they didn’t want,” he said.
The results, however, carried over a bias. Bagheri said the system favoured male candidates, and hiring decisions were based on its output. He said the model affected the lives of more than one million people.
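How a resume filter can pick up such a bias can be shown with another small, hypothetical sketch; the resumes, labels and vocabulary below are invented, and this is not a reconstruction of Amazon’s actual system.

```python
# Illustrative sketch only: a toy resume filter trained on synthetic
# historical hiring decisions. The data is invented to show how word-level
# proxies for gender can end up with negative weights.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, python developer",
    "captain of women's chess club, python developer",
    "software engineer, hackathon winner",
    "software engineer, women's coding society, hackathon winner",
] * 50
# Synthetic "historical" labels that hired the first and third resumes
# more often, mirroring a biased past process.
labels = [1, 0, 1, 0] * 50

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# Inspect which tokens push the score down the most.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])
# Tokens like "women" receive the most negative weights, so any new resume
# containing them is scored lower regardless of qualifications.
```

The filter never sees gender directly; it simply learns which words co-occurred with past rejections, and those words can act as proxies for it.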
Bagheri said that is the real danger of deploying algorithms to support decision-making.
“Algorithms are typically deployed at scale. So, if you have a racist friend, they can tell you racist stuff. If you have an algorithm based on racist data and you deploy it at scale, you have the potential to impact a couple of hundred million people,” he said.
Bagheri urged AI creators and those using the systems to focus on a human-centred approach and to only use them for social good.