Portrayal of AI as white has ‘dangerous consequences’ for humans who are not

Study warns that artificial intelligence will exacerbate racial inequality if bias continues unchallenged

The abilities of future super-smart robots may still be hotly debated, but one thing seems certain: they won’t look like ethnic minorities.

A new study has revealed that artificial intelligence is almost always portrayed as White in popular culture.

And according to the authors of the study, this increases the risk of AI research becoming ever more racially biased, with algorithms reflecting a whites-only world.

Evidence of racially biased AI has been growing for some time. Most concern surrounds facial recognition systems, which use AI methods to train computers to identify individuals.

Yet despite their growing use in law enforcement, commercial AI systems have been shown to be startlingly prone to misidentifying people from ethnic minority backgrounds.

In a 2018 study by scientists at the Massachusetts Institute of Technology, AI systems failed to identify even the gender of one in three dark-skinned women, compared with just one in 100 light-skinned men.

Cases of BAME individuals being wrongly accused of crimes based on evidence from facial recognition algorithms are also starting to emerge.

Now researchers at the University of Cambridge, England, are warning of further dangers if the association of AI with whiteness goes unchallenged.

"Given that society has, for centuries, promoted the association of intelligence with White Europeans, it is to be expected that when this culture is asked to imagine an intelligent machine it imagines a White machine," said Dr Kanta Dihal, co-author of the study in the journal Philosophy and Technology.

The portrayal of AI as being smarter than humans as well as white “could have dangerous consequences for humans that are not,” she said.

Dr Dihal pointed out that celebrated examples of AI in movies from Terminator to Ex Machina are all played by white actors or portrayed as white on-screen.

Even AI characters in slave-like roles, such as the rebellious replicants in Blade Runner, are portrayed as white. "AI is often depicted as outsmarting and surpassing humanity," said Dr Dihal. "White culture can't imagine being taken over by superior beings resembling races it has historically framed as inferior."

Together with Dr Stephen Cave of the Leverhulme Centre for the Future of Intelligence (CFI), Dr Dihal found that the whiteness of AI is not only perpetuated through imagery in popular culture.

“One of the most common interactions with AI technology is through virtual assistants in devices such as smartphones, which talk in standard white middle-class English,” said Dr Dihal.

“Ideas of adding Black dialects have been dismissed as too controversial or outside the target market.”

According to Dr Dihal, the exclusively white image of AI could also affect recruitment into the field. With AI increasingly used in applications such as recruitment and criminal justice, this could be “highly consequential”, she said. “If the developer demographic does not diversify, AI stands to exacerbate racial inequality.”

Such concerns are backed by evidence of bias in algorithms used to assess criminal defendants in the United States. One study by investigators at ProPublica found that black defendants were twice as likely as their white counterparts to be misclassified as at higher risk of re-offending. In contrast, white defendants were twice as likely to be misclassified as posing a lower risk.

The issue of biased AI is increasingly being recognised by technology companies. Microsoft has admitted refusing to supply facial recognition systems to some clients over fears the technology would be biased against minorities.

Increasing efforts are also being made to fix the problem. In the case of facial recognition systems, this means finding better AI training methods.

These typically involve getting computers to classify thousands of publicly available images, each tagged with the ethnicity, gender and other defining characteristics of the person shown.

However, according to a new study by UAE researchers, women and ethnic groups are typically under-represented in such collections of images, leading to biased outcomes. They concluded that the best hope for eliminating bias lies in using image databases for specific ethnicities and better algorithms.

Robert Matthews is Visiting Professor of Science at Aston University, Birmingham, UK