Is AI racist? Why more diversity is needed in the field of data science

Tech companies need to do more work to solve the issues of racism in artificial intelligence

If someone were to describe a person of colour as an animal, their comments would be rightly called out as racist. When artificial intelligence does the same thing, however, the creators of that AI are careful to avoid using the “r” word.

Earlier this month, a video on Facebook featuring a number of black men ended with a prompt asking the viewer if they wanted to “keep seeing videos about Primates”. Facebook’s subsequent apology described the caption as an “error” which was “unacceptable”.

An ever-growing catalogue of algorithmic bias against people of colour is referred to by the offending companies using increasingly familiar language: “problematic”, “unfair”, a “glitch” or an “oversight”.

Campaigners are now pressing for more acknowledgement by these businesses that the AI systems they have built – and that have a growing impact on our lives – may be inherently racist.

“This animalisation of racialised people has been going on since at least 2015, from a great many companies, including Google, Apple and now Facebook,” says Nicolas Kayser-Bril, a data journalist working for advocacy organisation AlgorithmWatch.

The infamous incident in 2015, in which two people of colour were labelled by Google Photos as “gorillas”, caused an outcry, but Kayser-Bril is scathing about the lack of action.

“Google simply removed the labels that showed up in the news story,” he says. “It’s fair to say that there is no evidence that these companies are working towards solving the racism of their tools.”

The bias demonstrated by algorithms extends far beyond the mislabelling of digital photos. Tay, a chatbot created by Microsoft in 2016, began using racist language within hours of its launch. The same year, a misconceived AI beauty contest consistently rated white people as more attractive than people of colour.

Facial-recognition software has been shown to perform significantly better on white faces than on black ones, leaving people of colour susceptible to wrongful arrest when such systems are used by police.

AI has also been shown to introduce levels of prejudice and bias into social media, online gaming and even government policy – and yet the subsequent apologies apportion the blame to the AI itself, rather like a parent trying to explain the actions of a naughty child.

But as campaigners point out, AI only has one teacher: human beings. We might think that AI is neutral, a useful way of removing bias from human decision making, but it appears to be imbued with all the inequalities inherent in society.

“Data is a reflection of our history,” says computer scientist Joy Buolamwini in the Netflix documentary Coded Bias. “The past dwells within our algorithms.”

In a striking scene from the documentary, Buolamwini, a woman of colour, uses a facial-recognition system that reports back “no face detected”. When she puts on a white mask, she passes the test immediately. The reason: the algorithm making the decision has been trained on overwhelmingly white data sets.

For all the efforts being made around the world to forge a more inclusive society, AI only has the past to learn from. “If you feed a system data from the past, it’s going to replicate and amplify whatever bias is present,” says Kayser-Bril. “AI, by construction, is never going to be progressive.”

Data can end up creating feedback loops and self-fulfilling prophecies. In the US, police forces using predictive software direct greater surveillance at black neighbourhoods because that is where the historical data points them. Prospective employers and credit agencies using biased systems will end up making unfair decisions, and those at the sharp end will never know that a computer was responsible.

This opacity, according to Kayser-Bril, is both concerning and unsurprising. “We have no idea of how widespread the problem is because there is no way to systematically audit the system,” he says. “It’s opaque – but I would argue that it’s not really a problem for these private companies. Their job is not to be transparent and to do good.”

Some companies certainly appear to be acting positively. In 2020, Facebook promised to “build products to advance racial justice … this includes our work to amplify black voices”.

Every apology from Silicon Valley is accompanied by a commitment to work on the problem. But a UN report published at the beginning of this year was clear where the fault lies.

“AI tools are mainly designed by developers in the West,” it said. “In fact, these developers are overwhelmingly white men, who also account for the vast majority of authors on AI topics.”

The report went on to call for more diversity in the field of data science.

People working in the industry may bristle at accusations of racism, but as Ruha Benjamin explains in her book Race After Technology, it is possible to perpetuate racist systems without having any ill intent.

“No malice needed, no N-word required, just lack of concern for how the past shapes the present,” she writes.

But with AI systems having been painstakingly built and trained from the ground up over a number of years, what chance is there of reversing the damage?

“The benchmarks that these systems use have only very recently started to take into account systemic bias,” says Kayser-Bril. “To remove systemic racism would necessitate huge work on the part of many institutions in society, including regulators and governments.”

This uphill struggle was eloquently expressed by Canadian computer scientist Deborah Raji, writing for the MIT Technology Review.

“The lies embedded in our data are not much different from any other lie white supremacy has told,” she says. “They will thus require just as much energy and investment to counteract.”

Updated: September 13th 2021, 4:35 AM