Accents and AI: how speech recognition software could lead to new forms of discrimination

AI experts debate accent bias and question why more is not being done to solve the problem

Speech recognition systems leave out a large demographic of English speakers because they can only recognise accents they’ve been trained to understand. Getty

Anyone who has used a voice assistant such as Apple's Siri or Amazon's Alexa will have occasionally struggled to make themselves understood. Perhaps the device plays the wrong music, or puts unusual items on a shopping list, or emits a plaintive “didn't quite catch that”. But for people who speak with an accent, these devices can be unusable.

The inability of speech recognition systems to understand accents found in Scotland, Turkey, the southern states of the US or any number of other places is widely documented on social media, and yet the problem persists. With uses of the technology now spreading beyond the home, researchers and academics are warning that biased systems could lead to new forms of discrimination based purely on someone’s accent.

“It's one of the questions that you don't see big tech responding to,” says Halcyon Lawrence, a professor of technical communication at Towson University in Maryland, who is from Trinidad and Tobago. “There's never a statement put out. There's never a plan that's articulated. And that's because it's not a problem for big tech. But it’s a problem for me, and large groups of people like me.”

Speech recognition systems can only recognise accents they’ve been trained to understand. To learn how to interpret the accent of someone from Trinidad, Eswatini or the UAE, a system needs voice data, along with an accurate transcription of that data, which inevitably has to be done by a human being. It’s a painstaking and expensive process to demonstrate to a machine what a particular word sounds like when it’s spoken by a particular community, and perhaps inevitably, existing data is heavily skewed towards English as typically spoken by white, highly educated Americans.

If you plot new accent releases on a map, you can’t help but notice that the Global South is not a consideration, despite the numbers of English speakers there
Halcyon Lawrence, a professor of technical communication at Towson University in Maryland

A study called Racial Disparities in Automated Speech Recognition, published last year by researchers at Stanford University, illustrates the stark nature of the problem. It analysed systems developed by Amazon, Apple, Google, IBM and Microsoft, and found that in every case the error rates for black speakers were nearly double those for white speakers. In addition, it found that the errors were not caused by grammar, but by “phonological, phonetic, or prosodic characteristics”; in other words, accent.
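Error rates in studies like this are typically reported as a word error rate: the number of word-level mistakes a system makes, divided by the length of the correct transcript. The minimal Python sketch below, using an invented transcript pair, shows how that figure is usually computed.

```python
# Minimal sketch of word error rate (WER), the standard metric behind
# comparisons of transcription accuracy. The example transcripts are invented.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed with a word-level Levenshtein edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# Invented example: the recogniser mishears two of six words.
print(word_error_rate("add milk to the shopping list",
                      "add mill to the chopping list"))  # 2/6 ≈ 0.33
```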

Allison Koenecke, who led the study, believes improvement is needed on two fronts: the data and the people who build the systems. “It needs resources to ethically collect data and ensure that the people working on these products are also diverse,” she says. “While tech companies may have the funds, they may not have known that they needed to prioritise this issue before external researchers shone a light on it.”

Lawrence, however, believes that the failings are no accident.

“What, for me, shows big tech's intention is when they decide to release a new accent to the market and where that is targeted,” she says. “If you plot it on a map, you can’t help but notice that the Global South is not a consideration, despite the numbers of English speakers there. So you begin to see that this is an economic decision.”

It’s not only accented English that scuppers speech recognition systems. Arabic poses a particular challenge, not simply because of its many sub-dialects, but also because of inherent difficulties such as the lack of capital letters, the recognition of proper nouns and the need to predict a word’s vowels from context, since short vowels are usually not written (the same three letters, كتب, can be read as “kataba”, he wrote, or “kutub”, books). Substantial resources are being thrown at this problem, but the current situation is the same as with English: large communities are left technologically disenfranchised.

Why is this of particular concern? Beyond the world of smart speakers lies a much bigger picture. “There are many higher-stakes applications with much worse consequences if the underlying technologies are biased,” says Koenecke. “One example is court transcriptions, where court reporters are starting to use automatic speech recognition technologies. If they aren't accurate at transcribing cases, you have obvious repercussions.”

Lawrence is particularly concerned about the way people drop their accent in order to be understood, rather than the technology working harder to understand them. “Accent bias is already practised in our community,” she says. “There's an expectation that we adapt our accent, and that's what gets replicated in the device. It would not be an acceptable demand on somebody to change the colour of their skin, so why is it acceptable to demand we change our accents?”

Money, as ever, lies at the root of the problem. Lawrence believes strongly that the market can offer no solution, and that big tech has to be urged to look beyond its profit margin. “It’s one of the reasons why I believe that we’re going to see more and more smaller independent developers do this kind of work,” she says.

One of those developers, a British company called Speechmatics, is at the forefront, using what it calls “self-supervised learning” to introduce its speech recognition systems to a new world of voices.

If you have the right kind of diversity of data, it will learn to generalise across voices, latch on quickly and understand what's going on
Will Williams, vice president of machine learning at Speechmatics

“We're training on over a million hours of unlabelled audio, and constructing systems that can learn interesting things autonomously,” says Will Williams, vice president of machine learning at Speechmatics.

The crucial point is that this voice data has not been transcribed. “If you have the right kind of diversity of data, it will learn to generalise across voices, latch on quickly and understand what's going on.” Evaluated on datasets from the Stanford study, Speechmatics has already reported a 45 per cent reduction in errors with its system.
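Speechmatics has not published the details of its system, but one common flavour of self-supervised learning gives a sense of the idea: hide part of the audio signal and train a network to reconstruct it from the surrounding context, so that no human transcript is needed. The PyTorch sketch below is purely illustrative, with random numbers standing in for real recordings, and is not a description of Speechmatics’ actual method.

```python
# Generic sketch of masked-prediction pretraining on unlabelled audio.
# This illustrates self-supervised learning in general, not any company's system.
import torch
import torch.nn as nn

class TinySpeechEncoder(nn.Module):
    def __init__(self, n_mels=80, dim=256):
        super().__init__()
        self.proj = nn.Linear(n_mels, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.predict = nn.Linear(dim, n_mels)  # reconstruct the hidden frames

    def forward(self, mel):                    # mel: (batch, time, n_mels)
        return self.predict(self.encoder(self.proj(mel)))

model = TinySpeechEncoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

mel = torch.randn(8, 200, 80)                  # stand-in for unlabelled audio features
mask = torch.rand(8, 200, 1) < 0.15            # hide roughly 15% of the frames
corrupted = mel.masked_fill(mask, 0.0)

pred = model(corrupted)
# Score the model only on the frames it could not see: no transcript required.
loss = ((pred - mel) ** 2 * mask).sum() / (mask.sum() * mel.size(-1))
loss.backward()
optimiser.step()
```

Trained this way on enough varied speech, a model can pick up the patterns of many voices and accents before it is ever shown a transcribed example.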

An organisation called MLCommons, which has Google and Microsoft as two of its more than 50 founding members, is now looking for new ways to create speech recognition systems that are accent-agnostic.

It’s a long road ahead, but Koenecke is optimistic. “Hopefully, as different speech-to-text companies decide to invest in more diverse data and more diverse teams of employees such as engineers and product managers, we will see something that reflects more closely what we see in real life.”
