Let's not allow artificial intelligence to reinforce very real stereotypes

Most digital assistants have female voices because consumers say they find them warmer and kinder. Surely our bold, innovative, tech-driven world should be moving away from such gender biases.

Playing with my Lego as a child, I would build human-like figures. I would create a whole cast of goodies and baddies, who would invariably end up fighting. I gave them voices. The goodies always spoke with a North American drawl, while the baddies spoke English with heavy foreign accents. The very few female characters in my games were either shrieking, hyper-feminine princesses who needed saving, or near-voiceless helpers who looked after the base and cared for the wounded heroes. My bedroom carpet was a showground for the stereotypes of the day.

The development of artificial intelligence and natural speech software has now enabled us to put voices into all kinds of devices. Many of us interact with such technology through digital assistants such as Apple's Siri, Amazon's Alexa, Microsoft's Cortana and Google Assistant. Unfortunately, the same stereotypes of my childhood games are now being perpetuated by these new technologies.

Research released earlier this year by Unesco describes how AI-powered digital assistants frequently reflect troubling gender biases. By default, Siri, Alexa and Cortana have all been assigned female names and personas. Even in the face of aggressive and offensive enquiries, these assistants remain docile, agreeable and, at times, even flirtatious.

The Unesco report, titled “I’d Blush if I Could: Closing Gender Divides in Digital Skills through Education”, warns that, as gendered AI devices become increasingly ubiquitous, they have the power to entrench and reinforce existing gender-related stereotypes.

These digital assistants are already commonplace, managing more than a billion tasks per month: helping us to find our phones, changing songs on our playlists and ordering goods online. The range of tasks they handle is predicted to grow rapidly in the coming years.

The Unesco report proposes that, to address issues of gender stereotyping, the companies behind these AI devices should make the technology genderless in both name and voice. The name Siri, for example, reportedly comes from Old Norse and means “a beautiful woman who leads to victory”, while Alexa and Cortana both carry a feminising “-a” suffix, and neither offers the option of a male voice. In contrast, IBM has a highly specialised digital assistant that does “serious work”, such as assisting physicians with cancer patients. It is named Watson and speaks only with a male voice.

But what does it mean to describe a voice as male or female, and what would a gender-neutral voice sound like? Earlier this year, a team of linguists, technologists and sound designers asked this very question and came up with Q, the world’s first gender-neutral voice technology. To arrive at Q, the team asked 4,600 people to rate voices on a scale of 1 (male) to 5 (female). Based on these ratings, audio researchers identified a frequency range perceived as gender neutral, centred around 153 Hz, and used it to create the voice known as Q. The creators of Q are already in talks with major tech companies interested in the prospect of a genderless digital assistant.
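
For readers curious about the mechanics, here is a minimal sketch in Python of how a rating study like the one behind Q could locate a gender-neutral pitch. Everything in it is assumed for illustration: the ratings are synthetic, the sigmoid response curve is invented, and the 153 Hz crossover is built in by construction rather than discovered, so the sketch demonstrates the method, not the study’s result.

```python
# Illustrative sketch only: locating a "gender-neutral" pitch from
# listener ratings, loosely modelled on the method described for Q.
# All data below is synthetic; the real study asked 4,600 people to
# rate recorded voices, and none of these numbers come from it.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stimuli: fundamental frequencies spanning typical adult
# speech, each paired with a perceived-gender rating on the study's
# 1 (male) to 5 (female) scale.
pitches = rng.uniform(80, 255, size=2000)
# Assumed response curve: a sigmoid that crosses the neutral midpoint
# at 153 Hz *by construction*, plus listener noise.
ratings = np.clip(
    1 + 4 / (1 + np.exp(-(pitches - 153) / 20))
    + rng.normal(0, 0.5, size=pitches.size),
    1, 5,
)

# Bin the pitches and find where the mean rating crosses 3, the
# midpoint between "clearly male" and "clearly female".
bins = np.arange(80, 260, 5)
idx = np.digitize(pitches, bins)
centres = (bins[:-1] + bins[1:]) / 2
means = np.array([ratings[idx == i].mean() for i in range(1, len(bins))])

neutral = centres[np.argmin(np.abs(means - 3))]
print(f"Estimated gender-neutral pitch: ~{neutral:.0f} Hz")
```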

Beyond gender-neutralising the names and voices of digital assistants, another more consequential and sustainable solution to AI-related gender stereotyping is to promote greater female participation in the tech sector. According to research by Element AI and Wired magazine, only 12 per cent of AI researchers are women. Perhaps some of the male 88 per cent of the industry are projecting fantasies of subservient fembots onto their technological creations.

The spokespeople for the companies that have launched digital assistants, however, tend to claim that their gendering decisions are driven by extensive trialling and consumer research. For example, in an interview with PC Magazine, an Amazon spokesperson said of Alexa: "We tested many voices with our internal beta program and customers before launching, and this voice tested best."

Amazon has not detailed exactly how it undertook its consumer testing, but there is at least one piece of independent research that supports the company’s claim. In an experimental study published in the proceedings of the International Conference on Intelligent Robots and Systems, researchers reported that both men and women preferred a robot with a female voice over an otherwise identical male-voiced counterpart, describing the former as warmer and kinder.

Liking a thing, however, doesn’t make it right. The increasing ubiquity of this technology – globally, Siri, Alexa, Cortana and Google Assistant are installed on over 2 billion internet-connected devices – is guaranteed to have psychological and cultural impacts. We need to think, rethink and second-guess the effects technology can have on our societies. After all, it would be a shame if cutting-edge innovations meant to free us and make our lives easier ended up further entrenching attitudes that have held many of us back for so long.

Justin Thomas is a professor of psychology at Zayed University