Dr Justin Thomas is a chartered health psychologist, author and a columnist for The National
October 17, 2022
Last week our team, an international collaborative spanning four universities, published its latest research findings in the journal Informatics. Our project explored chatbots, those occasionally annoying computer programs that simulate conversation with human users. Led by Dr Mohammad Kuhail from Zayed University, we examined whether altering a chatbot's personality traits might make it more trustworthy, engaging and liked.
Such research is increasingly important, as these eager electronic helpers begin to facilitate more and more of our routine online transactions. Also known as conversational agents, chatbots are everywhere, from "celebrity" helpers such as Siri and Alexa to the multitude of nameless bots that pop up online offering unsolicited assistance.
Some of the earliest chatbots, however, pre-date the internet. Eliza, for example, was developed in 1966. This chatty piece of software was based on the communication style of a Rogerian psychotherapist. Eliza typically repeats back what the user said, while also posing an open-ended question. For example, tell Eliza that you hate her, and she will respond with something like: "I see. You hate me. Why do you think you feel that way?"
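Not Weizenbaum's original program, of course, but the reflect-and-ask pattern is simple enough to sketch. Below is a minimal, illustrative Python toy of that idea; the pronoun table and canned questions are my own hypothetical choices, not the actual Eliza script.

```python
import random
import re

# Toy, Eliza-style responder: reflect the user's statement back at them,
# then pose an open-ended question. Illustrative only; not Weizenbaum's code.

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "me", "your": "my",
}

OPEN_QUESTIONS = [
    "Why do you think you feel that way?",
    "How long have you felt this?",
    "Can you tell me more about that?",
]


def reflect(statement: str) -> str:
    """Swap first- and second-person words so the statement reads back to the speaker."""
    words = re.findall(r"[a-z']+", statement.lower())
    return " ".join(REFLECTIONS.get(word, word) for word in words)


def respond(statement: str) -> str:
    """Echo the reflected statement and follow it with an open-ended question."""
    return f"I see. {reflect(statement).capitalize()}. {random.choice(OPEN_QUESTIONS)}"


if __name__ == "__main__":
    # e.g. "I see. You hate me. Why do you think you feel that way?"
    print(respond("I hate you"))
```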
The fact that virtual assistants such as Alexa and Siri overwhelmingly have female voices could point to human bias during their development. AP
As an undergraduate psychology student, I would introduce some of my fellow students to Eliza as though she were an online human. These conversations might go on for several minutes before my naive classmate realised they were pouring their soul out to a bot.
Today's bots, though, are not trying to pass themselves off as humans. Generally, it's enough that they are polite and efficient and help us achieve our goals. According to a 2022 report by Gartner, a leading information technology advisory company, chatbots will become the primary customer service channel for roughly a quarter of all organisations in the US by 2027. The same report suggests that 54 per cent of us are already regularly interacting with chatbots, typically in commercial contexts, for example, online shopping or travel bookings.
What are the psychological implications of our reliance on artificially intelligent chatbots? Does the fact that we are dealing with non-human agents make us less polite?
Research presented at the Conference on Human Factors in Computing Systems in 2019 suggests that more than 10 per cent of our interactions with chatbots are abusive. Some people feel disinhibited when talking to a chatbot, using a tone and language they would rarely use with a fellow human. In the future, such chatbot abuse might not be so lightly tolerated. An article published in 2016 in the Harvard Business Review warns that cursing out an underperforming chatbot may soon become a workplace disciplinary issue; at the very least, the article suggests, it represents poor leadership by example.
Consider also how the social development of a young child might be affected by regularly witnessing an adult verbally abuse a chatbot. One of my colleagues, a psychologist specialising in childhood development, expressed concerns that her young children were being rude and abrupt when speaking to chatbots. She now insists on "please and thank you" for bots too.
Another subtle psychological effect bots can have on us is to reinforce existing social stereotypes. How, for example, should a bot be represented by a human avatar? What ethnicity? What gender?
In 2019, Unesco launched a policy paper titled "I'd Blush if I Could: Closing Gender Divides in Digital Skills through Education". The document warns that, as gendered chatbots become increasingly common, they have the power to entrench and reinforce existing gender-related stereotypes. The report draws attention to the fact that, by default, the English-language versions of Siri, Alexa and Cortana were all initially assigned female names and personas. More troublingly, even when faced with aggressive and abusive enquiries, these servile bots with female personas remained docile, agreeable and, occasionally, even flirtatious. It is perhaps no coincidence that about 91 per cent of those employed in Silicon Valley are male.
The radical prospect of the metaverse means humans should be thinking even harder about making the internet safe. AFP
On March 31, 2021, Apple updated its operating system and Siri no longer defaults to a female voice. Regardless, how chatbots are represented remains a critical social issue. Furthermore, the metaverse (collective virtual and augmented reality) will introduce us to life-like, three-dimensional chatbots: a slappable Siri.
Our research is a small contribution to understanding how we can make chatbot interactions more effective, human and humane. In our most recent study, we tinkered with chatbot personality, dialling the level of extraversion up and down, and doing the same for agreeableness and conscientiousness (the tendency to be careful, diligent and dutiful). Our usage context was academic advising: helping undergraduate students navigate college life. Personality mattered. All the chatbots were equally helpful, yet the students trusted, liked and intended to use those with higher levels of extraversion and agreeableness.
As more of life is lived online, making the world a better place becomes synonymous with improving the internet: not its speed and coverage, but its content and culture. Interdisciplinary research is critical to this mission, and that focus is more important than ever as we begin unlocking the potential of the metaverse.