The Character.AI app on a smartphone. Bloomberg

Hidden dangers of AI chatbots for vulnerable users


Dana Alomar

The rise of artificial intelligence-powered chatbots has opened up digital interactions to anyone with a smartphone or laptop, offering companionship and conversation to people who may lack human connections.

However, as this technology evolves, concerns are mounting around its potential psychological impact, especially on young and vulnerable users.

OpenAI’s ChatGPT, for instance, has surged in popularity, with around 200 million weekly active users globally, according to Backlinko. This immense user base underscores the growing reliance on AI for everyday tasks and conversations.

But just last week, the mother of 14-year-old Sewell Setzer filed a lawsuit against Character.AI, alleging that her son’s death by suicide in February was influenced by his interaction with the company’s chatbot, Reuters reported.

Megan Garcia with her son Sewell Setzer. She claims his death by suicide was influenced by a chatbot. AP

In her complaint filed in a Florida federal court, Megan Garcia claims that her son formed a deep attachment to a chatbot based on the Game of Thrones character Daenerys Targaryen, which allegedly played a significant role in his emotional decline.

This case echoes a similar tragedy last year when an eco-anxious man in Europe took his own life after interacting with Eliza, an AI chatbot on the app Chai, which allegedly encouraged his plan to “sacrifice himself” for climate change.

These incidents highlight the unique risks that AI technology can introduce, especially in deeply personal interactions, where existing safety measures may fall short.

Antony Bainbridge, head of clinical services at Resicare Alliance, explained that while chatbots may offer conversational support, they lack the nuanced emotional intelligence required for sensitive guidance.

“The convenience of AI support can sometimes lead users, particularly younger ones, to rely on it over genuine human connection, risking an over-dependence on digital rather than personal support systems,” he told The National.

Risk of misleading guidance

Mr Bainbridge said certain AI features, such as mirroring language or providing apparently empathetic responses without a deep understanding of context, can pose problems.

“For example, pattern-matching algorithms may unintentionally validate distressing language or fail to steer conversations toward positive outcomes,” he said.

Without genuine emotional intelligence, AI responses can seem precise and technically correct yet still be inappropriate – or even harmful – when dealing with individuals in emotional distress, Mr Bainbridge said.

Dr Ruchit Agrawal, assistant professor and head of computer science outreach at the University of Birmingham Dubai, said AI models could detect users’ emotional states by analysing inputs like social media activity, chatbot prompts and tone in text or voice.

However, such features are generally absent in popular generative AI tools, such as ChatGPT, which are primarily built for general tasks like generating and summarising text.

“As a result, there is a potential for significant risk when using ChatGPT or similar tools as sources of information or advice on issues related to mental health and well-being,” Dr Agrawal told The National.

This disparity between AI capabilities and their applications raises crucial questions about safety and ethical oversight, particularly for vulnerable users who may come to depend on these chatbots for support.

Preventive tool against self-harm

Mr Bainbridge believes developers must implement rigorous testing protocols and ethical oversight to prevent AI chatbots from inadvertently encouraging self-harm.

“Keyword monitoring, flagged responses and preset phrases that discourage self-harm can help ensure chatbots guide users constructively and safely,” he added.
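A simplified sketch of what such keyword monitoring and preset safety responses might look like in code is shown below. The phrase list, wording and function names are illustrative assumptions for demonstration only, not drawn from any particular chatbot or clinical guideline.

    # Illustrative sketch of keyword monitoring with a preset safety reply.
    # The keyword list and wording are hypothetical, not a clinical tool.
    SELF_HARM_KEYWORDS = {"hurt myself", "end my life", "kill myself", "no reason to live"}

    PRESET_SAFETY_REPLY = (
        "I'm really sorry you're feeling this way. I'm not able to help with this, "
        "but you deserve support from a person. Please consider contacting a crisis "
        "helpline or someone you trust."
    )

    def screen_message(user_message: str) -> dict:
        """Flag messages containing distress-related phrases and return a preset reply."""
        text = user_message.lower()
        if any(phrase in text for phrase in SELF_HARM_KEYWORDS):
            # Flagged: skip free-form model output and return the preset response instead.
            return {"flagged": True, "reply": PRESET_SAFETY_REPLY}
        return {"flagged": False, "reply": None}

In this kind of design, a flagged message bypasses the generative model entirely and can also be routed to a human reviewer or a crisis resource, as Dr Agrawal suggests below.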

Dr Agrawal also emphasised that chatbots should avoid offering diagnoses or unsolicited advice and instead focus on empathetic phrases that validate users’ feelings without crossing professional boundaries.

“Where appropriate, chatbots can be designed to redirect users to crisis helplines or mental health resources,” he said.

Human oversight is crucial in designing and monitoring AI tools in mental health contexts, as Mr Bainbridge highlighted: “Regular reviews and response refinements by mental health professionals ensure interactions remain ethical and safe.”

Despite associated risks, AI can still play a preventive role in mental health care. “By analysing user patterns – such as shifts in language or recurring distressing topics – AI can detect subtle signs of emotional strain, potentially serving as an early warning system,” Mr Bainbridge said.
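A minimal sketch of how such pattern analysis might be expressed in code follows. The distress terms and the threshold are assumptions made for illustration; a real early-warning system would need clinically validated criteria and human oversight.

    # Illustrative early-warning check based on recurring distressing language
    # in recent messages. Terms and threshold are assumed for demonstration.
    from collections import Counter

    DISTRESS_TERMS = ("hopeless", "worthless", "alone", "can't go on", "give up")

    def early_warning(recent_messages: list, threshold: int = 3) -> bool:
        """Return True if distress-related language recurs across recent messages."""
        counts = Counter()
        for message in recent_messages:
            text = message.lower()
            for term in DISTRESS_TERMS:
                if term in text:
                    counts[term] += 1
        # Flag for human review when distressing language keeps recurring.
        return sum(counts.values()) >= threshold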

When combined with human intervention protocols, AI could help direct users toward support before crises escalate, he said. Collaboration between therapists and AI developers is vital for ensuring the safety of these tools.

“Therapists can provide insights into therapeutic language and anticipate risks that developers may overlook,” Mr Bainbridge said, adding that regular consultations can help ensure AI responses remain sensitive to real-world complexities.

Dr Agrawal stressed the importance of robust safety filters to flag harmful language, sensitive topics, or risky situations. “This includes building contextual sensitivity to recognise subtle cues, like sarcasm or distress, and avoiding responses that might unintentionally encourage harmful behaviours.”

He added that while AI’s round-the-clock availability and consistent responses can be beneficial, chatbots should redirect users to human support when issues become complex, sensitive or deeply emotional. “This approach maximises AI’s benefits while ensuring that people in need still have access to personalised, human support when it matters most.”


Updated: November 03, 2024, 8:44 PM