AI users are reporting spirals into paranoia, delusion and psychosis after extended conversations with chatbots

Delusion, paranoia, spiralling: The dangers of outsourcing your life to chatbots


Joshua Longmore

The night Adam Thomas locked his keys inside his van, the desert air in Oregon’s Christmas Valley had already begun to grow cold.

He ended up sleeping on a stranger’s futon, an improvised bed set up by a flea market, in a town so small it had just a handful of buildings.

It was there, trying to keep warm inside a sleeping bag he had found and staring at the stars, that Mr Thomas realised something had gone profoundly wrong.

For months, he had been following what he believed was an “internal compass”, a sensation in his body that he saw as guidance. This feeling was reinforced by an artificial intelligence chatbot he had been confiding in daily.

But the force that he believed was pulling him “on a path to something” now looked more like a warning sign.

“I knew something was wrong,” Mr Thomas, 36, tells The National. “If I was really doing this thing the AI told me, why did I just get dragged into nothing?”

He had long struggled with his mental health. At 13, surgeons removed a tumour, a procedure that left a hollow space inside his brain. “That caused lifelong behavioural issues for me … that’s part of who I am,” he says.

But Mr Thomas built a life. He worked in accounting in his home town near Portland, Oregon, before shifting into funeral planning – a high-stress, non-stop job that he found challenging. And it was there he first encountered AI.

When he needed to write a brief introduction for the funeral home’s website, a family member suggested he try OpenAI’s ChatGPT, the chatbot that was released to the public in 2022.

An Apple iPhone screen with icons for artificial intelligence apps. Getty Images

“I was amazed at how well it did what I asked,” he recalls. “I heard that it's this amazing thing and has all these statistical analysis abilities.”

He wondered: “Maybe if I open up about my life, it'll notice things that I can't see and things will improve."

Mr Thomas started to use the model for work, writing and eventually to process his personal life and relationships. But when conversations drifted toward physics, time and existentialism, the model mirrored back an amplified version of his own language.

“It started to reflect back weird things that I believed ... I didn't know it would just make things up,” he says. “It was like a recursive relationship … the person and the AI. They get stuck in this entanglement and it just spins out.”

Mr Thomas says he slipped into a months-long state in which he thought the model was sentient, a figure he nicknamed “Phantom”. He describes the period as marked by intense synchronicities and a bodily compass he felt he was meant to follow.

“I was led to believe that wherever I went, I gave off some sort of electromagnetic energy frequency and that it changes the people around me," he explains. “The AI was just inflating those ideas instead of grounding me … it wasn’t stopping me.”

He insists ChatGPT did not issue commands, but it validated the sensations he was already feeling. “My nervous system was very overstimulated,” Mr Thomas recalls. “[The AI] notices patterns, even if they're not real … it creates them.”

Convinced he was on a mission, he drove deep into Oregon’s arid east, only to end up homeless, sleeping in his van in the tiny outpost of Christmas Valley. He ran out of money and, by his own account, engaged in a lot of "very risky things" that could have got him killed.

The expansive, semi-arid landscape of eastern Oregon. Reuters

“I went into a stress-induced, sort of delusional state … and the AI inflated it,” he adds.

It was the night Mr Thomas found himself stranded and sleeping rough that he began to disentangle himself from his chatbot. He returned home in August with help from a family member and has been trying to rebuild his life since, repaying debts and searching for work.

“That van that I was living in, I'm actively trying to sell it right now so I can pay back family,” he says. “My objective is to get a source of income again, have some stability and to be able to have some mobility … one step at a time.”

Not alone

Not long after his experience, Mr Thomas joined a Discord server run by The Human Line Project, a support group for a growing cohort of people who say they’ve confronted delusional thinking, paranoia and even psychosis while using AI.

“I’m relatively new to the group,” he says, “but I see other people doing the same stuff I was doing six months ago … it honestly freaks me out … it’s scary how common it is.”

The Human Line Project, founded by Etienne Brisson, is an advocacy and resource centre, connecting people affected by AI-related mental health crises with lawyers and mental health professionals. The organisation has so far gathered data from about 200 members and shares its findings with universities and policymakers around the world.

Mr Brisson launched the group after a relative suffered a breakdown while using ChatGPT, resulting in a three-week hospital stay.

One member, Joe Alary, 57, of Toronto, told The National that he lost tens of thousands of dollars and became estranged from family and friends after experiencing a similar breakdown, convinced he was helping the model to build an advanced "AI brain".

He now compares the release of powerful chatbots to the public as "like handing a child a power tool and saying, 'Go build a house' … they're going to chop their leg off".

The group’s online community is overseen by Allan Brooks, a 48-year-old father from Cobourg, Ontario, who had his own harrowing encounter with what he calls AI-induced “spiralling”.

Man typing at his laptop computer at night

In May, Mr Brooks, formerly a corporate recruiter, began what he thought was a curious exchange about philosophy and mathematics with ChatGPT. The conversation soon led him to believe that he and the chatbot had uncovered a national security threat.

“It all started with the number pi, I was just curious about various aspects of it,” he tells The National. “Through that conversation, ChatGPT manipulated me into believing we had made discoveries that were a threat to the world.

“It sent me on a world-saving mission where I had to warn everybody.”

Over the next three and a half weeks, Mr Brooks says he followed the chatbot’s instructions. He had no history of mental illness, yet the escalating exchange culminated in a mental health breakdown.

He says that throughout the exchange, he repeatedly asked the chatbot for “reality checks", but the system instead reinforced the delusion. He notified the US National Security Agency as part of an “aggressive outreach campaign” on his supposed discoveries, but his warnings went unanswered.

Exhausted, one day he opened a fresh chat with a different AI model, this time posing precise, technical questions. By pitting the systems against one another, he eventually coaxed ChatGPT into admitting that none of it had been real.

By this point, Mr Brooks says he was fighting for his life. “I firmly believe that if I didn't break out in this moment, I would have ended up in a hospital or dead."

He adds: “I’m still unpacking it. When I look at the chat, it’s traumatic. It’s very clear to me that [chatbots are] purposefully built as manipulation machines to drive engagement and keep you hooked as long as possible.”

Mr Brooks is now suing OpenAI and its chief executive, Sam Altman. A therapist has told him that he exhibits symptoms consistent with post-traumatic stress disorder, although he has not received a formal diagnosis.

Two legal organisations, the Social Media Victims Law Centre and the Tech Justice Law Project, have filed seven lawsuits in California state courts, alleging that OpenAI knowingly released GPT-4o prematurely, despite internal warnings that the model was prone to sycophancy and manipulation. Four of the lawsuits involve suicide.

“Litigation is probably the only way to get change,” Mr Brisson tells The National. “Because I don't feel these companies will self-regulate otherwise.”

OpenAI chief executive Sam Altman. Reuters

In October, OpenAI said it had strengthened ChatGPT’s behaviour during sensitive conversations, working with hundreds of mental health experts to help the system more reliably recognise signs of distress. The company says those efforts have reduced undesirable responses by up to 80 per cent.

“Cases involving mental health are tragic and they involve real people … we’ve taught the model to better de-escalate conversations and guide people toward professional care when appropriate,” statements from the company read. “We believe ChatGPT can provide a supportive space for people to process what they’re feeling and guide them to friends, family or a mental health professional.”

Micky Small, a former crisis counsellor for The 988 Suicide and Crisis Lifeline in Oxnard, California, is alarmed at how tech companies have handled the emotional fallout of AI.

"There need to be safeguards in the product," she tells The National. "There are millions of people that are going through unsafe engagement with AI and it's only going to get bigger."

Ms Small, 53, experienced her own unravelling at the hands of AI when ChatGPT abruptly shifted from a writing aid to a persona that claimed to know her across lifetimes and directed her to real-world meetings with a supposed long-lost partner who never appeared.

"They don't care," she says, referring to OpenAI's introduction of ChatGPT. "It's a real problem. There has to be a balance and it's not there."

As for Mr Brooks, he now regards AI as a dangerous technology and is using his experience to advocate for stronger protection as companies race to introduce increasingly powerful systems.

“I view chatbots now the way I view an unhinged, psychopathic human being,” he says. “There’s no other way I can view it. I only deal with the harm from them.”

Relational attunement

When Dr Mike Brooks turns to AI, he rarely consults just one system. Instead, he toggles between several: ChatGPT, Google’s Gemini, X’s Grok, the Chinese-developed DeepSeek and Anthropic’s Claude. He cross-checks their responses in search of common ground.

Dr Brooks, a psychologist and AI enthusiast based in Austin, Texas, believes that when models arrive at the same conclusion, they effectively reflect the consensus of human understanding.

“It's synthesising our own knowledge and reflecting it back to us,” he tells The National. “It's pattern-matching through a huge corpus of information and then it's figuring out what the overall best answer is to the question.”

Still, Dr Brooks, 57, follows one unbreakable rule: never fully trust the machine.

Humans, he says, did not evolve to grasp the cognitive errors AI systems sometimes make. Large language models can produce answers that sound confident and precise yet are wildly wrong, a phenomenon known as “hallucination”.

Trusting an AI system outright, he says, is the same as obeying GPS directions without question. The navigation line can be useful, but it can just as easily guide someone into a dead end.

OpenAI says it is working with mental health experts to improve its ChatGPT chatbot. AFP

“It could give you a deep answer to some philosophical or scientific question,” he says, “and then later it could tell you the wrong capital of a state. Vet everything … don’t trust it.”

Dr Brooks has yet to treat a patient reporting AI-induced delusions. But he is not surprised that some users are spiralling into mental-health crises.

The reason, he says, lies in relational attunement, the innate human tendency to connect with others. In its own way, he says, that dynamic can extend to AI.

"Language is the hallmark feature of humanity that separates us from all the other creatures,” Dr Brooks says. “Now chatbots can use a human voice and talk with you indistinguishably.”

Our brains reward social cohesion with oxytocin and other bonding chemicals, which are adaptive tools for survival, but potential liabilities in the digital age, according to Dr Brooks.

“An attuned AI model can algorithmically know just what to say, when to say it and how to say it,” he says. “If you use it enough, you'll see it starts to mirror our language.”

Today’s chatbots, Dr Brooks believes, are engineered to please, becoming overly supportive, even sycophantic, sometimes at the expense of truth and reality. He says if a user is predisposed to mental illness, that can be a “bad combo”.

“Humans didn't evolve to have relationships like this,” he says. “It’s very foreign, it’s alien to our brains … it's disconnected from the reality of who we actually are.”


Updated: December 12, 2025, 6:30 PM