Patients are being urged to use ChatGPT and other artificial intelligence tools with caution, because the technology’s polished answers can be so convincing that people mistake them for sound medical advice.
Health experts at the World Health Expo (WHX Tech), taking place in Dubai until Wednesday, said large language models are already changing how patients communicate with their doctors: many now arrive with lists of medications or treatment options recommended to them by the technology.
Experts said AI’s growing persuasive power could lead patients to misplace their trust in machine-generated responses that are not clinically validated.
ChatGPT has passed sections of the US medical licensing exam, but experts cautioned that answering test questions is very different from practising medicine in real life, where diagnosis and treatment require context, judgment and accountability.
Rashmi Rao, managing director at Rcubed Ventures, referred to findings from an MIT study into patients' responses to answers produced by large language models such as ChatGPT compared with those given by doctors.
“The study basically compared AI-generated results versus doctor-generated results, and the patients, 50 per cent of the time, could not tell whether the response was AI-generated or doctor-generated,” she said.
She said the bigger concern was how highly respondents rated AI responses even when they were inaccurate. “That’s when it goes down that slippery slope, especially if it’s a patient that is trusting the AI to give its results … but the AI is giving them low-accuracy results. They still trust that more,” she said.

“I’m on this path of really having AI and generative AI tools that are patient-facing not just be held accountable for privacy of patient data … but also this tremendous persuasive power that generative AI now has.”
Using AI out of necessity
Tjasa Zajc, a digital health expert and host of the Faces of Digital Health podcast, said she often relies on ChatGPT to guide her between appointments.
“I’ve been living with [inflammatory bowel disease] and some complications related to it for the last 20 years,” Ms Zajc said. “Patients don’t just use AI because it’s a fancy new technology. They use AI out of desperation, because they’re in between visits and when you leave the doctor’s office, your next check-up could be in half a year. There’s no easy access to the clinician.”
She said language models tend to agree with questioners, so it is important to phrase the medical query properly.
“There’s a difference between saying ‘what can you tell me about this drug?’ or ‘is this drug harmful for me?’ and ‘how harmful is this drug to me?’ because large language models tend to agree with you,” she said. “If I say ‘is this harmful?’ or ‘how harmful is this?’, it already implies that I believe it’s harmful.”
During a recent flare-up of her condition, Ms Zajc said she noticed how the doctor-patient dynamic is changing, with clinicians now expecting patients to arrive better informed.
“The doctor knows that I worked in medication management, so he asked me: 'Which medication would you like?' And I thought, is this how medical visits are supposed to look now? Are clinicians expecting us to do all the research, come with ideas and then discuss those with them?” she asked.

Ms Zajc said the shift could encourage patients to take more ownership of their care, but it also comes with risks. “There’s a huge shift that’s happening. I still always like to emphasise that we need to be cautious as consumers,” she added.
Empowering some patients
Jon Christensen, senior insights director at KLAS Research, said tools such as ChatGPT could benefit patients in low-income countries in particular, where access to health care is limited.
He said there are concerns that digital health tools could widen inequalities, but his company's research with a safety-net hospital in the US suggested the opposite.
“Patients in the lowest income demographic, that did not speak English as their first language, they actually found the technology to be the most empowering,” he said.
Instead of struggling to keep up in fast conversations with doctors, patients said they could take their time to read information, write questions and translate material into their own language.
Mr Christensen said this helped them better understand what was happening and how treatment applied to them.
He said that generative AI could build on this by offering instant translation and resources in native languages, allowing patients to become “much more informed consumers of health care”.
He also said surveys by his company showed that patients remain divided about the role of AI. About half said they were comfortable with their health system using it, but about 20 per cent were opposed.
“The discomfort comes with things like security and privacy,” he said. “It comes with the closer it gets to diagnosing them. They’re very uncomfortable with that, if the AI is the primary diagnostic tool.”
He said patients want reassurance that doctors will remain directly involved. “They want to make sure that the doctor or the caregiver is still very much involved, and there’s oversight for what the diagnosis is and the treatment plan,” he said.