Recent generations of AI have proven handy for restaurant recommendations and writing emails, but as a source of medical advice they’ve had some clear drawbacks.
Case in point: a man who followed a chatbot's health plan ended up in hospital after giving himself a rare form of poisoning.
The story began when the patient decided to improve his health by reducing his intake of salt, or sodium chloride. To find a substitute, he did what so many other people do nowadays: he asked ChatGPT.
OpenAI’s chatbot apparently suggested sodium bromide, which the man ordered online and incorporated into his diet.
While it's true that sodium bromide can substitute for sodium chloride, that's usually when you're cleaning a hot tub, not seasoning your fries. The AI neglected to mention this crucial context.
Three months later, the patient presented to the emergency department with paranoid delusions, believing his neighbor was trying to poison him.
“In the first 24 hours of admission, he expressed increasing paranoia and auditory and visual hallucinations, which, after attempting to escape, resulted in an involuntary psychiatric hold for grave disability,” the physicians write.
After he was treated with antipsychotic medication, the man calmed down enough to explain his AI-inspired dietary regimen. That information, along with his test results, allowed the medical staff to diagnose him with bromism, a toxic accumulation of bromide.
Bromide levels are typically less than around 10 mg/L in most healthy individuals; this patient’s levels were measured at 1,700 mg/L.
Bromism was a relatively common condition in the early 20th century, and is estimated to have once been responsible for up to 8 percent of psychiatric admissions. But cases dropped drastically in the 1970s and 1980s, after medications containing bromides began to be phased out.

Following diagnosis, the patient was treated over the course of three weeks and released with no major issues.
The main concern in this case study isn't so much the return of an antiquated illness – it's that emerging AI technology still falls short of replacing human expertise when it comes to things that truly matter.
“It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation,” the authors write.
“It is highly unlikely that a medical expert would have mentioned sodium bromide when faced with a patient looking for a viable substitute for sodium chloride.”
The research was published in the journal Annals of Internal Medicine: Clinical Cases.