ChatGPT advice lands a man in the hospital with hallucinations


Consulting AI for medical advice can have dangerous consequences.

A 60-year-old man was hospitalized with severe psychiatric symptoms — plus some physical ones too, including intense thirst and coordination issues — after asking ChatGPT for tips on how to improve his diet.

What he thought was a healthy swap ended in a toxic reaction so severe that doctors put him on an involuntary psychiatric hold.



After reading about the adverse health effects of table salt — which has the chemical name sodium chloride — the unidentified man consulted ChatGPT and was told that it could be swapped with sodium bromide.

Sodium bromide looks similar to table salt, but it’s an entirely different compound. While it’s occasionally used in medicine, it’s most commonly used for industrial and cleaning purposes — which is what experts believe ChatGPT was referring to.

Having studied nutrition in college, the man was inspired to conduct an experiment in which he eliminated sodium chloride from his diet and replaced it with sodium bromide he purchased online.

He was admitted to the hospital after three months of the diet swap, convinced that his neighbor was poisoning him.

The patient told doctors that he distilled his own water and adhered to multiple dietary restrictions. He complained of thirst but was suspicious when water was offered to him.

Though he had no previous psychiatric history, after 24 hours of hospitalization, he became increasingly paranoid and reported both auditory and visual hallucinations.

He was treated with fluids, electrolytes and antipsychotics and — after attempting to escape — was eventually admitted to the hospital’s inpatient psychiatry unit.

Publishing the case study last week in the journal Annals of Internal Medicine: Clinical Cases, the authors explained that the man was suffering from bromism, a toxic syndrome triggered by overexposure to the chemical compound bromide or its close cousin bromine.



When his condition improved, he was able to report other symptoms like acne, cherry angiomas, fatigue, insomnia, ataxia (a neurological condition that causes a lack of muscle coordination), and polydipsia (extreme thirst), all of which are in keeping with bromide toxicity.

“It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation,” study authors warned.

OpenAI, the developer of ChatGPT, states in its terms of use that the AI is “not intended for use in the diagnosis or treatment of any health condition” — but that doesn’t seem to be deterring Americans on the hunt for accessible healthcare.

According to a 2025 survey, a little more than a third (35%) of Americans already use AI to learn about and manage aspects of their health and wellness.

Though the technology is relatively new, trust in AI is fairly high: 63% of respondents found it trustworthy for health information and guidance. That's higher than social media (43%) and influencers (41%), but lower than doctors (93%) and even friends (82%).

Respondents also said it’s easier to ask AI specific questions than to use a search engine (31%) and that it’s more accessible than speaking to a health professional (27%).

Recently, mental health experts have sounded the alarm about a growing phenomenon known as “ChatGPT psychosis” or “AI psychosis,” where deep engagement with chatbots fuels severe psychological distress.

Reports of dangerous behavior stemming from interactions with chatbots have prompted companies like OpenAI to implement mental health protections for users.

“While it is a tool with much potential to provide a bridge between scientists and the nonacademic population, AI also carries the risk for promulgating decontextualized information,” the report authors concluded.

“It is highly unlikely that a medical expert would have mentioned sodium bromide when faced with a patient looking for a viable substitute for sodium chloride.”


