A case report published recently detailed how a 60-year-old patient was diagnosed with bromism, or bromide toxicity, after using ChatGPT for dietary advice. The authors framed it as a warning about relying on digital assistants for medical guidance. OpenAI has cautioned that the chatbot does not replace professional care. “Our terms say that ChatGPT is not intended for use in the treatment of any health condition, and is not a substitute for professional advice. We have safety teams working on reducing risks and have trained our AI systems to encourage people to seek professional guidance,” said OpenAI.

The man sought to eliminate table salt for health reasons and turned to ChatGPT for substitutes. ChatGPT suggested sodium bromide as a replacement for sodium chloride without specific health warnings, and the patient consumed it for about three months, leading to poisoning and hospitalization.

Three months after making the switch, he arrived at the emergency department with paranoid delusions and reported multiple dietary restrictions. He also had fatigue, insomnia, poor coordination, facial acne, cherry angiomas, excessive thirst, and auditory and visual hallucinations. He was admitted to the ICU for severe neurological symptoms, and within 24 hours attempted to leave the hospital, prompting an involuntary psychiatric hold.

Doctors diagnosed bromism, a toxic accumulation of bromide. The condition was relatively common in the early 20th century, when it was believed to account for nearly 10 percent of psychiatric admissions, and became rare after bromide-containing medicines were phased out in the 1970s and 1980s. Sodium bromide, once used as an anticonvulsant and sedative, is now primarily used in cleaning, manufacturing, and agriculture and is unsafe to consume, according to the US National Institutes of Health.

He was treated with intravenous fluids, electrolytes, and antipsychotic medication. As his condition improved over several days, he explained the AI-inspired diet. He was discharged after three weeks and remained stable at a follow-up two weeks later.

“Unfortunately, we do not have access to his ChatGPT conversation log and we will never be able to know with certainty what exactly the output he received was, since individual responses are unique and build from previous inputs,” the researchers noted. “It is important to consider that ChatGPT and other AI systems can generate scientific inaccuracies, lack the ability to critically discuss results, and ultimately fuel the spread of misinformation,” the authors wrote. The case, they said, showed how use of artificial intelligence could contribute to avoidable adverse health outcomes, and they added that it was highly unlikely a human doctor would have suggested sodium bromide as a substitute for table salt.

“You want a salt alternative? Sodium bromide is often listed as a replacement for sodium chloride in chemistry reactions,” said Jacob Glanville, illustrating the risk of taking AI suggestions as medical advice. “These are language prediction tools—they lack common sense and will give rise to terrible results if the human user does not apply their own common sense when deciding what to ask these systems and whether to heed their recommendations,” Glanville told Fox News.

“With targeted safeguards, large language models can evolve from risky generalists into safer, specialized tools; however, without regulation and oversight, rare cases like this will likely recur,” said Harvey Castro. “FDA bans on bromide don’t extend to AI advice—global health AI oversight remains undefined,” Castro said. He called for safeguards such as integrated medical knowledge bases, automated risk flags, contextual prompting, and combined human and AI oversight, and noted that current models lacked built-in cross-checking against up-to-date medical databases unless explicitly integrated.

Written with the help of a news-analysis system.