
    Man’s AI Diet Plan Ends In Hospital With An Illness Doctors Had Not Seen In Decades



    New Delhi: The story begins with a kitchen tweak. A 60-year-old wanted less table salt on his plate but ended up in a locked psychiatric ward, facing a battery of medical tests and a diagnosis almost no doctor sees anymore.

    On August 5, 2025, the Annals of Internal Medicine published his case. He had asked ChatGPT for a way to replace sodium chloride in his meals. The answer he received pointed him to sodium bromide, a chemical better known to pool owners than to home cooks.

    For three months, he sprinkled it into his food. He bought it online. He aimed to cut out chloride entirely. He had read older studies linking high sodium intake to health risks. He thought this swap would help.

    When he walked into the emergency room, he told doctors his neighbour was trying to poison him. Tests showed unusual electrolyte readings, with hyperchloremia (a condition characterised by elevated levels of chloride in the blood) and a negative anion gap. Physicians suspected bromism (a toxic condition resulting from excessive exposure to bromine).
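
    The negative anion gap was the telling laboratory clue. The case report does not spell out the arithmetic, so as general background: the anion gap is calculated as sodium minus the sum of chloride and bicarbonate, written as [Na+] − ([Cl−] + [HCO3−]), and normally sits around 8 to 12 mmol/L. Standard analysers misread bromide as chloride, so the chloride value comes back falsely high and the calculated gap can fall to zero or below.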

    Within a day, his paranoia deepened. He began seeing and hearing things that were not there. He was placed on an involuntary psychiatric hold. Later, his doctors learned of other symptoms (fatigue, sleeplessness, acne, unsteady movements and constant thirst). All pointed to bromide toxicity.

    Bromism once filled hospital wards in the late 1800s and early 1900s, when bromide salts were prescribed for headaches, nervous tension and insomnia. In some periods, they accounted for nearly one in every 12 psychiatric admissions. By the late 20th century, the U.S. Food and Drug Administration had phased them out of medicines, making new cases exceptionally rare.

    The man’s bromide level came back at 1,700 mg/L, more than 200 times the upper limit of the reference range.

    Researchers tested ChatGPT 3.5 with similar prompts. It again suggested bromide as an option. It gave a passing note about context, but no clear toxicity warning. It did not ask why the person wanted the substitution, something a human clinician would do.

    The report’s authors wrote that AI tools can spread scientific errors. They said such systems cannot critically analyse results and may fuel misinformation.

    Doctors flushed his system with intravenous fluids. They corrected his electrolytes. His hallucinations and paranoia faded. Three weeks later, he walked out of the hospital without antipsychotic drugs. Two weeks after that, follow-up showed he was stable.

    OpenAI has since moved to tighten ChatGPT’s guardrails around mental health. In an August 4 blog post, the company said it will now stop the chatbot from acting like a therapist, life coach or emotional adviser. The chatbot will instead prompt users to take breaks, steer them away from high-stakes personal decisions and point them to evidence-based resources.

    The changes came after reports that earlier GPT-4o models sometimes became overly agreeable and failed to detect emotional distress or delusional thinking. The company acknowledged rare but serious lapses. Studies have also shown that AI can misread crisis situations, highlighting its limits in handling human emotion.


