
    When sycophancy and bias meet medicine




    Once upon a time, two villagers visited the fabled Mullah Nasreddin. They hoped that the Sufi philosopher, famed for his acerbic wisdom, could mediate a dispute that had driven a wedge between them. Nasreddin listened patiently to the first villager’s version of the story and, upon its conclusion, exclaimed, “You are absolutely right!” The second villager then presented his case. After hearing him out, Nasreddin again responded, “You are absolutely right!” An observant bystander, confused by Nasreddin’s proclamations, interjected, “But Mullah, they can’t both be right.” Nasreddin paused, regarding the bystander for a moment before replying, “You are absolutely right, too!”

    In late May, the White House’s first “Make America Healthy Again” (MAHA) report was criticized for citing multiple research studies that did not exist. Fabricated citations like these are common in the outputs of generative artificial intelligence based on large language models, or LLMs, which routinely invent plausible-sounding sources, catchy titles, and even false data to support their conclusions. In this case, the White House pushed back on the journalists who broke the story before admitting to “minor citation errors.”
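
    Fabrications of this kind are often detectable with trivial effort. As a purely hypothetical sketch (not the method the journalists used, and assuming citations carry DOIs), a few lines of Python can ask the public Crossref registry whether a cited DOI actually exists; the `requests` library and both example DOIs below are assumptions chosen for illustration.

    ```python
    # Hypothetical sketch: screen cited DOIs against the public Crossref
    # registry, which returns HTTP 404 for DOIs it has never registered.
    import requests  # third-party HTTP client (pip install requests)

    def doi_exists(doi: str) -> bool:
        """Return True if Crossref holds a record for this DOI."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return resp.status_code == 200

    # One real DOI and one invented for illustration.
    for doi in ("10.1038/s41586-020-2649-2", "10.9999/fabricated.example"):
        print(doi, "->", "registered" if doi_exists(doi) else "no record found")
    ```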

    It is ironic that fake citations were used to support a principal recommendation of the MAHA report: addressing the health research sector’s “replication crisis,” wherein scientists’ findings often cannot be reproduced by independent teams.

    Yet the MAHA report’s use of phantom evidence is far from unique. Last year, The Washington Post reported on dozens of instances in which AI-generated falsehoods found their way into courtroom proceedings. Once the fabrications were uncovered, lawyers had to explain to judges how fictitious cases, citations, and decisions had ended up in their filings.

    Despite these widely recognized problems, the MAHA roadmap released last month directs the Department of Health and Human Services to prioritize AI research to “…assist in earlier diagnosis, personalized treatment plans, real-time monitoring, and predictive interventions…” This breathless rush to embed AI in so many aspects of medicine could be forgiven if we believed that the technology’s “hallucinations” would be easy to fix through version updates. But as the industry itself acknowledges, these ghosts in the machine may be impossible to eliminate.

    Consider the implications of accelerating AI use in health research that informs clinical decision making. Beyond the fabrication problems already described, using AI in research without disclosure could create a feedback loop, supercharging the very biases that helped motivate its use. Once published, “research” based on false results and citations could become part of the datasets used to train future AI systems. Worse still, a recently published study highlights an industry of scientific fraudsters who could deploy AI to make their claims seem more legitimate.
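
    To make the feedback-loop worry concrete, here is a deliberately crude toy model, my sketch rather than anything from the report or the study: assume each model “generation” invents material at some base rate, that a fraction of its output is published and scraped into the next training corpus, and that future models reproduce fabrications at least as often as they appear in their training data. Both parameter values below are arbitrary assumptions.

    ```python
    # Toy model under the stated assumptions (illustration only):
    # fabricated content that re-enters training corpora compounds.

    def contaminated_share(generations: int,
                           fabrication_rate: float = 0.05,
                           recycle_fraction: float = 0.5) -> list[float]:
        """Fraction of the corpus that is fabricated after each generation.

        fabrication_rate: baseline rate at which a model invents material.
        recycle_fraction: share of model output that is published and
                          scraped back into the next training corpus.
        """
        share = 0.0
        history = []
        for _ in range(generations):
            # A model trained on a corpus that is `share` fabricated is
            # assumed to emit fabrications at least that often.
            emitted = max(fabrication_rate, share)
            # Some of that output is recycled, diluting the clean remainder.
            share += (1.0 - share) * recycle_fraction * emitted
            history.append(share)
        return history

    for gen, s in enumerate(contaminated_share(8), start=1):
        print(f"generation {gen}: {s:.1%} of corpus fabricated")
    ```

    Even with these modest toy parameters, the fabricated share grows from a few percent to a large fraction of the corpus within eight generations, which is the supercharging effect described above.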


