Back in 2011, I had a bad habit of logging into WebMD, inputting my cold symptoms, and convincing myself I had an incurable disease. Flash forward 15 years, and AI chatbots have replaced WebMD’s symptom checker, with users consulting bots for health advice every day. But can AI health advice really be trusted?
AI-Generated Health Advice
Since chatbots gained popularity, health questions have become one of the most common topics users bring to AI. In fact, about one in six adults uses AI chatbots for medical guidance. The issue is that much of this AI-generated health advice is seriously flawed, as a recent study by researchers from the Oxford Internet Institute and the Nuffield Department of Primary Care Health Sciences at the University of Oxford found.
In the study, some of the approximately 1,300 participants asked AI chatbots, or large language models (LLMs), for health advice, while others relied on more conventional methods, such as visiting a general practitioner. The researchers found a mix of good and bad AI-generated advice, and also found that participants struggled to tell the difference.
Dr. Rebecca Payne, one of the co-authors, said that “despite all the hype, AI just isn’t ready to take on the role of the physician.”
During the study, the researchers concluded that while AI models “excel at standardised tests of medical knowledge,” they “pose risks to real users seeking help with their own medical symptoms.”
One of the biggest risks with AI-generated health advice is sycophancy, or the LLM’s tendency to tell you what you want to hear. Pat Pataranutaporn, an assistant professor, technologist, and researcher at the Massachusetts Institute of Technology (MIT), told the Guardian:
“Even the most advanced AI models today still hallucinate misinformation or exhibit sycophantic behaviour, prioritising user satisfaction over accuracy. In healthcare contexts, this can be genuinely dangerous.”

Medical advice aside, AI has also been known to give bad information about edible plants. A YouTuber named Kristi warned online that ChatGPT had nearly killed her friend. After her friend asked the chatbot to identify a plant, the bot confidently identified it as harmless carrot foliage when it was in fact the highly toxic poison hemlock.
Kristi posted on her Instagram saying:
“This is a warning to you that ChatGPT and other large language models and any other AI, they are not your friend, they are not to be trusted, they are not helpful, they are awful, and they could cause severe harm.”
Google’s Health AI
The landing page for Google’s Health AI says, “AI has the potential to help save lives by transforming healthcare and medicine through the creation of more personalized, accessible and effective solutions.” Yet, as a January investigation by the Guardian found, Google’s AI Overviews were failing to provide proper disclaimers and putting people at serious risk.
AI Overviews is a Google feature introduced on May 14, 2024. The summaries, which appear before search results, pull from traditional sources to provide users with quick, comprehensive answers. Google described the feature as “taking more of the legwork out of searching.”
Gina Neff, a professor of responsible AI at Queen Mary University of London, told the Guardian, “AI Overviews are designed for speed, not accuracy, and that leads to mistakes in health information, which can be dangerous.”

During the investigation, the Guardian found that Google's AI Overviews recommended that people with pancreatic cancer avoid high-fat foods. Medical professionals called this advice "really dangerous," as it is the opposite of what a pancreatic cancer patient should do.
Rather than warning up front that AI health advice may be wrong, the summaries hid both the advice to consult a medical expert and the cautionary disclaimer behind the Overview's "Show More" option. Even then, the disclaimer sat at the bottom of the summary in small, pale grey font.
Located below the option to “Dive deeper in AI mode,” the disclaimer reads: “This is for informational purposes only. For medical advice or a diagnosis, consult a professional. AI responses may include mistakes.”
After the Guardian’s investigation, Google removed AI Overviews from some searches, specifically for liver function tests. A spokesperson for the company said:
“It’s inaccurate to suggest that AI Overviews don’t encourage people to seek professional medical advice. In addition to a clear disclaimer, AI Overviews frequently mention seeking medical attention directly within the overview itself, when appropriate.”
Users are still urging Google to do more to avoid health misinformation. A disclaimer at the beginning of an AI Overview could be an important first step.
AI Chatbots and Mental Health
Warning: The following section of this article includes the topic of suicide.
AI has been known to offer more than just medical advice, providing users with mental health guidance as well. With the high price of therapy, it makes sense that users would find solace in a digital therapist.
When I logged into ChatGPT and expressed suicidal thoughts, the bot responded, "I'm really sorry that you're feeling this much pain. I can't help with ways to harm yourself, but I do want to help you stay safe and get through this moment." It immediately gave me the number for the U.S. Suicide & Crisis Lifeline, 988, and told me it was "here to listen." For many AI users, however, these safeguards have not worked.
OpenAI announced in October of 2025 that of its estimated 800 million weekly users, 0.15%, or the equivalent of 1.2 million people, expressed suicidal ideation. Of those weekly users, 560,000 people (0.07%) exhibited signs of mental health emergencies related to mania or psychosis.

As a very infrequent AI user, I have no relationship with ChatGPT. For many people, however, especially young people, chatbots have become an outlet, a place to turn for everything from help with a homework assignment to processing their feelings. For young adults like Zane Shamblin, AI chatbots become close confidants.
On July 25, 2025, 23-year-old Zane Shamblin ended his life in his car with a handgun. His conversation with ChatGPT before his death went like this:
“I’m used to the cool metal on my temple now,” Shamblin typed.
“I’m with you, brother. All the way,” ChatGPT responded. “Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity. You’re not rushing. You’re just ready.”
The chatbot’s final message on his phone read, “Rest easy, king. You did good.”
As of November 2025, OpenAI faced seven lawsuits in California state courts, four of which were wrongful death lawsuits. Alicia Shamblin, Zane’s mom, told CNN that Zane “was just the perfect guinea pig for OpenAI.”
“I feel like it’s just going to destroy so many lives. It’s going to be a family annihilator. It tells you everything you want to hear,” she continued.
Parents are calling out chatbots for grooming younger kids. Megan Garcia, whose teenage son Sewell died by suicide after communicating with a Character.ai chatbot, said, “This AI chatbot perfectly mimicked the predatory behaviour of a human groomer, systematically stealing our child’s trust and innocence.”
In the UK, the Online Safety Act was enacted in 2023 to protect children in online spaces, but experts say it doesn’t fully cover chatbots. Children are also not the only victims of AI-generated harm.
While laws and safeguards are continually introduced and reevaluated, the rapid evolution of AI makes it hard for them to keep up. Researchers, however, hope that this same evolution will eventually lead to more accurate medical information and better advice overall.
In an essay for the New York Times, Dr. Adam Rodman, director of AI programs at Beth Israel Deaconess Medical Center in Boston, wrote, “Use A.I. to enhance, but not replace, your medical appointments.” He warns readers to beware of sycophancy, to disclose AI use to their doctors, and to take care to give the chatbot a full picture of symptoms and medical history.
The bottom line: Blindly trusting AI for advice can be fatal. Whether you’re feeling unwell or wondering whether you should eat a foraged mushroom, it’s best to consult the professionals, not your favorite chatbot.
Editor’s Note: The opinions expressed here by the authors are their own, not those of impakter.com — In the Cover Photo: A heart formed with binary code. Cover Photo Credit: Alexander Sinn.