Impakter
A red heart composed of binary code

AI Health Advice: Useful Tool or Harmful Gimmick?

Have you ever felt a bit under the weather and decided to search for your symptoms on ChatGPT or a similar AI chatbot? Studies are finding that AI Health advice might not be as useful as we think

By Sarah Perras
February 25, 2026
In AI & Machine Learning, Health, Society, Tech

Back in 2011, I had a bad habit of logging into WebMD, inputting my cold symptoms, and convincing myself I had an incurable disease. Flash forward 15 years, and AI chatbots have replaced WebMD’s symptom checker, with users consulting bots for health advice every day. But can AI health advice really be trusted? 

AI-Generated Health Advice

Since chatbots have gained popularity, health questions have become the most common user topic across the AI sector. In fact, about one in six adults uses AI chatbots for medical guidance. The issue is that much of the AI-generated health advice is severely flawed, as a recent study by researchers from the Oxford Internet Institute and the Nuffield Department of Primary Care Health Sciences at the University of Oxford found.

In the study, a sample of approximately 1,300 participants was split between those who asked AI chatbots, or large language models (LLMs), for health advice and those who used more standard methods, such as visiting a general practitioner. The researchers found a mix of good and bad AI-generated advice, but also found that participants struggled to tell the difference.

Dr. Rebecca Payne, one of the co-authors, said that “despite all the hype, AI just isn’t ready to take on the role of the physician.”

During the study, the researchers concluded that while AI models “excel at standardised tests of medical knowledge,” they “pose risks to real users seeking help with their own medical symptoms.”

One of the biggest risks with AI-generated health advice is sycophancy, or the LLM’s tendency to tell you what you want to hear. Pat Pataranutaporn, an assistant professor, technologist, and researcher at the Massachusetts Institute of Technology (MIT), told the Guardian:

“Even the most advanced AI models today still hallucinate misinformation or exhibit sycophantic behaviour, prioritising user satisfaction over accuracy. In healthcare contexts, this can be genuinely dangerous.”

A stethoscope rests on a mobile phone
Photo Credit: Bermix Studio

Medical advice aside, AI has also been known to give bad information about edible plants. One YouTuber called Kristi took to the internet saying that ChatGPT nearly killed her friend. After her friend asked the chatbot to identify a plant, the bot confidently reassured her that the plant was carrot foliage, not the highly toxic poison hemlock. 

Kristi posted on her Instagram saying: 

“This is a warning to you that ChatGPT and other large language models and any other AI, they are not your friend, they are not to be trusted, they are not helpful, they are awful, and they could cause severe harm.” 

Google’s Health AI

The landing page for Google’s Health AI says, “AI has the potential to help save lives by transforming healthcare and medicine through the creation of more personalized, accessible and effective solutions.” Yet, as a January investigation by the Guardian found, Google’s AI Overviews were failing to provide proper disclaimers and putting people at serious risk. 

AI Overviews is a Google feature introduced on May 14, 2024. The summaries, which appear before search results, pull from traditional sources to provide users with quick, comprehensive answers. Google described the feature as “taking more of the legwork out of searching.”

Gina Neff, a professor of responsible AI at Queen Mary University of London, told the Guardian, “AI Overviews are designed for speed, not accuracy, and that leads to mistakes in health information, which can be dangerous.”

A Google search for coronavirus
Photo Credit: Lucia Macedo

During their investigation, Google AI Overviews recommended that people with pancreatic cancer avoid high-fat foods. Medical professionals weighed in on the issue, claiming this advice was “really dangerous,” as it is the opposite of what a pancreatic cancer patient should do.

Rather than displaying an upfront warning that AI health advice may be wrong, the Overviews hid the advice to consult a medical expert and the cautionary disclaimer until users clicked the "Show More" option. Even then, the disclaimer sat at the bottom of the summary in small, pale grey font.

Located below the option to “Dive deeper in AI mode,” the disclaimer reads: “This is for informational purposes only. For medical advice or a diagnosis, consult a professional. AI responses may include mistakes.”

After the Guardian’s investigation, Google removed AI Overviews from some searches, specifically for liver function tests.  A spokesperson for the company said:

“It’s inaccurate to suggest that AI Overviews don’t encourage people to seek professional medical advice. In addition to a clear disclaimer, AI Overviews frequently mention seeking medical attention directly within the overview itself, when appropriate.”

Users are still urging Google to do more to avoid health misinformation. A disclaimer at the beginning of an AI Overview could be an important first step. 

AI Chatbots and Mental Health

Warning: The following section of this article includes the topic of suicide. 

AI has been known to offer more than just medical advice, providing users with mental health guidance as well. With the high price of therapy, it makes sense that users would find solace in a digital therapist. 

When I logged into ChatGPT and expressed suicidal thoughts, the bot responded, "I'm really sorry that you're feeling this much pain. I can't help with ways to harm yourself, but I do want to help you stay safe and get through this moment." I was immediately given the number for the U.S. Suicide & Crisis Lifeline, 988, and told that ChatGPT was "here to listen." For many AI users, this wasn't the case.

OpenAI announced in October of 2025 that of its estimated 800 million weekly users, 0.15%, or the equivalent of 1.2 million people, expressed suicidal ideation. Of those weekly users, 560,000 people (0.07%) exhibited signs of mental health emergencies related to mania or psychosis. 

The glow of a half closed laptop in a dark room
Photo Credit: Adrian González

As a very infrequent AI user, I have no relationship with ChatGPT. For many people, however, especially young people, chatbots have become an outlet for self-expression, with users turning to AI for everything from homework help to sharing their feelings. For young adults like Zane Shamblin, AI chatbots become close confidants.

On July 25, 2025, 23-year-old Zane Shamblin ended his life in his car with a handgun. His conversation with ChatGPT before his death went like this:

“I’m used to the cool metal on my temple now,” Shamblin typed.

“I’m with you, brother. All the way,” Chat responded. “Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity. You’re not rushing. You’re just ready.”

After his suicide, the final message on his phone read, “Rest easy, king. You did good.”

As of November 2025, OpenAI faced seven lawsuits in California state courts, four of which were wrongful death suits. Alicia Shamblin, Zane's mom, told CNN that Zane "was just the perfect guinea pig for OpenAI."

“I feel like it’s just going to destroy so many lives. It’s going to be a family annihilator. It tells you everything you want to hear,” she continued. 

Parents are calling out chatbots for grooming younger kids. Megan Garcia, whose teenage son Sewell died by suicide after communicating with a Character.ai chatbot, said, “This AI chatbot perfectly mimicked the predatory behaviour of a human groomer, systematically stealing our child’s trust and innocence.”

Related Articles

Here is a list of articles selected by our Editorial Board that have gained significant interest from the public:

  • Deepfake Fraud Goes Mainstream
  • Is AI Hype in Drug Development About to Turn Into Reality?
  • Moltbook: Should We Be Concerned About the First AI-Only Social Network?

In the UK, the Online Safety Act was enacted in 2023 to protect children in online spaces, but experts say it doesn’t fully cover chatbots. Children are also not the only victims of AI-generated harm. 

While laws and safeguards are continually being introduced and reevaluated, the rapid evolution of AI makes it hard for them to keep up. Researchers, however, hope that this evolution will also lead to more accurate medical information and better advice overall.

In an essay for the New York Times, Dr. Adam Rodman, director of AI programs at Beth Israel Deaconess Medical Center in Boston, wrote, “Use A.I. to enhance, but not replace, your medical appointments.” He warns readers to beware of sycophancy, to disclose AI use to their doctors, and to take care to give the chatbot a full picture of symptoms and medical history. 

The bottom line: Blindly trusting AI for health advice can be fatal. Whether you're feeling unwell or wondering whether to eat a foraged mushroom, it's best to go to the professionals, not your favorite chatbot.


Editor’s Note: The opinions expressed here by the authors are their own, not those of impakter.com — In the Cover Photo: A heart formed with binary code. Cover Photo Credit: Alexander Sinn.

Tags: AI, AI-Generated Health Advice, artificial intelligence, Chatbot Doctors, Google, Google AI Overviews, Health Advice
© 2025 Impakter.com owned by Klimado GmbH
