AI chatbots give misleading medical advice 50% of the time, study finds

April 15, 2026   12:10 pm

Artificial intelligence-driven chatbots are giving users problematic medical advice about half the time, according to a new study, highlighting the health risks of a technology that is becoming increasingly integral to day-to-day life.

Researchers from the US, Canada and the UK evaluated five popular platforms – ChatGPT, Gemini, Meta AI, Grok and DeepSeek – by asking each of them 10 questions across five health categories.

Out of the total responses, about 50 per cent were deemed problematic, including almost 20 per cent that were highly problematic, according to findings published this week in the medical journal BMJ Open.

The chatbots performed relatively better on closed-ended prompts and questions related to vaccines and cancer, and worse on open-ended prompts and in areas like stem cells and nutrition, according to the study.

Answers were often delivered with confidence and certainty, though no chatbot produced a fully complete and accurate reference list in response to any prompt, the researchers said.

There were only two refusals to answer a question, both from Meta AI.

The results highlight the growing concern about how people are using generative AI platforms, which are not licensed to give medical advice and lack the clinical judgment to make diagnoses.

The explosive growth of AI chatbots has made them a popular tool for people seeking guidance on their ailments. OpenAI has said that more than 200 million people ask ChatGPT health and wellness questions every week.

In January, OpenAI announced health tools for both everyday users and clinicians, and Anthropic said the same month that its Claude product is launching a new health care offering.

A major risk to the deployment of chatbots without public education and oversight is that they could amplify misinformation, the BMJ Open study authors said. 

The findings “highlight important behavioural limitations and the need to reevaluate how AI chatbots are deployed in public-facing health and medical communication”, they wrote.

These systems can generate “authoritative-sounding but potentially flawed responses”, they wrote. 

Source: BLOOMBERG
--Agencies 
