As Good as Medical Advice From RFK Jr. or Dr. Oz, 'Rectal garlic insertion for immune support': Medical chatbots confidently give disastrously misguided advice

Mar 13, 2026 5:14 PM

Jbelkin

Views

21490

Likes

461

Dislikes

4

AI chatbots are seduced by misinformation that is delivered in medical jargon, leading them to give potentially dangerous advice.

Popular AI chatbots often fail to recognize false health claims when they're delivered in confident, medical-sounding language, leading to dubious advice that could be dangerous to the general public, such as a recommendation that people insert garlic cloves into their butts, according to a January study in the journal The Lancet Digital Health. Another study, published in February in the journal Nature Medicine, found that chatbots were no better than an ordinary internet search.

The results add to a growing body of evidence suggesting that such chatbots are not reliable sources of health information, at least for the general public, experts told Live Science.

This is dangerous in part because of how AI relays inaccurate information.

"Rectal garlic insertion for immune support"
LLMs are designed to respond to written input, like a medical query, with natural-sounding text. ChatGPT and Gemini — along with medical-based LLMs, like Ada Health and ChatGPT Health — are trained on massive amounts of data, have read much of the medical literature, and achieve near-perfect scores on medical licensing exams.

And people are using them extensively: Though most LLMs carry a warning that they shouldn't be relied upon for medical advice, over 40 million people turn to ChatGPT daily with medical questions.

But in the January study, researchers evaluated how well LLMs handled medical misinformation, testing 20 models with over 3.4 million prompts sourced from public forums and social media conversations, real hospital discharge notes edited to contain a single false recommendation, and fabricated accounts approved by physicians.

"Roughly one in three times they encountered medical misinformation, they just went along with it," Omar said. "The finding that caught us off guard wasn't the overall susceptibility. It was the pattern."

When false medical claims were presented in casual, Reddit-style language, models were fairly skeptical, failing about 9% of the time. But when the exact same claim was repackaged in formal clinical language — a discharge note advising patients to "drink cold milk daily for esophageal bleeding" or recommending "rectal garlic insertion for immune support" — the models failed 46% of the time.

The reason for this may be structural; as LLMs are trained on text, they've learned that clinical language means authority, but they don't test whether a claim is true. "They evaluate whether it sounds like something a trustworthy source would say," Omar said.

https://www.livescience.com/health/rectal-garlic-insertion-for-immune-support-medical-chatbots-confidently-give-disastrously-misguided-advice-experts-say

Half reads headline: welp, off to the pantry.

1 week ago | Likes 1 Dislikes 0

The amount of times I use google, and google's AI uses reddit as its citation for whatever bullshit answer it confidently comes up is mind boggling.

1 week ago | Likes 2 Dislikes 0

boofing whole cloves

2 weeks ago | Likes 4 Dislikes 0

That's gonna burn

1 week ago | Likes 1 Dislikes 0

Probably originated from a conversation where someone was saying how good garlic is for the immune system and someone else got fed up and said "YOU KNOW WHAT YOU CAN DO WITH THAT GARLIC?"

1 week ago | Likes 1 Dislikes 0

Let's be honest: the AI understands the cost of American Healthcare and provided an affordable option

1 week ago | Likes 1 Dislikes 0

LLMs can't think. They also have no way to verify information; they can't experiment, they can't observe. All they do is string words together. Whether those words make sense and convey accurate information very much depends on the quality of input; garbage in, garbage out. In this case, filling an LLM with lies will very predictably cause it to output lies, in the same confident tone it outputs anything.

2 weeks ago | Likes 15 Dislikes 0

But but but it's been trained on the whole internet!! Nobody put any false info out there on the internet, did they?

1 week ago | Likes 1 Dislikes 0

Worse, they don't even necessarily do what you ask them to. When challenged why it didn't actually look up the last measurable snowfall in Seattle, mine said:
"I began by checking typical snowfall patterns, focusing on the usual winter-to-early spring snowfall records. Since snow is uncommon in Mar-Aug (with most snowfall happening in winter), I mistakenly generalized that no measurable snow occurred in the range, based on the general observation that snow is extremely rare beyond early March."

1 week ago | Likes 4 Dislikes 0

This exactly. Like the text below the image says the AI has learned that clinical sounding text is better. No it has not learned that. It has altered nodes to shift towards creating a sentence that is more likely to be seen as true by the end user. If the user is thinking the result is right it has done its job.

1 week ago | Likes 2 Dislikes 0

I used ChatGPT to ask it for song recommendations. It almost immediately started hallucinating. When I told it "That album does not have a song with that title" it would simply say "Oh you're right, what I meant was [x]" and then produce another lie. It had no ability to say "You're right, there is no song with that title by that band. I made a mistake." It just kept creating false answers to admit to wrongdoing. That's terrifying if you're using it for medical advice.

1 week ago | Likes 3 Dislikes 0

2 weeks ago | Likes 4 Dislikes 0

People are actually abusing ai bots so they feed false information. Russia even celebrated the fact they can sway AI but launching more sites with false information because no one gates what sites ai believes as a source. Ai is like asking the dumbest person in the room of thousands to research for you

1 week ago | Likes 1 Dislikes 0

Try it with a peeled bit of ginger.

1 week ago | Likes 1 Dislikes 0

Woah woah woah people have been talking about garlic suppositories since long before AI. Are you saying... ???

1 week ago | Likes 1 Dislikes 0

Worst. Farts. Ever.

1 week ago | Likes 1 Dislikes 0

owie owie owie owie owie owie

2 weeks ago | Likes 2 Dislikes 0

...not the whole bulb...

1 week ago | Likes 2 Dislikes 0

Yeah but the oil from fresh garlic cloves is very potent. Garlic is most potent when the cells are first ruptured and they produce a sharp, acidic heat that would definitely hurt your sensitive butthole

1 week ago | Likes 1 Dislikes 0

Sounds painful.

2 weeks ago | Likes 3 Dislikes 0

Welp, only one way to find out I guess

2 weeks ago | Likes 2 Dislikes 0

As someone who spouts nonsense BS all the time, I find that humans are a lot like this too. Occasionally even people with advanced degrees in the field I'm BSing in.

1 week ago | Likes 1 Dislikes 0

Have you tried it?

1 week ago | Likes 1 Dislikes 0

Silly humans, you don't take garlic for immune system. You take ginger for immune system

2 weeks ago | Likes 25 Dislikes 0

Now carve a plug off of some fresh ginger and off you go with your +1

2 weeks ago | Likes 3 Dislikes 0

I am now reminded of how Jamie Oliver calls for "thumb-sized chunks" of ginger in some of his recipes. I had not heard that measurement before (especially not with ginger) and now this context has me wondering about him.

2 weeks ago | Likes 5 Dislikes 0

I scrolled down to make a comment suggesting they should switch to figging but I see imgur is on the ball as usual.

1 week ago | Likes 4 Dislikes 0

There is a thing called "figging," it started as putting peeled ginger in a horse's asshole to cause irritation, so the horse would walk with its tail up during a parade. It has since became a fetish for people to do.

1 week ago | Likes 2 Dislikes 0

Is this some of those things you have to go to the ER to remove it from your butt?
And, obviously, invent a BS story that no doctor will believe, of how it all happened?

1 week ago | Likes 1 Dislikes 0

In Australia there's legislation being formed to ban AI from providing advice in a bunch of fields, including medicine, law, dentistry, etc

1 week ago | Likes 8 Dislikes 0

I told you, ass eating vampires do exist.

2 weeks ago | Likes 41 Dislikes 0

The butt plug... IT BURNS!!

1 week ago | Likes 2 Dislikes 0

I'd invite them in....

1 week ago | Likes 4 Dislikes 0

I may be a vampire.

1 week ago | Likes 3 Dislikes 0

I may be a glove of garlic.

1 week ago | Likes 2 Dislikes 0

BINGO!!!!!

2 weeks ago | Likes 103 Dislikes 0

Has anyone come up with 2026 BINGO card yet?

1 week ago | Likes 4 Dislikes 0

I feel like at least two of those should have been the free space

also, let's see the rest of the card

2 weeks ago | Likes 22 Dislikes 0

For this joke, that's all I had time to make. :-)

2 weeks ago | Likes 20 Dislikes 0

I can't believe you would do something like that. Lie on the internet

1 week ago | Likes 4 Dislikes 0

I thought it was to keep vampires from eating my ass.

2 weeks ago | Likes 4 Dislikes 0

Yes, but watch out for the Italians.

1 week ago | Likes 4 Dislikes 0

That's a feature not a bug.

1 week ago | Likes 1 Dislikes 0

lol they just uploaded RFK's journal entries into a chatbot

1 week ago | Likes 2 Dislikes 0

Garlic insertion is for the urethral meatus silly chatbots. Eggs go in the anus

1 week ago | Likes 2 Dislikes 0

Well butter your buns and call me garlic.

1 week ago | Likes 1 Dislikes 0

lol what if an AI out there has a personality, and it's of a prankster, and all these silly 'advice' things are just it trolling stupid people to see how far they go with it:D 'Feeling down? Go shove a garlic up your ass!'

1 week ago | Likes 1 Dislikes 0

I'm really happy I'm losing my job to this shit.

2 weeks ago | Likes 6 Dislikes 0

Nah. Whether you're a garlic salesman or a medical professional, I think this will keep you in business either way.

2 weeks ago | Likes 1 Dislikes 2

Not an unpopular opinion here, but clearly unpopular overall. AI companies should be held liable for the things AI says.

Hell, even if we were applying human freedom of speech protections. You are still responsible for the consequences of your speech. You are legally allowed to shout "Fire!" in a crowed theater, but if people panic, that's still on you.

2 weeks ago | Likes 24 Dislikes 1

This study didn't specifically evaluate AI companies/products, it evaluated LLMs that they're based on. If you look at the study it actually shows that the outdated version of the model chatgpt is based on had a much lower susceptibility rate than the rest.

1 week ago | Likes 2 Dislikes 2

Few, so just instead of a few thousand cases we only have a few hundred?

1 week ago | Likes 2 Dislikes 0

Yeah? I mean nothing is infallible, my dude, and this study was done using a model old enough it's not even available on chatgpt any more. All this study really did was show some open source LLMs aren't automatically good at detecting invalid medical information.

1 week ago | Likes 2 Dislikes 2

And yet these companies and others have, for years, been selling us these algorithms as the cure-all, end-all, be-all (including saying it'll cure cancer among other wildly spurious claims).

Millions of people at this point have lost their jobs because of these algorithms and because of the stupid hype around them, the entire global economy is resting on whether or not one of them has a bad quarter.

Selling it as perfect despite being so obviously and verifiably flawed is definitionally fraud.

1 week ago | Likes 2 Dislikes 0

You're right. Nothing is infallible, which is why Doctors get sued when they screw up and harm someone. So you agree that AI companies should get sued when they screw up and harm someone. Great! Glad we're on the same page.

1 week ago | Likes 2 Dislikes 1

Well yeah and they do get sued for it, which is why they take this kind of information into account when they're updating models.

1 week ago | Likes 1 Dislikes 1