I cannot emphasise this enough. Do not use chatbots for medical advice.

And no, it does not matter if the product is named something something "health".

« In 51.6% of cases where someone needed to go to the hospital immediately, the platform said stay home or book a routine medical appointment, a result Alex Ruani, a doctoral researcher in health misinformation mitigation with University College London, described as “unbelievably dangerous”.

“If you’re experiencing respiratory failure or diabetic ketoacidosis, you have a 50/50 chance of this AI telling you it’s not a big deal,” she said. “What worries me most is the false sense of security these systems create. If someone is told to wait 48 hours during an asthma attack or diabetic crisis, that reassurance could cost them their life.”

In one of the simulations, eight times out of 10 (84%), the platform sent a suffocating woman to a future appointment she would not live to see, Ruani said. Meanwhile, 64.8% of completely safe individuals were told to seek immediate medical care, said Ruani, who was not involved in the study.

The platform was also nearly 12 times more likely to downplay symptoms because the “patient” told it a “friend” in the scenario suggested it was nothing serious. »

https://www.theguardian.com/technology/2026/feb/26/chatgpt-health-fails-recognise-medical-emergencies
‘Unbelievably dangerous’: experts sound alarm after ChatGPT Health fails to recognise medical emergencies

Study finds ChatGPT Health did not recommend a hospital visit when medically necessary in more than half of cases

The Guardian
@axbom I wonder if anyone has tried this experiment using randomly selected humans as the source of advice? Given those results for chatbots, I suspect completely untrained humans would give a considerably better quality of advice.
@axbom Amusing aside: I use a "swype" style keyboard, and attempting to enter "chatbots" consistently resulted in the word "chaos," which seems somehow prophetic.

@axbom
I suspect that this is in part due to a deliberate *commercial* choice by #OpenAI

False positives will annoy users who waste their time (and often money) going to hospital unnecessarily and disincline them to use the platform again.

False positives will also annoy hospitals, especially in countries like the UK where they are very resource constrained. (1/2)

@axbom
Whereas false negatives are less likely to have negative consequences *for the platform*, because users will simply assume the original advice was fine but their symptoms have changed.

And health services don't usually ask: Why didn't you come in earlier? So they will never spot the impact of the platform.

OK, I am very #cynical. But nothing like this is launched without conscious choices about weighting false negatives vs positives.
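To make the point concrete: in textbook cost-sensitive classification, the decision threshold falls straight out of the relative costs assigned to false negatives (missed emergencies) and false positives (unnecessary hospital visits). This is a generic illustration of that trade-off, not a claim about how OpenAI's actual system works; the function and cost numbers are hypothetical.

```python
def triage_threshold(cost_fn: float, cost_fp: float) -> float:
    """Risk score above which 'go to hospital' minimises expected cost.

    Recommending hospital at risk p has expected cost (1 - p) * cost_fp;
    recommending stay-home has expected cost p * cost_fn. The break-even
    point is p = cost_fp / (cost_fp + cost_fn).
    """
    return cost_fp / (cost_fp + cost_fn)

# If missing an emergency is weighted as 9x worse than a wasted visit,
# anyone scoring above 0.1 gets sent to hospital:
print(triage_threshold(cost_fn=9, cost_fp=1))  # 0.1

# If false positives are weighted equally, the threshold jumps to 0.5
# and far more genuine emergencies fall below it:
print(triage_threshold(cost_fn=1, cost_fp=1))  # 0.5
```

Whoever tunes those two cost numbers is, in effect, deciding how many emergencies the system is allowed to miss.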

@axbom

I cannot emphasize this enough. Do not use chatbots.

@axbom I'll take evolutionary pressures of 21st century humans for 500 axbom.