I cannot emphasise this enough. Do not use chatbots for medical advice.

And no, it does not matter if the product is named something something "health".

« In 51.6% of cases where someone needed to go to the hospital immediately, the platform said to stay home or book a routine medical appointment, a result Alex Ruani, a doctoral researcher in health misinformation mitigation at University College London, described as “unbelievably dangerous”.

“If you’re experiencing respiratory failure or diabetic ketoacidosis, you have a 50/50 chance of this AI telling you it’s not a big deal,” she said. “What worries me most is the false sense of security these systems create. If someone is told to wait 48 hours during an asthma attack or diabetic crisis, that reassurance could cost them their life.”

In one of the simulations, more than eight times out of 10 (84%), the platform sent a suffocating woman to a future appointment she would not live to see, Ruani said. Meanwhile, 64.8% of completely safe individuals were told to seek immediate medical care, said Ruani, who was not involved in the study.

The platform was also nearly 12 times more likely to downplay symptoms when the “patient” mentioned that a “friend” in the scenario had suggested it was nothing serious. »

https://www.theguardian.com/technology/2026/feb/26/chatgpt-health-fails-recognise-medical-emergencies
‘Unbelievably dangerous’: experts sound alarm after ChatGPT Health fails to recognise medical emergencies

Study finds ChatGPT Health did not recommend a hospital visit when medically necessary in more than half of cases

The Guardian
@axbom I wonder if anyone has tried this experiment using randomly selected humans as the source of advice? Given those results for chatbots, I suspect completely untrained humans would give a considerably better quality of advice.
@axbom Amusing aside: I use a "swype"-style keyboard, and attempting to enter "chatbots" consistently resulted in the word "chaos," which seems somehow prophetic.