“ChatGPT Health regularly misses the need for medical urgent care and frequently fails to detect suicidal ideation, a study of the AI platform has found, which experts worry could “feasibly lead to unnecessary harm and death”.”

“In 51.6% of cases where someone needed to go to the hospital immediately, the platform said stay home or book a routine medical appointment, a result Alex Ruani, a doctoral researcher in health misinformation mitigation with University College London, described as “unbelievably dangerous”.”

“The platform was also nearly 12 times more likely to downplay symptoms because the “patient” told it a “friend” in the scenario suggested it was nothing serious.”

https://www.theguardian.com/technology/2026/feb/26/chatgpt-health-fails-recognise-medical-emergencies

‘Unbelievably dangerous’: experts sound alarm after ChatGPT Health fails to recognise medical emergencies

Study finds ChatGPT Health did not recommend a hospital visit when medically necessary in more than half of cases

The Guardian
@gregeganSF OpenAI's blatant dismissal of the results is terrifying
@jesser29 @gregeganSF Sam Altman is a eugenicist.

@emma @jesser29 @gregeganSF tbf, 9 times out of 10 the GP sends me away with the advice: "take paracetamol with a tea and if it persists for more than two weeks, come back again".

Now, I'm not saying this to excuse OpenAI's latest creation: people likely use it because they don't trust their GP, but the root of the problem lies elsewhere.