After 17 doctors and dentists over 3 years couldn't make the diagnosis for Alex, his mother entered his symptoms into ChatGPT and got the diagnosis.
The recent progress in #AI for medicine, reviewed in the new Ground Truths
https://erictopol.substack.com/p/all-eyes-on-medical-ai
@erictopol Once ChatGPT gave the doctors a clue, they were able to confirm the problem right away, I assume via spinal MRI? Nobody gave him a spinal MRI in 3 years when he had obvious neurological issues? Did ChatGPT succeed here, or did this kid's doctors fail miserably?
@zzzeek Or was the medical team prevented from doing the MRI by insurance prior auth requirements?
@erictopol Just my opinion, but this looks like survivorship bias/cherry-picking to me: one success in a galaxy of algorithm-generated rubbish.

@erictopol But as the old adage goes,

Computers can never make management decisions because computers can never be held accountable.

How many missed diagnoses does ChatGPT make? What harm would have been caused if those diagnoses had been used as the basis for treatment?

@erictopol Considering the mistakes GPT-3.5 makes, and how its analysis is often wrong on complex problems, it worries me that people think it's useful for this sort of thing.
@cwicseolfor @erictopol Yeah, I'm not at all surprised a desperate parent at wits' end would try it (plenty turn to divination for less), but presenting it as more than a lucky strike seems perilous.

@cwicseolfor @erictopol 3.5 is just bad; I don't even rely on it for hobby stuff or coding. It produces bad answers. Better models might come along, and there are a few open-source ones now that are just as good as GPT-4. I do worry they won't do anything meaningful with it for medicine, though. It's also not the answer to medicine's problems.

We need newer better treatments for a huge range of different medical issues. It's a shame AI is being talked up so much.

@dom_mecfs Well, it's also a resource-use disaster, and it's actively polluting its own training data because people post its generated answers online, reinforcing its errors, and so on.

It *could have been* useful if it had been built from the start with guardrails against garbage-in, garbage-out (GIGO) issues, but the eagerness to cash in and monetize it won out, and now the whole internet is worse off as a consequence.