TIL that saying "holy shit don't use ChatGPT for medical advice" is a "purity test". i didn't know that before. in fact I still don't.

@davidgerard this one hit close to my heart because I’ve had two family members die in large part because their caretaker ignored medical advice and used awful alternative medicine information from the internet to try to treat them.

an LLM can’t do critique. as you’ve said, truth is not a data type in an LLM. all of these models suck in every form of medical crankery available on the internet, mix it with words from authentic medical sources, and present it all as credible.

@davidgerard I know that alternative medicine has a body count; I’ve seen it in the flesh. I know what some of the horseshit on the Internet can do if you’re very desperate or very trusting.

the LLM lowers the trust barrier because the crank information is no longer crank flavored, but it’s still dangerous as fuck to follow the advice.

I keep seeing LLMs presented as better than nothing, and that’s wrong. I wish the people who needed help could get it, but the LLM is worse than nothing.

@davidgerard LLMs get alternative medicine patients to the “I don’t care what you say, *I* feel better” point of no return so much quicker, because the patients don’t know it’s alternative medicine. some of it might even be legitimate medicine that works! and all this does is make them less skeptical, until they get output that’s plausible but fatal, or until the damage from what they’ve been doing builds up and they can’t survive anymore. and thanks to the LLM, they’ll fight off anyone who tries to help.
@zzt @davidgerard Lies are never more effective than when they're sprinkled with truth, and that's exactly the bread and butter of LLMs: truth-flavoured bullshit.