The amount of shit I'm getting for saying "chronically ill people, who are abandoned by the medical system, should not be condemned for turning to LLMs," while I'm also the one who started banning LLMs from GNOME projects, pretty much proves my point: arbitrary purity tests that are just misanthropy dressed up as pseudo-progressivism.

I am a bit embarrassed that I frequently shared Gerard's posts in the past. I guess those who were skeptical have been proven right.

https://circumstances.run/@davidgerard/116232821650226656

David Gerard (@[email protected])

the person advocating ChatGPT for medical advice was a GNOME developer too i'd watch out for signs of GNOME as the next big FOSS project to fill with slop, there's certainly advocates in there

GSV Sleeper Service

@sophie @davidgerard I'm confused as to why LLMs are being recommended for medical advice, given their, at this point, well-documented propensity for providing incorrect or even harmful answers

in my humble opinion, that's worse than no medical advice at all, because if chronically ill people act on wrong information, then that could cause further harm to come to us

@YKantRachelRead @davidgerard I'm not sure what you are referring to. My point here is that people should consider lived realities of different groups of people. I will not make the judgment for someone if it is beneficial to go a certain route in their search for help.

But assuming that not taking any action is always better than taking risks is just not compatible with every condition. Also, asking AI "what further tests could be done" is usually not an additional risk.

@sophie @davidgerard to be clear, I don't blame CI folks for turning to desperate measures in search of some sort of relief. and also, when the action in question involves receiving incorrect and potentially harmful information, then doing nothing is, absolutely, better than making things actively worse.

@YKantRachelRead @davidgerard I think you are still ignoring the realities for some people here. I didn't want to go into details, but here we go: if you are about to die unless action is taken, then trying something potentially harmful is often not worse.

@sophie @YKantRachelRead @davidgerard but also, if you are *not* about to die, normalizing misinformation sources can cause you to accept them out of desperation, which can likewise lead to bad outcomes, including death.

Arguing from desperation, independent of the efficacy of the thing you are advocating, is irresponsible.

@kevingranade @sophie @YKantRachelRead @davidgerard interestingly enough, though, nobody here is trying to normalise LLM use; really, quite the opposite.

It's a reminder to direct our anger at the right places (such as the omnishambles that is healthcare in so many areas), not at a desperate person failed by everything else.

@zbrown @kevingranade @sophie @davidgerard it's entirely possible to direct my anger at systems of power while still advocating for harm reduction elsewhere