The amount of shit I'm getting for saying "chronically ill people, who are abandoned by the medical system, should not be condemned for turning to LLMs," while I'm also the one who started banning LLMs from GNOME projects, pretty perfectly proves my point about arbitrary purity tests that are just misanthropy covered in pseudo-progressivism.

I am a bit embarrassed that I frequently shared Gerard's posts in the past. I guess those who were skeptical have been proven right.

https://circumstances.run/@davidgerard/116232821650226656

David Gerard (@[email protected])

the person advocating ChatGPT for medical advice was a GNOME developer too i'd watch out for signs of GNOME as the next big FOSS project to fill with slop, there's certainly advocates in there

GSV Sleeper Service

@sophie @davidgerard I'm confused as to why LLMs are being recommended for medical advice, given their, at this point, well-documented propensity for providing incorrect or even harmful answers

in my humble opinion, that's worse than no medical advice at all, because if chronically ill people act on wrong information, then that could cause further harm to come to us

@YKantRachelRead @davidgerard I'm not sure what you are referring to. My point here is that people should consider the lived realities of different groups of people. I will not make the judgment for someone else about whether it is beneficial to go a certain route in their search for help.

But assuming that not taking any action is always better than taking risks is just not compatible with every condition. Also, asking AI 'what further tests could be done' is usually not an additional risk.

@sophie @davidgerard to be clear, I don't blame CI folks for turning to desperate measures in search of some sort of relief. and also, when the action in question involves receiving incorrect and potentially harmful misinformation, then doing nothing is, absolutely, better than making things actively worse.

@YKantRachelRead @davidgerard I think you are still ignoring the realities for some people here. I didn't want to go into details, but here we go: if you are about to die if no action is taken, then trying something potentially harmful is often not worse.

@sophie @YKantRachelRead @davidgerard but also, if you are *not* about to die, normalizing misinformation sources causes you to accept them out of desperation, which can also lead to bad outcomes, including death.

An argument from desperation, made independently of the efficacy of the thing you are advocating, is irresponsible.

@kevingranade @sophie @YKantRachelRead @davidgerard interestingly enough though, nobody here is trying to normalise LLM use, really quite the opposite.

It's a reminder to direct our anger at the right places (such as the omnishambles that is healthcare in so many areas) not a desperate person failed by everything else.

@zbrown @sophie @YKantRachelRead @davidgerard characterizing "LLMs are not fit for any purpose and you shouldn't use them even if, ESPECIALLY if, you are desperate" as "condemnation" is a bad faith opening to the discussion that I chose to ignore.
This line of argument says what, that it is ok to remind people that it doesn't work? No, instead they're using loaded terms to shut down discussion.
That is support for LLMs, if you don't see that you need to look closer and reconsider.

@kevingranade @zbrown @sophie @YKantRachelRead @[email protected] Please take a couple steps back and re-read what Sophie said.

@tragivictoria @zbrown @sophie @YKantRachelRead ok sure, I just re-read it. She still characterizes seeking out medical misinformation from the dedicated misinformation machine as "taking risks". That's not advocating for desperate disabled people, that's throwing them under a bus.

Serious question: would you agree with that statement if it were a specific piece of misinformation, like taking ivermectin for covid, as opposed to a well-known *source* of misinformation like chatgpt?

@kevingranade @zbrown @sophie @YKantRachelRead I'm not sure what else to say. All Sophie said was: attacking people in deep need for using ChatGPT when they've exhausted all other options is not OK. That's it. I have no idea what else to say. She didn't say „Oh yeah just use chatgpt it's good” or „chatgpt is basically as good as doctors” nor „chatgpt is the best”.

Serious question: would you agree with that statement if it were a specific piece of misinformation, like taking ivermectin for covid, as opposed to a well-known source of misinformation like chatgpt?

Except covid is a pretty well-known thing and any doctor is gonna be fine here. It's not what she was talking about.

@tragivictoria @kevingranade @zbrown @sophie COVID is not in any way a "pretty well-known thing." You'd be surprised at how many medical professionals I've run into who still, for example, subscribe to the droplet theory of transmission, or who think it's "just a cold now."