The amount of shit I'm getting for saying "chronically ill people, who are abandoned by the medical system, should not be condemned for turning to LLMs," while I'm also the one who started banning LLMs from GNOME projects, pretty perfectly proves my point about arbitrary purity tests that are just misanthropy dressed up as pseudo-progressivism.

I am a bit embarrassed that I frequently shared Gerard's posts in the past. I guess those who were skeptical have been proven right.

https://circumstances.run/@davidgerard/116232821650226656

David Gerard (@[email protected])

the person advocating ChatGPT for medical advice was a GNOME developer too i'd watch out for signs of GNOME as the next big FOSS project to fill with slop, there's certainly advocates in there

GSV Sleeper Service

@sophie @davidgerard I'm confused as to why LLMs are being recommended for medical advice, given their, at this point, well-documented propensity for providing incorrect or even harmful answers

in my humble opinion, that's worse than no medical advice at all, because if chronically ill people act on wrong information, then that could cause further harm to come to us

@YKantRachelRead @davidgerard I'm not sure what you are referring to. My point here is that people should consider the lived realities of different groups of people. I will not make the judgment for someone whether it is beneficial to go a certain route in their search for help.

But assuming that taking no action is always better than taking risks just isn't compatible with every condition. Also, asking AI "what further tests could be done" is usually not an additional risk.

@sophie @davidgerard to be clear, I don't blame CI folks for turning to desperate measures in search of some sort of relief. and also, when the action in question involves receiving potentially harmful misinformation, then doing nothing is, absolutely, better than making things actively worse.
@YKantRachelRead @davidgerard I think you are still ignoring the realities for some people here. I didn't want to go into details but here we go: If you are about to die if no action is taken, then trying something potentially harmful is often not worse.

@sophie @YKantRachelRead @davidgerard but also, if you are *not* about to die, normalizing misinformation sources can cause you to accept them out of desperation, which can also lead to bad outcomes, including death.

Arguing from desperation, independent of the efficacy of the thing you are advocating, is irresponsible.

@kevingranade @sophie @YKantRachelRead @davidgerard interestingly enough though, nobody here is trying to normalise LLM use, really quite the opposite.

It's a reminder to direct our anger at the right places (such as the omnishambles that is healthcare in so many areas) not a desperate person failed by everything else.

@zbrown @kevingranade @sophie @davidgerard it's entirely possible to direct my anger at systems of power while still advocating for harm reduction in other areas
@zbrown @sophie @YKantRachelRead @davidgerard characterizing "LLMs are not fit for any purpose and you shouldn't use them even if, ESPECIALLY if, you are desperate" as "condemnation" is a bad faith opening to the discussion that I chose to ignore.
This line of argument says what, that it's OK to remind people that it doesn't work? No: instead, they're using loaded terms to shut down discussion.
That is support for LLMs; if you don't see that, you need to look closer and reconsider.
@kevingranade @sophie @YKantRachelRead @davidgerard I'm so sorry that I'm not the person you think you are arguing with, have fun out there!
@zbrown @sophie @YKantRachelRead @davidgerard you just replied to me to say, "no one is normalizing LLM use" yes? I'm disagreeing with that assertion.
@kevingranade @zbrown @sophie @YKantRachelRead @[email protected] Please take a couple steps back and re-read what Sophie said.

@tragivictoria @zbrown @sophie @YKantRachelRead ok sure, I just re-read it. She still characterizes seeking out medical misinformation from the dedicated misinformation machine as, "taking risks". That's not advocating for desperate disabled people, that's throwing them under a bus.

Serious question, would you agree with that statement if it were a specific piece of misinformation like taking ivermectin for covid as opposed to a well-known *source* of misinformation like chatgpt?

@kevingranade @zbrown @sophie @YKantRachelRead I'm not sure what else to say. All Sophie said was: attacking people in deep need for using ChatGPT when they've exhausted all other options is not OK. That's it. I have no idea what else to say. She didn't say "Oh yeah, just use ChatGPT, it's good" or "ChatGPT is basically as good as doctors" or "ChatGPT is the best".

Serious question, would you agree with that statement if it were a specific piece of misinformation like taking ivermectin for covid as opposed to a well-known source of misinformation like chatgpt?

Except COVID is a pretty well-known thing, and any doctor is gonna be fine here. It's not what she was talking about.

@tragivictoria @kevingranade @zbrown @sophie COVID is not in any way a "pretty well known thing." you'd be surprised at how many medical professionals I've run into who still, for example, subscribe to the droplet theory of transmission, or who think it's "just a cold now."

@tragivictoria @sophie @YKantRachelRead

You need to look at what the alternative to "attacking" is and what "attacking" actually means here. The alternative is "don't say not to use it", attacking is simply saying "don't use it, it will kill you". "disabled people should use ChatGPT if they're desperate for life-saving information" is about as helpful as "disabled people should gamble if they're desperate for life-saving money".

In both cases, what is offered is a lie.

@tragivictoria @sophie @YKantRachelRead This is about the dozenth example of "stop telling people not to use LLMs, their use case is SPECIAL" characterized as "attacking" I've seen. Well meaning or not, this is pro-LLM propaganda.

@kevingranade @sophie @YKantRachelRead

It's not "their use case is special", but that there is no actual alternative in their case. How hard is it to understand?

@kevingranade @sophie @YKantRachelRead ffs, "pro-LLM propaganda" coming from literally the most anti-LLM people you can imagine

@tragivictoria @sophie @YKantRachelRead no it's pretty trivial to find people that say "LLMs are not fit for ANY purpose", and that's obviously more anti-LLM than "it's ok for disabled people in particular to use LLMs for medical advice".

There has been a LOT of pro-LLM propaganda coming from "LLM critics" for years now, it's a whole thing.

@tragivictoria @kevingranade @sophie let me try approaching this discussion from a different angle.

I'm assuming you're aware of the Politician's Syllogism, yes? if not, it goes something like:

"something must be done. this is something, therefore it must be done."

to restate that in terms of the argument being made here, it seems like you and OP are trying to say something along the lines of:

"chronically ill people need treatment. LLMs offer treatment of a sort, therefore disabled people need LLMs."

when, like, the point that I've been trying to make here is that LLMs are not reliable producers of information. there has been so much written about how what LLMs really provide people is what they want to hear, stated in homogenized, pleasant, authoritative voices to make the result sound convincing.

LLMs don't reliably provide correct answers, therefore using LLMs for medical advice is worse than doing nothing due to the potential for harm inherent in their use.

does that make sense?

@kevingranade @sophie @YKantRachelRead

No, the alternative is to actually offer help instead of being a smartass. I didn't mean to reply, but you really struck a nerve here.

You should work on your reading comprehension, because you're either ignoring it or just can't understand anything that's being said to you.

@tragivictoria @sophie @YKantRachelRead please answer the simple question, everything hinges on this.
"Are LLMs reliable sources of medical information?"

If your answer is yes, you're mistaken.
If your answer is no, you're advocating for "don't discourage people from consuming medical misinformation".

It's really that simple. "no information" is actually better than "misinformation".

@sophie purity culture is harmful wherever it arises, I think. I've seen it pop up in queer or socialist groups I was part of, and it always became a self-destructive tailspin
@sophie David has weird axes to grind with GNOME, and he orbits around a whole group of folks (mainly sysadmins or former sysadmins) who hate everything invented after they got the job, and they all coalesced around the anti-LLM sentiment because it validates their retro-fetish. It's incredibly easy for these folks to behave like shell scripts, and error out at the first thing they read that they don't like, instead of figuring out the nuances.
@ebassi @sophie I love this one: "behave like shell scripts, and error out at the first thing they read that they don't like"
@sophie Thank you very much for all the work you do for GNOME, and how much you care about others. We don't say this enough. I think your work is unmatched and underappreciated

@pabloyoyoista Thank you. That's very kind.

I'm getting some love, especially for Pika Backup, once in a while. Creating a welcoming atmosphere with users does pay off in my experience :)