No, opposing LLMs isn't "purity culture." I've seen this now from quite a few different people, and I disagree vehemently. It is good, actually, to have moral principles and hold to them, even when people with more money than you find said principles annoying.
@xgranade it depends so much, I mean I can oppose screwdrivers being used to drive nails into the wall

@codinghorror Sure, but we're not talking about "which tool is best for driving a nail that I own into a wall that I own." We're talking about "is it ethical to use a technology built on fascist ideology and stolen work, one that carries unconscionable environmental costs and is used to disrupt labor movements, to perform a task it's fundamentally unsuited to?"

It's quite fair to have a very firm "no" by way of answer to the second question.

@codinghorror Anyway, this isn't the first time you've replied to me to make the argument that LLMs are just another kind of tool. I suspect we won't see eye-to-eye on that, especially as my work has been abused to make LLM products.

I hope we can agree though, that my objection *even though you disagree with it* is principled and neither knee jerk nor purity culture.

@xgranade
Jeff Atwood is part of the ownership class, despite his attempts to appear "uncomfortable" with it, or to give a part of it away and look like one of the "good ones".

His wealth is ultimately the product of stolen labor, and the fascist systems you're critiquing are by and large systems he benefits from, so there really isn't a way to reconcile here.

@xgranade LLMs told me something critical about my health that no healthcare professional -- and I have a whole team working on me, because I'm bonkers -- ever did. If you want to ask, ask, I can provide very detailed citations and proof.
@codinghorror I'm not a doctor (well, not that *kind* of doctor, anyway), so I'll absolutely admit that I'm not the right person to evaluate those citations. I'll say that from a pretty damned nontrivial degree of expertise with machine learning, I would find it extremely surprising if *on average* text recombination without any underlying semantic model yielded useful advice more commonly than outright dangerous advice.
@codinghorror Like, nothing about LLMs and the theory behind them prevents anyone from getting lucky — and I'm glad that you got lucky instead of the much more common and probable case. But that doesn't mean that they're anything other than outright terrifyingly dangerous in a medical context more generally.
@xgranade people should absolutely be taught all the pros and cons, but I really dislike absolutism and zealotry.. it's not useful, it's not practical, it accomplishes nothing (except in the very narrow cases of civil rights or human dignity). If I wanted more ones and zeroes, I'd own more computers..
@xgranade and as I've said before, if you want to be angry, be angry at cryptocurrency which is gambling, grifters, and human trafficking to the bone. It's horrendous.
@codinghorror @xgranade The push for LLM inevitability is all the same people as cryptocurrency. That should tell you something about LLMs. It certainly tells me something.
@eschaton @xgranade not true, as I (for one, and I'm not alone) am a data point disproving this.
@eschaton @xgranade and I wouldn't say "inevitable" just "this tool has practical uses". Remember that I really, really dislike ALL software by default. All of it. I'm surprised when I don't.
@codinghorror @xgranade See? You’re not a data point against the people pushing LLM inevitability, you’re taking a more measured approach than they are. ;) Those people are doing absolutely *insane* things like tracking LLM usage metrics and saying that’ll be factored into performance reviews. (Like is happening at Microsoft with Copilot.)
@eschaton @xgranade well, I am totally with y'all on that 🤗

@codinghorror @eschaton Hey, don't put words in my mouth, I'm not part of that "y'all." I do not agree that doing propaganda work for some of the worst people on the planet, whether intentionally or not, counts as "measured."

But that's what you're doing right now by arguing in favor of LLMs.

@codinghorror @eschaton Given how messy this exchange has gotten, let me pull back slightly. I made a claim, that opposition to LLMs is not an example of "purity culture."

You, despite my explicit ask not to, came into my replies to make a separate but related claim: namely, that LLMs are sometimes useful, and implicitly that that utility is sufficiently great as to justify their ethical problems.

@codinghorror @eschaton While I explicitly said I wouldn't get into that second point, as the Discourse™ has gotten *incredibly* tedious by now, fine. You seem to insist on having that discussion out in my replies anyway.

To that end, I laid out several reasons that I find the claim that LLMs are "just a tool" odious: their eugenicist origins, the fascist way they're funded and developed, that they attack and undermine labor, that they impose extreme environmental costs, and that they don't work.

@codinghorror @eschaton You've been very clear that you disagree with that latter point, and also that you expect I will find your disagreement compelling. I don't. It's an extraordinary claim that spicy autocomplete would produce the results ascribed to it, and that claim requires correspondingly extraordinary evidence. Anecdotes are a form of evidence, but without an understanding of the selection bias that goes into their collection, they are not on their own sufficient to support extraordinary claims.

@codinghorror @eschaton But fine: you disagree, as you've said earlier. I think you are very wrong on that, but I don't think either of us is budging on that right now.

Do you refute or disagree with the other points? Do you believe that there is some degree to which LLMs could, if they worked well enough, justify their usage given those problems?

@codinghorror @eschaton To be clear, I don't think you owe me any answers. I'm just one woman who's been doing this shit for decades, and who knows what the fuck she's talking about, but whatever.

It's that you made the claim *to me*, and have used that claim to argue that opposition to LLMs is pseudoreligious "zealotry." But you haven't addressed any of the substance of the opposition beyond putting forward one anecdote whose veracity I can't personally evaluate.

@codinghorror @eschaton So far, the justification you've given for the "zealotry" comment has been almost entirely about the *shape* of the claims I made, almost without any reference to the *substance*.

This strikes me as a very strange way to approach other human beings and moral decisions in general.

Is there any strong claim that you would consider to not be "zealotry," or any degree to which a claim could be evidenced such that it would not be "zealotry" to you?

@xgranade @eschaton yes, I do.. jpeg for words has a lot of solid use cases but it is indeed lossy and has pros and cons.. there is nuance

@codinghorror @eschaton Perhaps that's the root of our impasse, then. I fairly firmly believe that if something does that much harm to the environment, to labor movements, and to victims of fascism, it cannot be justified by appeals to its efficacy alone.

I suspect that if we cannot agree on basic moral precepts like "don't help fascists get rich" and "don't be a scab," there's probably not much hope for a favorable resolution.

@xgranade @codinghorror @eschaton After a few in-person conversations with folks who have gone “all-in” on LLMs, I think there is another effect at play. There appears to be a severe dopamine addiction happening. At least one friend of mine shows signs of it. And has said so. Likens it to social media addiction.

It is very hard to have rational arguments with addicts. So don’t worry if reason doesn’t always work. It isn’t your fault.

One friend has also mentioned how they are likely being managed out because of mild dyslexia. They can’t keep up with the sheer volume of text and code generated by the LLM and their colleagues who aren’t as neurodivergent can. It is very interesting to see this happening. They were a FANTASTIC programmer. And now are being put out to pasture very early.

It makes me very sad.

#NoGenAI

@codinghorror @xgranade we can be angry at multiple things

@codinghorror @xgranade All of the "zealotry", including sea lioning 👆, is from the people who want to force us to give their precious slop machines a fair chance.

Wanting to be left alone by that shit, not to have people submitting PRs and bug reports with fraudulent provenance to our projects, wanting not to have our time wasted reading slop nobody actually wrote, wanting not to have our servers hammered by gigabits per second of scraper hits, etc. isn't called "zealotry". It's called boundaries. Something tech bro culture refuses to understand.

@codinghorror @xgranade Among the folks who came out of that culture, you're one of the few I largely respect and consider decent. But you really need to realize sometime that the whole culture was rotten to the core in matters of consent and boundaries. And overall folks here on the fedi are done with that shit. We're not having it.
@codinghorror You seem to think that a strong position is necessarily one reached without reason or rationality? If I'm incorrect in understanding your position, please let me know, but it seems like you're conflating my having a strong view — one that I have repeatedly explained and justified on my feed — with a pseudoreligious "zealotry."

@codinghorror But that conflation doesn't hold in other cases. To do the physicist-coded thing of looking at the extremes to understand the bulk (I'm not that kind of doctor, but I do have a PhD in physics, it comes up in my thinking sometimes), would you similarly say that a position like "no one should ever be a Nazi or do Nazi-like things" is one of zealotry?

The truth isn't always in the middle, and assuming that it is gives bad-faith actors immense power to unduly shift narratives.

@codinghorror Regardless, though, I think you've badly missed the point of my thread. I'm not looking to convince you on LLMs; you've convinced me you have enough of a vested interest in their success that I recognize that's a fruitless endeavor.

But you jumped in my replies, on a thread that didn't mention or refer to you, a thread about what goes wrong with "purity culture" rhetoric, to make the only marginally related argument that a strong opposition to LLMs is necessarily one of zealotry.

@codinghorror To get to my original point, then, if you believe as I do that it is bad to use tools developed under eugenicist philosophies, that predominantly profit and fund fascists, that carry inordinate environmental costs, that are based on stolen labor, that act as automated scabs, and that don't work, then an opposition to those same tools is a moral position and not one of "purity culture."
@codinghorror I've made my arguments for each of those many times; they're beside the point here. But critically, none of the above requires me to be correct in my beliefs — only that I reached them rationally, if perhaps based on incomplete or flawed data. In which case, make that argument (not to me, as noted above)! But it's intellectually dishonest to say that that opposition is "purity culture."
@xgranade email me if you want to know. I have a rare set of DNA in some cases, as it turns out.
@xgranade fair; I want to be alive, see earlier response.