Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions, and delegitimizes important political actions we need to take in order to build a better cyberphysical world.

EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful.

https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/

Acting ethically in an imperfect world

Life is complicated. Regardless of what your beliefs or politics or ethics are, the way that we set up our society and economy will often force you to act against them: You might not want to fly somewhere but your employer will not accept another mode of transportation, you want to eat vegan but are […]

Smashing Frames
@tante It's also annoying tone policing: Sure, you can protest against it, but not like *that*, not in a way that inconveniences *me*.

@tante One thing I'd love to hear more about from you, and @pluralistic, @simon and others is which of these models are doing the best, most interesting things to mitigate harms? Some are being trained with thought about & limits on the inputs, and the public interest in mind in general. To at least some degree. (e.g., Apertus)

Who's reviewing, comparing models with that lens? If someone understands most companies are part of the problem and asks for alternatives, what do you offer them?

@tante At Tuesday night's #Faradayprize presentation from Mike Woodbridge, more than one of the slides about #LLMs brought #Trump to mind

@tante I, for one, enjoyed the write-up.

Like you, I do plenty of things that I either cannot morally or ethically defend or am simply ignorant of the harms of.

For other technology, I am either pressured to use it (by government or similar institutions) or I can at least use it 'for good' (facilitating some other good - in current case, student administration) and also make a living.

For LLMs, I fail to see the redeeming qualities that make the compromise 'worth it'. It's all net negatives.

@tante My point is that LLMs 'try' (and very often fail) to solve the wrong problem.

Writing all the boilerplate -> should be solved by better frameworks.
Spell check -> I think this is already invented.
Summarize long texts -> Executive summaries.
Produce (verbose) text -> Writing in my view is as much thinking as it is writing - skipping the thinking part is counter-productive.

@tofticles @tante Finally someone else says this about frameworks and boilerplate!

(Btw. the Android grammar check wants me to write boilerplateS.)

@[email protected] That's not the only thing where the actions and words of Doctorow do not match.
@Life_is @tante he’s a very efficient grifter. Has been for decades.
@Colman @Life_is @tante I'll always remember him having a right go at some woman working at a coffee stand at the most recent London Worldcon. I intervened because I wanted coffee and I wasn't sure if he was there for the five minute argument or the full half hour, and he flounced off.
@tante Using LLMs for various tasks may be convenient, but it may also put you at a disadvantage because you lose the ability to do those things by yourself.

@sibrosan this reminds me of the Ship of Theseus: every time one outsources an art to a tool, there comes a point where one no longer has the craft of that art.

In this case, the art is grammar checking and the tool is an LLM. By constantly using an LLM for grammar checking, one slowly replaces the craft over time, to the point that nothing of the original art of grammar checking remains in the current version.

@tante

@barefootstache

Note that argument applies just as well to non-LLM grammar checkers…

@tante chef's kiss: "This also shines through in Cory arguing that we need to “liberate” technology. What a strange idea: Technology doesn’t need liberation, people do."

Thank you for writing this cogent piece.

@dingemansemark @tante And this is it! I truly think that even if technology were completely liberated from corporatized capture, no societal thing would improve greatly. It might, say, allow easier community creation, but at the end of the day, people need that liberation a lot more than the technology does.
@tante Dunno where you got the idea that I have a "libertarian" background. I was raised by Trotskyists, am a member of the DSA, am advising and have endorsed Avi Lewis, and joined the UK Greens to back Polanski.
@pluralistic @tante My impression was that Tante meant this specific argument: the way it is structured and the way it functions. I hold both of you in high esteem, and I don't have the impression that he'd somehow characterize anything beyond that argument he discusses.

@herrLorenz @tante

> Cory shows his libertarian leanings here...

> Many people criticizing LLMs come from a somewhat leftist (in contrast to Cory’s libertarian) background.

@herrLorenz @tante

This falls into the "you are entitled to your own opinions, but not your own facts" territory.

@pluralistic @tante I just spoke about my impression, but didn't lay claim to objective truth. I'll keep reading along. ✌️

@pluralistic @herrLorenz @tante that second example goes well into overreach territory, and I can see why you'd not be happy with it.

And/but a big part of libertarian appeal is that it muddies how being "individually free from regulation" can be cast as liberatory. As if individual freedom is all that's needed. "I'm free when there are no regulations" is obviously shallow to lefties, but individual freedom is also a component of why people are lefties; there's real overlap.

@CJPaloma @herrLorenz @tante

There is no virtue in being constrained or regulated per se.

Regulation isn't a good unto itself.

Regulation that is itself good - drawn up for a good purpose, designed to be administrable, and then competently administered - is good.

@pluralistic @herrLorenz @tante Of course! Agreed.

The overlap ends around -when- reasons are "good" enough. Laws about how to treat other people are relatively easy.

But until enough people see rivers on fire, regulations on -doing certain things- aren't imposed, despite many people saying "hey, this isn't good" decades prior.

Not reining in/regulating until after -foreseeable- catastrophes results in all kinds of shit shows (from the MIC, to urban sprawl, to plastics, to tax laws, etc)

@[email protected]

Well, we are not only influenced by our legacy: however strong we are, we can't avoid some fundamental influence from the hegemonic culture we live in.

Yet I see how the ethical misalignment here may not be about libertarian values but about utilitarian ones.

Even more subtly, it might be a misalignment between their respective utility functions, with both #pluralistic and @[email protected] adopting a utilitarian framework instead of a normative one.

For example, Pluralistic's use of a local LLM might be explained by a slightly higher evaluation of the benefits that his own writing brings to society and thus (indirectly) the value the LLM brings, despite its issues.
Otoh, Tante might place much more weight on the political harm that Cory's words did by dismissing a political choice as irrational when it's entirely rational: in a way, by justifying the use of an #LLM, #Doctorow justified (even just a little bit) the industry that built it.

And since Pluralistic's strawman is centered around a normative "purity culture" blamed as irrational, Tante framed his response over rationality.

What if a normative behaviour were in fact totally rational in the presence of irreducible complexity and informational asymmetry?

I don't use LLMs, for so many technical and political reasons that it would take hours to list them. And you both would almost certainly nod to most of them as strictly rational arguments.
Yet the choice itself, bound to the society I want to build for my daughters and children, is normative: based on the values of truth, freedom and communion.

None of these could ever come from the LLM we are talking about: they are weapons designed to fool people (Turing test included!), so there's no way to wield them to benefit people.

As for "purity culture", I'm a catholic #christian, not a puritan: we brag about the #Church being a casta meretrix (Latin for something like "a pure bitch" 🤣), and we preach a man who hung out with the worst sinners and sometimes even hacked the law to save their lives, so... 🤷‍♂️
Bible Gateway passage: Matthew 9:10-13 - New International Version

While Jesus was having dinner at Matthew’s house, many tax collectors and sinners came and ate with him and his disciples. When the Pharisees saw this, they asked his disciples, “Why does your teacher eat with tax collectors and sinners?” On hearing this, Jesus said, “It is not the healthy who need a doctor, but the sick. But go and learn what this means: ‘I desire mercy, not sacrifice.’ For I have not come to call the righteous, but sinners.”

@giacomo We learned he's a slacker. Let's be honest about it and move on to genuine thinkers.
@pluralistic
Fair enough, but that's not the core of the argument
@tante made. He had the same complaint for starters (your argument was heavily drenched in 'you ppl are purists'), but he also makes the valid argument that technology isn't neutral in itself. Open weights based on intellectual theft and forced labor are still a problem. Until we have a discussion on how the weights come to fruition, LLMs are objectively problematic from an ethical view. That has nothing to do with purism.

@tante There must be something in the air; I think this is something that has been on many folks' minds for a while... not criticisms of Cory, but rather the general cognitive difficulty of reconciling survival in a world far less under our control than we thought it was when our character and values accreted and congealed.

It's a nuanced phenomenon especially for a limited-attention-span world. It's good to see people taking up the challenge. Thanks for this.

@tante

That doesn't seem to be the best idea @pluralistic

AI and LLM output is 90% bullshit, and most people don't have the time nor the patience to work out which 10% might actually be useful.

That's completely ignoring the environmental and human impacts of the AI bubble.

Try buying DDR memory, a GPU or an SSD / HDD at the moment.

@simonzerafa @tante

What is the incremental environmental damage created by running an existing LLM locally on your own laptop?

As to "90% bullshit" - as I wrote, the false positive rate for punctuation errors and typos from Ollama/Llama2 is about 50%, which is substantially better than, say, Google Docs' grammar checker.
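[For readers curious what a local setup like the one described above actually involves: a minimal sketch of querying a locally running Ollama server for punctuation checking. The endpoint and `stream` flag follow Ollama's documented HTTP API; the prompt wording and the choice of model name are assumptions, not Cory's actual setup.]

```python
import json
import urllib.request

# Ollama's default local HTTP endpoint for one-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_prompt(text: str) -> str:
    """Wrap the text to review in a narrow instruction, to keep the model on task."""
    return (
        "List any punctuation errors or typos in the following text. "
        "If there are none, reply 'OK'.\n\n" + text
    )

def check_text(text: str, model: str = "llama2") -> str:
    """Send the prompt to a locally running Ollama server and return its reply."""
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(text),
        "stream": False,  # return one JSON object instead of a token stream
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires `ollama serve` running with the model pulled locally.
    print(check_text("Its a nice day, isnt it?"))
```

[Nothing leaves the machine: the request goes to localhost, which is the point of the "local model" argument in this thread.]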

@pluralistic

I am astonished that I have to explain this,

but very simply in words even a small child could understand:

using these products *creates further demand*

- surely you know this?

Well, either you know this and are being facetious, or you are a lot stupider than I ever thought possible for someone with your privilege and resources.

I am absolutely floored at this reveal, just wow, "where's Cory and what have you done with him?" 🤷

Massive loss of respect!

@simonzerafa @tante

@kel @pluralistic @simonzerafa @tante Not only that, but popularizing LLMs while running them all locally is less efficient than running them in the cloud. It's false that it minimizes harm: you are still consuming power, and more of it, since the chip in your computer isn't nearly as efficient as the ones the providers use.

Plus it's all stolen and biased fashware.

@reflex
A big component of the problem of AI data centers is that they concentrate energy usage in one place and require water and active cooling. I don't think that's true for laptop users.
@kel @pluralistic @simonzerafa @tante
@dlakelan @kel @pluralistic @simonzerafa @tante Laptop users are still drawing power from centralized power production facilities with all the same issues, it does not magically go away by being distributed on the consumption end.

@reflex @kel @pluralistic @simonzerafa @tante

Yes, but in Cory's case, he measured the usage, and it was no different from watching a YouTube video, something millions do daily for hours at a time. He ran his grammar checker for minutes per day, and none of the extra problems of density (cooling/water use) were applicable. I don't see power consumption or environmental concerns that are different from just "people individually have computers".

@dlakelan @kel @pluralistic @simonzerafa @tante Yeah, I'm not going to have this debate with you. You can feel free to disagree with individual points if you like, but either address my entire case if you disagree or recognize that people can agree or disagree with individual parts without the argument being invalid.

YouTube videos take a lot more power than text-editor grammar checkers, and are worker-hostile for a wealthy guy who can afford an actual editor.

@dlakelan @reflex @kel @pluralistic @simonzerafa @tante you don't see the difference between running a spellchecker at 2% CPU usage and running a local LLM at 100% GPU for long periods of time?

@stooovie @reflex @kel @pluralistic @simonzerafa @tante

I never said any of that. What I said was there was no measurable difference in power consumption between him running his LLM enabled grammar checker procedure for a few minutes, and him watching a YouTube video for a few minutes.

@dlakelan okay, sorry. I misread that as no difference between a spellchecker and general local LLM.
@dlakelan @stooovie @kel @pluralistic @simonzerafa @tante I mean, I know when I'm normally checking spelling I watch youtube instead, they are totally substitutes for each other and should be compared.

@reflex @kel @pluralistic @simonzerafa @tante

Looking at how server farms are built not for resource efficiency but for space efficiency, I'm not too sure about your point: AI server farms run gasoline backup generators, consume fresh water, and face the technical problems of scale.

My laptop never needed fresh water or gasoline to host a website during its running lifetime.

Not to mention the collective noise pollution:

https://gerrymcgovern.com/data-centers-are-noisy-as-hell/

This is, on the other hand, no defense of LLMs or of the ignorant statements of Cory Doctorow; the continuing theft and unending greed cannot be ignored by running the freeware models locally.


@pluralistic @tante

Of course, I am speaking in generalities.

Encouraging the use of LLMs is counterproductive in so many ways, as I highlighted.

Pop a power meter on that LLM-adorned PC and let us all know what the power usage looks like with and without your chosen LLM running on a typical task 🙂

That's power that is generated somewhere, even if it's with renewable energy.

The main issue with LLMs is that they don't encourage critical thinking, in a world which is already suffering from a massive shortage of it.

@simonzerafa @tante

As I wrote (and it seems you haven't read what I wrote, which is weird, because that seems like a good first step if you're going to criticize my conduct), I'm running Ollama on a laptop that doesn't even have a GPU.

Its power consumption is comparable to, say, watching a YouTube video.

I know this because my laptop is running free software that lets me accurately monitor its activity, and because the model is also free software.
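[For anyone wanting to reproduce a measurement like the one claimed above on Linux: the kernel's Intel RAPL interface exposes a cumulative package-energy counter in microjoules under sysfs. A minimal sketch; the sysfs path is the common default but varies by machine, the counter is Intel-specific, and reading it may require root.]

```python
import time
from pathlib import Path

# Cumulative energy counter (microjoules) for the CPU package on many
# Intel Linux laptops; AMD and other systems expose different paths.
RAPL_COUNTER = Path("/sys/class/powercap/intel-rapl:0/energy_uj")

def average_watts(uj_start: int, uj_end: int, seconds: float) -> float:
    """Convert two microjoule readings into average power draw in watts."""
    return (uj_end - uj_start) / 1e6 / seconds

def sample_power(seconds: float = 5.0) -> float:
    """Read the counter twice, `seconds` apart, and return average watts."""
    start = int(RAPL_COUNTER.read_text())
    time.sleep(seconds)
    end = int(RAPL_COUNTER.read_text())
    return average_watts(start, end, seconds)

if __name__ == "__main__":
    print(f"~{sample_power():.1f} W over 5 s")
```

[Running this once with the LLM idle and once while it's generating gives the with/without comparison a power meter would show, at least for the CPU package.]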

@simonzerafa @tante

Checking for punctuation errors does not discourage critical thinking. It's weird to laud "critical thinking" and also make this claim.

@pluralistic @simonzerafa on this one for example I fully agree with Cory. This is not him having a genAI system write or anything like that.

@tante @pluralistic @simonzerafa I agree in principle with Cory, but I really wish that he had clarified that:

1. Ollama is not an LLM, it's a server for various models, of varying degrees of openness.
2. Open weights is not open source; the model is still a black box. We should support projects like OLMo, which are completely open, down to the training data set and checkpoints.
3. It's quite difficult to "seize that technology" without using Someone Else's Computer to do so (a.k.a clown/cloud)

@tante @pluralistic @simonzerafa But ALSO: using a multi-billion-parameter synthetic text extruding machine to find spelling and syntax errors is a blatant example of "doing everything the least efficient way possible" and that's why we are living on an overheating planet buried under toxic e-waste.

If I think about it harder I could probably come up with a more clever metaphor than killing a mosquito with a flamethrower, but you get the idea.

@dhd6 @tante @simonzerafa

No. It's like killing a mosquito with a bug zapper whose history includes thousands of years of metallurgy, hundreds of years of electrical engineering, and decades of plastics manufacture.

There is literally no contemporary manufactured good that doesn't sit atop a vast mountain of extraneous (to that purpose) labor, energy expenditure and capital.

@pluralistic @tante @simonzerafa As always, yes and no. A bug zapper is designed to zap bugs, it is a simple mechanism that does that one thing, and does it well. An LLM is designed to read text and generate more text.

That we have decided that the best way to do NLP is to use massively overparameterized word predictors that we have trained using RL to respond to prompts, rather than just, like, doing NLP, is just crazy from an engineering standpoint.

Rube Goldberg is spinning in his grave!

@dhd6 @tante @simonzerafa

Remember when Usenet's backbone cabal worried about someone in Congress discovering that the giant, packet-switched research network that had been constructed at enormous public expense was being used for idle chit chat?

The nature of general purpose technologies is that they will be used for lots of purposes.

@pluralistic @tante @simonzerafa indeed, I guess the question is whether the scale of the *ahem* waste, fraud and abuse *ahem* of resources that LLMs seem to imply, even in benign use cases like yours, is out of line with historical precedent or not.

Am I an old man yelling at a cloud?

No, it's the children who are wrong!

@dhd6 @tante @simonzerafa

Rockets were literally perfected in Nazi slave labor camps.

@pluralistic @dhd6 @tante @simonzerafa what a shit take dude. rockets being perfected by nazis, project paperclip, and now a neonazi in charge of one of the largest space tech programs on the planet, along with a bullshit generating LLM.

so yeah, maybe this is all fash tech, and maybe taking a stand of "I'm not touching that shit with a thousand-meter pole" is not "neoliberal purity culture". and ollama of all things? the shit pumped out by fucking Meta? are you shitting me?

@elle @dhd6 @tante @simonzerafa

"You used the wrong open model because I don't like the company that made it" is the actual definition of nonsense purity culture.

@pluralistic @dhd6 @tante @simonzerafa you wrote a book on how much of a shitbag company corpos like Meta are. now you're saying "oh it's not that bad, look it's marginally better than Google Docs spell checker"?! did someone hack your fucking account?

there are legitimately open models that originate from academic institutions and train on open data with full consent. Even those models take tens of thousands of euros to train, well outside the resources available to most open-source enjoyers.

@elle @pluralistic @dhd6 @tante @simonzerafa the "enshittification" has hit the originator. hope you got paid well, now go away Cory.