Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions, and delegitimizes important political actions we need to take in order to build a better cyberphysical world.

EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful

https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/

Acting ethically in an imperfect world

Life is complicated. Regardless of what your beliefs or politics or ethics are, the way that we set up our society and economy will often force you to act against them: You might not want to fly somewhere but your employer will not accept another mode of transportation, you want to eat vegan but are […]

Smashing Frames

@tante

That doesn't seem to be the best idea @pluralistic

AI and LLM output is 90% bullshit, and most people don't have the time nor the patience to work out which 10% might actually be useful.

That's completely ignoring the environmental and human impacts of the AI bubble.

Try buying DDR memory, a GPU or an SSD / HDD at the moment.

@simonzerafa @tante

What is the incremental environmental damage created by running an existing LLM locally on your own laptop?

As to "90% bullshit" - as I wrote, the false positive rate for punctuation errors and typos from Ollama/Llama2 is about 50%, which is substantially better than, say, Google Docs' grammar checker.

@pluralistic

I am astonished that I have to explain this,

but very simply in words even a small child could understand:

using these products *creates further demand*

- surely you know this?

Well, either you know this and are being facetious, or you are a lot stupider than I ever thought possible for someone with your privilege and resources.

I am absolutely floored at this reveal, just wow, "where's Cory and what have you done with him?" 🤷

Massive loss of respect!

@simonzerafa @tante

@kel @pluralistic @simonzerafa @tante Not only that, but popularizing LLMs while running them all locally is less efficient than running them in the cloud. It doesn't minimize harm when you are still consuming power; in fact you consume more of it, since the chip in your computer isn't nearly as efficient as the ones the providers use.

Plus it's all stolen and biased fashware.

@reflex
A big component of the problem with AI data centers is that they concentrate energy usage in one place and require water and active cooling. I don't think that's true for laptop users.
@kel @pluralistic @simonzerafa @tante
@dlakelan @kel @pluralistic @simonzerafa @tante Laptop users are still drawing power from centralized power production facilities with all the same issues, it does not magically go away by being distributed on the consumption end.

@reflex @kel @pluralistic @simonzerafa @tante

Yes, but in Cory's case, he measured the usage, and it was no different from watching a YouTube video, something millions do daily for hours at a time. He ran his grammar checker for minutes per day, and none of the extra problems of density (cooling/water use) were applicable. I don't see power consumption or environmental concerns that are different from "people individually have computers".

@dlakelan @kel @pluralistic @simonzerafa @tante Yeah, I'm not going to have this debate with you. You can feel free to disagree with individual points if you like, but either address my entire case if you disagree or recognize that people can agree or disagree with individual parts without the argument being invalid.

YouTube videos take a lot more power than text-editor grammar checkers, and those checkers are worker-hostile for a wealthy guy who could afford an actual editor.

@dlakelan @reflex @kel @pluralistic @simonzerafa @tante you don't see the difference between running a spellchecker at 2% CPU usage and running a local LLM at 100% GPU for long periods of time?

@stooovie @reflex @kel @pluralistic @simonzerafa @tante

I never said any of that. What I said was there was no measurable difference in power consumption between him running his LLM enabled grammar checker procedure for a few minutes, and him watching a YouTube video for a few minutes.

@dlakelan okay, sorry. I misread that as no difference between a spellchecker and general local LLM.
@dlakelan @stooovie @kel @pluralistic @simonzerafa @tante I mean, I know when I'm normally checking spelling I watch youtube instead, they are totally substitutes for each other and should be compared.

@reflex @kel @pluralistic @simonzerafa @tante

Looking at how server farms are built not for resource efficiency but space efficiency, I'm not too sure about your point: AI server farms run gasoline backup generators, consume fresh water, and face the technical problems of scale.

My laptop never needed fresh water or gasoline to host a website during its running lifetime.

Not to mention the collective noise pollution:

https://gerrymcgovern.com/data-centers-are-noisy-as-hell/

This is, on the other hand, no defense of LLMs or of Cory Doctorow's ignorant statements; the continuing theft and unending greed cannot be ignored by running the freeware models locally.

Data centers are noisy as hell

Gerry McGovern

@pluralistic @tante

Of course, I am speaking in generalities.

Encouraging the use of LLMs is counterproductive in so many ways, as I highlighted.

Pop a power meter on that LLM adorned PC and let us all know what the power usage looks like with and without your chosen LLM running on a typical task 🙂

That's power that's generated somewhere, even if it's with renewable energy.

The main issue with LLMs is that they don't encourage critical thinking, in a world which is already suffering from a massive shortage of it.

@simonzerafa @tante

As I wrote (and it seems you haven't read what I wrote, which is weird, because that seems like a good first step if you're going to criticize my conduct), I'm running Ollama on a laptop that doesn't even have a GPU.

Its power consumption is comparable to, say, watching a Youtube video.

I know this because my laptop is running free software that lets me accurately monitor its activity, and because the model is also free software.

@simonzerafa @tante

Checking for punctuation errors does not discourage critical thinking. It's weird to laud "critical thinking" and also make this claim.

@pluralistic @simonzerafa on this one for example I fully agree with Cory. This is not him having a genAI system write or anything like that.

@tante @pluralistic @simonzerafa I agree in principle with Cory, but I really wish that he had clarified that:

1. Ollama is not an LLM, it's a server for various models, of varying degrees of openness.
2. Open weights is not open source, the model is still a black box. We should support projects like OLMO, which are completely open, down to the training data set and checkpoints.
3. It's quite difficult to "seize that technology" without using Someone Else's Computer to do so (a.k.a clown/cloud)

@tante @pluralistic @simonzerafa But ALSO: using a multi-billion-parameter synthetic text extruding machine to find spelling and syntax errors is a blatant example of "doing everything the least efficient way possible" and that's why we are living on an overheating planet buried under toxic e-waste.

If I think about it harder I could probably come up with a more clever metaphor than killing a mosquito with a flamethrower, but you get the idea.

@dhd6 @tante @simonzerafa

No. It's like killing a mosquito with a bug zapper whose history includes thousands of years of metallurgy, hundreds of years of electrical engineering, and decades of plastics manufacture.

There is literally no contemporary manufactured good that doesn't sit atop a vast mountain of extraneous (to that purpose) labor, energy expenditure and capital.

@pluralistic @tante @simonzerafa As always, yes and no. A bug zapper is designed to zap bugs, it is a simple mechanism that does that one thing, and does it well. An LLM is designed to read text and generate more text.

That we have decided that the best way to do NLP is to use massively overparameterized word predictors that we have trained using RL to respond to prompts, rather than just, like, doing NLP, is just crazy from an engineering standpoint.

Rube Goldberg is spinning in his grave!

@dhd6 @tante @simonzerafa

Remember when Usenet's backbone cabal worried about someone in Congress discovering that the giant, packet-switched research network that had been constructed at enormous public expense was being used for idle chit chat?

The nature of general purpose technologies is that they will be used for lots of purposes.

@pluralistic @tante @simonzerafa indeed, I guess the question is whether the scale of the *ahem* waste, fraud and abuse *ahem* of resources that LLMs seem to imply, even in benign use cases like yours, is out of line with historical precedent or not.

Am I an old man yelling at a cloud?

No, it's the children who are wrong!

@dhd6 @tante @simonzerafa

Rockets were literally perfected in Nazi slave labor camps.

@pluralistic @dhd6 @tante @simonzerafa what a shit take dude. rockets being perfected by nazis, project paperclip, and now a neonazi in charge of one of the largest space tech programs on the planet, along with a bullshit generating LLM.

so yeah, maybe this is all fash tech, and maybe taking a stand of "I'm not touching that shit with a thousand-meter pole" is not "neoliberal purity culture". and ollama of all things? the shit pumped out by fucking Meta? are you shitting me?

@elle @dhd6 @tante @simonzerafa

"You used the wrong open model because I don't like the company that made it" is the actual definition of nonsense purity culture.

@pluralistic @dhd6 @tante @simonzerafa you wrote a book on how much of a shitbag company corpos like Meta are. now you're saying "oh it's not that bad, look it's marginally better than Google Docs spell checker"?! did someone hack your fucking account?

there are legitimately open models that originate from academic institutions, train on open data with full consent. even those models take tens-of-thousands of euros to train. well outside the resources available to most open-source enjoyers

@elle @pluralistic @dhd6 @tante @simonzerafa the "enshittification" has hit the originator. hope you got paid well, now go away Cory.

@pluralistic @elle @dhd6 @tante @simonzerafa I beg to differ. Demand is a powerful and legit tool of people responding to corporate behavior. Choosing a different product because you dislike a maker's conduct is nought but the invisible hand of the market slapping that maker for their conduct. Provenience does matter.

Smearing choice of provenience as willful purism would be the perfect argument for any company to disregard social or ethical standards, constituting a right to demand the 'best' offer. That would take us into the field of classic objectivism, and be in itself as willful and naive as the purism it accuses consumers of.

@pluralistic @dhd6 @tante @simonzerafa Good grief, these ad hoc rationalizations are absurd and you know it.

FYI, rockets are enormously environmentally destructive (fuel, pollution, noise, etc.). The planet would be better off with as few rockets launching as possible.

Saying an LLM is OK because some completely other "good" technology was invented by evil people is a *non argument*.

@jaredwhite @dhd6 @tante @simonzerafa You're right, that would be a silly thing to say.

Good thing I didn't say it.

@dhd6 @tante @pluralistic @simonzerafa

Isn't this just purity testing? Aka liberal aestheticism masquerading as praxis?

The planet is hot because capitalism is a malformed cancer that can't stop growing until it kills itself and everything in its environment, not because a writer used an LLM. Therefore we need environmental change to make capitalism maladaptive, see: ice age, mammals.

The environment is society, not one guy. Purity testing is the opposite of focusing on social change

@dhd6 @tante @pluralistic @simonzerafa IMHO this is already going down the wrong path.

If you follow anything I write or boost, you'll quickly note that I'm very vocal against AI. But that is a shorthand; my actual position is that I'm fine with the *tech*, strongly dislike the *waste* (where applicable), but my actual complaint is that the AI bubble is literally a fascist project.

Outside of FOMO, every reason people use or promote AI based things in this bubble is designed to...

@dhd6 @tante @pluralistic @simonzerafa ... disenfranchise people, by partially replacing them with a machine that imitates their work. And unlike people, machines can be owned.

Their output functions like a natural resource (except it's not natural), and there is insurmountable historic precedent that this promotes tyrannies. The TL;DR of it being that when you can mine natural resources, you are less reliant on a fed, educated, healthy, mobile population - so public spending becomes a waste.

@dhd6 @tante @pluralistic @simonzerafa The problem isn't ingesting text from the web. The problem isn't using this to generate new text, or spell check existing text.

The problem is that capitalist logic demands that this be used to move "value" from the general population to the property that oligarchs own. Marx would have started talking about labour here.

That this promotes fascism is certainly the effect, and when you look at those who stand to win, probably also the reason.

@dhd6 @tante @simonzerafa So, yes, whether something is or isn't open plays into that, and I get the complaint.

But at the same time, it's a distraction.

The general position @pluralistic holds in the blog post is very much in line with distinguishing between the tech and the bubble.

Personally, I feel like responding to that with "yeah, but it's not good enough" is a very good example of the kind of Leftist purity culture that is so, so effective at hindering collaboration.

@simonzerafa @pluralistic @tante

Pop a power meter on that LLM adorned PC and let us all know what the power usage looks like with and without your chosen LLM running on a typical task

challenge accepted! :D

my laptop draws about 6 W when idling, and 25 W when playing games or running inference

I'd attribute the difference, about 19 W, to inference

my 900 W microwave uses about 15 Wh per minute

so microwaving a frozen burrito for two and a half minutes (~37.5 Wh) is equivalent to about two hours of inference (or games) on my laptop (~38 Wh)

also, that burrito was frozen. refrigerator wattage varies widely, but an average running draw of 150 W is nominal

at 150 W the freezer draws almost 8x more power than the laptop's inference overhead, and the freezer runs 24/7/365!

most of my inference tasks complete in about 30 seconds, or about 0.16 Wh per inference job. that's almost 950 inference jobs (assuming a 30 s average) for one hour of refrigerator running
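The arithmetic above can be checked in a few lines, keeping power (watts) and energy (watt-hours) distinct. All figures are the poster's own estimates; the 150 W refrigerator draw in particular is an assumption.

```python
# Laptop power draw (poster's measurements, in watts).
IDLE_W = 6
LOAD_W = 25
EXTRA_W = LOAD_W - IDLE_W        # extra draw attributable to inference: 19 W

# Microwave: 900 W for 2.5 minutes, converted to watt-hours.
burrito_wh = 900 * 2.5 / 60      # ~37.5 Wh
inference_2h_wh = EXTRA_W * 2    # 2 hours of inference overhead: 38 Wh

# Refrigerator: assumed 150 W average running draw.
FRIDGE_W = 150
job_wh = EXTRA_W * 30 / 3600     # one 30-second inference job: ~0.16 Wh
jobs_per_fridge_hour = FRIDGE_W / job_wh   # jobs per fridge-hour of energy

print(f"burrito: {burrito_wh:.1f} Wh, 2h inference: {inference_2h_wh} Wh")
print(f"per job: {job_wh:.3f} Wh, jobs per fridge-hour: {jobs_per_fridge_hour:.0f}")
```

So one microwaved burrito roughly equals two hours of inference, and an hour of fridge running covers on the order of 950 short inference jobs, consistent with the claims above.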

@memoria @simonzerafa @pluralistic @tante I don't think the majority of people use a computer for LLM operations that can be powered by a small fits-in-a-window solar panel.
@memoria Not mentioning everyone here. I am curious what kind of use cases of useful inferences you can do on your home machine and with which models ?

(I have a M1 Studio Ultra on my desk but only tried local inference long ago)

@santi

sure, i use inference in a few ways:

  • karakeep - tagging bookmarks, semantic search
  • immich - face detection, semantic search
  • paperless-gpt - document titles, tags, and OCR
  • libretranslate - language translations
  • Speakr - voice to text transcription, tagging, summaries, semantic search
  • audiomuse - sonic analysis on my music collection to generate sonically similar playlists and track queues

as for LLM models:

i really like the IBM granite4 models, specifically the 7B hybrid model (granite4:7v-a1b-h). It's hands down the best text-only model for its CPU and memory (4.2GiB) requirements.

Gemma3:4b is an all-around good model for its size, and can output text from text and image inputs. it's a pure transformer model so it's heavier to run than hybrid models, and 4B models do tend to go off the rails faster and more frequently.

qwen2.5vl:3b is the best image to text model i can run on my system. qwen3vl:4b is significantly better, but i can't reasonably run it

with an M1 ultra you could probably run the largest of these models and have it complete inference instantly 

@memoria Thanks ! I’ll definitely play with this again one of these days !
@simonzerafa @pluralistic @tante > The main issue with LLM's is that they don't encourage critical thinking

And studies even suggest that it does the opposite and discourages critical thinking. I wonder who benefits from that  (it's politicians, it's always politicians, especially right wing ones)
@pluralistic @simonzerafa @tante
But Google Docs anything is rubbish.

@raymaccarthy @simonzerafa @tante

I see. And do you have moral opinions about whether people should use Google Docs? Do you seek out strangers to tell them that it's dangerous to use Google Docs?

@pluralistic @raymaccarthy @simonzerafa @tante I do. It's proprietary SaaSS and should be avoided.

@lispi314 @simonzerafa @pluralistic @tante
Pros: Handy for real-time collaborative editing of something that will be made public.

Cons:
  • Like the 1960s: a terminal to someone else's computer. A browser is more overhead than an editor.
  • Google can deny access without warning.
  • Google used to scan for advertising/profile purposes, now also for LLM AI.
  • Don't do formatting; export is poor.
  • Needs Internet.
  • On ChromeOS, for text only Jota Android is better, or install Crostini & have any Linux editor / WP.

@pluralistic @simonzerafa @tante
"What is the incremental environmental damage created by running an existing LLM locally on your own laptop?"

I dunno. But how about a couple of million people?

The person who coined the term 'enshittification' defends LLMs. Just...wow. We truly are fucked.

Let's all do what Cory does!
☠️
Meanwhile:
https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/?gad_source=1&gad_campaignid=20737314952&gbraid=0AAAAADgO_miNIDzn-BdCIXzZ6r87g94-L&gclid=Cj0KCQiA49XMBhDRARIsAOOKJHbvIzPACe0EdEyWK86TnS7rNlnUaePKc5y22qT0ZsfqUeGDe72zzc0aAhFFEALw_wcB
#doomed #ClimateChange

We did the math on AI’s energy footprint. Here’s the story you haven’t heard.

The emissions from individual AI text, image, and video queries seem small—until you add up what the industry isn’t tracking and consider where it’s heading next.

MIT Technology Review

@clintruin @simonzerafa @tante

Which "couple million people" suffer harm when I run a model on my laptop?

@pluralistic @simonzerafa @tante
Missed the point, sir.

When one person does it...no big deal.

When a couple of million people do it...well, see the MIT article above.

@pluralistic @simonzerafa @tante
Subhead quote from the article:
"The emissions from individual AI text, image, and video queries seem small—until you add up what the industry isn’t tracking and consider where it’s heading next."

@clintruin @simonzerafa @tante

You are laboring under a misapprehension.

I will reiterate my question, with all caps for emphasis.

Which "couple million people" suffer harm when I run a model ON MY LAPTOP?

@pluralistic @simonzerafa @tante
I'll reiterate my response.

When you *alone* do it...no big deal.
When a couple of million do it ON THEIR OWN LAPTOPS...problem.

@clintruin @simonzerafa @tante

OK, sorry, I was under the impression that I was having a discussion with someone who understands this issue.

You are completely, empirically, technically wrong.

Checking the punctuation on a document on your laptop uses less electricity than watching a Youtube video.

@pluralistic @simonzerafa @tante

Fair enough, Cory. You're gonna do what you want regardless of my accuracy or inaccuracy anyway. And maybe I've misunderstood this. The same way many many will.

But visualize this:

"Hey...I just read Cory Doctrow uses an LLM to check his writing."
"Really?"
"Yeah, it's true."
"Cool, maybe what I've read about ChatGPT is wrong too..."

@clintruin @simonzerafa @tante

This is an absurd argument.

"I just read about a thing that is fine, but I wasn't paying close attention, so maybe something bad is good?"

Come.

On.

@pluralistic @simonzerafa @tante
Maybe...
Maybe not.

You have a good day.

@pluralistic @clintruin @simonzerafa @tante

Which "couple million people" suffer harm when I run a model ON MY LAPTOP?

Anyone who's hosting a website and getting hammered by the bots that scrape content to train the models on. Those of us who do are the ones who keep getting hurt.

Whether you run it locally or not makes little difference. The models were trained, training very likely involved scraping, and that continues to be a problem to this day. Not because of ethical concerns, but technical ones: a constant 100 req/sec 24/7, with waves of over 2.5k req/sec, may sound like little in this day and age, but at around 2.5k req/sec (sustained for about a week!), my cheap VPS's two vCPUs are bogged down trying to deal with all the TLS handshakes, let alone serving anything.

That is a cost many seem to forget. It costs bandwidth, CPU, and human effort to keep things online under the crawler DDoS - which often will require cold, hard cash too, to survive.

Ask Codeberg or LWN how they fare under crawler load, and imagine someone who just wants to have their stuff online having to deal with similar abuse.

That is the suffering you enable when using any LLM model, even locally.
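For a sense of scale, the request rates quoted above work out as follows (a rough sketch using only the poster's own figures):

```python
SECONDS_PER_DAY = 86_400

# Poster's figures: constant baseline plus week-long waves.
BASELINE_RPS = 100     # sustained requests per second, 24/7
WAVE_RPS = 2_500       # peak waves, sustained for about a week

per_day = BASELINE_RPS * SECONDS_PER_DAY        # baseline requests per day
wave_week = WAVE_RPS * SECONDS_PER_DAY * 7      # requests in one week-long wave

print(f"baseline: {per_day:,} req/day")
print(f"one wave: {wave_week:,} req/week")
```

That baseline alone is over 8.6 million requests a day against a two-vCPU VPS, before any wave hits; each request also costs a TLS handshake, which is why the CPUs saturate before serving a single page.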

@algernon @pluralistic @clintruin @simonzerafa @tante I host on low-cost hardware out of my house and crawlers made my forgejo unusable until I forcefully blocked access to useful features (viewing commits) for everyone. Now they just hammer my login page a bunch but not at a rate that impacts my use anymore

@algernon @pluralistic @clintruin @simonzerafa @tante

Ok, sure. But you won't stop that train by purity testing other leftists. So what's the plan to stop openai ddosing all our blogs?

@komali_2 @algernon @pluralistic @simonzerafa @tante
In the big scheme of things I don't care if Doctorow is using some local LLM to proof his writing. The fact he happens to be the same fellow who coined the term 'enshittification' smacks of a dark irony, but whatever. That's merely how I view it. He's convinced it's copasetic, and I don't really give a fuck. But when we consider that this tech is contributing enormously to the much larger problem of climate change, there's a real issue. 1/
@komali_2 @algernon @pluralistic @simonzerafa @tante
2/Doctorow has explained he does not believe his use of a local LLM is contributing to that overall problem. I don't know if it is, or it isn't. But that does not diminish his fame, or prevent others from viewing his use as a tacit approval of this tech in general. Is it fair to lay this on him merely for using a local LLM to proof his work? Should he even be concerned about this?