Yesterday Cory Doctorow argued that refusing to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions, and delegitimizes important political action we need to take in order to build a better cyberphysical world.

EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful.

https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/

Acting ethically in an imperfect world

Life is complicated. Regardless of what your beliefs or politics or ethics are, the way that we set up our society and economy will often force you to act against them: You might not want to fly somewhere but your employer will not accept another mode of transportation, you want to eat vegan but are […]

Smashing Frames

@tante

That doesn't seem to be the best idea @pluralistic

AI and LLM output is 90% bullshit, and most people have neither the time nor the patience to work out which 10% might actually be useful.

That's completely ignoring the environmental and human impacts of the AI bubble.

Try buying DDR memory, a GPU or an SSD / HDD at the moment.

@simonzerafa @tante

What is the incremental environmental damage created by running an existing LLM locally on your own laptop?

As to "90% bullshit" - as I wrote, the false positive rate for punctuation errors and typos from Ollama/Llama2 is about 50%, which is substantially better than, say, Google Docs' grammar checker.
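The thread never shows what Cory's setup actually looks like, but a local Ollama grammar check of the kind described here is straightforward to sketch. Below is a minimal, hedged example against Ollama's default local REST endpoint (`/api/generate`); the model name, prompt wording, and `proofread` helper are assumptions for illustration, not his actual procedure:

```python
import json
import urllib.request

# Default endpoint of a locally running Ollama instance (assumption:
# Ollama is installed and serving on its standard port).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_proofread_prompt(text: str) -> str:
    # Constrain the model to punctuation/typo findings so the output
    # is easy to skim for false positives.
    return (
        "List any typos or punctuation errors in the following text. "
        "If there are none, reply with 'OK'.\n\n" + text
    )

def proofread(text: str, model: str = "llama2") -> str:
    # Hypothetical helper: sends one non-streaming generation request
    # and returns the model's reply as a string.
    payload = json.dumps({
        "model": model,
        "prompt": build_proofread_prompt(text),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

A run like `proofread("Teh quick brown fox.")` would need a local Ollama server with the model pulled; nothing leaves the machine, which is the point of the "run it locally" position being debated here.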

@pluralistic

I am astonished that I have to explain this,

but very simply in words even a small child could understand:

using these products *creates further demand*

- surely you know this?

Well, either you know this and are being facetious, or you are a lot stupider than I ever thought possible for someone with your privilege and resources.

I am absolutely floored at this reveal, just wow, "where's Cory and what have you done with him?" 🤷

Massive loss of respect!

@simonzerafa @tante

@kel @pluralistic @simonzerafa @tante Not only that, but popularizing LLMs while running them all locally is less efficient than running them in the cloud. It doesn't minimize harm: you are still consuming power, and more of it, since the chip in your computer isn't nearly as efficient as the ones the providers use.

Plus it's all stolen and biased fashware.

@reflex
A big component of the problem with AI data centers is that they concentrate energy usage in one place and require water and active cooling. I don't think that's true for laptop users.
@kel @pluralistic @simonzerafa @tante
@dlakelan @kel @pluralistic @simonzerafa @tante Laptop users are still drawing power from centralized power production facilities, with all the same issues; it does not magically go away by being distributed on the consumption end.

@reflex @kel @pluralistic @simonzerafa @tante

Yes, but in Cory's case he measured the usage, and it was no different from watching a YouTube video, something millions of people do daily for hours at a time. He ran his grammar checker for minutes per day, and none of the extra problems of density (cooling/water use) applied. I don't see power consumption or environmental concerns that are different from "people individually have computers".

@dlakelan @reflex @kel @pluralistic @simonzerafa @tante you don't see the difference between running a spellchecker at 2% CPU usage and running a local LLM at 100% GPU for long periods of time?

@stooovie @reflex @kel @pluralistic @simonzerafa @tante

I never said any of that. What I said was that there was no measurable difference in power consumption between him running his LLM-enabled grammar-checker procedure for a few minutes and him watching a YouTube video for a few minutes.

@dlakelan okay, sorry. I misread that as no difference between a spellchecker and general local LLM.
@dlakelan @stooovie @kel @pluralistic @simonzerafa @tante I mean, I know when I'm normally checking spelling I watch YouTube instead; they are totally substitutes for each other and should be compared.