I was disappointed to read Cory Doctorow's post where he got weirdly defensive about his LLM use and started arguing with an imaginary foe.

@tante has a very thoughtful reply here:

https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
A few further comments, 🧵>>


Smashing Frames
It was particularly disappointing to see Doctorow misconstrue (and thus, if he is believed, undermine) the work that many of us are doing to shine a light on how the ideology of "AI", and the specific ways in which LLMs and other "AI" products are created, do real harm.
>>

I also want to point out (again) the ways in which lumping together all uses of LMs (like the lumping of technologies into "AI") obscures the issues at hand.

Language modeling is a useful component of many technologies that can be built without extractive, exploitative means. Take the automatic transcription built by and for the Māori people -- there's a te reo Māori language model that's part of that.
>>

And the transformer architecture represented an important step forward in language modeling, one that brought improvements to things like spell checking (Doctorow's use case).
>>

And you can build and use language models without turning them into the synthetic text extruding machines that are despoiling our information ecosystem.

And even if those are easily accessible, because OpenAI et al want to burn through cash with their demos, we can still refute and refuse the narrative that synthetic text is somehow a panacea to be used across social services (medicine, education) and in science, etc.
>>

Doctorow could have gone into these details; could have said something about how the particular LLM he chose was built (whose data, trained how, how much data, what kind of further data work in RLHF); could have drawn distinctions between use cases.
>>

But instead he wrote a defensive screed, seemingly imagining that anyone who knew about his LLM use would ascribe to him all of the ills of everyone's LLM production and use.

A missed opportunity, to be sure.

@emilymbender His position on subjects is distorted by his personal position in society.

It's a common side-effect of success for critics of society. He speaks, now, to the only people he thinks matter, but they are a narrow group of exceptionals, culled from the privileged, with whom he interacts most.

Success has its isolations, and he hasn't confronted this yet . . .

@_chris_real @emilymbender He's been responsive when I've communicated with him, and I'm not a celebrated luminary.

It could be that success has corrupted him. It seems to corrupt everyone.

I thought he was only talking about the ethics of the things, though. Is he actually using them then? For what? I'm curious, as I've only seen the peripheries of discussions about Doctorow and LLMs so far. A link to whatever he said that sparked all this would be welcome.

@mason @_chris_real @emilymbender This is the web version: https://pluralistic.net/2026/02/19/now-we-are-six/#stock-buyback
If you search for "llm" on the page, you'll find the part this thread has been talking about.

@paulsilver @_chris_real @emilymbender Oh, that's disappointing. I've sent him error corrections in the past, but I'd rather see the occasional typo than have him contribute to cooking the planet.

He doesn't talk about the training data for his model, nor say whether he's using the vendor's cloud services. He talks about "purity culture" but disregards ongoing harm.

Thank you for the pointer.

@mason @paulsilver @_chris_real @emilymbender He's running Ollama locally to do a grammar check. Let's not pretend that's a significant use of resources.
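
(For scale, a rough back-of-envelope in a few lines of Python, with assumed numbers rather than measurements of anyone's actual setup: a small model running locally on a laptop might draw on the order of tens of watts for a few seconds per check.)

# Back-of-envelope energy estimate for one local grammar check.
# All inputs below are assumptions, not measurements of Doctorow's setup.
POWER_DRAW_W = 50          # assumed laptop power draw while the model runs
SECONDS_PER_CHECK = 10     # assumed wall-clock time for one grammar-check prompt

joules_per_check = POWER_DRAW_W * SECONDS_PER_CHECK    # 500 J, about 0.14 Wh

# Comparison: heating 1 L of water from 20 C to 100 C (mass * specific heat * delta T)
joules_per_kettle = 1000 * 4.186 * 80                  # about 335,000 J, roughly 93 Wh

print(f"one grammar check: ~{joules_per_check} J")
print(f"boiling a litre of water: ~{joules_per_kettle:.0f} J")
print(f"grammar checks per kettle boil: ~{joules_per_kettle / joules_per_check:.0f}")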

@krishooper @mason @emilymbender I've been losing my mind about this. There might be valid criticisms of what he wrote, but the idea that he is directly harming the environment with his use case is a straight-up denial of reality, and yet people seem to be saying that en masse.

Like shit man, attack the parts that you feel stick out. Everyone seems to just be copy-pasting their general argument against AI into their replies to his post, despite the fact that a lot of that doesn't apply to that post.

I feel like I'm going crazy. Either I'm missing something, or everybody is just talking past each other.