I was disappointed to read Cory Doctorow's post where he got weirdly defensive about his LLM use and started arguing with an imaginary foe.

@tante has a very thoughtful reply here:

https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
A few further comments, 🧵>>

Acting ethically in an imperfect world

Life is complicated. Regardless of what your beliefs or politics or ethics are, the way that we set up our society and economy will often force you to act against them: You might not want to fly somewhere but your employer will not accept another mode of transportation, you want to eat vegan but are […]

Smashing Frames
It was particularly disappointing to see Doctorow misconstrue (and thus, if he is believed, undermine) the work that many of us are doing to shine a light on the ways in which the ideology of "AI", and the specific ways in which LLMs and other "AI" products are built, do real harm.
>>

I also want to point out (again) the ways in which lumping together all uses of language models (like the lumping of disparate technologies into "AI") obscures the issues at hand.

Language modeling is a useful component of many technologies that can be built without extractive, exploitative means. Take the automatic transcription system built by and for the Māori people -- there's a te reo Māori language model that's part of that.
>>

And the transformer architecture represented an important step forward in language modeling, one that brought improvements to things like spell checking (Doctorow's use case).
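To make the spell-checking use case concrete: a language model's job in a spell checker is to rank candidate corrections by how plausible each one is in context. This is a minimal noisy-channel-style sketch, not Doctorow's actual tool or any real product; the vocabulary and bigram counts are toy assumptions standing in for a trained model.

```python
# Toy spell checker: combine spelling similarity with a (tiny, fake)
# language model that scores candidates in context.
from difflib import get_close_matches

# Toy bigram counts standing in for a trained language model.
BIGRAMS = {
    ("the", "cat"): 10, ("the", "car"): 3,
    ("a", "cat"): 4, ("a", "car"): 8,
}
VOCAB = {"the", "a", "cat", "car", "cap"}

def correct(prev_word: str, word: str) -> str:
    """Pick a candidate that is close in spelling AND likely after prev_word."""
    if word in VOCAB:
        return word  # already a known word
    # Spelling-similarity shortlist (the "channel" part of the model).
    candidates = get_close_matches(word, VOCAB, n=5, cutoff=0.6)
    if not candidates:
        return word
    # Rank by bigram count given the previous word -- the LM's contribution.
    return max(candidates, key=lambda c: BIGRAMS.get((prev_word, c), 0))

print(correct("the", "cst"))  # context after "the" favors "cat"
```

A transformer-based checker replaces the bigram table with a model that conditions on much longer context, but the division of labor is the same: spelling similarity proposes, the language model disposes.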
>>

And you can build and use language models without turning them into the synthetic text extruding machines that are despoiling our information ecosystem.

And even if those machines are easily accessible, because OpenAI et al. want to burn through cash with their demos, we can still refute and refuse the narrative that synthetic text is somehow a panacea to be deployed across social services (medicine, education), in science, etc.
>>

@emilymbender It's so incredibly sad that we have found a method to turn any snippet of text into some numbers that somehow encode the meaning behind it, and yet the most popular use case is just guessing what the next word is
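The "numbers that encode meaning" in the reply above are embeddings: texts mapped to vectors so that similarity of meaning shows up as geometric closeness. A toy illustration of just that idea, with made-up vectors rather than the output of any real model:

```python
# Toy embedding demo: the vectors below are hypothetical, hand-written
# values, not produced by any actual embedding model.
import math

def cosine(a, b):
    """Cosine similarity: 1.0 means pointing the same way, 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend embeddings for three snippets.
emb = {
    "a cat sat on the mat": [0.9, 0.1, 0.2],
    "a kitten rested on a rug": [0.85, 0.15, 0.25],
    "quarterly revenue grew 4%": [0.1, 0.9, 0.7],
}

cat, kitten, revenue = emb.values()
print(cosine(cat, kitten))   # high: similar meaning, different words
print(cosine(cat, revenue))  # low: unrelated meaning
```

The reply's point is that this same machinery, instead of being used for retrieval or comparison of meaning, is mostly deployed to predict the next token.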

@emilymbender @me

There are many ways #Aiantagonists lose credibility building inaccurate mythos.

One of which is: they assume AI is frozen in time with zero development, and because they are outright hostile to the tech they rarely keep up with advancements.

The "guessing text" claim is a case in point.

#kona is an Energy Based Model which presents MATHEMATICALLY PROVABLE answers.

If I had a cent for every time somebody in my timeline talks about stochastic parrots, I'd have 67 cents, and that's just yesterday.

Angry posts won't fix AI, political engagement will.
Get off your fat arses and activate politically, #regulateai