I was disappointed to read Cory Doctorow's post where he got weirdly defensive about his LLM use and started arguing with an imaginary foe.

@tante has a very thoughtful reply on the Smashing Frames blog here:

https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
A few further comments, 🧵>>

It was particularly disappointing to see Doctorow misconstrue (and thus, if he is believed, undermine) the work that many of us are doing to shine a light on the real harms done both by the ideology of "AI" and by the specific ways in which LLMs and other "AI" products are created.
>>

I also want to point out (again) the ways in which lumping together all uses of LMs (like the lumping of technologies into "AI") obscures the issues at hand.

Language modeling is a useful component of many technologies that can be built without extractive, exploitative means. Take the automatic transcription built by and for the Māori people -- there's a te reo Māori language model that's part of that.
>>

And the transformer architecture represented an important step forward in language modeling, one that brought improvements to things like spell checking (Doctorow's use case).
>>

And you can build and use language models without turning them into the synthetic text extruding machines that are despoiling our information ecosystem.

And even if those are easily accessible -- because OpenAI et al. want to burn through cash with their demos -- we can still refute and refuse the narrative that synthetic text is somehow a panacea to be used across social services (medicine, education), in science, etc.
>>

Doctorow could have gone into these details; could have said something about how the particular LLM he chose was built (whose data, trained how, how much data, what kind of further data work in RLHF); could have drawn distinctions between use cases.
>>

@emilymbender

This distinction between use cases is the important point in my view. So much so that I wasn't fully on board with the first paragraphs of the Smashing Frames article (though I loved the rest).

Take, for example, the analogy of wanting to be vegan but accepting vegetarian food. I am convinced of the value of reducing our meat consumption and animal farming. But personally, I don't find eating meat morally objectionable on principle. If I did, I'd *not* make exceptions.

>>

Re: veganism "If I did, I'd not make exceptions."

Yeah, I am a vegan. I've even worked as a chef at vegan restaurants.

Life has a "funny" way of testing convictions in my experience.

For me, for example: I have been incarcerated, more than once. Despite requesting vegan meals, none were ever made available to me.

However: I found that others with whom I was incarcerated were generally more than happy to trade their meals' vegetables for my meals' meat. Same for milk, etc.

Of all the weird economies that I encountered whilst incarcerated? That one certainly seemed among the more benign. I managed to stay vegan as best I could in a food desert, and cultivated some camaraderie with carnivores who were happy with my generosity with things I had no interest in consuming.

I would posit: the author of that analogy probably isn't vegan, and isn't writing from a position of authority in such realms. Alas, while analogies are perhaps useful for trying to convey an idea, arguing from analogy is also a classic logical fallacy that critical thinking classes at junior colleges will typically highlight as something to avoid in writing.

I'll leave you with a vegan joke: "When I was an omnivore, I didn't understand vegetarians. Now that I am vegan, I understand them even less."