Yesterday Cory Doctorow argued that refusing to use LLMs is mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions, and delegitimizes important political actions we need to take in order to build a better cyberphysical world.

EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful

https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/

Acting ethically in an imperfect world

Life is complicated. Regardless of what your beliefs or politics or ethics are, the way that we set up our society and economy will often force you to act against them: You might not want to fly somewhere but your employer will not accept another mode of transportation, you want to eat vegan but are […]

Smashing Frames

@tante

I really like and admire @pluralistic and have the utmost respect for him, and that's why I'm totally baffled about why he is framing LLM scepticism as a "fruit of the poisoned tree" argument.

The objections to LLMs aren't about origins but about what they are doing right now: destroying the planet, stealing labour, giving power over knowledge to LLM owners etc.

The objections are nothing to do with LLMs' origins, they're entirely about LLMs' effects in the here and now.

@FediThing @tante

Which parts of running a model on your own laptop are implicated in "destroying the planet?" How is checking punctuation "stealing labor?" Or, for that matter "giving power over knowledge to LLM owners?"

@pluralistic I think you can answer these questions yourself.

Suppose you wore a coat made out of mink fur. The minks are already dead, simply wearing the coat won't kill more minks. What does wearing mink fur have to do with cruelty to minks?

Suppose you live in the time of the Luddites. Legislation prohibits trade unions and collective bargaining. Mill owners introduce machines, reducing wages. But you build your own machine. Problem solved? Are you helping labor or capital?

@FediThing @tante

@skyfaller @FediThing @tante

This is a "fruit of the poisoned tree" argument.

Suppose you use a computer to post to Mastodon, despite the fact that silicon transistors were invented by the eugenicist William Shockley, who spent his Nobel money offering bribes to women of color to be sterilized?

Suppose you sent that Mastodon post on a packet-switched network, despite the fact that this technology was invented by the war criminals at the RAND corporation?

@pluralistic I don't think mink fur or LLMs are comparable to criticizing the origins of the internet or transistors. It's the process that produced mink fur and LLMs that is destructive, not merely that it's made by bad people.

For example, LLM crawlers regularly take down independent websites like Codeberg, DDoSing, threatening the small web. You may say "but my LLM is frozen in time, it's not part of that scraping now", but it would not remain useful without updates.

@FediThing @tante

@skyfaller @pluralistic @FediThing @tante This is precisely it; it's about the process, not their distance from Altman, Amodei, et al. (which the Ollama project and those like it achieve).

The LLM models themselves are, per this analogy, still almost entirely of the mink-corpse variety, and I think it's a stretch to scream "purity!" at everyone giving you the stink eye for the coat you're wearing.

It's not impossible to have and use a model, locally hosted and energy-efficient, that wasn't directly birthed by mass theft and human abuse (or training directly off of models that were). And having models that aren't, that are genuinely open, is great!
That's how the wickedness gets purged and the underlying tech gets liberated.

Maybe your coat is indeed synthetic, that much is still unclear, because so far all the arguing seems to be focused on the store you got it from and the monsters that operate the worst outlets.

@correl @skyfaller @FediThing @tante

More fruit of the poisoned tree.

"This isn't bad, but it has bad things in its origin. The things I use *also* have bad things in their origin, but that's OK, because those bad things are different because [reasons]."

This is the inevitable, pointless dead-end of purity culture.

@pluralistic @skyfaller @FediThing @tante While I can understand your argument and almost certain exhaustion at hollow criticism, that response feels very dismissive of the points being made against your application of that argument.

I'm not sure how fruitful an argument can be had with regard to what you may or may not be using, as you really haven't clarified that anyhow besides locally hosted software that
could be used to run terrible models, so this whole mess is just an endless back and forth of "You seem to be dodging the nature of the evil you may be accepting" vs "You're over-concerned with purity", and I think that's justifiably leaving a bad taste in everyone's mouth.

@correl @skyfaller @FediThing @tante

> as you really haven't clarified that anyhow

I'm sorry, this is entirely wrong.

The fact that you didn't bother to read the source materials associated with this debate in no way obviates their existence.

I set out the specific use-case under discussion in a single paragraph in an open access document. There is no clearer way it could have been stated.

@pluralistic @skyfaller @FediThing @tante Again, this feels dismissive, and dodges the argument. The clarity I was referring to wasn't the use case you laid out (automated proofreading) or the platform (Ollama), but (as has been discussed at length through this thread of conversation) which models are being employed.

This entire conversation has been centered around how currently available models are not evil due to vague notions of who incepted the technology they're based upon, but due to the active harm employed in their creation.

To return to the discussion I'm attempting to have here, I find your fruit of the poisoned tree argument weak, particularly when you're invoking William Shockley (who most assuredly had no direct hand in the transistors installed in the hardware on my desk nor their component materials) as a counterpoint to the stolen work and egregious cost that are intrinsic to even the toy models out there. It reads to me as employing hyperbole and false equivalence defensively rather than focusing on why what you're comfortable using is, well, comfortable.

@correl @skyfaller @FediThing @tante

Scraping work is categorically not "stealing."

@pluralistic @skyfaller @FediThing @tante That's a major oversimplification and minimization of the rampant feeding of every bit of human creation that can be gotten hold of, be it publicly posted or private, copyrighted or permissively offered, purchased or otherwise acquired. I'm not in the mood to argue the value of consent. And that's to say nothing of the abusive human labor used to filter, tag, and train regurgitated output. Theft is not the only harm that's been brought up, and not even the worst by far.

Finally, bringing that up does nothing to address the argument that I'm tiring of reiterating. If you're comfortable wearing your dead mink coat, fine. They're dead already, and I'm not going to assume wearing it is a statement that you're out to bludgeon more animals to make more (at least, unless the lady doth protest too much).

Just don't take umbrage at people complaining about the smell.

@pluralistic @correl @skyfaller @FediThing @tante massive oversimplification. This is the wording of a policy I support…. It… disagrees….

“Employees must not enter personal information, confidential information, or intellectual property into any Artificial Intelligence Tool, including approved tools, as doing so may expose …… to privacy breaches, legal and security risks, or loss of control over institutional data.”