Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions, and delegitimizes important political action we need to take in order to build a better cyberphysical world.

EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful.

https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/

Acting ethically in an imperfect world

Life is complicated. Regardless of what your beliefs or politics or ethics are, the way that we set up our society and economy will often force you to act against them: You might not want to fly somewhere but your employer will not accept another mode of transportation, you want to eat vegan but are […]

Smashing Frames

@tante

I really like and admire @pluralistic and have the utmost respect for him, which is why I'm totally baffled that he is attributing LLM scepticism to "fruit of the poisoned tree" arguments.

The objections to LLMs aren't about origins but about what they are doing right now: destroying the planet, stealing labour, giving power over knowledge to LLM owners, etc.

The objections are nothing to do with LLMs' origins, they're entirely about LLMs' effects in the here and now.

@FediThing @tante @pluralistic Some people - in fact quite a lot, if my reading is correct - do indeed argue that LLMs can *never* be ethically used because they are “trained on stolen work”.

@ianbetteridge @FediThing @tante

Performing mathematical analysis on large corpora of published work is not "stealing."

@pluralistic @ianbetteridge @FediThing @tante If that “mathematical analysis” regurgitates near-verbatim works created by other people, it is certainly committing IP theft, and LLMs will happily do that. The “mathematical analysis” is effectively a form of lossy compression on its training data, which a prompt can later extract.
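
To make the "lossy compression" point concrete, here is a minimal sketch of a memorization probe, under stated assumptions: a local Hugging Face causal model, with "gpt2" as a placeholder for whatever model you want to test, and a public-domain Dickens opening as the prompt, so the probe itself infringes nothing. If deterministic (greedy) decoding continues the source verbatim, the continuation was encoded in the weights rather than produced by chance.

# Minimal memorization probe (illustrative sketch, assumptions noted above):
# feed a causal language model the opening of a well-known text and check
# whether greedy decoding continues it verbatim.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any local causal LM can be substituted
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "It was the best of times, it was the worst of times, it was the age of"
known_continuation = "wisdom"  # the next word in the memorized source text

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,  # greedy: a verbatim match reflects the weights, not sampling luck
    pad_token_id=tokenizer.eos_token_id,  # avoid the missing-pad-token warning
)
# Decode only the newly generated tokens, not the prompt.
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)

print(repr(completion))
print("verbatim continuation?", known_continuation in completion)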

@bjn @ianbetteridge @FediThing @tante

Once again, you're talking about *using* a model, not training a model.

Also "IP theft" isn't a thing. Perhaps you mean copyright infringement?

@pluralistic @ianbetteridge @FediThing @tante I’ll give you pedant points for copyright infringement, which is what most people mean by “IP theft”. As for training vs. using, the difference is somewhat moot. The models are trained to be used, and if a model is trained on copyrighted data without a license, you’ve encoded that data into the model, which might then regurgitate it, thus facilitating copyright infringement.

@bjn @ianbetteridge @FediThing @tante it is a bedrock of copyright law that devices 'capable of sustaining a substantial non-infringing use' are lawful. Decided in 1984 (SCOTUS/Betamax) and repeatedly upheld.

It is categorically untrue that a model is illegal merely because its output can infringe copyright.

There's not much that's truly settled in American limitations and exceptions, but this is.

@bjn @ianbetteridge @FediThing @tante 'facilitating copyright infringement' just isn't a thing.

@bjn @ianbetteridge @FediThing @tante and as befits UK fair dealing (and related limitations and exceptions), we've had opinions from IPREG affirming that training a model doesn't infringe.

@pluralistic @ianbetteridge @FediThing @tante Then the laws are not fit for purpose. The whole point of copyright is to encourage people to produce works by assuring them they get the benefit of those works. If my works can be encoded into a bunch of matrix weights and reproduced without attribution, let alone financial recompense, then why should I bother? Google is doing its best to effectively steal the bread out of creators' mouths with its AI summaries. It may be legal, but it stinks.

@bjn @ianbetteridge @FediThing @tante by all means say 'I don't like this technology', but don't conflate that with 'therefore it is illegal'.

@pluralistic @ianbetteridge @FediThing @tante Well, apart from Anthropic having to pay $1.5B for copyright infringement, it's all above board 🙄. It's not a matter of liking the technology or not; machine learning is capable of cool and useful things. However, how LLMs are being used and pushed is both immoral and culturally destructive. I'm surprised you are buying into it.

@bjn @pluralistic @ianbetteridge @FediThing @[email protected] I don’t like what Cory wrote, and I don’t think he’s “buying into it” either.

His explanation is like the one in treaties that prohibit the use of biological weapons, but not their research, development and storage.

Maybe a better question is “Can we protect the creation of cultural artifacts by copyright law?”

Not by these standards, it seems.

@wtrmt @bjn @ianbetteridge @FediThing @pluralistic As a personal example of using technology one prefers not to use: although I do everything else on Linux, I use a piece of proprietary Windows software for my FLOSS translation work, because it lets me produce far more UI translations, at higher quality, than I could with Linux tooling. So, I understand that part of the reasoning.
🧵

@wtrmt @bjn @ianbetteridge @FediThing The local LLM is doing things for @pluralistic that hunspell can't do. One way to fix that could be to publish new articles to pluralistic.net only and wait until Gregory and 9o6 are done nitpicking, then publish to the other channels a day later?

@wtrmt @bjn @ianbetteridge @FediThing @pluralistic There's definitely a distinction between what is legal and what is moral, and how we see those two things also depends on our culture and can evolve over time.

@gunchleoc @bjn @ianbetteridge @FediThing @pluralistic I studied illustration at college. I wouldn't recommend that any kid major in it now, no matter how good they are. There are no entry- or mid-level jobs for them.

How long until Miyazaki and Ghibli are made irrelevant by the sheer volume of slop, soon at movie length?

Big actors can protect their image, but how about entry-level actors and extras?

Out with all of them.