Yesterday Cory Doctorow argued that refusal to use LLMs was mere "neoliberal purity culture". I think his argument is a strawman, doesn't align with his own actions, and delegitimizes important political actions we need to take in order to build a better cyberphysical world.

EDIT: Discussions under this are fine, but I do not want this to turn into an ad hominem attack on Cory. Be fucking respectful.

https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/

Acting ethically in an imperfect world

Life is complicated. Regardless of what your beliefs or politics or ethics are, the way that we set up our society and economy will often force you to act against them: You might not want to fly somewhere but your employer will not accept another mode of transportation, you want to eat vegan but are […]

Smashing Frames

@tante

I really like and admire @pluralistic and have utmost respect for him, and that's why I'm totally baffled about why he is presenting "fruit of the poisoned tree" arguments as the cause of LLM scepticism.

The objections to LLMs aren't about origins but about what they are doing right now: destroying the planet, stealing labour, giving power over knowledge to LLM owners etc.

The objections are nothing to do with LLMs' origins, they're entirely about LLMs' effects in the here and now.

@FediThing @tante @pluralistic Some people - in fact quite a lot, if my reading is correct - do indeed argue that LLMs can *never* be ethically used because they are “trained on stolen work”.

@ianbetteridge @FediThing @tante

Performing mathematical analysis on large corpora of published work is not "stealing."

@pluralistic @ianbetteridge @FediThing @tante If that “mathematical analysis” regurgitates near-verbatim works created by other people, it certainly is committing IP theft, and LLMs will happily do that. The “mathematical analysis” is effectively a form of lossy compression on its training data which a prompt can later extract.

@bjn @ianbetteridge @FediThing @tante

Once again, you're talking about *using* a model, not training a model.

Also "IP theft" isn't a thing. Perhaps you mean copyright infringement?

@pluralistic @ianbetteridge @FediThing @tante I’ll give you pedant points for copyright infringement, which is what most people mean by “IP theft”. As for training/using, the difference is somewhat moot. The models are trained to be used, and if trained on copyrighted data without a license, you’ve encoded that data into the model which might then regurgitate it thus facilitating copyright infringement.

@bjn @ianbetteridge @FediThing @tante it is a bedrock of copyright law that devices 'capable of sustaining a substantial non-infringing use' are lawful. Decided in 1984 (SCOTUS/Betamax) and repeatedly upheld.

It is categorically untrue that, merely because a model's output can infringe copyright, the model is therefore illegal.

There's not much that's truly settled in American limitations and exceptions, but this is.

@bjn @ianbetteridge @FediThing @tante 'facilitating copyright infringement' just isn't a thing.
@bjn @ianbetteridge @FediThing @tante and as for UK fair dealing (and related limitations and exceptions), we've had opinions from IPREG affirming that training a model doesn't infringe.
@pluralistic @ianbetteridge @FediThing @tante Then the laws are not fit for purpose. The whole point of copyright is to encourage people to produce works by being sure they get the benefit of those works. If my works can be encoded into a bunch of matrix weights and reproduced without attribution, let alone financial recompense, then why should I bother? Google is doing its best to effectively steal the bread out of creators' mouths with its AI summaries. It may be legal, but it stinks.
@bjn @ianbetteridge @FediThing @tante by all means say 'i don't like this technology' but don't conflate that with 'therefore it is illegal'
@pluralistic @ianbetteridge @FediThing @tante Well apart from Anthropic having to pay $1.5B for copyright infringement, it’s all above board, 🙄. It’s not a matter of liking the technology or not, machine learning is capable of cool and useful things. However, how LLMs are being used and pushed is both immoral and culturally destructive. I’m surprised you are buying into it.

@bjn @pluralistic @ianbetteridge @FediThing @[email protected] I don’t like what Cory wrote, and I don’t think he’s “buying into it” either.

His explanation is like the one in treaties that prohibit the use of biological weapons, but not their research, development and storage.

Maybe a better question is “Can we protect the creation of cultural artifacts by copyright law?”

Not by these standards, it seems.

@wtrmt @bjn @ianbetteridge @FediThing @pluralistic As a personal example of using technology one would prefer not to: although I do everything else on Linux, I use a piece of proprietary Windows software for my FLOSS translation work, because it lets me produce a massively bigger amount of UI translations in higher quality than I could with Linux tooling. So, I understand that part of the reasoning.
🧵
@wtrmt @bjn @ianbetteridge @FediThing The local LLM is doing things for @pluralistic that hunspell can't do. One way to fix that could be to publish new articles to pluralistic.net only and wait until Gregory and 9o6 are done nitpicking, then publish to the other channels 1 day later?
@wtrmt @bjn @ianbetteridge @FediThing @pluralistic There's definitely a distinction between what is legal and what is moral, and how we see those two things also depend on our culture and can evolve over time.

@wtrmt @bjn @ianbetteridge @FediThing @pluralistic Of course, the massive ingestion of other people's work isn't the only problem, and @pluralistic is already aware of this - the Reverse Centaur problem is mentioned in his article. We have unemployment, deskilling and pollution of our information space caused by their use, and even more critically, accelerated environmental destruction caused by the training, which will still be with us for centuries.

/🧵

Sparked by a discussion elsewhere on phone predictive texting, I think having predictive texting available on a PC in combination with a spell checker might fit Cory's needs even better than an LLM. This way, you can spot your mistakes immediately while you type.

@wtrmt @bjn @ianbetteridge @FediThing @pluralistic

I have this functionality available when translating with MemoQ and it saves so much time, especially as I translate software which has a lot of recurring phrasing. It will pop up a selection that I can choose from via mouse click or keyboard navigation, or I can just ignore the suggestion.

Wouldn't it be great to have this available in @libreoffice?

@wtrmt @bjn @ianbetteridge @FediThing @pluralistic

@gunchleoc @bjn @ianbetteridge @FediThing @pluralistic I studied illustration at college. I wouldn’t recommend that any kid major in it now, no matter how good they are. There are no entry or mid level jobs for them.

How long till Miyazaki and Ghibli are made irrelevant by the sheer volume of slop, soon at movie length?

Big actors can protect their image, but how about entry level actors and extras?

Out with all of them.

@gunchleoc @bjn @ianbetteridge @FediThing @pluralistic the same tools that are useful to you are used by big corporations to eliminate jobs.

Mercado Libre —a huge online retailer in LATAM, owned by eBay— fired its entire team of User Experience Writers last month and replaced them with an LLM. The more than 120 UXWs didn’t see it coming.

Will the LLM do a better job? Nope, and those designers will not find a job doing that again.

@wtrmt In the translation business, they squeeze the rates by pre-translating with an LLM, turning everybody into proofreaders of text that looks right but is often slightly off.

This is not what I was talking about - I was talking about traditional Translation Memories. They get trained on your own, personal work or your team's work only. The translator is still doing the work, but I no longer get RSI from lots of manual copy/paste.

@bjn @ianbetteridge @FediThing @pluralistic

@gunchleoc @bjn @ianbetteridge @FediThing @pluralistic yes, LLMs can be used in many ways, and the impact of their application is wide-ranging: my sister-in-law is a freelance certified legal translator, and she no longer has a job.

Who needs legal translators, movie extras, set decorators, designers, illustrators, technical writers, architects, middle managers, artists, writers?

All of them will become burger flippers for all we care.

@gunchleoc @bjn @ianbetteridge @FediThing @pluralistic this is impacting people all over the world, in many creative and technical fields. In other countries it looks like a new wave of colonialism, one that now comes to eliminate your work and culture.

In the streets of my neighborhood in Santiago I see slightly different AI slop images promoting things on the sidewalks. Those used to be made by hand, on chalkboards, 2 years ago.

@wtrmt LLMs used for Legal translation? 😱

That's asking for real trouble.

@bjn @ianbetteridge @FediThing @pluralistic

@gunchleoc @bjn @ianbetteridge @FediThing @pluralistic Poring over that boring legalese? Who cares! It’s way cheaper! Instantaneous!

Oops! The document was badly translated and the visa was rejected. Who’s responsible? Not the LLM.

@bjn @pluralistic @ianbetteridge @FediThing Americans laugh at the legal efforts in France to preserve trades, while at the same time they have been trying to bring back manufacturing industries that took decades to build and are now in China and other countries.

Those industries need people with knowledge and creativity that the US neither has nor cares for.

Now ChatGPT has come for the service economy: they don’t care for that either.