@glyph I think "purity culture" is the best label for this type of moralism in front of LLMs because the moralising comes from people that can afford not to use the tech, vs. Reverse Centaurs who have the tech forced onto them and have to grapple with its effects.
Someone getting on their high horse about not using LLMs and "pushing back" on the narrative or "keeping people in check" has no effect on the material realities that most gig workers face, acting as de facto slave picker-uppers and putter-downers for algorithms.
It's like saying: "yeah bro, I'm also against AI, like totally bro, totally morally against its use and deployment" to an Amazon worker that got told by an LLM they could increase their output by getting a colostomy bag.
Doctorow is right: the actual moral way forward is making AI economically unattractive; moralising about AI use is just purity testing.
@glyph I'm not accusing you of that, I'm saying purity testing on AI use (between people that can afford the choice of using AI or not) has no material effect on people who are forced to be Reverse Centaurs, and is mostly a position of privilege.
It's mental onanism disguised as social justice
@glyph my goal isn't to annoy you, but to me this was related to
> That's how we make good tech: not by insisting that all its inputs be free from sin, but by purging that wickedness by liberating the technology from its monstrous forebears and making free and open versions of it
I point at moralising because the core reason why AI is being pushed everywhere right now is that it promises growth in an environment of expensive capital (high interest rates). Most of this deployment is in knowledge work because the West has a) almost completely deindustrialised and b) has a high proportion of highly financialised but ultimately bullshit jobs.
To me, taking the fight to AI means making it economically unattractive, either by enshrining in law that human authorship is needed for copyright, or by making models so efficient that large datacentre expenditure becomes foolish.
@glyph that's a good point; I'm mainly against it because it's clearly a wedge issue in an otherwise quite Rainbow Coalition of progressives, e.g. I've noticed accounts take out pitchforks in response to the Ghostty dude saying he uses AI.
I mainly want to reach critical mass to wield power as a collective, not endlessly criticise AI.
@budududuroiu I have not carefully separated out the "local LLM" and "hosted LLM" problems but my own entry in this genre is here https://blog.glyph.im/2025/06/i-think-im-done-thinking-about-genai-for-now.html
it appears that you're arguing with Cory's silly strawman of a critic though, rather than any actual person who believes that these things are bad?
@glyph I'm arguing my own critique, which I've written about here before this Cory Doctorow row
I'm gonna go against the general grain on Mastodon and say it's futile to fight against AI. The cat's out of the bag, the genie out of the bottle. The fight we can, and must, have is for the democratisation and use of AI for the public good rather than at the behest of capital.

I'm not talking about LLMs here (though they can be helpful). Seizing the computational advances that this AI wave brings is a genuinely huge opportunity for humanity in terms of drug discovery and advances in computational simulation that would make Soviet central planners jealous. It's not a panacea, but a lot of the work of pushing the boundaries of what's possible is endlessly testing and disproving theories, where AI would be relatively helpful.

I don't think blowing up data centres is the way to go, as it invites further brutalisation and restrictions on personal freedom to protect capital.

I think Chinese labs are an answer to this. I think making AI models more efficient and their use on consumer hardware tractable (Qwen3-Coder-Next is a great example that can run on MacBooks and has N-1 performance) is an answer to this. I think fighting tooth and nail for LLM work to never be copyrightable is an answer to this. I think new players like CXMT coming online with maybe less cutting-edge, but mass-affordable and accessible memory chips is an answer to this. I think DeepSeek, Z.AI, and Mistral distilling frontier models is an answer to this.

My ability to generate AI slop will inevitably outcompete your ability to shut it down. Your boss's ability to vibecode shit will outcompete your attempts to sandbox them or argue for proper due process. The fight can only be fought by making AI economically intractable, not via moralisation.

#AI #LLM #DataCentre #DeepSeek #OpenAI #Anthropic #ZAI