@glyph I'm with you on that, yeah. His whole job is being opinionated in public, we're bound to disagree on shit.
I'm mad about it anyway because it's not just him. If it was just him, I'd roll my eyes the way I might if one of my friends said something embarrassing on main, then move on with my life.
@glyph I think "purity culture" is the best label for this type of moralism in front of LLMs because the moralising comes from people that can to afford to not use the tech, vs. Reverse Centaurs that have the tech forced onto them and have to grapple with its effects.
Someone getting on their high horse about not using LLMs and "pushing back" on the narrative or "keeping people in check" has no effect on the material realities faced by most gig workers, the de facto slave picker-uppers and putter-downers for algorithms.
It's like saying "yeah bro, I'm also against AI, like totally bro, totally morally against its use and deployment" to an Amazon worker who got told by an LLM that they could increase their output by getting a colostomy bag.
Doctorow is right: the actual moral way forward is making AI economically unattractive; moralising about AI use is just purity testing.
@glyph I'm not accusing you of that, I'm saying purity testing on AI use (between people who can afford the choice of using AI or not) has no material effect on people who are forced to be Reverse Centaurs, and is mostly a position of privilege to hold.
It's mental onanism disguised as social justice
@glyph my goal isn't to annoy you, but to me this was related to
> That's how we make good tech: not by insisting that all its inputs be free from sin, but by purging that wickedness by liberating the technology from its monstrous forebears and making free and open versions of it
I point at moralising because the core reason AI is being pushed everywhere right now is that it promises growth in an environment of expensive capital (high interest rates). Most of this deployment targets knowledge work, because the West has a) almost completely deindustrialised and b) a high proportion of highly financialised but ultimately bullshit jobs.
To me, taking the fight to AI means making it economically unattractive, either by enshrining in law that human authorship is needed for copyright, or by making models so efficient that large datacentre expenditure becomes foolish.
@glyph that's a good point. I'm mainly against it because it's clearly a wedge issue in what is otherwise quite the Rainbow Coalition of progressives, e.g. I've noticed accounts take out pitchforks in response to the Ghostty dude saying he uses AI.
I mainly want to reach the critical mass to wield power as a collective, not endlessly criticise it.
@budududuroiu I have not carefully separated out the "local LLM" and "hosted LLM" problems but my own entry in this genre is here https://blog.glyph.im/2025/06/i-think-im-done-thinking-about-genai-for-now.html
it appears that you're arguing with Cory's silly strawman of a critic though, rather than any actual person who believes that these things are bad?
@glyph I'm arguing my own critique, which I've written about here before this Cory Doctorow row
I'm gonna go against the general grain on Mastodon and say it's futile to fight against AI. The cat's out of the bag, the genie out of the bottle. The fight we can, and must, have is for the democratisation and use of AI for public good rather than at the behest of capital.

I'm not talking about LLMs here (though they can be helpful). Seizing the computational advances this AI wave brings is a genuinely huge opportunity for humanity, in terms of drug discovery and advances in computational simulation that would make Soviet central planners jealous. It's not a panacea, but a lot of the work of pushing the boundaries of what's possible is endlessly testing and disproving theories, where AI would be relatively helpful.

I don't think blowing up data centres is the way to go, as it invites further brutalisation and restrictions on personal freedom to protect capital. I think Chinese labs are an answer to this. I think making AI models more efficient and their use on consumer hardware tractable (Qwen3-Coder-Next is a great example: it can run on MacBooks and has N-1 performance) is an answer to this. I think fighting tooth and nail for LLM output to never be copyrightable is an answer to this. I think new players like CXMT coming online with maybe less cutting-edge, but mass-affordable and accessible, memory chips is an answer to this. I think DeepSeek, Z.AI, and Mistral distilling frontier models is an answer to this.

My ability to generate AI slop will inevitably outcompete your ability to shut it down. Your boss's ability to vibecode shit will outcompete your attempts to sandbox them or argue for proper due process. The fight can only be fought by making AI economically unattractive, not via moralisation.

#AI #LLM #DataCentre #DeepSeek #OpenAI #Anthropic #ZAI
@glyph
Firstly, the whole purity culture thing is odd, and I think the way Cory framed the original post, as well as his responses around it, has definitely soured the entire discourse.
I don't really know how to think through this issue yet, but I do appreciate reading your thoughts on the matter. You point toward a more comprehensive criticism of LLM usage, which makes more sense to me than several people pointing out that spellcheck already exists. The thing is, Cory explicitly said that in his experience the LLM spellchecker works better.
@glyph
Of course LLMs have certain biases and hallucinations, but the pre-existing tech also has its own patterns of distortions and false positives (Cory claims improvement here, that the LLM produces fewer false positives). The question, it seems to me, is whether the problems inherent to LLMs, things like the tendency to 'hallucinate' and their unreliable, non-deterministic nature, do in fact justify the position that LLM use is fruitless or even harmful in those applications.
I think I do lean toward the position that there is something fundamental about LLM tech, even in a locally hosted open-source context, that does in fact cause problems: there's something about ceding human agency in such a way that disrupts our cognitive abilities in a bad tradeoff. It's obvious with excessive use, but I do wonder if it applies even in very minor cases. I look forward to reading your article critiquing locally hosted LLMs specifically, if you do write it.
@glyph
Still, I think that even if there is an emerging argument comprehensively against the use of LLM tech (not one limited to big tech's use of LLMs), it's still based on what we know about the tech right now. Are we not ultimately limited by our lack of experience? We surely haven't exhausted the possibilities of experimentation with this still-new technology, and furthermore our ability to experiment thus far has been limited by the proprietary status of the models and weights, as well as the hardware requirements inherent to the training process.
I get that a lot of people are uninterested in such experimentation or see it as fruitless, and they may end up being right. But why shouldn't Cory or others be able to experiment and experience different possibilities for their workflow, to see how it affects them?
@glyph
The idea of spellchecking and grammatical standards themselves could be criticised; the way LLM use affects our cognitive abilities could be criticised. But I'm not yet seeing a critique that totally eclipses the possibility that such experimentation might yield an unexpected result.
(Of course, I'm strictly assuming such experimentation can be done with reasonable sustainability and doesn't involve the current ethical issues in training new LLMs, and I think Cory's current use case *probably* fits within those parameters.)
@glyph
I wasn't sure about this thread at first but I think it actually has a fair point:
(https://hci.social/@fasterandworse/116104437434039067)
I do think Cory's imperative to 'seize' the tech is dubious. Firstly because without open weights we're hardly there; any experimentation is very limited and the models we would use are permanently stuck in place (which raises the concern that embedding open models with closed weights in our workflows would induce a kind of material dependency on unethical LLM production after all). But more pressingly, it implies that LLMs are somehow inherently valuable and that controlling them would indeed allow us to leap ahead and replace the proprietary models. What if there's nothing to replace and we're better off rejecting the technology?
Still, I just think there might be edge cases, or unexpectedly useful areas of application, big or small. Like spellcheck and transcription: it has many issues, but maybe it's the best we've got until something else comes along, maybe not.
Cory Doctorow's ultimate crooked point is that you're fucked if you don't embrace AI. Local models, whatever. He compares technologies that have been proven valuable with a product that is being predicted to be valuable. It's a variation of the assumption that everyone who hates AI hasn't *tried* it. If only they would give it a chance, they'd not be left behind.