I'm sorry to say that I actually wrote it:
"The pinnacle of enshittification, or Large Language Models"
https://blogs.gentoo.org/mgorny/2026/04/05/the-pinnacle-of-enshittification-or-large-language-models/
"""
Honestly, I hate that I read about LLMs all the time. I hate all the marketing bullshit, but also all the critical pieces. Not because the criticism is wrong. I hate them precisely because they’re right. And I hate the feeling that I have to write yet another piece on that same topic, to collect some of the thoughts I have had over the recent months.
Machine learning isn’t anything new. Neither is calling it “artificial intelligence”. Not only pop-science writers and journalists but even technical folks have been using the term, and I never complained. I didn’t complain about games having “AI” either. It was always clear that this was a special use of “intelligence”, one far from what animals truly possess. This changed recently.
When LLMs enabled chatbots to use human language, the misuse of the term exploded. Obviously, the marketing people loved calling it “artificial intelligence”. The media, the users and the whole IT industry followed. Even people who knew better stopped bothering. On top of that, anthropomorphisms became commonplace. LLMs could be said to be “thinking”, “lying”, “hallucinating”, to “approve” or “disapprove”, “like” or “dislike”…
Perhaps it wouldn’t be so bad if not for the fact that LLMs are so good at imitating human intelligence. The problem is not really what people call them. The problem is that there are a number of people who actually start believing that their chatbots are conscious. And I can see why that would be happening…
"""
And perhaps the most important piece:
"""
You may have noticed that I didn’t talk of quality per se. I don’t think there’s a point in doing that. I believe that LLMs sometimes spit out quality, and sometimes slop. People who claim that they are “getting better and better” are probably right. Perhaps they will continue getting better, or perhaps they’ll suddenly start collapsing after eating too much of their own shit. That’s beside the point.
The point is, however you look at it, LLMs are unethical. They may be useful, but they are poison — just like asbestos. They are trained in an unethical way, they are sold with immoral goals, and they are used to do a lot of evil. Yes, maybe they can make your life a little easier, a little more comfortable (just like cheap goods manufactured through slave labor). But is it something worth losing our humanity for?
You can just say “no”. Getting left behind can actually be a good thing.
"""
#AI #LLM #NoAI #NoLLM