Let's normalize calling any output produced with an #LLM #slop.

It doesn't matter that you've only used an LLM to fix punctuation. It's slop.

It doesn't matter that you've spent an hour reviewing the slop to make sure it's good. It's still slop.

It doesn't matter that it's better than anything you've written in your entire life. It's slop.

If you didn't write it yourself, it's just glorified LLM slop.

#AI #NoAI #NoLLM

Modern use of LLMs often involves giving them access to the local system: to read and write your project files, and to execute arbitrary commands, often unsupervised. So aren't people worried about a harness just doing what a remote #LLM tells it to do?

I think a statement I've heard lately summarizes the mindset well. It went something along the lines of: "I can't give you a 100% guarantee, but I've noticed that LLMs are very good at following instructions, and they're getting better and better, so I don't worry about that anymore".

Like, it is completely fine to introduce a humongous security hole, because the probability that a model will *accidentally* do something horrible is decreasing.

#AI #NoAI #NoLLM #security

I truly believe that LLMs are the worst thing that has happened in IT in recent years (or well, the culmination of the worst thing that's been poisoning the IT world), and I wholeheartedly support all the subversive actions against them, ranging from poisoning the training data to abusing support chatbots to make them unprofitable. However, at the same time I realize that all these actions increase the environmental harm caused by the #LLM folk.

It's like true guerrilla warfare. We're metaphorically burning down buildings, and I hate that it had to come to that.

#AI #NoAI #NoLLM

Let me tell you a parable.

There was a student who was given an assignment of writing an essay. The student found 10 similar essays online. He copied selected bits of the different essays. He tediously reworded the result, removed some sentences, added some adjectives and adverbs, shifted some more sentences around, added some glue — all with the single-minded goal of covering up the tracks. Eventually, a voluminous essay was complete.

The student put a lot of effort into this; possibly even more than if he had written it himself. He did learn a bit about essays, though he didn't really practice writing one. He did practice some skills that would be useful in a future bullshit job, though. The essay passes all #plagiarism checks, even though it immediately raises red flags for any human reading it: the sudden style changes, contradictory statements, sentences that don't make much sense in their context. And if he were asked to defend it, he might be in trouble.

So, the student put in effort (though not the right kind of effort), produced a mediocre essay and learned something (though bullshit skills rather than creative skills). Now let's consider a different situation: rather than doing all that himself, the student paid somebody else to do it; and not to *write* an original essay, but to do all the shenanigans described above.

That's precisely what using LLMs is. You tell them to write an essay, so they find and mix random stuff, and produce a mediocre essay. You put in no effort, you learn nothing, perhaps you don't even read "your" essay. And it passes all the plagiarism checks.

#AI #LLM #NoAI #NoLLM #chardet

The key takeaways from the early part of the #chardet thread (I didn't read beyond the first ~30 comments, I have my limits).

1. People there love cosplaying lawyers. Except when the other side also starts cosplaying lawyers, in which case they suddenly pivot to suggesting asking professional lawyers.
2. Almost nobody there is concerned with ethics or morality.
3. There are a lot of GPL haters there. Like, they seem to be the kind of people who don't really care about licensing at all, who just used MIT in their projects because it was cool, and who heard something about license incompatibility and now bash everything that's (L)GPL.
4. People don't get that LLMs are statistical models and can't build anything from the ground up. All they can do is remix, which implies they use existing code for inspiration.
5. The maintainer who did the rewrite is a total asshole, and is perfectly aware of it.

Honestly, I'm truly waiting for the subsidizing to end and for companies to start charging obscene amounts for the use of LLMs. Of course, the reality is that we're totally fucked. We have a lot of projects that have adopted a lot of #slop, and people who are increasingly addicted to this shit. The moment they can't afford it, we'll be left with lots of broken code nobody wants to maintain.

And I definitely don't want to put my effort into packaging crap if its maintainers don't even bother trying.

https://github.com/chardet/chardet/issues/327

#AI #LLM #NoAI #NoLLM

No right to relicense this project · Issue #327 · chardet/chardet

Hi, I'm Mark Pilgrim. You may remember me from such classics as "Dive Into Python" and "Universal Character Encoding Detector." I am the original author of chardet. First off, I would like to thank...

GitHub

I hope the “AI” bubble finally bursts once and for all, and takes with it the people who use some LLM to “program”

In programming, as in many other things, you have to know what you're doing to get by: to know how your program behaves in any particular case that needs to be considered, and not to grab a piece of code that may contain errors and whose origin, on top of that, is anyone's guess (licensing issues)

It's reaching absurd and worrying levels, with projects like Vim or Python making use of Claude…

#NoLLM #NoAI

When you drop the dependency on #chardet over the #AI #slop release… and replace it with your own slop.

https://github.com/binaryornot/binaryornot/blob/main/CHANGELOG/v0.5.0.md

#Python #LLM #NoAI #NoLLM

binaryornot/CHANGELOG/v0.5.0.md at main · binaryornot/binaryornot

Binary file detection that actually works. 131 extensions, 55 magic-byte signatures, a trained decision tree, and zero dependencies. - binaryornot/binaryornot

GitHub

So how would you feel if you learned that the guy from whom you've been copying all your homework recently has been not-so-secretly helping fascist governments commit genocide? And he's quite proud of it, too.

Oh right, you'd just say "it's not like doing my own homework will change anything". And then you'll give him your lunch money.

#AI #LLM #NoAI #NoLLM #Claude #Anthropic

New on #blog: "Money isn’t going to solve the #burnout problem"

"""
The xz-utils backdoor situation brought the problem of FLOSS maintainer burnout into the daylight. This in turn led to numerous discussions on how to solve the problem, and the recurring theme was funding maintenance work.

While I’m definitely not opposed to giving people money for their FLOSS work, if you think that throwing some bucks at it will actually solve the problem, and especially if you think that you can just throw them once and then forget, I have bad news for you: it won’t. Surely, money is a big part of the problem, but it’s not the only reason people are getting burned out. It’s a systemic problem, it’s in need of a systemic solution, and that involves a lot of hard work to undo everything that’s happened in the last, say, 20 years.

But let’s start at the beginning and ask the important question: why do people make free software?
"""

https://blogs.gentoo.org/mgorny/2026/03/07/money-isnt-going-to-solve-the-burnout-problem/

#FreeSoftware #OpenSource #AI #NoAI #LLM #NoLLM #Gentoo

Money isn’t going to solve the burnout problem

The xz-utils backdoor situation brought the problem of FLOSS maintainer burnout into the daylight. This in turn led to numerous discussions on how to solve the problem, and the recurring theme was …

Michał Górny