Auto-editing with video and audio stream protection. The program analyzes the soundtrack, cuts clips from a set of videos at the edit points, and in "replication" mode signs each unique copy with an asymmetric key, adding a hash, the creator's signature, and a QR link to a URL #NoAI #NoLLM
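A rough sketch of the per-copy metadata step as I read the description (Python; all names and the payload layout are my assumptions, not the program's actual API, and the asymmetric-signature step is only indicated in a comment since no key scheme is specified):

```python
import hashlib
import json
import uuid

def make_copy_metadata(video_bytes: bytes, creator: str, url: str) -> dict:
    # Each replicated copy gets a unique ID, so even identical source
    # video produces a distinct hash per copy.
    copy_id = uuid.uuid4().hex
    digest = hashlib.sha256(video_bytes + copy_id.encode()).hexdigest()
    return {
        "copy_id": copy_id,
        "sha256": digest,    # content hash binding this particular copy
        "creator": creator,  # creator attribution field
        "qr_payload": url,   # string later rendered as a QR code
    }

meta = make_copy_metadata(b"fake video stream bytes",
                          "some-creator", "https://example.com/v/1")

# The asymmetric signing step would sign the serialized metadata,
# e.g. with Ed25519 via the third-party `cryptography` package:
#   signature = private_key.sign(json.dumps(meta, sort_keys=True).encode())
print(json.dumps(meta, indent=2))
```

Embedding the unique `copy_id` in the hash input is what makes every replicated copy traceable on its own, even when the video bytes are identical.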

https://youtube.com/shorts/Yb5aR6dnfAI?si=dEUltkGXet_p8XOK

Aerosmith, "Crazy": one of the first music videos to send teenagers into a fever

YouTube

I responded to this survey about genAI although I don't use it, to give a more realistic view than the ones claiming that almost all developers use it.

https://survey.devographics.com/en-US/survey/state-of-ai/2026

The survey includes questions and answers for those who do not want to use it.

#noai #nollm

State of AI 2026

Take the State of AI survey

Yesterday, I read a vibe-coded script for the first time in my life, and I cried.

It wasn't ugly. "Ugly" is not the right term. It was as if someone wasn't able to comprehend beauty, but badly tried to mimic it. It felt like "malicious compliance" to beauty. The kind of awful verbose pedantry that feels wrong every step of the way.

It's the kind of code you'd expect in a corporate environment where you know it will be read by the top suits who have no idea about coding, but judge it by volume and expect a science-fiction level of make-believe.

It's the kind of code that abstracts everything away, down to the tiniest details. Every function returns a complex dataclass explaining precisely what it did, for no reason at all. What ought to be two lines of code is a function. What ought to be a function is a whole module. It's a caricature of good programming practices.

My task was to add modifying a second field on the same object via the GitHub API. I guessed it would take me about an hour to understand the code well enough to do that — what ought to be 2-3 extra lines. I suspected I'd discover that most of the code does precisely nothing. Just meaningless API exchanges that are absolutely unnecessary. It felt like the kind of parody of bureaucracy where you have to file 10 forms to do something, and only one of them actually means anything.

What used to be "do one thing well" became "doing ten totally random things is fine, as long as one of them happens to be what I need, and the whole thing doesn't blow anything up in an obvious way".

Perhaps it's just because this was a throwaway script. Maybe "production" stuff takes more, err, prompt refining? Maybe it actually can produce something comprehensible.

But if that code is any indicator, then I'm not going to believe that any big LLM contributions are actually reviewed by humans. A review would take more time than rewriting from scratch. This is a ticking time bomb. That LLM-generated code isn't introducing exploits right now is either a statistical accident, or simply means that nobody has bothered yet.

Clarification: I didn't "prompt" it or request one. I'm not a hypocrite.

#NoAI #NoLLM #AI #LLM

@EUCommission

Please pay attention to the reactions!

#NoAi #NoSlop #NoLLM

Good news, as of 27 March. (I had missed this when it came out.)

Wikipedia content guidelines now prohibit the use of genAI tools, with two well-defined minor exceptions. An important step.

quote
"Text generated by large language models (LLMs) such as ChatGPT, Gemini, Claude, DeepSeek, or Grammarly often violates several of Wikipedia's core content policies. For this reason, the use of LLMs to generate or rewrite article content is prohibited, save for [...] two exceptions."
end-quote

The two listed exceptions are (i) basic copyediting support, under human review, and (ii) translation into English.

The new policy applies specifically to the English-language Wikipedia.

https://en.wikipedia.org/wiki/Wikipedia:Writing_articles_with_large_language_models

#noAI #noLLM
#StopTheAICorruption

#Wikipedia #WikipediaContentGuidelines #WikipediaNoLLM

Wikipedia:Writing articles with large language models - Wikipedia

If you're looking for another thing to thank #LLM techbros for: #OpenAI is now acquiring #Cirrus Labs, and #CirrusCI is going to shut down in <2 months.

https://web.archive.org/web/20260407101724/https://cirruslabs.org/

#AI #NoAI #NoLLM

Cirrus Labs to join OpenAI

Cirrus Labs announces an agreement to join OpenAI as part of the Agent Infrastructure team.

Remember how people gave techbros the term "#AI" to use for their #LLM crap, and then started using "AGI" for the old AI?

Apparently techbros are now selling LLM crap as "AGI": https://futureagi.com/

Also, my "work on GitHub" is apparently "directly relevant to what they're building". Enough to justify a #spam mail anyway.

#NoAI #NoLLM

Future AGI | AI Agents hallucinate, fix it faster.

Build self-improving agents. Detect what broke, learn why, and feed the fix back so every version ships smarter.

I'm sorry to say that I actually wrote it:

"The pinnacle of enshittification, or Large Language Models"

https://blogs.gentoo.org/mgorny/2026/04/05/the-pinnacle-of-enshittification-or-large-language-models/

"""
Honestly, I hate that I read about LLMs all the time. I hate all the marketing bullshit, but also all the critical pieces. Not because the criticism is wrong. I hate them precisely because they’re right. And I hate the feeling that I have to write yet another piece on that same topic, to collect some of the thoughts I have had over the recent months.

Machine learning isn’t anything new. Neither is calling it “artificial intelligence”. Not only pop science writers and journalists, but even more technical folk have been using the term, and I never complained. I didn’t complain about games having “AI” either. It was always clear that this is a special use of “intelligence”, one far from what animals truly possess. This changed recently.

When LLMs enabled chatbots to use human language, the misuse of the term exploded. Obviously, the marketing people loved calling it “artificial intelligence”. The media, the users and the whole IT industry followed. Even people who knew better stopped bothering. On top of that, anthropomorphisms became commonplace. LLMs could be said to be “thinking”, “lying”, “hallucinating”, to “approve” or “disapprove”, “like” or “dislike”…

Perhaps it wouldn’t be so bad if not for the fact that LLMs are so good at imitating human intelligence. The problem is not really how people call them. The problem is that there is a number of people who start actually believing that their chatbots are conscious. And I can see why that would be happening…
"""

And perhaps the most important piece:

"""
You may have noticed that I didn’t talk of quality per se. I don’t think there’s a point in doing that. I believe that LLMs sometimes spit quality slop, and sometimes they don’t. People who claim that they are “getting better and better” are probably right. Perhaps they will continue getting better, or perhaps they’ll suddenly start collapsing after eating too much of their own shit. That’s beside the point.

The point is, however you look at it, LLMs are unethical. They may be useful, but they are poison — just like asbestos. They are trained in an unethical way, they are sold with immoral goals, and they are used to do a lot of evil. Yes, maybe they can make your life a little easier, a little more comfortable (just like cheap goods manufactured through slave labor). But is it something worth losing our humanity for?

You can just say “no”. Getting left behind can actually be a good thing.
"""

#AI #LLM #NoAI #NoLLM

The pinnacle of enshittification, or Large Language Models

Michał Górny

#PythonPoetry is yet another project that disrespectfully treats human bug reporters with #slop:

https://github.com/python-poetry/poetry/issues/10796#issuecomment-4158910681

#NoAI #NoLLM

`test_list_poetry_managed[False]` test failing over whitespace differences · Issue #10796 · python-poetry/poetry