If you think #vibecoding is fine, let me ask you a single question: would you use a medical device whose software was vibecoded? And by "medical device" I mean something where a bug could literally kill you.

If you answered "oh, gawd, no!" then consider that any time you use an #LLM to contribute to or develop an #OpenSource project, there's a chance that code will end up powering such a device. And even if it doesn't, you're setting a trend, making it ever more likely that the software used by these devices will be vibecoded.

I have type 1 #diabetes. I also lead a physically active life. This is both a blessing and a curse. My doctors keep suggesting Continuous Glucose Monitoring (CGM) systems and insulin pumps to me. And I do realize that such hardware would likely improve my blood glucose, and definitely make my life much easier (especially with a closed-loop system).

So why do my fingertips look like crap, and why do I keep using a glucometer and insulin pens? Because I don't want to entrust my life to an unnecessarily complex technology.

Admittedly, I occasionally get things wrong and suffer the consequences. Or I suspect I got them wrong and worry. Or I run into an unexpected situation and need to figure a way out. Or I even accept elevated glucose levels (as in nearing 200 mg/dl) because there's just no way to fit insulin doses safely on a particular day.

But still, I prefer having control and risking my own mistakes to a device that could suddenly start pumping insulin because of a bug. And that was even before the story of the application that stripped the decimal point and gave people ten times the dose. Or the one about CGMs giving wrong high-glucose alerts. Or the whole vibecoding fad.
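To show how little it takes, here is a hypothetical sketch of a decimal-stripping dose bug (my own illustration, not the actual app's code):

```python
def parse_dose(text: str) -> int:
    # Hypothetical bug: naively dropping every non-digit character
    # before converting, so a dose of "2.5" units becomes 25 units,
    # i.e. ten times what was prescribed.
    digits = "".join(ch for ch in text if ch.isdigit())
    return int(digits)

print(parse_dose("2.5"))  # prints 25, not 2.5
```

One careless line of input handling, and the patient gets a tenfold overdose.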

Back then, I could have considered such a device. Now, I'm more worried than ever. And honestly, I'm hoping that relatively simple glucometers will remain available. To think that my worst fear used to be of a mechanical fault…

#AI #NoAI #NoLLM

Reading a pro-LLM article yesterday (published in March 2026), I learned that according to Google, the average productivity gain from an LLM is 10%.
Ten percent.

They themselves admit that some tasks take longer to carry out with an LLM than without one.

For once, I'm inclined to believe a company that builds an LLM when it announces a percentage far, far below the ballpark figures served up by the freelancers and the pro-AI crowd you find at startups or on LinkedIn. The latter claim productivity gains of 2000%.

Frankly, I'm fine with keeping on doing my job "the old-fashioned way" (that is, ethically) and being paid 10% less.

Source: https://www.nytimes.com/2026/03/12/magazine/ai-coding-programming-jobs-claude-chatgpt.html

#NoAI #NoLLM #ResistanceNet

Coding After Coders: The End of Computer Programming as We Know It

In the era of A.I. agents, many Silicon Valley programmers are now barely programming. Instead, what they’re doing is deeply, deeply weird.

The New York Times

Some people may think of LLMs as the great equalizer. People who aren't programmers can vibecode working programs now. People who aren't artists can slop out something resembling art. However, it's the exact opposite.

When I was a kid, I also pretended to write programs. Of course, I didn't have such sophisticated toys ("kids could play with a stick for hours", as the hyperbole went). But back then, I was fully aware that it was just make-believe, and it didn't harm anybody.

#Vibecoding creates a horrible chasm of inequality. We have people who believe they're good programmers (some even treating vibecoding as an enlightened religion) and who shit out tons of code at real human reviewers, who now need to sift through it. And then we have projects embracing vibecoding and shitting out new releases at an unprecedented rate. And these releases again need to be reviewed by humans downstream.

#AI #LLM #NoAI #NoLLM

<+mgorny> that's gunicorn
<+mgorny> looks like vibecoding hard
<@sam_> sigh
<@sam_> https://github.com/benoitc/gunicorn/pull/3559
<@sam_> i agree it looks like it
<+mgorny> how else would a dead-so-far project suddenly make a dozen commits in a day?
<@sam_> I really wish they'd leave projects "dead"
<@sam_> it's far more honest

#Python #NoAI #NoLLM #AI #LLM #OpenSource

fix: prevent HTTP/2 ASGI body duplication in receive() by benleembruggen · Pull Request #3559 · benoitc/gunicorn

Summary Fixes #3558 - HTTP/2 ASGI request body duplication. receive_data() stores every DATA frame in both _body_chunks (list) and request_body (BytesIO). The receive() closure in _handle_http2_req...

GitHub

My first instaban for #slop PR to #Gentoo.

Normally, we warn people first, but here it's clearly an untested (and obviously broken) slop contribution by a non-Gentoo user trying to push their software all over the place.

https://github.com/gentoo/guru/pull/450

#NoAI #NoLLM

app-misc/tangi: add Tangi local AI assistant with RAG by mreinrt · Pull Request #450 · gentoo/guru

Add Tangi - hardware-aware local AI assistant with RAG for codebases This PR adds: app-misc/tangi: Tangi v1.0.0 dev-python/llama-cpp-python: 0.3.16 (with OpenBLAS support) Tangi is a local AI ass...

GitHub

Well, so much for #Astral. The post says "productive" four times, which is saying a lot.

https://astral.sh/blog/openai

#NoAI #NoLLM

Astral to join OpenAI

Astral has entered into an agreement to join OpenAI as part of the Codex team.

Let's normalize calling anything produced with an #LLM #slop.

It doesn't matter that you've only used an LLM to fix punctuation. It's slop.

It doesn't matter that you've spent an hour reviewing the slop to make sure it's good. It's still slop.

It doesn't matter that it's better than anything you wrote your entire life. It's slop.

If you didn't write it yourself, it's just glorified LLM slop.

#AI #NoAI #NoLLM

Modern use of LLMs often involves giving them access to the local system: to read and write your project files, and to execute arbitrary commands, often unsupervised. So aren't people worried about a harness just doing what a remote #LLM tells it to do?

I think a statement I've heard lately summarizes the mindset well. It went something along the lines of: "I can't give you a 100% guarantee, but I've noticed that LLMs are very good at following instructions, and they're getting better and better, so I don't worry about that anymore".

Like, it is completely fine to introduce a humongous security hole, because the probability that a model will *accidentally* do something horrible is decreasing.
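For concreteness, the unsupervised setup described above boils down to something like this hypothetical sketch (the function name is mine, not any real tool's):

```python
import subprocess

def run_model_suggestion(command_from_llm: str) -> str:
    # Hypothetical agent-harness step: no allowlist, no sandbox,
    # no human confirmation. Whatever string the remote model
    # produced is executed verbatim on the local system.
    result = subprocess.run(command_from_llm, shell=True,
                            capture_output=True, text=True)
    return result.stdout
```

Nothing here stops the model, or a prompt injection hidden in a file it just read, from returning a destructive command instead of a harmless one.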

#AI #NoAI #NoLLM #security

I truly believe that LLMs are the worst thing that has happened in IT in recent years (or rather, the culmination of the worst thing that's been poisoning the IT world), and I wholeheartedly support all the subversive actions against them, ranging from poisoning the training data to abusing support chatbots to make them unprofitable. However, at the same time I realize that all these actions increase the environmental harm caused by the #LLM folk.

It's like true guerrilla warfare. We're metaphorically burning down buildings, and I hate that it had to come to that.

#AI #NoAI #NoLLM

Let me tell you a parable.

There was a student who was given the assignment of writing an essay. The student found 10 similar essays online. He copied selected bits of the different essays. He tediously reworded the result, removed some sentences, added some adjectives and adverbs, shifted some more sentences around, added some glue, all with the single-minded goal of covering his tracks. Eventually, a voluminous essay was complete.

The student put a lot of effort into this; possibly even more than if he had written it himself. He did learn a bit about essays, though he didn't really practice writing one. He did practice some skills that would be useful in a future bullshit job, though. The essay passes all #plagiarism checks, even though it immediately raises red flags for any human reading it: the sudden style changes, contradictory statements, sentences that don't make much sense in their context. And if he were asked to defend it, he might be in trouble.

So, the student put in effort (though not the right kind of effort), produced a mediocre essay, and learned something (though bullshit skills rather than creative skills). Now let's consider a different situation: rather than doing all that himself, the student paid somebody else to do it; and not to *write* an original essay, but to do all the shenanigans described above.

That's precisely what using LLMs is. You tell them to write an essay, so they find and mix random stuff, and produce a mediocre essay. You put in no effort, you learn nothing, perhaps you don't even read "your" essay. And it passes all the plagiarism checks.

#AI #LLM #NoAI #NoLLM #chardet