systemd goes AI agent slopware https://github.com/systemd/systemd/blob/c1d4d5fd9ae56dc07377ef63417f461a0f4a4346/AGENTS.md

has slop documentation now too

EDIT: See later in thread, it seems like the good news is at least that it's not having auto-merging on, which is where the security risk comes in. I still have other concerns.

Looks like they're also using Claude for PR review https://github.com/systemd/systemd/commit/9a70fdcb741fc62af82427696c05560f4d70e4de

Which probably means systemd is now the most attractive target in FOSS for an AI prompt injection attack to insert a backdoor

EDIT: It does seem that they don't have auto-merging of PRs from the review bot, which is an improvement over the situation (and mitigates the primary security risk, hopefully it stays that way), and AI contributions are asked to be disclosed. That said, it seems like the issue is closed, and they are firmly in the "we will accept AI contributions, as long as disclosed" camp.

ci: Add one more mcp tool to claude-review workflow · systemd/systemd@9a70fdc

The systemd System and Service Manager . Contribute to systemd/systemd development by creating an account on GitHub.

GitHub

Poettering commented, the issue is now closed. https://github.com/systemd/systemd/issues/41085#issuecomment-4053443496

Asking for detection of security vulnerabilities from an LLM is one thing though, that one I could consider useful, but the real question is code and documentation generation. It does seem that for now, the bot usage isn't auto-merging PRs, which does alleviate some previous concerns of mine if reading that right.

But, in AGENTS.md it does mention "docs/CODING_STYLE.md — full style guide (must-read before writing code)". https://github.com/systemd/systemd/blob/main/AGENTS.md

They do require disclosure in the project also of LLM usage. But this does imply that LLM contributed changes are considered welcome, so we will probably see more of them, but I suppose at least they should hopefully be marked appropriately.

I will admit, I made this thread when pretty frustrated and upset about it. SystemD is so key to the security of many peoples' machines. I don't necessarily see having security reviews be a problem the same way that codegen and etc are. And I was wrong about the PR review vulnerability risk in that *for now* afaict the review bot is just performing read-only security review, is not taking auto-action on merging, which is the real risk.

So maybe I overreacted? But Poettering's comment reads the way that most comments I have read that have been drawn into AIgen code have gone, which is "you gotta admit that things are changing, these things are getting really good" and then opening the door to aigen contributions. Which I am very wary of...

@cwebber This. I do think that writing code oneself and running it through checkers (any, and the more the better, roughly, as long as they don't replace humans) is a good thing. But these checkers should run sandboxed, just flag issues -- as any linter. And if that stuff is LLM-powered, so be it. But agentic coding? LLM-driven suggestions/refactoring? I'm soooo weary of this.
@ljrk @cwebber If you are willing to burn money on output tokens, then let Claude be your slave, but under UK CDPA, document all system prompts and be very specific in them. Also, don't fall into the trap of using Co-Authored-By. Claude doesn't deserve it. https://bence.ferdinandy.com/2025/12/29/dont-abuse-co-authored-by-for-marking-ai-assistance/
Don't abuse Co-authored-by for marking AI assistance

Instead, use a dedicated `AI-assistant:` trailer with a more useful format.

Bence Ferdinandy
@bms48 @cwebber Eh, that effort isn't worth it for me, but I'm not coding much anyway, more of a code auditing person^^'
@cwebber Poettering's opinion is quite common AFAIK
But that doesn't mean it's good to let it be that way
We must be the change we want to see if we want improvements
@cwebber Unasked security reviews (audits) out of the blue can saturate the few human reviewers that a FLOSS project may have (see Open Source Security in spite of AI FOSDEM talk, by Daniel Stenberg (curl maintainer), around time 10:42).

AI assistance for PR reviews, though,
might have a point (as one more tool to use, like a linter, an automated coding style checker, CI unit testing or static analyzers) if one closes their eyes to avoid seeing the environmental impact and the source code stealing during the training stage.
FOSDEM 2026 - Open Source Security in spite of AI

@cwebber I keep being baffled by these folks just ignoring the code provenance and licensing issues.

@janl Indeed, people have gotten the mistaken impression that the licensing issues have been answered. THEY HAVEN'T YET! The US Supreme Court *declined to take on* a case which had ruled in a lower court that AI generated materials were in the public domain. And yet I am seeing *all over the place* people saying that the US Supreme Court said AI output is in the public domain. They didn't!

And outside the US, nothing is answered either! It's true that the US tends to set international precedent but we are *also* not in times where we can count on that, either.

@cwebber @janl On the legal side, I think folks are counting on the fact that so much money is behind the position that AI sufficiently launders copyright that there's little chance courts in the U.S. are going to rule otherwise. I don't *like* that position, because I think it's wrong on a number of levels -- but if I had to wager a paycheck on the outcome of a court case... that's the position I'd put the money on.

It seems unlikely that SCOTUS, for example, is ever going to rule against the monied class. The only way I see SCOTUS ruling the other way is if it's two money giants going toe-to-toe and the conservatives see some advantage in finding that AI-generated code infringes on copyright. Even then, I'd expect it to be a narrow, hard-to-generalize ruling.

But what do I know? I'm just trying to keep my head above water like most folks.

@jzb @cwebber @janl Pretty much this I suspect.

If the courts ever really rule against GenAI slop, whatever affects any given FLOSS project is really the least impactful outcome.

(Slightly) More likely is an outcome where this leads to some software being considered derivative work, but that's not really something someone writing GPL code needs to worry about.

Patents? Deal with it once someone raises the issue. Nobody ever actively goes looking for them.

@larsmb @jzb @cwebber Windows and Adobe source codes have been leaked. If substantial parts if that end up in a GPL project, they still have to worry.

@janl @jzb @cwebber Part of why enterprises pay for GH Copilot is the indemnification clause for such copyright violations.

(This should be particularly effective against the Windows sources ...)

@jzb @cwebber @janl I am tracking https://githubcopilotlitigation.com/ for these reasons and would sign up to similar class action in .uk readily. My doctoral thesis is CC-BY-SA and I would look very poorly upon license violations by the GenAI jokers.
GitHub Copilot litigation · Joseph Saveri Law Firm & Matthew Butterick

GitHub Copilot litigation

@cwebber "When someone shows you who they are, believe them the first time" - Maya Angelou
@cwebber actually, systemd has been a major conversation I've walked into time and time again, and now I'm about to be using it for booting.. so after I got triggered by the xz compromise, I'm ready to prefer changes a human can be held responsible for.
@cwebber the AI contributions will happen regardless. It's trivial to have e.g. opus 4.6 spit out prs that we would not be able to classify as being written by AI. In fact, by adding an AGENTS.md that instructs AIs to add disclosure, we probably make AI written prs more obvious. Anyway, if we know people are going to use AI to contribute in ways we cannot reliably detect, we may as well add instructions to make those prs as good as possible.
@daandemeyer @cwebber Um, it's called trust and human relationships. If you don't trust someone not to be lying about the provenance of code they send you, you shouldn't be entertaining accepting code from them in the first place.
@cwebber I would take anything Lennart Poettering says with a massive pinch of salt, given how often I run up against his broken monothic Windows Services imitation these days just trying to build a protocol lab.
@cwebber

I'm actually more apprehensive of llm-based security reviews than of code generation, because there are more problems with the former that I can't see a feasible way of solving in the current world.

Unreliable security reviews create the same kinds of issues that level 2 autonomous driving does. Even if we assume that this is not an issue during code reviews due to general poor quality of attempts to look at security during code reviews now, we have more problems caused by statelessness of the bot: adversaries can build a similar bot and search for vulnerabilities it won't complain about, and the cases where subsequent PRs worsen the situation incrementally (not necessarily due to malice: perhaps due to people doing the minimum amount of changes to make the bot "happy") are hard to deal with.

For code generation I can see areas (well specified problems or problems where we have a reliable way of scoring solutions against each other, and when the code will run sandboxed or in a way morally equivalent to sandboxing) where, with sufficient effort, one could use LLM generators safely (albeit the effort I foresee would probably negate many of the reasons people want to use them for).