systemd goes AI agent slopware https://github.com/systemd/systemd/blob/c1d4d5fd9ae56dc07377ef63417f461a0f4a4346/AGENTS.md

has slop documentation now too

EDIT: See later in thread; the good news, at least, is that auto-merging isn't enabled, which is where the security risk comes in. I still have other concerns.

Looks like they're also using Claude for PR review https://github.com/systemd/systemd/commit/9a70fdcb741fc62af82427696c05560f4d70e4de

Which probably means systemd is now the most attractive target in FOSS for an AI prompt injection attack to insert a backdoor

EDIT: It does seem that they don't have auto-merging of PRs from the review bot, which is better than I feared (and mitigates the primary security risk; hopefully it stays that way), and AI contributions are asked to be disclosed. That said, the issue is closed, and they are firmly in the "we will accept AI contributions, as long as disclosed" camp.

ci: Add one more mcp tool to claude-review workflow · systemd/systemd@9a70fdc

Poettering commented, and the issue is now closed. https://github.com/systemd/systemd/issues/41085#issuecomment-4053443496

Asking an LLM to detect security vulnerabilities is one thing, though; that I could consider useful. The real question is code and documentation generation. It does seem that, for now, the bot isn't auto-merging PRs, which does alleviate some previous concerns of mine, if I'm reading that right.

But, in AGENTS.md it does mention "docs/CODING_STYLE.md — full style guide (must-read before writing code)". https://github.com/systemd/systemd/blob/main/AGENTS.md

They do require disclosure of LLM usage in the project, too. But this implies that LLM-contributed changes are considered welcome, so we will probably see more of them; at least they should hopefully be marked appropriately.

I will admit, I made this thread when pretty frustrated and upset about it. systemd is so key to the security of many people's machines. I don't necessarily see security reviews as a problem the same way codegen etc. are. And I was wrong about the PR review vulnerability risk in that, *for now*, afaict the review bot is just performing read-only security review and is not auto-merging anything, which is the real risk.

So maybe I overreacted? But Poettering's comment reads the way most comments go from people who have been drawn into AI-generated code: "you gotta admit that things are changing, these things are getting really good", followed by opening the door to AI-generated contributions. Which I am very wary of...

@cwebber This. I do think that writing code oneself and running it through checkers (any, and the more the better, roughly, as long as they don't replace humans) is a good thing. But these checkers should run sandboxed, just flag issues -- as any linter. And if that stuff is LLM-powered, so be it. But agentic coding? LLM-driven suggestions/refactoring? I'm soooo weary of this.
@ljrk @cwebber If you are willing to burn money on output tokens, then let Claude be your slave, but under UK CDPA, document all system prompts and be very specific in them. Also, don't fall into the trap of using Co-Authored-By. Claude doesn't deserve it. https://bence.ferdinandy.com/2025/12/29/dont-abuse-co-authored-by-for-marking-ai-assistance/
Don't abuse Co-authored-by for marking AI assistance

Instead, use a dedicated `AI-assistant:` trailer with a more useful format.

Bence Ferdinandy
@bms48 @cwebber Eh, that effort isn't worth it for me, but I'm not coding much anyway, more of a code auditing person^^'