systemd goes AI agent slopware https://github.com/systemd/systemd/blob/c1d4d5fd9ae56dc07377ef63417f461a0f4a4346/AGENTS.md
has slop documentation now too
EDIT: See later in the thread. The good news, at least, is that auto-merging isn't enabled, which is where the security risk comes in. I still have other concerns.
Looks like they're also using Claude for PR review https://github.com/systemd/systemd/commit/9a70fdcb741fc62af82427696c05560f4d70e4de
Which probably means systemd is now the most attractive target in FOSS for an AI prompt injection attack to insert a backdoor
EDIT: It does seem that PRs from the review bot are not auto-merged, which is an improvement (and mitigates the primary security risk; hopefully it stays that way), and AI contributions are required to be disclosed. That said, the issue is closed, and they are firmly in the "we will accept AI contributions, as long as they're disclosed" camp.
Poettering commented, the issue is now closed. https://github.com/systemd/systemd/issues/41085#issuecomment-4053443496
Asking an LLM to flag security vulnerabilities is one thing; that I could see being useful. The real question is code and documentation generation. For now the bot isn't auto-merging PRs, which, if I'm reading that right, alleviates some of my earlier concerns.
But AGENTS.md does list "docs/CODING_STYLE.md — full style guide (must-read before writing code)". https://github.com/systemd/systemd/blob/main/AGENTS.md
They do also require disclosure of LLM usage in the project. But this implies that LLM-contributed changes are considered welcome, so we will probably see more of them; at least they should hopefully be marked appropriately.
I will admit, I made this thread while pretty frustrated and upset about it. systemd is so key to the security of many people's machines. I don't necessarily see security reviews as a problem the same way codegen is. And I was wrong about the PR-review vulnerability risk: for now, as far as I can tell, the review bot is just performing read-only security review and isn't taking automatic action on merging, which is the real risk.
So maybe I overreacted? But Poettering's comment reads the way most comments I've seen from people won over to AI-generated code go: "you gotta admit that things are changing, these things are getting really good," followed by opening the door to AI-generated contributions. Which I am very wary of...