systemd goes AI agent slopware https://github.com/systemd/systemd/blob/c1d4d5fd9ae56dc07377ef63417f461a0f4a4346/AGENTS.md
has slop documentation now too
EDIT: See later in the thread. The good news, at least, is that auto-merging is not enabled, which is where the security risk comes in. I still have other concerns.
Looks like they're also using Claude for PR review https://github.com/systemd/systemd/commit/9a70fdcb741fc62af82427696c05560f4d70e4de
Which probably means systemd is now the most attractive target in FOSS for an AI prompt injection attack to insert a backdoor
EDIT: It does seem that they don't have auto-merging of PRs from the review bot, which is an improvement over the situation (and mitigates the primary security risk, hopefully it stays that way), and AI contributions are asked to be disclosed. That said, it seems like the issue is closed, and they are firmly in the "we will accept AI contributions, as long as disclosed" camp.
Poettering commented, and the issue is now closed. https://github.com/systemd/systemd/issues/41085#issuecomment-4053443496
Asking an LLM to detect security vulnerabilities is one thing, and that I could consider useful, but the real question is code and documentation generation. For now, the bot isn't auto-merging PRs, which, if I'm reading that right, does alleviate some of my earlier concerns.
But, in AGENTS.md it does mention "docs/CODING_STYLE.md — full style guide (must-read before writing code)". https://github.com/systemd/systemd/blob/main/AGENTS.md
They do also require disclosure of LLM usage in the project. But this implies that LLM-contributed changes are considered welcome, so we will probably see more of them; at least they should hopefully be marked appropriately.
I will admit, I made this thread when pretty frustrated and upset about it. systemd is so key to the security of many people's machines. I don't necessarily see security reviews as a problem in the same way that code generation is. And I was wrong about the PR review vulnerability risk: *for now*, afaict, the review bot is only performing read-only security review and is not taking auto-action on merging, which is the real risk.
So maybe I overreacted? But Poettering's comment reads the way most comments I have read from people drawn into AI-generated code go: "you gotta admit that things are changing, these things are getting really good," followed by opening the door to AI-generated contributions. Which I am very wary of...
@janl Indeed, people have gotten the mistaken impression that the licensing issues have been answered. THEY HAVEN'T YET! The US Supreme Court *declined to take on* a case which had ruled in a lower court that AI generated materials were in the public domain. And yet I am seeing *all over the place* people saying that the US Supreme Court said AI output is in the public domain. They didn't!
And outside the US, nothing is answered either! It's true that the US tends to set international precedent but we are *also* not in times where we can count on that, either.
@cwebber @janl On the legal side, I think folks are counting on the fact that so much money is behind the position that AI sufficiently launders copyright that there's little chance U.S. courts are going to rule otherwise. I don't *like* that position, because I think it's wrong on a number of levels, but if I had to wager a paycheck on the outcome of a court case... that's the position I'd put the money on.
It seems unlikely that SCOTUS, for example, is ever going to rule against the monied class. The only way I see SCOTUS ruling the other way is if it's two money giants going toe-to-toe and the conservatives see some advantage in finding that AI-generated code infringes on copyright. Even then, I'd expect it to be a narrow, hard-to-generalize ruling.
But what do I know? I'm just trying to keep my head above water like most folks.
@jzb @cwebber @janl Pretty much this I suspect.
If the courts ever really rule against GenAI slop, whatever affects any given FLOSS project is really the least impactful outcome.
(Slightly) More likely is an outcome where this leads to some software being considered derivative work, but that's not really something someone writing GPL code needs to worry about.
Patents? Deal with it once someone raises the issue. Nobody ever actively goes looking for them.