I saw a wild take where someone said distributions are fascist for using systemd because systemd now uses Claude for code review.

okay. fine, I guess.

but if we are rejecting dependencies that use AI tooling, where do we go?

seriously. where do we go?

if the Linux kernel is using AI tools for codegen, then where do we go?

FreeBSD? I would put money on it that they use AI tools.

OpenBSD? NetBSD? HURD?

do we hard fork every dependency that is now tainted? do we even have the resources to do it?

FreeBSD and Illumos are the only ones reasonably close in the tech tree and I suspect both use AI tools too, as their development, like Linux, is driven by capital.

@ariadne This one is all wack when, like, 3~6 months ago there was a pro-systemd jerk being like "the anti-systemd crowd are all fascists!"

Also yeah in terms of alternatives it's not great, so far I'm stuck with reducing as much as possible and planning to have more stuff like Plan9.
(Also pretty sure Hurd got LLM-tainted)

@lanodan @ariadne re Hurd: I only saw one person doing some LLM review (not of submitted patches but they took it upon themselves to submit its findings), I don't consider that tainted and I don't think it's some sort of official effort or anything, even if I don't like it.

systemd embracing it with a CLAUDE.md, using it on all PRs, commits Co-authored-By it, etc. is different.

@thesamesam @lanodan @ariadne

Hurd using LLMs for reviews: perfectly ok
systemd using LLMs for reviews: TAINTED

Did I get this right?

@bluca @lanodan @ariadne Someone deciding to send ML output a handful of times is different from it being an established part of the project, sure.

(I also didn't say "perfectly ok", it's just that it's clearly different, even if one does or doesn't like it?)

@thesamesam @lanodan @ariadne gotcha, rules for thee but not for me

@bluca @lanodan @ariadne If a contributor had copilot review their PR for systemd but systemd didn't have it as part of CI or as some regular part of contribution, I'd say the same thing.

But I'm not even making rules! I'm pointing out a distinction?

@thesamesam @bluca @lanodan personally, i don't even think i *care* about LLM-based reviews.

what i care about is LLM-based code generation because every time i've interacted with people using those tools to produce changesets, it's been fucking miserable

@ariadne @bluca @lanodan I've sort of come to this position as well, especially sympathising w/ what Lennart says about Bad Guys already using LLMs to find vulnerabilities, so may as well try to leverage them to do some good.

Don't love it still but I definitely feel warmer to it than the rest.

@thesamesam @bluca @lanodan i guess to me, it feels unnatural and jarring to argue with a chatbot in a code review.

but that is far less harmful than dealing with changesets where the author does not even fucking know what he is submitting and cannot defend his work.

*that* is true misery as a maintainer.

@ariadne @thesamesam @bluca @lanodan The end-user should always be responsible for what they deliver, no matter the tools. Then any excuses like "AI wrote it" would not have any rights to defend the user.

@aronowski @thesamesam @bluca @lanodan yes, that is basically the pkgconf contribution policy in a nutshell.

we have taken some steps to tell agentic tools to fuck off though, because i do not want to deal with it