I saw a wild take where someone said distributions are fascist for using systemd because systemd now uses Claude for code review.

okay. fine, I guess.

but if we are rejecting dependencies that use AI tooling, where do we go?

seriously. where do we go?

if the Linux kernel is using AI tools for codegen, then where do we go?

FreeBSD? I would put money on it that they use AI tools.

OpenBSD? NetBSD? HURD?

do we hard fork every dependency that is now tainted? do we even have the resources to do it?

FreeBSD and Illumos are the only ones reasonably close in the tech tree, and I suspect both use AI tools too, as their development, like Linux's, is driven by capital.

@ariadne well, as a developer who has been writing Linux kernel code since back in about 2001 or so (actually I think it was something ALSA/Bluetooth related, so probably user space at that point, but … I remember digging deep) - I don’t think it’s feasible to continue OSS without making use of gen AI in development.

It’s like saying we can’t use C, everything has to be ASM.

That doesn’t mean developers don’t need to read or understand the code anymore before committing. But a hard ban? Idk.

@distractions why is it infeasible to continue OSS without using GenAI?

that seems like an absolutely *wild* claim.

@ariadne well, because the world already has been changed. That’s a hard historic fact. Pretending it hasn’t won’t stop the wheel from turning. Anyone can set up a new project on GitHub (or CodeBerg for that matter) and put anything up there, and if it somehow does the trick, people won’t care how it was made. It’s sad, but that’s how things progress.

I believe it’s more worthwhile to harden our processes **around** and with gAI, not against it. Because the train will roll.

@distractions @ariadne My experience with genAI (about a year and a half now involving code) is that it's hardly inevitable. It sounds like it is, because the one thing genAI is good at is creating plausible, highly believable text without regard to facts or reality. It's _really_ good at that. So are salesweasels, BTW, and we know all about trusting _them_. But when it comes to code, it falls flat at the "copy & paste" stage.

@tknarr @ariadne I agree with the copy-and-paste limit; but how much of the code we *have* to write is copy and paste, and can we afford to write it over and over again? I can’t. I am happy with looking over the code, having tests for the code, but reiterating the same basics over and over again… I don’t believe that will be affordable in the future. Or now. That’s why I was talking about setting up our processes and guidelines around gAI, not against it.

@distractions @ariadne If the code can be copy-and-pasted, we won't write it over. We'll just grab it from our library and paste it in. Hells, I have a complete framework for applications at work that I use to start out just so I don't have to write it over and over. Boilerplate code, I use tools to generate it that don't depend on genAI. They're faster _and_ they're deterministic, so I don't have to check the code every time to make sure nothing bad's crept in.
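For what "deterministic" buys you here: template expansion always produces byte-identical output for the same inputs, so the generated code only needs to be reviewed once. A minimal sketch using Python's standard-library `string.Template` (the accessor-generator shape and all field/struct names are made up for illustration, not any particular tool):

```python
from string import Template

# A tiny deterministic boilerplate generator: identical inputs
# always yield identical output, unlike a genAI model.
GETTER_SETTER = Template("""\
${type} get_${name}(const struct ${struct} *s) { return s->${name}; }
void set_${name}(struct ${struct} *s, ${type} v) { s->${name} = v; }
""")

def generate_accessors(struct, fields):
    # fields: list of (name, type) pairs -- purely illustrative.
    return "\n".join(
        GETTER_SETTER.substitute(struct=struct, name=n, type=t)
        for n, t in fields
    )

print(generate_accessors("point", [("x", "int"), ("y", "int")]))
```

Because the generator is a pure function of its inputs, its output can be diffed in CI and trusted after a single review, which is the property being contrasted with genAI output above.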