I saw a wild take where someone said distributions are fascist for using systemd because systemd now uses Claude for code review.

okay. fine, I guess.

but if we are rejecting dependencies that use AI tooling, where do we go?

seriously. where do we go?

if the Linux kernel is using AI tools for codegen, then where do we go?

FreeBSD? I would put money on it that they use AI tools.

OpenBSD? NetBSD? HURD?

do we hard fork every dependency that is now tainted? do we even have the resources to do it?

FreeBSD and Illumos are the only ones reasonably close in the tech tree and I suspect both use AI tools too, as their development, like Linux, is driven by capital.

@ariadne well, as a developer who has been writing linux kernel code since back in about 2001 or so (actually I think it was something alsa/bluetooth related so probably user space at that point, but … I remember digging deep) - I don’t think it’s feasible to continue OSS without making use of gen AI in development.

It’s like saying we can’t use C, everything has to be ASM.

That doesn’t mean developers don’t need to read or understand the code anymore before committing. But a hard ban? Idk.

@distractions why is it infeasible to continue OSS without using GenAI?

that seems like an absolutely *wild* claim.

@ariadne @distractions i feel like the decades we've managed already are worth something.

@ariadne well, because the world already has been changed. That’s a historic hard fact. Pretending it hasn’t won’t stop the wheel from turning. Anyone can set up a new project on GitHub (or CodeBerg for that matter) and put anything up there, and if it somehow does the trick, people won’t care how it does. It’s sad, but that’s how things progress.

I believe it’s more worthwhile to harden our processes **around** and with gAI, not against it. Because the train will roll.

@distractions
this is the same carry-on claiming it's inevitable. this is functionally the same as saying bitcoin and its various scams are "hard historic fact[s]", as if to ignore the reality that one can reject it outright because of its negative externalities.

@ariadne

@zardoz03 @ariadne time will tell. But it’s pretty obvious that even in case the AI bubble bursts and takes all of those crazy people with it - which I personally would prefer - local models will still be able to code faster than… well, me for example.
@distractions @zardoz03 @ariadne are the local models in the room with us right now? No? Didn't think so. Absolutely absurd hill to die on.
@distractions @ariadne My experience with genAI (about a year and a half now involving code) is that it's hardly inevitable. It sounds like it is, because the one thing genAI is good at is creating plausible, highly believable text without regard to facts or reality. It's _really_ good at that. So are salesweasels, BTW, and we know all about trusting _them_. But when it comes to code, it falls flat at the "copy & paste" stage.
@distractions @ariadne Watching it work, it follows a pattern I use myself when I'm starting on an unfamiliar codebase using unfamiliar tech: find code that does something similar to what I want, copy it and start tweaking it until it does what I need. You can do a lot with that, but you're limited by what you can find examples of. And inevitably the most pressing problems are ones you _can't_ find examples of, if you could they'd be common enough they wouldn't be pressing now would they?

@distractions @ariadne I'm watching this happen at work, and I'm seeing more and more cases where the resulting code just isn't very good, or contains subtle killer bugs that the authors don't understand. I'm also seeing devs get stuck when genAI fails: they just stop instead of doing the work on their own.

This is _not_ the kind of environment we need in FOSS, nor one that's going to be successful long-term.

@distractions @ariadne Honestly, this is about the fifth attempt in my lifetime at "don't need developers to create applications", and I doubt it's going to end any better than the previous iterations.
@tknarr @ariadne I hate to say it; that’s a nice strawman. At no point did I claim we “don’t have to write code anymore”. It changes how we write code. My claim is that gAI is a tool that helps us to solve one part of coding work, and if used correctly, we can make that beneficial. I can’t prove that, obviously; but what I am saying is: condemning a tool because it can’t solve everything, or can be misused, doesn’t make it a bad tool. Every tool needs correct usage.
@tknarr @ariadne I agree with the copy-and-paste limit; but how much of the code we *have* to write is copy and paste, and can we afford to write it over and over again? I can’t. I am happy with looking over the code, having tests for the code, but reiterating the same basics over and over again… I don’t believe that will be affordable in the future. Or now. That’s why I was talking about setting up our processes and guidelines around gAI, not against it.
@distractions @ariadne If the code can be copy-and-pasted, we won't write it over. We'll just grab it from our library and paste it in. Hells, I have a complete framework for applications at work that I use to start out just so I don't have to write it over and over. Boilerplate code, I use tools to generate it that don't depend on genAI. They're faster _and_ they're deterministic so I don't have to check the code every time to make sure nothing bad's crept in.
@distractions @ariadne Also, note that in my use the copy-and-paste isn't being used as-is, merely as the starting point to get around the fact that I'm unfamiliar with the tech. Note the problems inherent in modifying code that way: bugs you don't realize exist because you don't really understand the code. That's the killer with the code genAI generates: the amount of time and expertise needed to verify it really does what's needed. That negates any time savings from using it.