I saw a wild take where someone said distributions are fascist for using systemd because systemd now uses Claude for code review.

okay. fine, I guess.

but if we are rejecting dependencies that use AI tooling, where do we go?

seriously. where do we go?

if the Linux kernel is using AI tools for codegen, then where do we go?

FreeBSD? I would put money on it that they use AI tools.

OpenBSD? NetBSD? HURD?

do we hard fork every dependency that is now tainted? do we even have the resources to do it?

FreeBSD and Illumos are the only ones reasonably close in the tech tree and I suspect both use AI tools too, as their development, like Linux, is driven by capital.

i guess my point here is that reactionary behavior does not really benefit anyone and just leads to bad decisions

@ariadne it's protestantism, but with the god swapped from the ethereal one to "reason". if you are bad you are tainted permanently and must be stoned; even if they stopped using AI tools it would not be enough, because they are "tainted".

this pattern repeats over and over from people who unlearned one piece but didn't deprogram the religious dogmatic patterns, and you end up here.

is the Linux Foundation funding the destruction of jobs, removing human contributions, destroying the world with debt, any of that? of course not! but it's still dogma.

I don't have a good answer to this, just to remind people what the actual goals and actions of orgs are and hope they listen.

@ariadne I don't want to see the world eaten by AI but people use the tool and it drives results for them. There's nowhere much else to go.
It's like Stallman arguing for owning every piece of your machine - eventually, you have some closed source firmware blob. Purity vs reality.

@ariadne also, you should be more concerned about whether you are actually doing fascism (i.e. snitching on your neighbors, working for the actual fascist goon army) versus vague ideological debates that the people doing Real Fascism will never even give a second thought to.

if systemd is actually fascist. You Will Know.

@omnirabbit @ariadne (let's say that the age thing doesn't shine a positive light on systemd either)

@oblomov @omnirabbit what "age thing"

it's a fucking optional field in a user database for birthdate

they aren't enforcing anything or anything like that.

it is a field in a schema.

vcard also has a field for birthdate. is it also fascist?
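
for reference, that vCard field is the standard optional `BDAY` property from vCard 4.0 (RFC 6350). a minimal record carrying it looks like this (name and date invented for illustration):

```
BEGIN:VCARD
VERSION:4.0
FN:Jane Example
BDAY:19840112
END:VCARD
```

same deal as the userdb schema: an optional field you can simply leave out.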

@ariadne

why was the field added?

(vCard has a lot of fields for PII. Heck, it's basically just PII. That doesn't mean systemd should have that same information. They are different tools for different purposes.)

@omnirabbit

@omnirabbit @ariadne I'm a pragmatist, but do appreciate it there being an effort to denounce and fight these kinds of involutions, even when it is taken to extremes I disagree with. It's the most practical example of the Overton window. Without the shift to the opposite extreme, the situation would devolve much faster.
@ariadne redox-os has a no-LLM policy. But I am not sure how close it is to being production ready
@voided @ariadne it's still alpha-grade, "do not trust with important data" level :\
@ariadne freebsd’s gpu driver code is also either imported from the linux kernel (for the open source drivers) or written by an ai company (nvidia binary driver)
@ariadne I can almost certainly say that Illumos doesn't, but I take your point

@ariadne Bryan Cantrill, considered important by illumos devs, is an AI booster I think


@ariadne Microsoft are heavily using AI internally too. Not to mention they are one of the largest financial backers of AI.

I don't see any mass boycott of Windows.

@ariadne Not to mention browsers...
@ariadne there's no "if", the kernel does use LLMs extensively, right now: https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/ precisely and exactly the same policy as systemd has. Literally the same. And yet these lunatic takes never demand Linux distros drop Linux. I wonder why ¯\_(ツ)_/¯
AI bug reports went from junk to legit overnight, says Linux kernel czar

Interview: Greg Kroah-Hartman can't explain the inflection point, but it's not slowing down or going away

The Register
Thom, exceedingly pure (@[email protected])

Both the Linux kernel and systemd now contain slop or are open to adding slop. I have nowhere to go.

@ariadne I'm (somewhat unwillingly) slowly drifting to "ok, use AI for review, then"… people generally understand and accept you can't have AI both write and review the same code, and between these two choices one is a massive dick move on FOSS maintainers while the other is vaguely stomachable if you don't think about it too much 🫤
@ariadne nah, not fine, actually. It's a complete warping of reality that removes all meaning from the word "fascist" and turns it into nothing but a generic insult - probably not intentionally, but definitely as a means to personally get attention.
@ariadne we'll see what happens when the bubble bursts and prices start reconciling with reality
@ariadne This one is all wack when, like, what, 3~6 months ago there was a pro-systemd jerk being like "anti-systemd people are all fascists!"

Also yeah in terms of alternatives it's not great, so far I'm stuck with reducing as much as possible and planning to have more stuff like Plan9.
(Also pretty sure Hurd got LLM-tainted)
@lanodan @ariadne oh, is that why hurd just suddenly pushed out amd64 support recently, only a cool 25 years late?
@astraleureka @ariadne IIRC it's SMP rather than 64-bit but same sort of "Huh? Everyone got that stuff, come on"
@lanodan @ariadne if it's smp that's actually even sadder than just now getting amd64 support. its a microkernel. supporting multiple cpus is a pretty major win, lol
@astraleureka @ariadne Yeah, checked and it's SMP

Which yeah seems quite ridiculous for a microkernel to only get it now but well Hurd is a zombie project that aged decades.
@astraleureka @lanodan @ariadne No, it was a lot of work by a handful of people over many years. It has nothing to do with LLMs.

@thesamesam @astraleureka @lanodan @ariadne yeah sure, if you exclude some tiny details like, er, SMP support https://lists.gnu.org/archive/html/bug-hurd/2026-02/msg00133.html

Enjoy your single-core UNTAINTED systems forever, I guess?

Re: [PATCH 0/4 gnumach] Working SMP 64b

@bluca @astraleureka @lanodan @ariadne I don't think their work was used at all. But I'm not arguing everyone should switch to Hurd, I'm just saying I don't think it's tainted, and I think some random person (same person each time) sending LLM content a handful of times to an ML isn't the same thing?

@lanodan @ariadne re Hurd: I only saw one person doing some LLM review (not of submitted patches but they took it upon themselves to submit its findings), I don't consider that tainted and I don't think it's some sort of official effort or anything, even if I don't like it.

systemd embracing it with a CLAUDE.md, using it in all PRs, commits co-authored-by it etc is different.

@thesamesam @lanodan @ariadne

Hurd using LLMs for reviews: perfectly ok
systemd using LLMs for reviews: TAINTED

Did I get this right?

@bluca @lanodan @ariadne Someone deciding to send LLM output to a mailing list a handful of times is different from it being an established part of the project, sure.

(I also didn't say "perfectly ok", it's just that it's clearly different, even if one does or doesn't like it?)

@thesamesam @lanodan @ariadne gotcha, rules for thee but not for me

@bluca @lanodan @ariadne If a contributor had copilot review their PR for systemd but systemd didn't have it as part of CI or as some regular part of contribution, I'd say the same thing.

But I'm not even making rules! I'm pointing out a distinction?

@thesamesam @bluca @lanodan personally, i don't even think i *care* about LLM-based reviews.

what i care about is LLM-based code generation because every time i've interacted with people using those tools to produce changesets, it's been fucking miserable

@ariadne @bluca @lanodan I've sort of come to this position as well, especially sympathising w/ what Lennart says about Bad Guys already using LLMs to find vulnerabilities, so may as well try to leverage them to do some good.

Don't love it still but I definitely feel warmer to it than the rest.

@thesamesam @bluca @lanodan i guess to me, it feels unnatural and jarring to argue with a chatbot in a code review.

but that is far less harmful than dealing with changesets where the author does not even fucking know what he is submitting and cannot defend his work.

*that* is true misery as a maintainer.

@thesamesam @bluca @lanodan basically the problem is AI as force multiplier for charlatanism.

claude making it miserable for charlatans to get their PRs merged actually seems like a positive use of the technology...

@ariadne @thesamesam @lanodan of course and stuff like that gets shot into the sun with a rocket without mercy.

But you don't argue with chatbots in reviews - these days claudebot is about 90% signal to 10% noise. The noise you just dismiss; there's no arguing involved. But that 90% of signal has got really good in the past ~3 months, and there's no point denying it. This stuff was mostly crap until the end of last year, but things change, and there's nothing wrong with changing views

@bluca @thesamesam @lanodan oh yes, we have been experimenting with it at work for reviews.

it has indeed gotten pretty good.

but i hesitate becoming dependent on it as a FOSS maintainer because while the first hit is free, when the economic reality catches up... it will probably be quite expensive.

@ariadne @thesamesam @lanodan yeah that's obviously the end goal of all this wild and absurd speculation, but capitalism gotta capitalism. At some point the bubble will pop and then we'll see what's left standing
@ariadne @thesamesam @bluca @lanodan The end user should always be responsible for what they deliver, no matter the tools. Then excuses like "AI wrote it" would carry no weight in the user's defense.

@aronowski @thesamesam @bluca @lanodan yes, that is basically the pkgconf contribution policy in a nutshell.

we have taken some steps to tell agentic tools to fuck off though, because i do not want to deal with it

@thesamesam @ariadne @bluca Kind of still feels bad given how overblown a lot of security vulnerabilities are (I guess ICANN and registries will get more money from website-logo vulns), plus imagine getting a big wave of low-impact security vulnerabilities.

But well that's roughly the same issues as with fuzzers, except it's combined with codegen this time.
@lanodan @bluca @ariadne Yes, exactly, it really is fuzzers all over again, just the problem is you now have this script-kiddy enabling tech on top.
@thesamesam @lanodan @bluca yes, but script kiddies also figured out how to use the fuzzers and submit slop to us with "can you tell me about your bug bounty program?"
@ariadne @thesamesam @bluca I think it's the kind of thing where I could end up replying "Here's my hourly rate for support requests"
@lanodan @ariadne @thesamesam our security bug bounty in systemd was 99.99% garbage until the end of last year. Since then these tools have got way better, and I'd say there's now ~10% valid security bugs, ~70% valid bugs that are not security relevant, and ~20% garbage. I'll happily take the 10% of real, valid issues found for the price of having to shoot down the ~20% of garbage. The key is to have no mercy - there's no arguing or bargaining involved, a crap report gets binned, end of, no discussions
@lanodan @ariadne @thesamesam the 70% of valid-bugs-but-not-vulnerabilities is kinda 50-50 our fault and the bot's fault. The bot's fault because it's a dumb LLM in the end, it doesn't understand the big picture (well, it doesn't "understand" anything, full stop). Our fault because a lot of the security models are pretty much implicit, and scarcely documented if at all, so the bot has nothing to keep it grounded in reality

@bluca @lanodan @thesamesam yes, in our own experiments at work, we are having to write a lot into the system prompt in order to inform claude about the threat model.

otherwise it does silly things like "zones have device nodes in them that allow accessing hypervisor services"

well, yes.

i would hope so.

considering that it's running in a hypervisor, and you need those services to access secure enclaves, for example.
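
a purely illustrative sketch of what that system-prompt context might look like (not our actual prompt; all specifics invented):

```
## threat model context for the review bot (illustrative)
- zones run under a hypervisor. device nodes that expose hypervisor
  services inside a zone (e.g. secure-enclave access) are part of the
  design, not a sandbox escape. do not flag them.
- a finding only counts as a vulnerability if it crosses one of the
  trust boundaries listed above; otherwise report it as a normal bug.
```

the point is just to make the implicit security model explicit, so the bot has something to check its findings against.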

@ariadne @lanodan @bluca yeah, and even before fuzzers with any sort of security tooling actually ("hello your CSP policy is missing on ur static website")
@thesamesam @lanodan @ariadne and I'm pointing out that the distinction is specious and a glaring case of double standards. Everyone who uses these tools does so in different ways, and you don't get to do moral grandstanding just because you arbitrarily drew a line in the sand where it's most convenient for you, and not a millimeter further. Doesn't work that way, sorry
@thesamesam @ariadne Ah so not yet tainted, but still meh social wise that I guess could be addressed via policy/guidelines.
@ariadne that seems like a stretch.
Ideally, yeah, sure, let’s hard fork everything tainted by LLMs. Practically? Shit if I know. When will people stop using these stupid things?

@ariadne I see it as a "pick your fight" thing: before LLMs, most users (me definitely included) had the same problem: we have to trust the maintainers. From my perspective, whether they use such tools or not doesn't really matter, since I can't review 95% of my tech stack anyway, simply due to lack of time.

I think software will deteriorate in general through extensive LLM use, though proprietary software even more so than free software. Unless I find years to spare, the choice is easy.

@ariadne there really is no ethical computer use under capitalism.

@ariadne One idea would be to stick to using older systems, perhaps with older hardware, from the times when AI usage wasn't as widespread.

Of course they will have their vulnerabilities, so I'd use them only for processing my own trusted data, and not e.g. executing JavaScript from random websites.

Though let's keep in mind that this only covers personal computers not relying on AI-delivered software. Even if someone were to not use personal computers at all and live an analog life, exposure to AI-delivered software and machines would still be present, e.g. when one's sensitive medical data is stored on a doctor's computer.

@ariadne we stay on the last non-slop-tainted release and wait a year or two while it all burns down?

@ariadne ^^ using fashware is only dubious if it’s in spite of viable alternatives

if there are no viable close alternatives the first reaction should be empathy