Systemd has refused to revise its policy regarding AI.

They've also marked evidence that people gave regarding its effectiveness as off-topic, then locked the conversation.

I believe the authors have not understood the weight of the issue.
Later today, I will begin drafting an open letter to Systemd's authors under the Starlight Network umbrella of projects. EDIT: or perhaps I will take a different approach. There are many more issues I want to talk about.
Disallow usage of generative AI to write code · Issue #41085 · systemd/systemd

@alexia genuine question, why is this such a big problem?
@alreadydeadxd There are many reasons, but to keep it short: the effects of the continued use of Generative AI go far beyond just code quality. It introduces licensing ambiguity, contributes to devaluing software development as a skill, and encourages increased energy use and the expansion of data centers, which has been shown to negatively affect marginalized communities.

@alexia
ah, i see, the code quality argument is really starting to fall apart in 2026, haha

i don't know that much about the license ambiguity, but to be honest i don't care that much either, maybe it's because i think all code should be open source, and copyright shouldn't exist, lol. nevertheless, i'm aware things don't work like that in our current dystopia, which sucks

i've always taken the devaluation of software development as a skill to be just marketing and tech bro hype; to do ai-assisted programming properly you still need all the knowledge a software developer who doesn't use ai would have in the first place, you can't just be like "claude, build systemd or go to jail" or whatever

the energy consumption and data center situation is worrisome, it could be done in a way more sustainable manner, but when did big corporations care about the environment and marginalized communities, am i right? i also feel like people tend to overemphasize this issue in particular, it's not the only case of corporations and/or government being evil at the expense of others, not the biggest such issue by far, and not the only problem in which the population/consumers are supporting bad things happening through their behaviour. for example, if you complain about how data centers are bad for the local communities and the environment, but then go to the supermarket and buy some beef, i don't even want to talk to you, haha

@alreadydeadxd
"you can't just be like "claude, build systemd or go to jail" or whatever"

Yes, true! However... the thing that is actually devaluing this work is that those who pay us value us less, because why would they pay us a high salary if they can pay 20 bucks a month for claude? The same goes for artists, writers and lots of other creative jobs.

The fact that you still need to understand what you're doing doesn't matter to those we develop for.
"if you complain about how data centers are bad for the local communities and the environment, but then go to the supermarket and buy some beef, i don't even want to talk to you, haha"

I think there's nuance here: no single person can be absolutely perfect, and certainly not everyone has reflected on every little thing they do; we can only care about so many things before we collapse. That said, the reason people care about these data centers is that they went digging for reasons not to use AI, not the other way around. If AI hadn't come along, issues with large data centers would probably have gone undiscussed for many more years. Which... I guess is a good thing?

@alexia i don't think the issue you pointed out, with managers being sold on ai and devaluing workers, is going to be made any better by all the ai hate we see online. either the tech bros are right and coding is going to be obsolete in 5 years, in which case management is going to continue to seek profits as always, or everything will come crumbling down in the next few years, and the people who bought into the ai hype are going to have to come back to earth with the rest of us

yes, it's like you said, people hate ai, so they started looking for reasons not to use it, that's why they are not consistent with their values

i'm personally kinda bothered by how the conversation around ai is going. on one side there is almost this religious fanaticism, and on the opposite side, where most people on fedi seem to be, there is a lot of hate, as if the technology itself is evil. most negative commentary i've seen around ai is just pointing out issues with state capitalism, and then blaming them on ai in particular. like, if wage slavery wasn't a thing, we wouldn't be talking about the devaluation of the worker. if people were more environmentally conscious, we would figure out how to make the tech even more efficient, and how to run it as sustainably as possible long term, not rush directly into building massive data centers all over the world. if people were generally opposed to the centralisation of power, we wouldn't have only a couple of very big genai competitors that instill their own biases into the models, and so on

@alreadydeadxd I agree with basically everything you said, but to be pragmatic, I must work from inside the system and tear it down from there

I unfortunately cannot act as if all of these are non-issues due to the world we live in, so this will have to do as the next-best thing
@alexia sorry for the late response. i do understand where you're coming from, but taking it back to the original conversation, meaning the systemd situation, and other such projects for that matter, i can't help but feel that this is a losing battle. imagine i'm a systemd contributor who wasn't polarized against llms already: there is practically nothing you could say that would change my mind.

and from the perspective of the project itself, they can't tell if contributions are hand-written or ai-generated; if someone chooses to use ai, they might just as well try to provide support to improve the code quality. from the user standpoint, i don't want to run poorly written or vulnerable software, but not all llm-generated code is slop, you can totally use ai agents to write decent quality software, so this falls back to a case-by-case basis.

do i have ethical concerns related to the use of ai as it exists today? totally. but i wouldn't go as far as switching from systemd to something else. there's probably so much code running on my system right now that was written by people with whom i don't share all my moral values. i think we need broader systemic and cultural change; the current version of genai is only a symptom of deeper issues

@alexia @alreadydeadxd

It's as if people are having an emotional response because their very source of work is being threatened and the skill they developed their whole life is rapidly devaluing. That inevitably leads to radicalization (into fanaticism or hate), when not depression.

And a very important detail about this, the one that is the most infuriating, is that the tech itself can be good. But the people in charge (your bosses, AI companies, economic powers) won't let it be. They only want to use it to extract more wealth and waste resources. It's one of the biggest technological innovations in years, corrupted only to make stupid business people think they are the next rockstar 10x developer.

A word manipulation machine that could seemingly do good, but which, through the actions of its users, is instead destroying the very value of tech work itself.