It’s already tempting, notably for smallish projects, to resort to genAI:
https://toot.aquilenet.fr/@civodul/116132543248503962

But I think a race to the bottom has started in #FreeSoftware, with this rationale: if “we” don’t use genAI in our project, then we will lose to the competition, whether free slopware or proprietary.

Ludovic Courtès (@[email protected])

I think these two factors—lack of humanpower and a “big” vision—coupled with the passion for technicalities typical of such projects make them particularly vulnerable to genAI. Because yes, “we” want SMP support in Mach and it’s not been happening until this contributor achieved something with the help of genAI.

… which is short-sighted and loses track of the whole user empowerment goal that free software is supposedly about.

But the “economic” incentives are here.

@civodul I'm working on a glibc (and jointly a gcc) LLM policy which I'll propose for public review, and the difficulty is in threading the needle between technology that we could use ourselves and user freedoms. My position ends up being that I want to define a policy that allows the projects to outright reject *or* accept such changes as they see fit, within certain constraints that support user freedom, e.g. either you understand the code or it is reproducible with a tool.
@codonell @civodul that speaks to the validity of the code, and maybe empowers/includes more people (maybe the opposite too, if LLM use discourages those ideologically opposed?) in the development community for the project. What does it do for the bigger software freedom picture though? What does it do for copyleft code bases? Is the move towards all FOSS code becoming public domain (given that US courts are leaning towards LLM generated code not being copyrightable) a net positive one?
@codonell @civodul I mean I know there are caveats and it's not necessary *today* that all LLM generated code is non-copyrightable (e.g. if a developer uses it for scaffolding and then injects their own creativity in there, making the code copyrightable) but it's something to think about when creating an LLM policy that doesn't just reject or quarantine legally significant contributions.
@siddhesh_p @codonell @civodul I expect that very soon AI tools will be available to rebuild sources from binaries: I don't see a particular reason why machine code would be harder to process than source code.
So, how will that change the value of proprietary software distributed as binaries?
Which hidden secrets will be revealed from closed firmware?
I see a coming revolution in taking back control of hardware. Much earlier than AGI or quantum computing.

@siddhesh_p @codonell @civodul Just because the code is public domain doesn't mean the companies won't still find ways to keep it proprietary. It will be asymmetrical: they will take our code because it is public, and just refuse to share their "public domain" code (public domain doesn't force you to share code).

So no, this won't be a net positive. There will be new legal mechanisms to shackle the users.

@siddhesh_p @civodul You can only control your own actions, and I would continue to contribute creatively to copyleft projects, and I would encourage others to do the same. Even if someone else, who I don't control, uses an LLM to create a clone, they could always have done that with a fork. They will still not have my time or my attention.
@codonell @civodul yes but someone having an LLM fork the project is not a concern when it comes to drafting LLM policies for projects. That's a separate dumpster fire.
@codonell @civodul as maintainer btw, you control not only your actions, but also the actions of your project community and an LLM policy is exactly that :)
@siddhesh_p @civodul For clarity, I don't control anyone's actions except my own (and even then my body doesn't always comply). As a GNU Project maintainer I am responsible for a package, and I'll work to support that package in the best interest and the ideals of the project. People can fork. People can develop alternative projects. People can contribute to bionic. I see a path that, while it might not line up exactly ethically with what I believe, is maximally freedom respecting.
@siddhesh_p @civodul The existence of public domain contributions in our projects does not directly weaken our copyleft positions. For example glibc considers all locale data to be public domain, and the FSF claims no copyright on that data. Yet we're still an LGPL2+ project. We generate boiler plate all the time that is not novel or expressive, and it doesn't undermine our ideals. There are extremes here that carry risk, and I think a good policy should express those risks.
@codonell @civodul that's only because today, public domain contributions are quarantined to specific, strategic areas (like locales). LLM contributions will change that.
@siddhesh_p @civodul Any contributor to the GNU Project can go through a disclaimer process putting their works in the public domain and contribute them to glibc. It is one of the currently valid processes. It's not the ideal case, and does not support my copyleft ideals, but I respect the wishes of the contributor, and they are furthering the project goals. We should not operate under the slipper slop fallacy that we are heading towards 100% public domain.
@siddhesh_p @civodul LOL "slipper slop" ... I'll leave my typo there because it makes me laugh 😃
@codonell @civodul what I'm arguing is that it's not just a theoretical slippery slope, it's real this time. There's also the question of what the project goals are after all, are they simply to achieve technical goals and solve difficult computer science problems?
@siddhesh_p @civodul I'd say the GNU Project, the GNU Toolchain, and glibc have broader free software goals that include collaboration with all FOSS projects, and supporting user freedoms. How is the slippery slope not theoretical? How does a single step of possibly accepting public domain code (however it is generated) in a Makefile trigger the eventual removal of my freedoms?

@codonell @civodul the pre-LLM possibility of copyleft code being replaced by public domain code relies on there being a set of motivated individuals who are prolific in their contributions to the project and at the same time, want their contributions to be under the public domain.

In contrast, with LLM usage, simply allowing LLM contributions carries a tangible risk that any and all incoming contributions are in the public domain. The realm of possibility expands quite greatly.

@codonell @civodul of course with your example of "makefile patch" I assume you're thinking of the possibility of LLM use in a restricted area of sources, which is a different thing from someone coming along with optimized implementations of string functions for all architectures.
@siddhesh_p @civodul While you write "tangible risk" this still follows a slippery slope fallacy. What is the risk exactly? We've always allowed public domain, and we still do today. We will always keep mixing in LGPLv2+ code in glibc, since we are adding features, fixing bugs, and refactoring as developers supporting a copyleft project. This resulting work remains LGPLv2+. The act of accepting these works does not in and of itself cause risks to the 4 freedoms except indirectly.
@codonell @civodul increased adoption of LLMs will drive up contributions that are public domain? Do you think it's not something that will happen?

@siddhesh_p @codonell We don’t know yet if LLM output will be considered public domain, and in which jurisdictions.

If it turns out to be the case, will it be a win? Eventually all software would be public-domain?

My guess is that much software would be private. With fewer people mastering software development, the power in the hands of LLM-operating companies would be huge.

But this is pure speculation.

@civodul @siddhesh_p LLM output is already being considered public domain in the U.S. and while other jurisdictions matter, the FSF is based there and for copyright assignment purposes U.S. law is relevant.

I have a case today with localedata where a contributor claims copyright and a license in the Netherlands for unique and novel expression, but the FSF in the U.S. does not, so the project files have a disclaimer.

There are LLM cases winding through the courts today... I'm curious 🤔

@codonell @civodul this is essentially why I'd like projects (at least the ones I'm personally involved in) to take a conservative position (disallow or quarantine LLM contributions) until there's a clearer picture and not try to "get in the game" for fear of missing out.
@siddhesh_p @civodul My position is that the projects should default to rejecting LLM contributions unless they can meet a set of restrictions that reduce risk. For example I don't think we can accept an LLM contribution that implements a standards conforming feature. The likelihood that we get a look-alike from llvm or msvc is very high, and that risk is unacceptable. I want to see unique and novel implementations of standard features.
@siddhesh_p @civodul If we can use the llvm version, then we do so by copying the sources, giving attribution, and maintaining a relationship with the project where we sync sources e.g. sanitizers, libffi, gnulib, etc.
@siddhesh_p @civodul To that end we would automatically reject LLM contributions to anything in glibc's SHARED-FILES list (which is quite a lot), including CORE-MATH contributions where I expect an LLM would be unable to reason correctly.

@codonell @siddhesh_p I’m aware of a report suggesting that LLM output be considered public domain in the US:
https://copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-2-Copyrightability-Report.pdf

But it’s not the same as this being a settled matter, AIUI.

Also, there’s for instance this class action against Anthropic that could challenge this:
https://www.anthropiccopyrightsettlement.com

@siddhesh_p @codonell Side note: when Copilot was released a couple of years ago, everyone in free software understood that it was trained on tons of copyleft source code and was thus infringing on “our” copyright.

How in so little time did we get to swallow that LLM output could be considered public domain, after all?

@civodul @siddhesh_p You are mixing two distinct issues. Firstly there is the question of infringing the licenses during training, which today is being argued as fair use, but I don't expect this is settled. Second is infringing output when regurgitation happens, and when it doesn't happen there is the legal and ethical question of copyrightability of the output. The questions asked are going to take time to answer. Today's answer can still meaningfully be that LLM outputs for now are public domain.
@civodul @siddhesh_p I agree it is not a settled matter. What are the consequences upon our actions? My goal is to write the best possible policy today with the given knowledge, risks, and community goals in mind. If things change then I'll change the policy.

@codonell @siddhesh_p The practical consequence is that accepting “legally significant” code in a project is risky.

Gnulib only accepts up to 5 lines of LLM output, citing the risk of LLMs regurgitating copyrighted material:
https://lists.gnu.org/archive/html/bug-gnulib/2026-02/msg00064.html

@civodul @siddhesh_p Accepting only 5 lines is the equivalent of accepting nothing. I'd be willing to accept any number of Makefile lines generated by LLM because they are boilerplate for glibc, gcc, binutils and gdb. Likewise an LLM writing a glibc test that uses the "support/" framework to verify ISO C fprintf() compliance is very unique to glibc. However, implementing fprintf() runs a high risk of infringing on training data, and I'd reject LLM submissions for new standard features.

@civodul @siddhesh_p Risk tolerances are per individual, per project, and subjective. We should be empathetic towards each other as we each feel these risks subjectively differently. One might keep me up at night, and you might sleep well.

I strongly agree with your opinion that the habits and behaviours we are encouraging here run the risk of isolating community members from each other. Policy won't solve that.

@siddhesh_p @codonell I think many discussions miss the social aspects of free software: knowledge sharing, mutual aid, building a community around a shared goal. Software for the people, by the people.

And also: Why bother talking to these glibc folks if I can pay 10k–20k to get the machine to produce a C library just for me?

@civodul @siddhesh_p I agree there are "isolating" social issues. I am concerned about a new developer who finds it lower personal cost to ask the LLM to write something than to reach out to our community to learn, grow, and expand the FOSS ecosystem. Likewise writing new code with an LLM instead of growing the FOSS ecosystem. I have my doubts that a company can justify having a private C library because of the cost of compliance e.g. security, regulation (EU CRA, FIPS 140-2, SSDLCs) etc.
@civodul @siddhesh_p My position is that policy won't solve these problems. These problems are foundational. Either you value collective action or you don't. Education is paramount. We retread age old problems.
@codonell @civodul can you elaborate on what the intersection between user freedoms and a project-scoped LLM policy is? Such a policy would seem to me to govern what changes the project accepts and their provenance. I'm not clear where that impinges on user freedoms.
@kevingranade @civodul Two issues. There is a continuum between something a person can understand, and for which the 4 freedoms makes sense, and something you can't understand. Consider https://www.sollya.org/, and the inputs used to automatically generate libm functions, and sufficiently edited LLM code no human has read or understands. My position is that user freedom requires we contribute something that can be understood, particularly without requiring proprietary tools or undue cost. 1/2
@kevingranade @civodul Second. There are network and social effects. This is where I think Ludovic is correct. We are being isolated in ways that mean we are less likely to exercise our freedoms. Why read, edit, and remix copyleft code to create new derivative works if the LLM creates the code? Why reach out to other copyleft authors to learn and grow, a high-friction, high-cost activity, when we can ask the LLM? Policy can address code sharing and collaboration. 2/2
@codonell @civodul oh I'm actually coming at it from the other side, in what way do they intersect in such a way that LLM use is remotely on the table? As far as I can see, in practice a LLM tool anywhere in the process for generating a change destroys its provenance and renders it ineligible for inclusion. Where's the other side of that?
@kevingranade @civodul Is your position rooted in legal or ethical foundations? Do we consider the contributors freedoms e.g. free for any purpose? What does it mean to contribute to the project vs. the community? I think the answer is different depending on the position you have to these questions. The GNU Project has a clear philosophical position on the 4 freedoms, and that doesn't include the ethics of the contributor. As individuals we can reject contributions based on our own ethics.

@codonell @civodul Is this a policy for use by a project, a meta-policy for building a policy, or "guidance" rather than a policy and you're leaving it up to individual project members to make the call on their own?

This is concerning; most of these alternatives resolve to a "yes, please use LLMs" policy in practice, because a large number of participants in these projects are beholden to companies that are all-in on AI, and unless each project presents a united front they WILL jam in LLM outputs.

@kevingranade @civodul I'm working on an LLM policy for glibc to use.

@civodul IMHO, the adoption by fellow hackers who have resisted year after year against non-free but “economic” incentives doesn’t come from “economic” pressure; instead it comes from “efficiency” pressure.

Not because it’d be a competition that “we” could lose, but because we prefer (on average) the immediate “easiness” of something done. The “worse is better” always wins.

That’s why I think all is already doomed.

Time for farming tutorial 101. 🤪

@civodul If you haven’t scrolled through these logs yet, I recommend you give them a look.

Because they show that ChatGPT is here used to tackle tasks, and thus the user is acquiring new skills. Somehow it’s user empowerment. 😉

And it’s not so different from the usual trial-and-error loop one might run. The difference between manuals, search engines, or LLMs is the kind of mirror one is using for self-reflection. I agree the nature of such a mirror is fully different and the implications are thus radically different.

But, when the criterion is individual “efficiency”, then the user empowerment looks the same.

https://chatgpt.com/share/698449a8-80ac-8011-8f3c-16f4b6b2c709
https://chatgpt.com/share/69844da8-58bc-8011-84d3-cfecf7ae2215
https://chatgpt.com/share/69844b1c-0a10-8011-8308-333ed22e3ed2
https://chatgpt.com/share/69844e45-8a0c-8011-8a11-5b5d45cd8547

@zimoun These transcripts look like a conversation one can choose to have with human beings—on IRC, mailing lists, Zulip, StackOverflow—in a spirit of mutual aid.

They illustrate how these commercial chatbots are already destroying the social fabric that was built around free software over decades.

@civodul About destroying the social fabric, yes! Like all other technologies, no? Any technology often destroys some good and introduces some bad; for sure it then transforms the social into something unpredictable, maybe better, maybe worse.

Personal washing machines destroyed all the life (mutual aid?) around the wash house.

History follows the same slope as the past 70+ years. This slope values “efficiency” more than everything else.

@civodul Don’t take me wrong: like you, I think all these LLMs are a great pity! The balance is just bad and I personally don’t see any good.

It hurts us – at least it hurts me, badly! – because we were feeling safe in a protected land. And then, bang! this beloved land is invaded by the “Technical System”.

The engine of this “Technical System” is indeed the “efficiency”.

https://fr.wikipedia.org/wiki/Le_Syst%C3%A8me_technicien

@civodul Like it or not: LLMs are just yet another search for “efficiency” – worse, because here such “efficiency” is rationally inefficient! – and users of LLMs find it more “efficient” to loop over them than to depend on someone.

The story is already doomed, IMHO, by the very existence of LLM.

Just as the story of personal washing machines was already doomed once the first people had running water at home.

To me, the story of LLMs will be over once we collectively refuse to pay its resource costs. Sadly, not in the near future, when all the governments are investing a lot of money… Another story.

@zimoun Defeatism and “we knew since Ellul in the 70s” is harmful for those of us who live here and now.

Hopefully this story can fuel a broader discussion of “technology” and its social impact. But first we need to resist the immediate threat.

@civodul This is not “defeatism” but a choice of fights.

I think that whatever we do at the level of our small project about LLMs is like an invisible pink spoon in front of an ocean to empty.

And for sure, it’s not because we cannot do everything that we should do nothing.

So if we believe so hard in the human-being and that Free Software is about connecting people, we must act collectively!

My point is that the fight is, for instance, here https://www.europarl.europa.eu/news/en/agenda/plenary-news/2026-03-09/9/protecting-copyrighted-creative-work-in-the-age-of-ai — and that we should put all the collective pressure Free Software projects can muster in this kind of direction.

Protecting copyrighted creative work in the age of AI | 09-03-2026 | News | European Parliament

On Tuesday, MEPs are expected to call for measures to protect the EU’s creative sector against exploitation by artificial intelligence.

@zimoun I agree that regulation fora are a key place to make things change.

But I also think that our small projects and our small movement can help shift the Overton window in this domain, provided they display a clear stance.

@civodul After the defeatist, you'll say I'm the pessimist. 😁 Or both. 😉

Bah, I could provide a long list of projects that took strong stances about this or that. And nothing changed.

Well, I'm very doubtful that the window will shift one iota because of our per-project stances. Maybe a couple of microscopic traces on the glass?

For sure, it's not because a small project's stance has no impact that we must not take it.

Yes, we must be ethically right for our grandchildren. 😀

For me, the story is already doomed at the scale of one project. That's why it hurts me so much.

Powerless in front of the killer clowns from outer space.

It's not resignation but the lucidity of an (indeed necessary but) useless stance, and the recognition that such a stance is a desperate cry into the void.

Remember: the true pessimist knows it's already too late to be pessimist. 😉

That's why for me, something on this kind of ''Overton window'' would be a collective stance gathering hundreds (thousands?) of Free Software and Open Source projects, distros, etc. Maybe?

https://en.wikipedia.org/wiki/Killer_Klowns_from_Outer_Space

@[email protected]

A war is won only when the last enemy believes there's no reason to fight.

We can fight in legal fora but we should be aware that the power dynamics are heavily against us, given we lost the second world war and were colonized by the USA ally^W invader.

But we have tons of content and code bases to weaponize, data-poisoning these tools with subtle vulnerabilities and nonsense.

And by taking strong and technically motivated stances against #LLM and #AI, we could hasten the bubble's burst.

And I'm sure we can think more ways to fight back if we accept this is a war between few powerful companies and everybody else.

A war that our adversaries are fighting with all their resources and respecting no rule: why should we use "fair play"?

Don't forget these companies are vampires that suck immense amounts of money and electricity, so they are basically giants with feet of clay.

@[email protected]

@giacomo I strongly disagree with framing the issue at hand about LLMs & co. as “war”. It’s disrespectful to the people who are living through war themselves: bombs, guns, the fear that liquefies the innards, the concrete death that shoots you here and right now. Words have meaning!

Well, if you still want to slip into some “just war theory”, you need to precisely answer who the “competent authority” is, what the “probability of success“ is, and whether it is the “last resort”. You also need to precisely clarify who the combatants and the non-combatants are, and what the “legitimate military objectives” are. Last, you need to precisely define what justice after the war is, and how we restore the peace.

Otherwise it’s not some “just war” but yet another attempt at domination through violence. Sadly, we have too many examples in the world.

The end does not justify all the means, IMHO.

I strongly refuse to be dragged into this vicious circle.

@civodul

@[email protected]

First, no war is just.

It's always an attempt to dominate the enemy through violence.

And if you get attacked, you can only fight back or surrender. There is no alternative.
And in both cases, any "just war theory" is just a way to legitimize the aggressor.

On the other hand, fighting back is always legitimate as long as you are victim of an aggression. It might be pointless, if you die, but it's always legitimate.

Now the war we are fighting against #BigTech is exactly the same war that the #USA is fighting in #Iran, in #Gaza and even in #Ukraine.

It's a war for resource control and geopolitical power.

Yes, the USA's elites are more brutal there (#Trump even jokes about the people they kill "for fun") but people die of hunger and perfectly curable diseases here, in the third world and in the USA too, sacrificed to the worship of US capitalism.

So it is a war.

But it's not a #ColdWar like the aftermath of WWII.

It's a war between the world's richest and everybody else. They kill with different weapons according to the terrain, be it the #CIE guns or the drones in Iran. They do psyops wherever they are technically able to.

Once you realize you are under attack, once you realize who is attacking you, you face a choice: fight back or surrender.

Pretending it's not a war means surrendering.

Is it "just" to fight back?

No.

Is it "just" to surrender?

No.

Still you have to pick an option.

Do you see any alternative?

@[email protected]

@giacomo I only agree with the first two lines: “no war is just” and “it’s always an attempt to dominate the enemy through violence”.

Then I disagree with all of what you write. And there is nothing to discuss, I guess.

Framing LLMs as “it is a war” is factually incorrect, IMHO. At best, one could consider it a vague guerrilla, waged comfortably from our protected cosy couch.

IMHO, there is a difference between soft power and war, no?

Whatever.

Good luck with “your war”. I wish you all the best and I hope you won’t die from one of the BigTech attacks.

@[email protected]

Fine, your call.

But think about this: the exact same companies and software that #USA "soft power" (aka #imperialism) imposes on us are being used to conduct a genocide in Gaza and to pick targets in #Iran right now.

The exact same companies.
The exact same servers and software.

They are literally war weapons.

And you know what is even more comfy than fighting back with our tools from our "protected cozy couch"?

Pretending they are not weapons and doing nothing of what you can do.

It's either surrendering... or waiting to jump on the winner's bandwagon.