It’s already tempting, notably for smallish projects, to resort to genAI:
https://toot.aquilenet.fr/@civodul/116132543248503962

But I think a race to the bottom has started in #FreeSoftware, with this rationale: if “we” don’t use genAI in our project, then we will lose to the competition, whether free slopware or proprietary.

Ludovic Courtès (@[email protected])

I think these two factors, the lack of humanpower and of a “big” vision, coupled with the passion for technicalities typical of such projects, make them particularly vulnerable to genAI. Because yes, “we” want SMP support in Mach, and it hadn’t been happening until this contributor achieved something with the help of genAI.

… which is short-sighted and loses sight of the whole user-empowerment goal that free software is supposedly about.

But the “economic” incentives are there.

@civodul I'm working on a glibc (and jointly a gcc) LLM policy which I'll propose for public review, and the difficulty is in threading the needle between technology that we could use ourselves and user freedoms. My position ends up being that I want to define a policy that allows the projects to outright reject *or* accept such changes as they see fit, within certain constraints that support user freedom, e.g. either you understand the code or it is reproducible with a tool.
@codonell @civodul can you elaborate on what the intersection between user freedoms and a project-scoped LLM policy is? Such a policy would seem to me to govern what changes the project accepts and their provenance. I'm not clear where that inpinges on user freedoms.
@kevingranade @civodul Two issues. There is a continuum between something a person can understand, for which the 4 freedoms make sense, and something you can't understand. Consider https://www.sollya.org/ and the inputs used to automatically generate libm functions, or sufficiently edited LLM code that no human has read or understands. My position is that user freedom requires we contribute something that can be understood, particularly without requiring proprietary tools or undue cost. 1/2

@kevingranade @civodul Second. There are network and social effects. This is where I think Ludovic is correct. We are being isolated in ways that mean we are less likely to exercise our freedoms. Why read, edit, and remix copyleft code to create new derivative works if the LLM creates the code? Why reach out to other copyleft authors to learn and grow, a high-friction, high-cost activity, when we can ask the LLM? Policy can address code sharing and collaboration. 2/2
@codonell @civodul oh I'm actually coming at it from the other side: in what way do they intersect such that LLM use is remotely on the table? As far as I can see, in practice an LLM tool anywhere in the process of generating a change destroys its provenance and renders it ineligible for inclusion. Where's the other side of that?
@kevingranade @civodul Is your position rooted in legal or ethical foundations? Do we consider the contributor's freedoms, e.g. free for any purpose? What does it mean to contribute to the project vs. the community? I think the answer is different depending on your position on these questions. The GNU Project has a clear philosophical position on the 4 freedoms, and that doesn't include the ethics of the contributor. As individuals we can reject contributions based on our own ethics.

@codonell @civodul Is this a policy for use by a project, a meta-policy for building a policy, or "guidance" rather than a policy and you're leaving it up to individual project members to make the call on their own?

This is concerning; most of these alternatives resolve to a "yes, please use LLMs" policy in practice, because a large number of participants in these projects are beholden to companies that are all-in on AI, and unless each project presents a united front they WILL jam in LLM outputs.

@kevingranade @civodul I'm working on an LLM policy for glibc to use.