It’s official! elementary does not accept AI-generated code or design contributions, as a matter of policy, across any of our over 150 source code repositories. Our operating system is made and shared by real people.

https://github.com/elementary/.github/blob/master/CONTRIBUTING.md

@elementary Is it even possible to enforce this? If yes, how can you pick out AI-generated code?
@lf_araujo There’s probably no way to reliably detect AI-generated code 100% of the time, but we feel it’s important to make a strong statement and set a policy anyway. Contributors often do want to respect project guidelines, and in cases where folks are honest about their use of AI we can point to this policy. We can only demonstrate a will and intent not to make use of this harmful technology, and hopefully contribute to the greater conversation and consensus about it.
@elementary Even though I am not a user of elementary, I very much support this decision and this definitely moves you upwards on my list of recommendable distributions!
@elementary If only github agreed with you and didn't just use your repository to train its own AI. @forgejo doesn't do that.
@RandamuMaki at the moment Forgejo is missing some important features we rely on and there’s some questions around accessibility. But we have our eyes on alternatives :)
@elementary @RandamuMaki what are those features? i'm curious, because it's fine for me
@tauon We use GitHub Actions quite extensively for CI and CD, localization workflows, etc. Actions in Codeberg are considered “open alpha” at the moment. We’re looking forward to seeing things progress there though! More info from Codeberg here: https://codeberg.org/actions/meta
@elementary I have been using Elementary OS for several years now. Glad to hear this.

@elementary tl;dr I support your objectives, and kudos on the goal, but I think you should monitor this new policy for unexpected negative outcomes. I take about 9k characters to explain why, but I’m not criticizing your intent.

While I am much more pragmatic about my stance on #aicoding this was previously a long-running issue of contention on the #StackExchange network that was never really effectively resolved outside of a few clearly egregious cases.

The triple-net is that when it comes to certain parts of software—think of the SCO copyright trials over header files from a few decades back—in many cases, obvious code will be, well…obvious. That “the simplest thing that could possibly work” was produced by an AI instead of a person is difficult to prove using existing tools, and false accusations of plagiarism have been a huge problem that has caused a number of people real #reputationalharm over the last couple of years.

That said, I don’t disagree with the stance that #vibecoding is not worth the pixels that it takes up on a screen. From a more pragmatic standpoint, though, it may be more useful to address the underlying principle that #plagiarism is unacceptable from a community standards or copyright perspective rather than making it a tool-specific policy issue.

I’m a firm believer that people have the right to run their community projects in whatever way best serves their community members. I’m only pointing out the pragmatic issues of setting forth a policy where the likelihood of false positives is quite high, and the level of pragmatic enforceability may be quite low. That is something that could lead to reputational harm to people and the project, or to community in-fighting down the road, when the real policy you’re promoting (as I understand it) is just a fundamental expectation of “original human contributions” to the project.

Because I work in #riskmanagement and #cybersecurity, I see this a lot; it comes up more often than you might think. Again, I fully support your objectives, but just wanted to offer an alternative viewpoint that your project might want to revisit down the road if the current policy doesn’t achieve the results you’re hoping for.

In the meantime, I certainly wish you every possible success! You’re taking a #thoughtleadership stance on an #AIgovernance policy issue that matters to society and to #FOSS right now. I think that’s terrific!

@elementary great to hear that elementary isn't into vibe coding
@elementary
You are good humans. Thanks.
@elementary I don't understand why people are so against LLM-generated code; it's literally getting better than most human-written code. If, and it's a big if, things keep advancing, it could help accelerate code quality, project completion, reviewing, etc...
@Thijzer @elementary Studies show that this is not true, but of course a fosstodon user would think this.
@elementary based, not a user of elementary but glad i'm following.
@elementary I get the copyright concerns, but I don’t get the quality aspect. Are you saying that you ban AI-generated code because it might be of lower quality than human-generated code? Isn’t it a goal of the software supply chain to keep code quality up while being blind to/ignoring/not discriminating against the contributor?
@goern both formal studies and anecdotal experience have continually shown that LLMs cannot reason and the code they produce is often wrong and contains hard-to-spot bugs. That’s really just a cherry on top of the legal and ethical concerns though. The environmental impact of training models alone is extremely problematic. No matter which way you look at it, using LLMs to write code doesn’t align with our values
@elementary So I’m really interested in the quality aspect, not arguing whether your decision is good or bad, or trolling!
Could you share the studies?
@goern you’re in luck, a new one just came out! https://garymarcus.substack.com/p/a-knockout-blow-for-llms

This is a very important paper, not because it completely crushes and negates the point of AI, but because it keeps us thinking and evaluating technologies rather than following a hype train. From my point of view, the article you mentioned doesn’t even go into the details of the quality of code generated by AI.

@elementary
Have you considered moving away from GitHub, or found a way to deal with the unblockable AI contributions from GitHub themselves, by chance?
@elementary @agowa338 that makes me wonder, what do those missing features look like?

@alexia @elementary
See the linked post? Or do you mean in even more detail?

Down that very same thread they posted this: https://mastodon.social/@elementary/114642619775381204

@agowa338 @elementary ah that post did not federate for one reason or another so I didn't see it

but yes, I was looking for even more detail