Your reminder that we do not accept code contributions that have been generated by LLMs. If you submit LLM-generated code we will simply close the pull request
https://docs.elementary.io/contributor-guide/development/generative-ai-policy
@alice @elementary I agree, but my feeling is that as time goes on they will become harder to detect, and we may end up in a situation where any newcomer is hard to trust (which could be quite bad for the accessibility of the ecosystem too).
Not to mention that we definitely need a way to stop the bots that are now starting to publicly dump trash on maintainers.
An AI agent created a GitHub account 2 weeks ago. It's already landed PRs in major #OSS projects and is cold-emailing maintainers to offer its services. Maintainers don't seem to know it's an agent and the code is getting merged. We're in new territory! https://socket.dev/blog/ai-agent-lands-prs-in-major-oss-projects-targets-maintainers-via-cold-outreach
@alice @elementary I think these tools can be useful for tests, once well reviewed, since they can dig deeper to catch edge cases.
But I'm also not sure a policy would be fully approved, given that some companies heavily involved in GNOME (e.g. RH) are pushing employees to use AI.
I can't say we aren't also told to try things out, but so far it has never been a mandate (nor do I expect it will be). So I'd be OK with such a policy.