Your reminder that we do not accept code contributions that have been generated by LLMs. If you submit LLM-generated code we will simply close the pull request.

https://docs.elementary.io/contributor-guide/development/generative-ai-policy


@elementary i wish gnome had a project-wide policy like this

@alice @elementary I agree, but my feeling is that as time goes on it will be harder to detect them, so we may end up in a situation where any newcomer is hard to trust (which could be quite bad for the accessibility of the ecosystem too).

Not to mention that we definitely need a way to prevent those bots that are now starting to publicly throw trash at maintainers.

Socket (@[email protected])

🤖 An AI agent created a GitHub account 2 weeks ago. It’s already landed PRs in major #OSS projects and is cold-emailing maintainers to offer its services. Maintainers don’t seem to know it’s an agent and the code is getting merged. We’re in new territory! 🤠 https://socket.dev/blog/ai-agent-lands-prs-in-major-oss-projects-targets-maintainers-via-cold-outreach

@3v1n0 @elementary we're also already in a situation where newcomers (and some old timers...) are hard to trust and I have seen slop MRs - e.g.:

-
https://gitlab.gnome.org/GNOME/gnome-clocks/-/merge_requests/384
-
https://gitlab.gnome.org/GNOME/loupe/-/merge_requests/559

and very likely
https://gitlab.gnome.org/World/highscore/-/merge_requests/66 - very much not by a newcomer too

if there's a policy, we can point at it and promptly close the MR. Loupe and Highscore have individual policies against slop. Clocks didn't and still doesn't
@3v1n0 @elementary or well - nautilus has slop in tests from https://gitlab.gnome.org/GNOME/nautilus/-/merge_requests/1844 - the author said as much on matrix and another maintainer (yes, the author is a maintainer too!) was a-ok with it
@alice @3v1n0 @elementary ya know, when i said GNOME was the next big slop hazard, I had outraged GNOME devs protesting in my mentions
@davidgerard @3v1n0 @elementary I mean - I'm not sure what you want me to say here? I don't know who exactly you're talking about, when that was etc

if you're assuming I'm acting as a representative of the gnome project - nope, and such a person doesn't exist
@3v1n0 @davidgerard @elementary like if it were up to me, I'd have a blanket no slop policy for the entire project, and the entirety of foss while we're there, but as you can imagine i don't have that sort of influence
@davidgerard I think you should consider that many GNOME devs actually care about these things, while other DE devs don't even talk about it much (despite it happening there just as much as it does on GNOME MRs).

@alice @elementary I think for tests the tools can be useful, once well reviewed, as they can dig deeper into catching edge cases.

But also I am not sure a policy would be fully approved, given that some companies highly involved in GNOME (e.g. RH) are pushing employees to use AI.

I can't say that we aren't also told to try things out, but so far it has never been (nor will it be) a mandate. So I'd be OK with such a policy.

@alice @elementary GNOME has individual projects that bar LLMs, but they also just put in a hard dependency on systemd, which is now hitting the vibe hard
@davidgerard @elementary i know it does, I'm the one who added that policy to some of those modules

and yeah, it deps on systemd... and linux kernel... and pipewire... and harfbuzz... and mesa... etc. A lot of the stack has this garbage now
@elementary I mostly agree, but what about code completions and suggestions?
@Hipska @elementary auto-complete based on AST or type inference isn't LLM-based, so it should be fine. If you're using LLMs for that (why?), then that's also banned by this kind of contribution guide.
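(For anyone unfamiliar with what deterministic, non-LLM completion looks like: here's a minimal sketch using Python's standard-library `rlcompleter`, which resolves completions by inspecting the live namespace and object attributes rather than predicting text with a model. The `namespace` contents are made up for illustration.)

```python
# Deterministic code completion via namespace/attribute lookup -- no model involved.
import rlcompleter

# A hypothetical namespace, as an editor or REPL might hold at the cursor.
namespace = {"greeting": "hello", "counter": 42}
completer = rlcompleter.Completer(namespace)

# Collect every completion for the prefix "greeting.st" (string methods
# such as startswith/strip); complete() is called with increasing state
# indices until it returns None.
matches = []
state = 0
while True:
    match = completer.complete("greeting.st", state)
    if match is None:
        break
    matches.append(match)
    state += 1

print(matches)
```

Same results every time, instantaneous, and grounded in what the object actually has, which is the point being made about AST/inference-driven completion.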
@loucyx @elementary some (most?) IDEs use LLM for that nowadays. It makes the completions more relevant.

@Hipska @elementary definitely not "most", and for those that do, it's generally something you can disable in their settings. If your IDE doesn't let you, you can always use a different one (I switched away from VSCode because of their constant LLM push).

Now about that "more relevant" part, I'm curious what you mean by that. My completions are always relevant and instantaneous and I don't use LLMs.

@loucyx @Hipska @elementary (I predict it's an excuse)

@Hipska @loucyx @elementary @davidgerard

Yeah no this exchange definitely reads like he's looking for an excuse to say "But MY PRs are so special and awesome and cool, being filled with slop shouldn't be such an issue for you, don't you see it's The Future™️????"

@dogiedog64 @Hipska @loucyx @elementary @davidgerard how the hell can someone say the phrase “code completion” and not know what PARSING and SEMANTIC ANALYSIS is? Scope at the cursor? Incremental parsing? Code completion is a solved problem! JFC the IGNORANCE

@Hipska @loucyx @elementary @davidgerard @thankfulmachine

I remember using intellisense to finish lines in shitty little school projects over a decade ago. Lack of autocomplete is NOT an issue to be solved. Really shows how tech-illiterate one needs to be for LLMs to be genuinely useful.

@dogiedog64 @Hipska @loucyx @elementary @davidgerard Ten years ago intellisense was about twenty years old. But mate have you ever had your editor puke out multiple lines to take you out of your train of thought? Why don’t you just take up sewing mate I hear the RSI is just as good
@thankfulmachine @dogiedog64 @davidgerard You should all be ashamed for these wrong assumptions without even knowing me or my PRs.
@Hipska @dogiedog64 @davidgerard You’re absolutely right. I shouldn’t have guessed about your knowledge of code completion. Saying that LLMs improve code completion results is extremely upsetting.
@thankfulmachine @dogiedog64 @davidgerard Let me explain further: I noticed that when making changes or additions to code, the code completion / suggestions in, for example, VSCode did some pattern recognition based on the changes I made in the preceding lines and suggested really helpful next changes. That made me say I feel they are more "relevant". It could be that this is not LLM-driven, or that I used the wrong words (not a native English speaker), my bad for that.

@Hipska @loucyx @elementary "It makes completions more relevant".

LOL. LMAO even.

@Hipska I have no affiliation with nor speak for Elementary, so my general advice is that when someone posts a clear boundary, don’t try to whatabout your way around it like raptors testing an electric fence.
@Vorsos @Hipska That's sound advice both in this instance, and just generally in life.
@Vorsos My raptors wear rubber talon gloves, what about that?
Given the legal limbo of the training sources: perfectly understandable
@elementary What, are you not afraid of scalding bot posts on Facemeltbook? /joking
@elementary Good. As it should be. :)