You know, I don't actually agree that using an LLM to edit code should be banned by a project. Running it on autopilot? Sure, ban that. But as an editor? No. Being able to, say, search for a symbol without knowing its exact name, kick off a big semantic search-and-replace, or copy and fill out a template with my own values with an LLM instead of wasting my limited time on earth on it is great.

https://github.com/marketplace/actions/no-autopilot

Something like this plus the Fedora LLM policy is a much better approach to making sure good MRs/PRs get sent than banning specific tools completely, IMHO, and I regret not speaking up about this earlier. This all just reminds me of the anti-LSP/IDE, anti-Electron and anti-cloud-native rants of the 2010s. Care and attention are what matter here ultimately, not whether someone uses the 2026 equivalent of Vim vs. Eclipse.


Anyways, ultimately it's up to the project maintainers to decide what they are comfortable with, not me as a random bystander. I guess I would at least ask for some kind of periodic review process for policies that ban tools based on those tools' current state.

Would I have banned LLM-generated code in my projects in mid-2025? Yup.

Would I do so in early 2026? Nope. Things have fundamentally changed.

For puregotk there has been more than one time where it's been enormously useful to have the LLM look at the GIR file and the generated code, then look at the generator, and find what's wrong. That would have taken literal hours otherwise (I know, because it did!). That doesn't mean the solution it comes up with is good (e.g. it doesn't directly find where it's best to add the fix, or at which layer), but for analysis? It's so, so helpful.
Same for stuff like making sure your whole project follows its writing guidelines. Or checking whether you've missed wrapping a user-facing string in _(). Are people just ignoring this? Does that not count under these LLM bans? I don't know, but sometimes it feels like people on here live in a different reality where LLMs are stuck in 2023 eternally.
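For the _()-coverage point specifically, the non-LLM baseline is basically pattern matching. A minimal sketch of that baseline (the `SetText`/`SetLabel` method names and file layout here are hypothetical, not taken from any real project):

```shell
# Hedged sketch: flag Go calls that pass a raw string literal to a
# UI text setter instead of going through a gettext-style _() wrapper.
# "SetText" and "SetLabel" are illustrative names, not an API claim.
grep -rnE '(SetText|SetLabel)\("' --include='*.go' .
```

A regex like this misses strings built via fmt.Sprintf, constants, indirection, and so on; catching those is exactly the kind of fuzzy review an LLM pass is good at.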

@pojntfx for me it's more of an ethical issue; Claude and all the others stole countless amounts of data to get where they are right now... and are now asking to be viewed as legitimate for patents and copyright... We should always keep that in mind.

But eh, once I put that behind me, I agree with you. There are good and useful things to do with LLMs. Those things are sadly overshadowed by the massive amount of bad they are doing.

Anyway, once they get corrupted by ads, I wonder how it will all end up.

@me I don't actually agree with the "stole data" argument. There is in my eyes (and legally, at least in the US and EU) no difference between creating a search index and training a model on the same data. There are also completely free and open models (Apertus) that are built only from publicly available info.

I'm personally much more worried about the secondary effects of LLM use in the wrong places (healthcare, the military, writing articles on autopilot, subtle errors in transcription, etc.).

@me And I mean yes, the big proprietary models from Anthropic and OpenAI are made by morally corrupt corporations, I don't doubt that at all. The models already have a big bias today, and ads will only make that worse.
@me And also, re: patents and copyright, at least right now code generated by an LLM on autopilot is not eligible for any kind of copyright protection in any jurisdiction I'm aware of. Using one in a non-autopilot way is, I think, a different story?
@me Oh, and I just want to be clear: when I say "stolen data" I don't mean the absurd scraping that the big model labs are doing; I agree that this is very harmful. I mean specifically whether it's legal to distill data available under any license into a model and redistribute that model under, say, Apache-2.0.