You know, I don't actually agree that using an LLM to edit code should be banned by a project. Running it on autopilot? Sure, ban that. But as an editor? No. Being able to search for a symbol without knowing its exact name, kick off a big semantic search-and-replace, or have an LLM copy and fill out a template with my own values instead of wasting my limited time on earth doing it by hand is great.

https://github.com/marketplace/actions/no-autopilot

Something like this, combined with the Fedora LLM policy, is a much better approach to making sure good MRs/PRs get sent than banning specific tools outright, IMHO, and I regret not speaking up about this earlier. It all reminds me of the anti-LSP/IDE, anti-Electron, and anti-cloud-native rants of the 2010s. Care and attention are what ultimately matter here, not whether someone uses the 2026 equivalent of Vim vs. Eclipse.

Anyway, it's ultimately up to the project maintainers to decide what they're comfortable with, not me as a random bystander. I would at least ask for some kind of periodic review process for policies that ban tools based on the tools' current state, though.

Would I have banned LLM-generated code in my projects in mid-2025? Yup.

Would I do so in early 2026? Nope. Things have fundamentally changed.

For puregotk there has been more than one occasion where it was enormously useful to have the LLM look at the GIR file and the generated code, then look at the generator, and find what's wrong. That would have taken literal hours otherwise (I know, because it did!). That doesn't mean the solution it comes up with is good (e.g. it doesn't directly find the best place or layer to add the fix), but for analysis? It's so, so helpful.
Same for stuff like making sure your whole project follows its writing guidelines, or checking whether you've missed adding _() to a string somewhere. Are people just ignoring this? Does that not count under these LLM bans? I don't know, but sometimes it feels like people on here live in a different reality where LLMs are stuck in 2023 eternally.
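(For context on the _() point: this kind of check is also partly mechanizable without an LLM. A minimal sketch, assuming Python source and gettext-style _() markers — the function name `find_unmarked_strings` and the example snippet are mine, not from any real project, and a real checker would also skip docstrings, f-strings, and non-user-facing strings:)

```python
import ast


def find_unmarked_strings(source: str) -> list[tuple[int, str]]:
    """Return (line, text) for string literals not wrapped in _()."""
    tree = ast.parse(source)

    # First pass: remember string constants that are already arguments to _(...).
    marked: set[int] = set()
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "_"):
            for arg in node.args:
                if isinstance(arg, ast.Constant):
                    marked.add(id(arg))

    # Second pass: report every other string literal as potentially untranslated.
    hits: list[tuple[int, str]] = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Constant)
                and isinstance(node.value, str)
                and id(node) not in marked):
            hits.append((node.lineno, node.value))
    return hits


example = 'print(_("hello"))\nprint("world")\n'
print(find_unmarked_strings(example))  # [(2, 'world')]
```

The LLM version of this check is just fuzzier: it can also judge whether a flagged string is actually user-facing, which this AST walk cannot.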

@pojntfx It’s been really interesting to see how different communities react to AI at different times. It’s a huge mental leap we need to take to figure out how to integrate it into our mental models. It’s much easier to reject the tool entirely.

Hopefully we can help others by connecting the dots and paving the way; ultimately everyone just wants good code and sustainable projects.

great thread!

@eljojo I mean yeah, that's def. the case. The whole "OpenClaw sends PRs to random projects full of slop" approach is obviously terrible, as are all of the "here are the logs, here's what failed, here's the codebase, fix it and open the PR" workflows that run on full autopilot, because those usually just create fresh bugs. But idk, to me that's a problem with attention and care, not with the actual LLM.