You know, I don't actually agree that using an LLM to edit code should be banned by a project. Running it on autopilot, sure, ban that. But as an editor? No. Being able to search for a symbol without knowing its exact name, run a big semantic search-and-replace, or have an LLM copy and fill out a template with my own values instead of wasting my limited time on earth doing it by hand is great.

https://github.com/marketplace/actions/no-autopilot

Something like this and the Fedora LLM policy are a much better approach to making sure good MRs/PRs get sent than banning specific tools completely, IMHO, and I regret not speaking up about this earlier. This all just reminds me of the anti-LSP/IDE, anti-Electron and anti-cloud-native rants of the 2010s. Care and attention is what matters here ultimately, not whether someone uses the 2026 equivalent of Vim vs. Eclipse.


Anyways, ultimately it's up to the project maintainers to decide what they are comfortable with, not me as a random bystander. I would at least ask for some kind of periodic review process for policies that ban tools based on their current state.

Would I have banned LLM-generated code in my projects in mid-2025? Yup.

Would I do so in early 2026? Nope. Things have fundamentally changed.

For puregotk there have been multiple times where it's been enormously useful to have the LLM look at the GIR file and the generated code, then look at the generator, and find what's wrong. That would have taken literal hours otherwise (I know, because it did!). That doesn't mean the solution it comes up with is good (e.g. it often doesn't directly find the best place or layer to add the fix), but for analysis? It's so, so helpful.
Same for stuff like making sure your whole project follows its writing guidelines correctly, or checking whether you've missed wrapping a string in _(). Are people just ignoring this? Does that not count under these LLM bans? I don't know, but sometimes it feels like people on here live in a different reality where LLMs are stuck in 2023 eternally.
@pojntfx I have the feeling this is a bit of a reaction to the pmOS policy. If that is the case, a few thoughts:
* Indeed, all policies are up for review after some time. What we have now is an update after review, following lots of discussion with community and team members
* You can use AI tools to debug, find issues, and understand your code. What the policy explicitly forbids is submitting code contributions created with it, and recommending it in our community, due to the sustainability aspect that is at the core of what we do
* The whole thing is tricky due to the project values. It's not just a technical decision. The main part of the rationale in the policy is, in fact, social: from resource consumption, to copyright, to the DDOS attacks that forced all of us to deploy Anubis

@pabloyoyoista Oh yeah sorry, it's about that policy but also others like it I've seen (I'm reminded of Zig, for example).

1) That makes sense and is good to know

@pabloyoyoista 2) That makes a lot of sense to me, but really wasn't clear to me from reading the policy. "Submitting contributions fully or in part created by generative AI tools to postmarketOS." - if I only ever let it run to analyze, but make the change it recommends myself, how is that functionally different? Maybe something like "LLMs can only be used in a read-only fashion" would be clearer here?
@pabloyoyoista 3) I understand, and yes, a lot of today's LLMs run on dirty grids for sure, esp. OpenAI's and Anthropic's. And yes, even if you run an open-weight model locally, that model was most likely still trained on a dirty grid. I will also mention though that doing the actual _inference_ itself is really no different from running any other program on a GPU, IMHO. Mine for example runs in a local DC here in Vancouver that's 100% hydro, and the grid is 99% clean. (1/4)
@pabloyoyoista 3) Re: DDOS attacks, yes, 100%, I agree with you there. That's obviously terrible for so many reasons. Blaming a user of an LLM who is not doing a DDOS attack themselves, though, is to me no different from blaming them for using, say, a computer made in an unethical way to contribute (which probably applies to a lot, if not almost all, of them), IMHO. Yes, there is probably some layer of responsibility there, esp. if you pay the model provider, but if you (2/4)

@pabloyoyoista use an open-weight model, there is no exchange of goods or even a signal that you're using it that would further such behaviour, in my personal opinion.

Ultimately, I'm not a contributor myself except for updating some packages one time, so I really don't have any kind of say in this and don't want to pretend I do. I am, however, worried about the impact of a policy like this if I, as a daily user of postmarketOS, (3/4)

@pabloyoyoista ran into a problem but couldn't fix it (or well, could fix it, but using way more time than I would usually need, and don't have) because the tools I use are banned by the project and could even get me reported to a CoC committee. (4/4)

@pojntfx thanks a lot for the thoughts. I take note of the feedback, as I did when we got feedback on our initial policy. There are surely improvements in wording and clarification that can be made, even if those always take time due to the nature of something like this.

Regarding 3: I can understand your position, but I will be clear (and this is strictly a personal position) that for a project whose goals and mission are clearly about sustainability, it has different consequences. A policy that allowed contributing with a technology that is actively harmful to our mission (if this changes, surely we can adapt, but that is the overwhelming reality right now) might make it hard for people to take our mission seriously. To me personally, that is a much greater structural risk for the project than some potential coding slowness. That said, if you are ever in a situation where you would be unable to contribute due to any policy in place, please reach out; I'm sure we can find solutions. The goal is not to stop people from contributing.

@pabloyoyoista Thanks for the response, I really appreciate it. I think I now better understand the ethical concerns re: using tools from companies that harm the project you're contributing to. I'll def. reach out in case I run into issues in the future :)