You know, I don't actually agree that using an LLM to edit code should be banned by a project. Running it on autopilot, sure. But as an editor? No. Being able to search for a symbol without knowing its exact name, kick off a big semantic search-and-replace, or copy and fill out a template with my own values with an LLM, instead of wasting my limited time on earth on it, is great.

https://github.com/marketplace/actions/no-autopilot

Something like this, combined with the Fedora LLM policy, is a much better approach to making sure good MRs/PRs get sent than banning specific tools completely, IMHO, and I regret not speaking up about this earlier. This all just reminds me of the anti-LSP/IDE, anti-Electron, and anti-cloud-native rants of the 2010s. Care and attention are what matter here ultimately, not whether someone uses the 2026 equivalent of Vim vs. Eclipse.

No Autopilot (GitHub Marketplace): CI for mindless PRs. Save time by filtering low-quality PRs.

Anyway, ultimately it's up to the project maintainers to decide what they are comfortable with, not me as a random bystander. I would at least ask for some kind of periodic review process for policies that ban tools based on their current state.

Would I have banned LLM-generated code in my projects in mid-2025? Yup.

Would I do so in early 2026? Nope. Things have fundamentally changed.

For puregotk there has been more than one occasion where it's been enormously useful to have the LLM look at the GIR file and the generated code, then look at the generator, and find what's wrong. That would have taken literal hours otherwise (I know, because it did!). That doesn't mean the solution it comes up with is good (e.g. it doesn't find where it's best to add the fix, or at which layer), but for analysis? It's so, so helpful.
Same for stuff like making sure your whole project follows its writing guidelines, or checking whether you've missed adding _() to something. Are people just ignoring this? Does that not count under these LLM bans? I don't know, but sometimes it feels like people on here live in a different reality where LLMs are stuck in 2023 eternally.

@pojntfx It’s been really interesting to see how different communities react to AI at different times. It’s a huge mental leap we need to take to figure out how to integrate it into our mental models. It’s much easier to reject the tool entirely.

Hopefully we can help others by connecting the dots and paving the way; ultimately everyone just wants good code and sustainable projects.

great thread!

@eljojo I mean, yeah, that's def. the case. The whole "OpenClaw sends PRs to random projects full of slop" approach is obviously terrible, same with all of the autopilot "here are the logs, here is what failed, here is the codebase, fix it and open the PR" approaches, because those usually just create fresh bugs. But idk, to me that is a problem with attention and care, not with the actual LLM

@pojntfx for me it's more about an ethical issue; Claude and all the others stole countless amounts of data to get where they are right now... and are now asking to be viewed as legitimate for patents and copyright... We should always keep that in mind.

But eh, once I put that aside, I agree with you. There are good and useful things to do with LLMs. Those things are sadly overshadowed by the massive amount of bad they are doing.

Anyway, once they get corrupted by ads, I wonder how it will end up.

@me I don't actually agree with the "stole data" argument. There is in my eyes (and legally, at least in the US and EU) no difference between creating a search index and training a model on it. There are also completely free and open models (Apertus) that are built only from publicly available info.

I'm personally much more worried about the secondary effects of LLM use in the wrong places (healthcare, military, writing articles on autopilot, subtle errors in transcription, etc.).

@me And I mean yes, the big proprietary models from Anthropic and OpenAI are made by morally corrupt corporations, I don't doubt that at all. The models already have a big bias today, and ads will only make that worse.
@me And also, re: patents and copyright, at least right now code generated by an LLM on autopilot is not eligible for any kind of copyright protection in any jurisdiction I'm aware of. Using it non-autopilot is, I think, a different story?
@me Oh, and I just want to be clear: when I say "stolen data" I don't mean the absurd scraping that the big model labs are doing; I agree that this is very harmful. I mean specifically whether it's legal to distill data available under any license into a model and redistribute that model under, say, Apache-2.0.
@pojntfx I have the feeling this is a bit of a reaction to the pmOS policy. If that is the case, a few thoughts:
* Indeed all policies are up for review after some time. What we got now is an update after review following lots of discussion with community and team members
* You can use AI tools to debug, find issues, and understand your code. What the policy explicitly forbids is code contributions, and recommending it in our community, due to the sustainability aspect that is at the core of what we do
* The whole thing is tricky due to the project values. It's not a technical decision. The main part of the rationale in the policy is, in fact, social: from resource consumption, to copyright, to the DDoS attacks that forced all of us to deploy Anubis

@pabloyoyoista Oh yeah sorry, it's about that policy but also others like it I've seen (I'm reminded of Zig, for example).

1) That makes sense and is good to know

@pabloyoyoista 2) That makes a lot of sense to me, but it really wasn't clear from reading the policy. "Submitting contributions fully or in part created by generative AI tools to postmarketOS." - if I only ever let it run to analyze, but make the change it recommends myself, how is that functionally different? Maybe something like "LLMs can only be used in a read-only fashion" would be clearer here?
@pabloyoyoista 3) I understand, and yes a lot of today's LLMs run on dirty grids for sure esp. OpenAI and Anthropic ones. And yes, even if you run an open-weight model locally, that model was still trained on a dirty grid most likely. I will also mention though that doing the actual _inference_ itself is really no different from running any other program on a GPU IMHO. Mine for example runs in a local DC here in Vancouver that's 100% hydro, and the grid is 99% clean. (1/4)
@pabloyoyoista 3) Re: DDoS attacks, yes, 100%, I agree with you there. That's obviously terrible for so many reasons. Blaming a user of an LLM who is not doing a DDoS attack themselves, though, is to me no different from blaming them for using, say, a computer made in an unethical way to contribute (which probably applies to a lot if not almost all of them), IMHO. Yes, there is probably some layer of responsibility there, esp. if you pay the model provider, but if you (2/4)

@pabloyoyoista use an open-weight model there is no exchange of goods or even signal you're using it that would further such behaviour in my personal opinion.

Ultimately, I'm not a contributor myself except for like updating some packages one time, so I really don't have any kind of say in this and don't want to pretend I do. I guess I am however worried about the impacts of a policy like this if I, as a daily user of postmarketOS, (3/4)

@pabloyoyoista ran into a problem but couldn't fix it (or well, could fix it, but using way more time than I usually would need, and don't have) because the tools I use are banned by the project and could even get me reported to a CoC committee. (4/4)

@pojntfx thanks a lot for the thoughts. I take note of the feedback, as I did when we got feedback on our initial policy; there are surely improvements in wording and clarification that can be made, even if those always take time due to the nature of something like this.

Regarding 3: I can understand your position, but I will be clear (and this is strictly a personal position) that for a project whose goals and mission are clearly centered on sustainability, it has different consequences. A policy that allowed contributing with a technology that is actively harmful to our mission (if this changes, surely we can adapt, but that is the overwhelming reality right now) might make it hard for people to take our mission seriously. To me personally, that is a much greater structural risk for the project than some potential coding slowness. That said, if you are ever in a situation where you would be unable to contribute due to any policy in place, please reach out; I'm sure we can find solutions. The goal is not to stop people from contributing.

@pabloyoyoista Thanks for the response, I really appreciate it. I think I now better understand the ethical concerns re: using tools from companies that harm the project you're contributing to. I'll def. reach out in case I run into issues in the future :)
@pojntfx the generator I wrote is definitely not code I am proud of
@jwijenbergh It works great, fwiw! I might take a look at a larger refactor at some point in the future when there is time (maybe we can construct the Go code with something like github.com/dave/jennifer to make it more reliable? I used it in https://github.com/pojntfx/html2goapp a while ago and it worked well!)
GitHub - pojntfx/html2goapp: CLI and web app to convert HTML markup to go-app.dev's syntax.

@pojntfx as much as I hate it, using LLMs as a smart search/diagnostics tool is a valid and useful case.

"Why isn't this code giving me the results I expect?" along with a sanitized snippet has saved me hours of manual debugging, and I've learned more things quicker as a result.

@swordgeek Yeah, I feel the same way. I've had this happen in places where the docs weren't great as well. Being able to point an LLM at source code and ask it why something I've changed in a config file doesn't work the way I expect (I had this happen with Garage, the S3-compatible store, recently, and also with CRIU) is really, really valuable.
@swordgeek It doesn't replace actually engaging with the project to build your own mental model of how things work, ofc, but used in the right context it can really lead to improvements in lots of ways I didn't expect.
@pojntfx one example: I have a $PATH sanitization command that I've used forever. We wanted to use it for templated images, and I asked an LLM if it was a good idea. It replied with an improvement, a way of catching a corner case I'd missed, and an admonition that the path should be built from scratch correctly, not fixed after a potential race-condition window. (We actually already did that - this was belt-and-suspenders safety.)
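Not swordgeek's actual command, but a hedged sketch of the kind of sanitization being discussed, assuming the usual concerns: empty $PATH entries are treated as the current directory, and relative or duplicate entries are risky in templated images. In Go:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// sanitizePath is illustrative, not the command from the thread:
// it drops empty entries (which the shell treats as "."), relative
// paths, and duplicates, preserving first-seen order.
func sanitizePath(path string) string {
	seen := make(map[string]bool)
	var kept []string
	for _, dir := range strings.Split(path, ":") {
		if dir == "" || !filepath.IsAbs(dir) {
			continue // empty or relative entries are an injection risk
		}
		dir = filepath.Clean(dir)
		if !seen[dir] {
			seen[dir] = true
			kept = append(kept, dir)
		}
	}
	return strings.Join(kept, ":")
}

func main() {
	fmt.Println(sanitizePath("/usr/bin::./tools:/usr/bin:/usr/local/bin"))
	// → /usr/bin:/usr/local/bin
}
```

As the LLM's admonition in the post above suggests, filtering like this is best done while constructing the path in the first place, not as a cleanup pass over a value something else may already have read.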
@pojntfx those listed features do not need gen AI / LLMs to exist