https://github.com/marketplace/actions/no-autopilot
Something like this and the Fedora LLM policy are a much better approach to making sure good MRs/PRs get sent than banning specific tools completely, IMHO, and I regret not speaking up about this earlier. This all just reminds me of the anti-LSP/IDE, anti-Electron, and anti-cloud-native rants of the 2010s. Care and attention are what matter here ultimately, not whether someone uses the 2026 equivalent of Vim vs. Eclipse.
Anyways, ultimately it's up to the project maintainers to decide what they are comfortable with, not me as a random bystander. I guess I would at least ask for some kind of periodic review process for policies that ban tools based on their current state.
Would I have banned LLM-generated code in my projects in mid-2025? Yup.
Would I do so in early 2026? Nope. Things have fundamentally changed.
@pojntfx It's been really interesting to see how different communities react to AI at different times. It's a huge mental leap we need to take to figure out how to integrate it into our mental models. It's much easier to reject the tool entirely.
Hopefully we can help others by connecting the dots and paving the way; ultimately, everyone just wants good code and sustainable projects.
great thread!
@pojntfx For me it's more of an ethical issue; Claude and all the others stole countless amounts of data to get where they are right now... and are now asking to be viewed as legitimate for patents and copyright... We should always keep that in mind.
But eh, once I put that behind me, I agree with you. There are good and useful things to do with LLMs. Those things are sadly overshadowed by the massive amount of harm they are doing.
Anyway, once they get corrupted by ads, I wonder how it will end up.
@me I don't actually agree with the "stole data" argument. In my eyes (and legally, at least in the US and EU) there is no difference between creating a search index and training a model on the same data. There are also completely free and open models (Apertus) that are trained only on publicly available info.
I'm personally much more worried about the secondary effects of LLM use in the wrong places (healthcare, military, writing articles on autopilot, subtle errors in transcription, etc.).
@pabloyoyoista Oh yeah, sorry, it's about that policy but also about others like it that I've seen (I'm reminded of Zig, for example).
1) That makes sense and is good to know.
@pabloyoyoista If you use an open-weight model, there is no exchange of goods, or even a signal that you're using it, that would further such behaviour, in my personal opinion.
Ultimately, I'm not a contributor myself except for like updating some packages one time, so I really don't have any kind of say in this and don't want to pretend I do. I guess I am, however, worried about the impacts of a policy like this if I, as a daily user of postmarketOS, (3/4)
@pojntfx Thanks a lot for the thoughts. I take note of the feedback, as I did when we got feedback on our initial policy. There are surely improvements in wording and clarification that can be made, even if those always take time due to the nature of something like this.
Regarding 3: I can understand your position, but I will be clear (and this is strictly a personal position) that for a project whose goals and mission are clearly about sustainability, this has different consequences. A policy that allowed contributing with a technology that is actively harmful to our mission (if this changes, surely we can adapt, but that is the overwhelming reality right now) might make it hard for people to take our mission seriously. To me personally, that is a much greater structural risk to the project than some potential coding slowness. That said, if you are ever in a situation where you would be unable to contribute due to any policy in place, please reach out; I'm sure we can find solutions. The goal is not to stop people from contributing.