https://github.com/marketplace/actions/no-autopilot
Something like this, together with the Fedora LLM policy, is a much better approach to making sure good MRs/PRs get sent than banning specific tools outright, IMHO, and I'm tired of not speaking up about it. This all just reminds me of the anti-LSP/IDE, anti-Electron, and anti-cloud-native rants of the 2010s. Care and attention is what ultimately matters here, not whether someone uses the 2026 equivalent of Vim vs. Eclipse.
Anyway, it's ultimately up to the project maintainers to decide what they are comfortable with, not me as a random bystander. I would at least ask for some kind of periodic review of policies that ban tools based on their current state.
Would I have banned LLM-generated code in my projects in mid-2025? Yup.
Would I do so in early 2026? Nope. Things have fundamentally changed.
@pojntfx For me it's more of an ethical issue; Claude and all the others stole countless amounts of data to get where they are right now... and are now asking to be treated as legitimate when it comes to patents and copyright... We should always keep that in mind.
But eh, once I put that aside, I agree with you. There are good and useful things to do with LLMs. Those things are sadly overshadowed by the massive amount of harm they are doing.
Anyway, once they get corrupted by ads, I wonder how that will end up.
@me I don't actually agree with the "stole data" argument. In my eyes (and legally, at least in the US and EU), there is no difference between creating a search index from data and training a model on it. There are also completely free and open models (Apertus) that are trained only on publicly available information.
I'm personally much more worried about the secondary effects of LLM use in the wrong places (healthcare, military, writing articles on autopilot, subtle errors in transcription, etc.).