After receiving the first LLM-generated pull requests, I have made a blanket decision to no longer look at them. When studying a PR, I take into account who made it and whether they have previously been a careful developer. With LLM-generated code I have no idea, and the amount of scrutiny required is just too much. Because I have to assume you have no idea what you are doing.
This is in addition to the many other reasons not to accept LLM-generated code, btw.
@bert_hubert Computer programming is a social activity.
@bert_hubert Personally I would add "did they send me cat memes?" But I am that way.
@bert_hubert
So how do you "resolve" those pull requests? Just close with a remark?
@mboelen yes. I need to put up a warning so people don’t waste their time.
@bert_hubert
Makes sense. So far I haven't received LLM output as a pull request, or at least not anything I recognized as such. Probably a good idea indeed to share within the project whether such contributions are welcome (or not) and how they will be dealt with.
@bert_hubert @mboelen are they wasting any time? (Other than yours if you review it, which is enough reason to discourage it. Or maybe the notice just means they'll hide it)
@bert_hubert given that they don't seem to mind wasting your time with slop PRs, I wouldn't be in any great hurry to put up warning signs.
@bert_hubert i fully agree.
When I started playing around with LLM-generated code after a couple of decades of being AFK, I appreciated the fact that I could build out ideas by myself.
I also thought that I could use this newfound capability to give back to open source projects that I admired.
Now, after several weeks of playing around, I am convinced that submitting my LLM-generated code would be counterproductive rather than helpful to those projects.
@bert_hubert some of these are made by bots run by AI companies trying to use your free labor to train their product.

@bert_hubert @thomasjwebb

Speaking of training AI...
... wait till you use office 365 or Google Mail 😑

@bert_hubert I think mastodon's LLM policy is a good one, iiuc: when you submit a PR you are responsible for it.

If you produce good code with an LLM and review it, that's fine. If you submit LLM generated slop you get ignored.

A good LLM-generated PR should be indistinguishable from a good human one.

@riffraff I think you missed the point of my post. “Looks good” does not tell me everything.

@bert_hubert perhaps I did :)

My response comes from the question: how do you evaluate "this PR is LLM-generated, so I will reject it a priori"?

What I'm saying is that if it appears LLM-generated, then it does not "look good", and so it's fine to reject it.

Otherwise, I think if a dev took the time to edit it, it's just like any other PR.

@riffraff @bert_hubert the only good LLM policy is to outright ban anyone dumb enough to use them 🤷‍♀️
@bert_hubert I recently came to the same conclusion. The company I work for is starting to roll out an AI-policy (which is a good thing), and one of my requirements was that AI-assisted PRs would mention they're AI-assisted.
The basis for this was that I have various levels of trust in my colleagues. So when I review a PR, I take my knowledge of the creator into account. I cannot do that with an AI, because I don't trust the knowledge of the AI.

@bert_hubert If I were still working in software development I think I’d be right there with you on that one.

But I did find the Register’s recent interview with Greg Kroah-Hartman interesting:

https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/

Obviously, blatant AI/LLM slop deserves to go straight in the bin, but I'm curious what has caused a seeming step change in the quality of AI bug reports, etc. Have the LLMs suddenly improved, or are people finally actually checking what they produce?

AI bug reports went from junk to legit overnight, says Linux kernel czar

Interview: Greg Kroah-Hartman can't explain the inflection point, but it's not slowing down or going away

The Register
@bert_hubert Why is code quality not important anymore?