After receiving the first LLM-generated pull requests, I have decided to stop looking at those altogether. When studying a PR, I take into account who made it and whether they have previously been a careful developer. With LLM-generated code I have no idea, and the amount of scrutiny required is just too much, because I have to assume you have no idea what you are doing.
@bert_hubert If I were still working in software development, I think I’d be right there with you on that one.
But I did find the Register’s recent interview with Greg Kroah-Hartman interesting:
https://www.theregister.com/2026/03/26/greg_kroahhartman_ai_kernel/
Obviously, blatant AI/LLM slop deserves to go straight in the bin, but I’m curious what has caused the seeming step change in the quality of AI bug reports and the like. Have the LLMs suddenly improved, or are people finally actually checking what they produce?
