Before AI, when reviewing PRs I could be more trusting and casual depending on whose code it was, because I knew certain people usually wrote good code and paid attention to details. Now, with AI, no one pays attention to details, so I have to deeply scrutinize every PR, which takes two or three times the effort.

AI agents almost always add things that aren’t necessary, or they follow anti-patterns based on the surrounding legacy code context.

@ramsey Yeah, I’ve noticed a distinct drop in quality since coworkers started using LLMs.

It’s hard to be sure, because it’s made me judge every PR more harshly anyway. But I notice some distinct patterns, like specifying default values unnecessarily.

And usually a failure to notice appropriate opportunities for abstraction (because hey, the LLM is so good at boilerplate that we don’t need to worry about that any more, right?)

We had a couple of big PRs of ‘nice-to-have’ features that were too low-priority ever to get done. Vibecoding meant they were initially written in a couple of hours — but then they took literal weeks of back and forth in code review to get them into a tolerable state.

@benjamineskola @ramsey I tried it myself: as an experiment I suppressed my own opinions and got help with prompting (since the main argument against me was that my LLM responses were hallucinated and full of BS because of how I prompted), as LLM usage had become the norm at one of my workplaces. After two months I gave up, and I was back to cleaning up the generated garbage. Most of it was produced by senior engineers clearly blinded by the hype and pressured by managers. So I don't fall for 'AI is just a tool' or 'seniors know what to include in their PRs' either; it all falls apart under the pressure and feedback loops of efficiency drives and capital.