Before AI, when I reviewed PRs I could be more trusting and casual depending on whose code it was, because I knew certain people usually wrote good code and paid attention to details. Now, with AI, no one pays attention to details, so I have to deeply scrutinize every PR, which takes two to three times the effort.

AI agents almost always add things that aren’t necessary, or they follow anti-patterns based on the surrounding legacy code context.

@ramsey Yeah, I’ve noticed a distinct drop in quality since coworkers started using LLMs.

It’s hard to be sure, because it’s made me judge every PR more harshly anyway. But I notice some distinct patterns, like specifying default values unnecessarily.
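A hedged sketch of that "specifying default values unnecessarily" pattern (all names here are hypothetical, not from any real PR):

```python
# Hypothetical API with sensible defaults.
def fetch_users(limit=50, include_inactive=False):
    """Pretend client call; returns a list of user records."""
    return [{"id": i, "active": True} for i in range(limit)]

# LLM-style call site: restates every default explicitly.
users = fetch_users(limit=50, include_inactive=False)

# Equivalent, and what a careful human would write.
users = fetch_users()
```

The two calls behave identically; the first just adds noise the reviewer has to check against the signature.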

And usually a failure to notice appropriate opportunities for abstraction (because hey, the LLM is so good at boilerplate that we don’t need to worry about that any more, right?)

We had a couple of big PRs of ‘nice-to-have’ features that were too low-priority ever to get done. Vibecoding meant they were initially written in a couple of hours — but then they took literal weeks of back and forth in code review to get them into a tolerable state.

@benjamineskola @ramsey True. What I notice is that LLMs introduce a lot of duplicated code, making the mental model harder and harder to understand when an actual human needs to read it.

A good example: define one static constant and reuse that value across the file. AI will instead duplicate that constant in each method, or worse, create magic values (i.e. values with no variable definition at all).

So AI loves generating the same code over and over.
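A minimal sketch of that duplication pattern, with made-up function names:

```python
# Hand-written style: one module-level constant, reused across the file.
MAX_RETRIES = 3

def upload(attempts_log):
    for attempt in range(MAX_RETRIES):
        attempts_log.append(("upload", attempt))

def download(attempts_log):
    for attempt in range(MAX_RETRIES):
        attempts_log.append(("download", attempt))

# Typical LLM output: the constant re-declared inside each function...
def upload_llm(attempts_log):
    max_retries = 3  # duplicated definition
    for attempt in range(max_retries):
        attempts_log.append(("upload", attempt))

# ...or, worse, a bare magic value with no name at all.
def download_llm(attempts_log):
    for attempt in range(3):  # magic value
        attempts_log.append(("download", attempt))
```

The behavior is identical, but now changing the retry count means hunting down every copy instead of editing one constant.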

@melroy @ramsey Yeah. People keep saying “it’s great at handling boilerplate”. The flip side of that is that it creates boilerplate where none was necessary.

I think part of the value in doing code review for my teammates is that I start to see repeated patterns where we could abstract things out. An LLM never notices that, and automating the repetition shelters the developer from noticing it, too.
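As a hedged sketch of the kind of repetition a reviewer might spot and abstract out (all names hypothetical):

```python
# Boilerplate that accumulates when each validator is generated separately.
def validate_email(record):
    if "email" not in record or not record["email"]:
        raise ValueError("missing email")
    return record["email"]

def validate_name(record):
    if "name" not in record or not record["name"]:
        raise ValueError("missing name")
    return record["name"]

# The abstraction a human reviewer would suggest after seeing it twice.
def require_field(record, field):
    if not record.get(field):
        raise ValueError(f"missing {field}")
    return record[field]
```

If the boilerplate is free to generate, nothing ever prompts anyone to write `require_field`.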