I will never understand why we looked at modern programming, saw that there is a good bit, which is programming, and a bad bit, which is code review… and decided to automate the good bit at the expense of having to do a lot more of the bad bit.
Also, LLMs are designed to fool us. That's their essential feature; it's what they're optimized to do best.
So they not only increase the volume (both frequency and size) of code reviews, but also make the process much more difficult and error-prone, because they're *designed* at their core to produce plausible-looking code that will fool code reviewers.