@glyph I like point 2.
> Second — and this is actually its more important purpose — code review is a tool for acculturation.
And your points about automatic checks being the primary filters on code properties are well taken and worth repeating.
Code review is a social process, and you should treat it as such.
This is why we also like to include a "distant voice" in code review: someone within the big org who is less intimately familiar with the code base, both to notice things the local folk have perhaps become blind to and to socialize coding practices and organizational affordances more broadly. (It also catches integration bugs and interface confusion early, which is a plus.)
@glyph To check the "I use AI" box at work, I've started using Copilot to "review" the code.
I kick it off at the very end: either before approving the PR when I'm a reviewer, or before requesting a review for a peer.
The goal is to keep my reviewing skills sharp by checking that I've already found every issue, so the Copilot run comes back clean.
Is it useful? I'm not sure. I definitely don't think AI reviews are worth burning the planet for.
Copilot is very good at correcting natural English. This is neat, but it's something a human reviewer should easily catch.
Copilot might be static code analysis tools mixed together with LLMs (is that what agents basically are?), because it did find some inconsistencies.
I would rather configure dedicated tools to do this and use them as I'm writing the code.
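For what it's worth, that setup can be as simple as a pre-commit config that wires the tools into every commit. A sketch, assuming a Python code base; the specific tools and pinned revisions here (ruff, mypy) are just illustrative choices, not a recommendation:

```yaml
# .pre-commit-config.yaml — example only; swap in whatever linters fit your stack
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.4.4        # illustrative pin; use a current release
    hooks:
      - id: ruff       # fast linter: flags inconsistencies before review, not during it
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.10.0       # illustrative pin
    hooks:
      - id: mypy       # type checker: runs as you write, so the review stays social
```

With that in place, the mechanical findings are gone by the time a human (or Copilot) looks at the PR.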