But you can’t just not review things!

Actually you can. If you shift the reviews far to the left and call them code design sessions instead, raise problems in dailies, and pair-program through the gnarly bits, then 90% of what people think a review should find goes away. The expectation that you'll discover bugs, architecture problems, and design problems doesn't exist if you've already agreed with the team what you're going to build. The remaining 10%, things like variable naming, whitespace, and patterns, can be checked with a linter instead of a person. If you can get the team to that level, you can stop doing code reviews.

You also need to build a team that you can trust to write the code you agreed you'd write, but if your reviews exist to check that someone has done their job well enough, then you have bigger problems.

I've seen engineers I respect abandon this way of working as a team for the productivity promise of conjuring PRs with a coding agent. It blows away years of trust so quickly when you realize they stopped reviewing their own output.
I’m so disappointed to see the slip in quality from colleagues I think are better than that. People who used to post great PRs are now posting stuff with random unrelated changes, little structs and helpers all over the place that we already have in common modules, etc. :’(

> little structs and helpers all over the place that we already have in common modules

I've often wondered about building some kind of automated "this codebase already has this logic" linter

Not sure how it would actually work, otherwise I'd build it. But it would definitely be useful

Maybe an AI tool could do something like that nowadays. "Search this codebase for instances of duplicated functions and list them out" sort of thing

>this codebase already has this logic

At first glance this looks like it might be the halting problem in disguise (instead of asking about the general function of the logic, just ask whether both pieces have logic that halts or doesn't halt). I think we would need to allow for false negatives for this to even be theoretically possible, so while identical-text comparison would be easy enough, anything past that quickly becomes complicated, and you can probably expand the complexity indefinitely by handling more and more edge cases (but never every edge case, due to the underlying halting problem/undecidability of code).

You only need to detect structurally similar code.
You absolutely do not need AI for that. You need ASTs.
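As a rough illustration of the AST approach (sketched here in Python with the stdlib `ast` module rather than Ruby; the `Normalizer` and `find_duplicates` names are mine, and this only catches functions whose normalized trees are *identical*, not merely similar, so it's far cruder than Flay):

```python
import ast
import collections

class Normalizer(ast.NodeTransformer):
    """Erase identifiers and literal values so only the code's shape remains."""
    def visit_FunctionDef(self, node):
        self.generic_visit(node)  # normalize the body and arguments first
        node.name = "_"
        return node
    def visit_arg(self, node):
        node.arg = "_"
        return node
    def visit_Name(self, node):
        return ast.copy_location(ast.Name(id="_", ctx=node.ctx), node)
    def visit_Constant(self, node):
        return ast.copy_location(ast.Constant(value=None), node)

def structural_key(source):
    """A hashable fingerprint of a snippet's shape, ignoring names/literals."""
    tree = Normalizer().visit(ast.parse(source))
    return ast.dump(tree)  # structurally identical code dumps identically

def find_duplicates(sources):
    """Group snippet labels whose normalized ASTs collide."""
    groups = collections.defaultdict(list)
    for label, src in sources.items():
        groups[structural_key(src)].append(label)
    return [labels for labels in groups.values() if len(labels) > 1]
```

For example, `total(xs)` summing with `s += x` and `acc(items)` summing with `t += i` collapse to the same key and get reported together, despite sharing no identifiers. A real tool would hash subtrees at every level (as Flay does) so partial overlaps score too, rather than requiring whole functions to match.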

In Ruby we have the Flay gem, for example.

https://github.com/seattlerb/flay

GitHub - seattlerb/flay: Flay analyzes code for structural similarities. Differences in literal values, variable, class, method names, whitespace, programming style, braces vs do/end, etc are all ignored.