Seems painfully obvious that, whatever you think about #genai code, anyone using it is heading for a code-review logjam. Assuming that the org requires code review; if yours doesn’t, nothing I can say will help you. Anyhow, Rishi Baldawa writes smart stuff about the problem and possible ways forward, in “The Reviewer Isn't the Bottleneck”: https://rishi.baldawa.com/posts/review-isnt-the-bottleneck/

[My prediction: A lot of orgs will *not* do smart things about this and will suffer disastrous consequences in the near future.]

The Reviewer Isn't the Bottleneck

AI tools are flooding PR queues and the instinct everywhere is to call review the bottleneck. I think that’s the wrong question. The reviewer is the last sync point before production changes. The goal shouldn’t be how to remove the gate, but how to make it cheaper to operate.

Rishi Baldawa

@timbray I am hearing peers in other companies being pushed by executives to abandon code review completely.

If you’re wondering how deep the psychosis goes.

@petrillic @timbray That's certainly my concern. I'm in security, so mostly watching this from the sidelines, as I listen to execs essentially pushing for "vibe-code to prod".

It can only become worse when making changes to programs, since it won't be incremental change. Instead, it'll be, "no, do the thing, but not like that." The old fluff will be thrown away, and so the new version will be whatever it is, possibly completely different from last time. How do you review that?

@tim_lavoie @petrillic @timbray

And in this flood of unintelligible slop, all eagerly serving under the commandment "move fast and break things", somebody starts dropping malicious code and big chunks of it just roll right over the broken safety net.

@violetmadder

@tim_lavoie @petrillic @timbray

There just IS NO compelling argument to execs who want more code shipped yesterday. Code review or not, AI or not... it doesn't matter.

We p2p code review. Mandatory. Automated testing, all of the best practices.
We found a bug in testing akin to "car stalls when turning left and windshield wiper is on." We were told to _ship it anyways_. Maybe it's only for UPS drivers.

So if having an AI code review means having a code review _at all_... ¯\_(ツ)_/¯

@tezoatlipoca @violetmadder @petrillic @timbray
Well, you found an issue, and raised it! The approach we tend to take is that the security team identifies risks, and it's up to the business to decide from there. At least our process puts an executive sign-off in the way with risk assessments. I suspect though that the vibe-code stuff is going to change too fast for us to test it at scale, and it won't get those resulting risk assessments.

@timbray FWIW the LLM knows a lot of best practices and bad-things-you're-not-supposed-to-do that it doesn't pay close attention to when coding. It likes to do the minimum that meets what was asked, with bad results for maintainability.

But it has seen, and does know, what 'good code' looks like, so the exact same LLM that did the coding can very usefully assess, critique, and fix what it just emitted.

@hopeless Well, the LLM is trained on all the code, good & bad, no? Not sure how it knows which is which.

@timbray Same as a human, it has also read all the blog posts on "best practices" and "code smells".

Anyway don't take my word for it, open a fresh context and ask the LLM to assess the ways that (the code it wrote in a previous context) falls short of being maintainable and high quality, and to patch it to be better in those cases.
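[The two-context workflow described above can be sketched roughly as follows. This is a minimal illustration, not anyone's actual tooling: `complete` is a hypothetical stand-in for whatever LLM API you use, and the prompt wording is mine, not a quote from the thread.]

```python
def build_critique_prompt(code: str) -> str:
    """Prompt for a FRESH context: the model sees only the code,
    not the conversation that produced it."""
    return (
        "Assess the following code. List the specific ways it falls short "
        "of being maintainable and high quality, then rewrite it to fix "
        "each shortcoming you identified.\n\n"
        f"```\n{code}\n```"
    )

def two_pass(complete, task: str) -> str:
    """Generate in one context, then critique/rewrite in a second one.

    `complete` is a hypothetical callable (prompt -> response) wrapping
    your LLM of choice; each call is assumed to start a fresh context.
    """
    # Pass 1: the context fills up with the task, APIs, surrounding code.
    draft = complete(task)
    # Pass 2: a fresh context sees only the draft, so its attention is
    # free for the critique rather than the original task chatter.
    return complete(build_critique_prompt(draft))
```

Whether pass 2 actually converges on better code, rather than just different code, is exactly the point being argued in the replies below.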

@hopeless @timbray

If what it emitted needs fixing, why did it emit it in the first place instead of getting it right the first time? How many times do you pull the lever on this self-regurgitating slot machine before you proudly announce it came out right this time? Do we roll dice, or pull some floaty balls out of a globe or what?

It doesn't KNOW what good code looks like. It imitates whatever is associated with markers that indicate "good code", spewing similar patterns based on guesstimating what had positive feedback, averaged across who knows what criteria, processed and judged some particular way, with occasional direct human input attempting to correct its course, and it all sounds like pachinko balls falling.

@[email protected] @timbray

While it's engaged in writing code, same as a human, its context is full of all the APIs, surrounding code, language rules, and the task definition from the human; it needs all that to do the job, and it literally lacks the ability to give attention to things that are not directly needed for what's in front of it at that time.

In another context, though, it's not coding and has its attention free to use its existing knowledge to do the critique and, after setting the task, the rewrite.

@[email protected] @timbray

Given that you can ask it what good code looks like and it can tell you, and that it can not only tell you what you need to do to make specific bad code good but also enact its own advice and give you the better code, isn't it meaningless to talk about what it "knows" being inferior to the sense in which a meatbag thinks it knows something?

You can leverage its knowledge to do stuff faster and better than you could alone. You can either use 1-pass slop or refine it iteratively.

@timbray

Curl shut down their bug bounty after six years

Huh? I often see Daniel ranting about this, sure, but I haven’t seen what they’re saying here, and their link doesn’t say that either

[edit] I was wrong, they stopped in January
https://www.theregister.com/2026/01/21/curl_ends_bug_bounty/

Curl shutters bug bounty program to remove incentive for submitting AI slop

Maintainer hopes hackers send bug reports anyway, will keep shaming ‘silly’ ones

The Register

@GuillaumeRossolini @timbray https://curl.se/dev/vuln-disclosure.html

> There is no bug bounty and the curl project never offers rewards for reported vulnerabilities.

More in https://daniel.haxx.se/blog/2026/01/26/the-end-of-the-curl-bug-bounty/

@nikclayton yes I completely missed that, my bad

@timbray If not reviewing is bad, and waiting for reviewers is infeasible, perhaps we could take another look at pairing?

@cford @timbray

Every developer I've ever known (which is a LOT): NO

Listen, we all read XP from the trenches in the mid 00s and rushed off to try pair programming, and it lasted... like a week (the rest stuck). No one likes having coworkers that close for that long (and that's not an indictment of developer hygiene, it's a personal space thing).

@tezoatlipoca @timbray It's pretty pervasive in my workplace, but I totally get it's not a universal preference.

@timbray
This is what I wonder about when it comes to some of the FOSS slopifications. Because sometimes devs turn to AI to manage a workload that's way too high, suffering from a lack of volunteers and so forth, but will it actually help beyond the short term, or will it end up making things worse?

@timbray I don't think writing quality production-level code is going to get faster any time soon. Sure, you can write 1000s of lines of terrible code with LLMs in 10 minutes, but so what? That doesn't help anybody.