If you replace a junior with an #LLM and make the senior review the output, the reviewer is now scanning for rare but catastrophic errors scattered across a much larger output surface, thanks to LLM "productivity."

That's a cognitively brutal task.

Humans are terrible at sustained vigilance for rare events in high-volume streams. Aviation, nuclear power, and radiology all have extensive literature on exactly this failure mode.

I propose that any productivity gains will be consumed by false-negative review failures.

@pseudonym I have posed this conundrum before, and the answer I received is that there is also an opportunity cost to not moving faster: the risk of a catastrophic bug may not outweigh the risk of being overtaken by competitors, especially since that was already happening before LLMs anyway.

Also, it *seems* models are improving at detecting these bugs, so they are being used to review changes too, a task which, for the reasons you point out, they might be better at than people.

@toldtheworld @pseudonym I didn't think I'd see the day when I'd want to ask CEOs "If all your friends jumped off a cliff, would you do it too?"

Overtaken by competitors how? How is it "being overtaken" when what is actually happening is "my competitors are introducing fundamental flaws into their business model that will completely vitiate it as a workable product, so all I have to do is wait for them to fail"?

Apparently the free market doesn't turn people into money-making machines that build products other people want, it turns CEOs into lemmings. Who knew?