If you replace a junior with an #LLM and make the senior review its output, the reviewer is now scanning for rare but catastrophic errors scattered across a much larger output surface, thanks to LLM "productivity."

That's a cognitively brutal task.

Humans are terrible at sustained vigilance for rare events in high-volume streams. Aviation, nuclear power, and radiology all have extensive literature on exactly this failure mode.

I propose that any productivity gains will be consumed by false-negative review failures.

@pseudonym And because the high volume consists of what I’ve dubbed “plausible bullshit”, reviewers will have to battle a plethora of their own biases as well.

There are fields (I’ve heard stories about protein and material design, and vulnerability discovery) where filtering the BS for real discoveries can be worth it. I’m guessing it works because there is a reality to test against.

But for the love of humanity, don’t use it for anything descriptive or abstract.

@avuko @pseudonym The main reason that machine learning works so well with material and protein design, weather forecasting, and such, is that there is good data available to “train” the model. The internet is the source of LLM training. It is full of garbage and LLMs are filling it with more garbage. The rule is the same as in 1970: GIGO (garbage in, garbage out). Only the scale is different.

@ELS @avuko @pseudonym Exactly this. The #AI_Slop is growing exponentially, which in turn deepens and widens the slop bucket, which in turn has already degraded the quality and validity of search engine results. Some estimates put search accuracy 20-35% *worse* than before. So the exponential growth of #AI_Slop is in turn DEcreasing the accuracy and value of *search* exponentially as well. Doing all of that on *bigger and faster* machines and #LLMs will only hasten the processes in play and dramatically increase the probability of truly catastrophic outcomes and consequences.

And that is the case already in play, without even bringing in all the issues raised in Bender and Hanna's recent book (mandatory reading):

https://www.amazon.com.au/AI-Fight-Techs-Create-Future/dp/1847928625

My first encounter with so-called "artificial intelligence" was in 1964-5 as an undergrad psychology student, in a (snail mail) exchange with one of the pioneer researchers at Stanford. I've been involved in parts of it and tracked it ever since. It is critical to understand that it has taken OVER 60 YEARS to get to the mediocre state we are now in. It didn't happen "yesterday" or even in "the last 2 years", as some snake oil #AI_Salesmen would have everyone believe.
Time to #BeCarefulWhatYouWishFor

And it's now 2026...

The AI Con: How To Fight Big Tech's Hype and Create the Future We Want : Bender, Emily M.: Amazon.com.au: Books