I think what a lot of AI critics are missing is that they're judging an LLM by its first draft. This is *not* what terrifies me about these machines.

What terrifies me is that you can ask them "find bugs in this PR." Or "find performance flaws." Or really anything.

Then have 3 agents (ideally running different models) vote on the result. Then have another agent fix what they flagged. Repeat until the code comes back clean.

If you haven't tried this experiment then you haven't reached the dark night of the soul that I have.
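For anyone who hasn't tried it, the loop is mechanically simple. Here's a minimal sketch in Python; the `ask_model` helper, the prompt wording, and the "no bugs" convention are all made up for illustration, not any real agent API:

```python
# Sketch of the review loop described above. ask_model(model, prompt)
# is a hypothetical helper that returns a model's reply as a string.
def review_loop(ask_model, models, code, max_rounds=5):
    """Ask several models for bugs, count a majority vote, have one
    model apply fixes, and repeat until the panel reports clean code."""
    for _ in range(max_rounds):
        findings = [ask_model(m, f"Find bugs in this code:\n{code}")
                    for m in models]
        # Majority vote: keep iterating only if most models report a bug.
        reported = sum(1 for f in findings
                       if f.strip().lower() != "no bugs")
        if reported <= len(models) // 2:
            return code  # the panel considers it clean
        fix_prompt = ("Fix these bugs:\n" + "\n".join(findings)
                      + "\n\nCode:\n" + code)
        code = ask_model(models[0], fix_prompt)
    return code
```

The `max_rounds` cap matters: without it, disagreeing models can keep the loop spinning forever.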

@nolan What you're describing is not a skill or craft, it's a gacha machine. It's gambling. You're hitting spin until you win something. And it relies on similar code and programs being in its training data. It's copying from Stack Overflow with extra steps. It won't solve novel problems.
@Gargron @nolan yeah, i think that's what's bumming him out
@bea @nolan Bea?!
@Gargron @nolan lmao yeah, hi, i'm still alive isn't that wild?
@bea @Gargron Exactly, yeah. A lot of software is not terribly novel. That's exactly the weakness that these tools are exploiting.
@nolan @bea The problem is of course the deskilling. Once there *is* a novel problem that's important to solve, the world will be in a heap of trouble. There is this misconception that knowledge is on a constant upward trajectory, but actually it gets lost all the time. Nobody knows how to make cassette players as advanced as the ones we used to make, because this technology was not considered important enough to be kept around.

@Gargron @bea Yeah, this reminds me of this excellent talk: https://www.youtube.com/watch?v=pW-SOdj4Kkk

I see it less as a tragedy to be prevented, though, and more as an inevitability of how human societies accumulate complexity. Joseph Tainter's "The Collapse of Complex Societies" also touches on this.

Jonathan Blow – Preventing the Collapse of Civilization (DevGAMM 2019)