I think what a lot of AI critics are missing is that they're judging an LLM by its first draft. This is *not* what terrifies me about these machines.

What terrifies me is that you can ask them "find bugs in this PR." Or "find performance flaws." Or really anything.

Then have 3 agents (ideally with different models) vote on the result. Then have another agent fix it. Repeat until the code is clean.

If you haven't tried this experiment then you haven't reached the dark night of the soul that I have.
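The loop being described above can be sketched in a few lines. `ask_agent` here is a stand-in for a real LLM API call (stubbed so the sketch runs); none of these names are real APIs:

```python
# Sketch of the review -> vote -> fix loop. ask_agent is a stand-in for a
# real LLM call; stubbed here so the sketch is runnable.

def ask_agent(model, prompt):
    # Stub: a real implementation would call a model API here.
    return "no" if "valid?" in prompt else prompt

def review_vote_fix(code, models, max_rounds=10):
    for _ in range(max_rounds):
        report = ask_agent(models[0], f"Find bugs in this code:\n{code}")
        # Three agents (ideally different models) vote on the report
        votes = [ask_agent(m, f"Is this bug report valid? yes/no\n{report}")
                 for m in models[:3]]
        if votes.count("yes") <= votes.count("no"):
            return code  # majority says the code is clean: stop
        code = ask_agent(models[0], f"Fix this:\n{report}\n\n{code}")
    return code
```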

@nolan What you're describing is not a skill or craft, it's a gacha machine. It's gambling. You're hitting spin until you win something. And it relies on similar code and programs being in its training data. It's copying from Stack Overflow with extra steps. It won't solve novel problems.
@Gargron @nolan yeah, i think that's what's bumming him out
@bea @nolan Bea?!
@Gargron @nolan lmao yeah, hi, i'm still alive isn't that wild?
@bea @Gargron Exactly, yeah. A lot of software is not terribly novel. That's precisely the weakness these tools are exploiting.
@Gargron @nolan @bea Having a good validation mechanism lets you stop gambling when you finally land on a good roll of the dice. And software engineering has a lot of those thanks to unit/integration tests, linting/compilers, and just plain checking to see if the code outputs something that you want.
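The "stop spinning once a roll passes validation" idea can be sketched with Python's own tooling as the oracle (`compile` standing in for the compiler gate, a caller-supplied function as the unit tests; `generate_candidate` is a hypothetical LLM call, stubbed here):

```python
# Sketch: keep rolling, but let real oracles (compile + tests) decide
# when to stop. generate_candidate stands in for an LLM call.

def generate_candidate(attempt):
    # Stub: a real version would ask a model for fresh code each attempt.
    return "def add(a, b):\n    return a + b"

def passes_validation(src, test):
    try:
        compile(src, "<candidate>", "exec")   # compiler/linter gate
    except SyntaxError:
        return False
    ns = {}
    exec(src, ns)                             # load the candidate
    try:
        return test(ns)                       # caller's unit tests
    except Exception:
        return False

def spin_until_valid(test, max_spins=5):
    for attempt in range(max_spins):
        src = generate_candidate(attempt)
        if passes_validation(src, test):
            return src                        # a winning roll
    return None
```

The stopping rule is the whole point: without `passes_validation`, it's pure gacha.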
@danny @nolan @bea It would be funny if it weren't tragic that the idea of modern software development is to hit the randomizer button until something vaguely correct comes out, instead of just knowing how to do something and doing it intentionally.
@Gargron @danny @nolan @bea yeah. I remember my university professors talking about how stupid and laughably inefficient bogosort is, and now everyone's in my feed saying how much they love bogosort and how it saves them time 🤡
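For anyone who never met it in class: bogosort really is the "shuffle until it happens to be sorted" algorithm being joked about here, a straight textbook version:

```python
import random

def bogosort(xs):
    # Shuffle until the list happens to be sorted.
    # Expected time is O((n+1)!), the punchline of the joke above.
    while any(xs[i] > xs[i + 1] for i in range(len(xs) - 1)):
        random.shuffle(xs)
    return xs
```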