I think what a lot of AI critics are missing is that they're judging an LLM by its first draft. This is *not* what terrifies me about these machines.

What terrifies me is that you can ask them "find bugs in this PR." Or "find performance flaws." Or really anything.

Then have 3 agents (ideally running on different models) vote on the result. Then have another agent fix it. Repeat until no bugs remain.
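That loop can be sketched roughly like this. Everything here is hypothetical: `call_model` is a stub standing in for a real LLM API call, and the model names are made up. It just shows the review/vote/fix control flow, not a real implementation.

```python
def call_model(model: str, prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    # This fake version declares the PR clean on the third round.
    return "LGTM" if "round 3" in prompt else "Found a bug: ..."

def review_loop(pr_diff: str,
                reviewers=("model-a", "model-b", "model-c"),
                max_rounds=10) -> int:
    """Run reviewer agents until a majority votes the PR clean.

    Returns the number of rounds it took (capped at max_rounds).
    """
    for round_no in range(1, max_rounds + 1):
        prompt = f"round {round_no}: find bugs in this PR:\n{pr_diff}"
        verdicts = [call_model(m, prompt) for m in reviewers]

        # Majority vote: stop once most reviewers find nothing.
        clean_votes = sum(v == "LGTM" for v in verdicts)
        if clean_votes * 2 > len(reviewers):
            return round_no

        # Otherwise hand the complaints to another agent to fix,
        # and loop again on the revised diff.
        pr_diff = call_model("fixer-model",
                             f"fix these issues:\n{verdicts}\n{pr_diff}")
    return max_rounds
```

No human in that loop anywhere, which is the point.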

If you haven't tried this experiment then you haven't reached the dark night of the soul that I have.

I've definitely had PRs that required 5-10 rounds of review like this until the PR was sparkling clean. But the scary part is that it didn't really need me in the loop at all. Honestly, you just needed a monkey to keep typing "make it better" and eventually it would get there.

If that entire process burns a ton of energy and water, that's atrocious, but it likely still costs less than an engineer's salary. And my body requires energy and water too. That's what ultimately scares me.

@nolan I think the 'humans require energy too' line is a pretty dark train of thought. And we're making ourselves dependent on corporations that will happily explore it fully; Sam Altman made that exact same comment recently.

In this specific context, I see the positives, but I want control and transparency of environmental costs.

@kosinus You're right, the better path is to focus on one's humanity. I've been reading more fiction and going to more live performances recently. I'm finding less and less art and humanity in coding.