I 100% understand and support an anti-AI coding stance, but I'm seeing more and more people assert that everyone hates it and it never works. Unlike gen-art, unlike generated legal opinions, generated code is actually starting to produce good results, and more and more of my colleagues are using it, and as I review the code they produce, I can't just dismiss it as slop.

I'm not asking anyone to change their opinion or abandon the fight against AI. I'm just warning that asserting that "everyone hates it and it doesn't work" is ... increasingly incorrect. Effective arguments need to speak to the reality of the situation.

@huxley my mental sketch of coding agents is that they're a semirandom walk through "plausible sentences" space, which can work! especially if there's something like a testsuite that can "independently" evaluate the agent's result, which lets us automatically retry the agent until it "succeeds"

what makes me wary is when agents generate code with uncanny, unhuman errors that sneak past testsuites and code review, because the code is *plausible*, but subtly wrong

@gray17 100%. The research I've seen shows that if you let them go off on their own, the code gets worse and worse. They need regular, careful oversight