I 100% understand and support an anti-AI coding stance, but I'm seeing more and more people assert that everyone hates it and it never works. Unlike gen-art, unlike generated legal opinions, generated code is actually starting to produce good results, and more and more of my colleagues are using it, and as I review the code they produce, I can't just dismiss it as slop.

I'm not asking anyone to change their opinion or abandon the fight against AI. I'm just warning that asserting that "everyone hates it and it doesn't work" is ... increasingly incorrect. Effective arguments need to speak to the reality of the situation.

@huxley If it really is getting "better" at generating functional code, then its legal & ethical issues are only getting worse.

@jackemled You've got it, those are the issues to concentrate on. Legal, ethical, environmental. But not baseline functionality. (This is in contrast to blockchain, which never developed an actual use outside of crime.)

@huxley Blockchain is decent at its only actual application (keeping a ledger in sequence & preventing double spending of money), but it's simultaneously really bad at it, just look at "block reorganization attacks" & "selfish mining". It's really sad but also funny, because that's literally the only thing it's of any use for.

I'm sure procedural code generation would be way more reliable than the stochastic generation LLMs do, & way more ethical, & its legal situation would be much more concrete, but it would also probably be way more difficult to make. I guess that would basically amount to a natural-language-to-programming-language cross compiler though.
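To make the contrast concrete, here's a minimal sketch of what "procedural" generation means: a deterministic template expander, not a statistical model. All names here (`generate_accessors`, `GETTER_TEMPLATE`, the `Point` example) are hypothetical, just to illustrate that the same spec always yields the same, auditable code.

```python
# Hypothetical sketch: deterministic, template-based code generation.
# Given the same input spec it always emits identical code, so its
# provenance and licensing are trivially traceable -- unlike sampling
# from an LLM.

GETTER_TEMPLATE = '''\
def get_{field}(self):
    """Return the {field} attribute."""
    return self._{field}
'''

def generate_accessors(class_name, fields):
    """Deterministically generate a class with a getter per field."""
    lines = [f"class {class_name}:"]
    lines.append(f"    def __init__(self, {', '.join(fields)}):")
    for f in fields:
        lines.append(f"        self._{f} = {f}")
    for f in fields:
        body = GETTER_TEMPLATE.format(field=f)
        lines.extend("    " + line for line in body.splitlines())
    return "\n".join(lines)

# Generate, execute, and exercise the produced code.
code = generate_accessors("Point", ["x", "y"])
namespace = {}
exec(code, namespace)
p = namespace["Point"](3, 4)
assert p.get_x() == 3 and p.get_y() == 4
```

Of course this only works for patterns you've already templated out; the hard part the thread is pointing at is getting from free-form natural language to such a spec in the first place.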