“People use AI to code and the outcome is shit, so we need to improve AI” is such a weird take. Just don’t use “AI”, do the work yourself, and you’ll see how much you learn and how independent you become.

The goal of “AI” coding tools is to show you code that you don’t identify as wrong. That’s their success case. It doesn’t matter whether the code is actually correct; an LLM has no way to test that. (1/2)

And then, slowly and steadily, your ability to spot bad code erodes until your job is simply gone (because whether you catch an issue becomes a coin flip) or handed to someone who can still do it. Presumably someone who didn’t work with “AI”. (2/2)