@matt many self-reported claims of something working are not the same as evidence of it working.
AI research is (unsurprisingly, given how fast the field is evolving) lagging far behind. Even ignoring external harms, it's unclear whether heavy use causes long-term personal harm and what it means for the maintainability of projects.
It's entirely possible that it causes *important* skill atrophy (every new tool causes skill atrophy of some kind; much of it is irrelevant). Of course, last year's studies on this topic don't apply to the current models, just as studies done on these models won't apply to next year's. But patterns are emerging.
It's also possible that large projects where everyone uses AI heavily will end up with no one understanding the details anymore, only the broad designs. Of course, an AI can always explain those to you, or you can just regenerate them with a bigger model in the future.
But all that said... if we had a performance-enhancing drug that made ppl x times more productive, would we really be this careless with it just because its chemical makeup hasn't been restricted yet?
GenAI is basically a nonchemical drug (listen to the LLM maximalists, not me). I am worried about heavy users, I am worried about companies forcing employees to use it everywhere, and I am worried about ppl getting addicted to it and frying their brains (burnout as a service).
And at this point we haven't even talked about the damage all the misuse (be it malicious or ignorant) has already caused, or the harm the training itself does to the world and to the ppl in the training mines.