@Loosf This tracks with what I've thought all along, and I've heard others, like Primeagen, echo the same point.
My two big concerns have been: do programmers understand their code well enough to spot errors in what the AI produces, and how will they debug and maintain that code if they relied on the AI to write something they don't completely understand? (And it gets deeper once you start looking at side effects and other interactions.)
I've had this concern ever since I tried having ChatGPT write a profile of a lesser-known artist who still has a high enough profile that information about him would be available. The result: about 50-60 percent of the profile was decent... but it started inventing works the artist hadn't created, and listing collaborators the artist didn't know, much less collaborate with.
In that case, I could tell the AI was wrong because I was the master: I already had the knowledge and was just using the tool to shorten my workload.
But for people doing so-called vibe coding, this could be quite disastrous. They generally haven't mastered the language(s) or coding practices involved, and therefore don't have the skills to correct the AI, much less the code it produces.