The question is not whether you can create software using LLMs - you can (most software is just boring CRUD shit).
But you pay a hefty price: in lowered quality (security issues, less maintainable code), in skill decay in the people "guiding" the stochastic parrots, etc.

It's not "can AIs create software?" but "are we willing to accept worse software running more and more of our lives?"

@tante Thinking a lot about this. To me it boils down to code ownership, which is yet another kind of responsibility/liability being offloaded to machines that by definition can't carry it.
The Final Bottleneck

AI speeds up writing code, but accountability and review capacity still impose hard limits.

Armin Ronacher's Thoughts and Writings