The question is not whether you can create software using LLMs - you can (most software is just boring CRUD shit).
But you do pay a hefty price: in lowered quality (security issues, less maintainable code), in skill decay among the people "guiding" the stochastic parrots, etc.

The question isn't "can AIs create software?" but "are we willing to accept worse software running more and more of our lives?"

@tante Good take! But also, "can you create software" was never really an accurate framing of where the hard part of software was.

Most people could "create software" by looking up a Hello World example. That wouldn't help them solve any real problems tho.

LLMs produce software that *looks more like* it solves problems... but security, integrity, and legality were always implied parts of the problem.

Like, it takes a weirdly subtle reframing of the goal to make LLMs look useful at all.