The question is not whether you can create software using LLMs - you can (most software is just boring CRUD shit).
But you do pay a hefty price: in lowered quality (security issues, less maintainable code), in skill decay among the people "guiding" the stochastic parrots, etc.

It's not "can 'AI's create software" but "are we willing to accept worse software running more and more of our lives?"

@tante I don't believe that to be universally true. I *wish* it was, because it'd be so much easier to argue against them.

Unfortunately, the "mere" fact that all currently existing incarnations are fundamentally evil does not mean they must lead to lower quality software.

A velocity-first mindset has *always* led to lower quality, regardless of GenAI. And LLMs make that rush accessible to everyone, regardless of expertise or skill.

[1/3]

@tante I'm also unsure skill decay is real as such. I would struggle for a few moments before I could do long division again, or implement a sorting algorithm.

We get the lower quality not because people use LLMs.

But because they are pressured to ever faster velocity by capitalism/fascism that wants to deregulate everything.

LLMs, used right, can be *useful*.

The problem is they are currently a) evil, b) used badly at scale.

One *can* use them for high quality results. [2/3]

@tante One can - and probably should - argue that one *shouldn't* with the current systems (see "evil, fascism" above), and also not make them so widely / forcibly available. That they need better regulation, oversight, ... And that the software produced must be held to the same, if not higher, quality standards.

Sure.

But that's a different take on "any software created with LLMs must be and will be lower quality."

[3/3]

@larsmb the skill decay thing has been shown over and over in studies, even by Microsoft. The canonical defense is: "Yeah but we have always lost skills, it's just normal"

@tante Yes, but is that defense actually untrue? I know that even Anthropic has shown that people learn less (of what they'd have learned via the traditional method) when completing a task using GenAI.

But are they maybe learning *other* things? Is their use of that tool/method improving, for example? The Anthropic paper showed that this varied widely across different usage patterns.

IDK. I think it's genuinely too early to understand those mid- to long-term effects.

@larsmb People are so much worse at assembler ever since compilers came along.
@tante