The question is not whether you can create software using LLMs - you can (most software is just boring CRUD shit).
But you do pay a hefty price: In lowering quality (security issues, less maintainable), in skill decay in the people "guiding" the stochastic parrots, etc.

It's not "can 'AI's create software" but "are we willing to accept worse software running more and more of our lives?"

Here's the thing: I believe that you deserve to have access to high quality products and services. You deserve to use products and services that are safe, secure, well-designed and not destroying the ecological, informational or social environment.
@tante yeah but what if Some Guy's bonus depends on making it all shittier?
@pikesley *points at everything* then we get this
@pikesley @tante The problem exists as long as these guys believe that they can buy themselves a better world.
@tante I mean… people accepted that for transport, agriculture, entertainment, even education and healthcare. Why stop here?
@tante Then there’s the thing that we never had that high-quality software in the first place. Business has always accepted low quality products and services. So while I do agree with you, I’m afraid the people who run the software companies simply don’t care.

@tante

+1

> "destroying the ecological, informational or social environment"

As good as this generated code may be, it remains unacceptable because of that. And that should be the ultimate reason (as the code quality may rise, but at the cost of more destruction)

@tante LLM means the tyranny of shit.
A very mediocre dystopia indeed.

@Nausipoule @tante I'd argue that instead (or, if you'd like: additionally), it is the terminal form of stochastic terrorism:

You will be randomly denied services, participation and dignity. Now isn't that quite a future.

@tante Thinking a lot about this. To me it boils down to code ownership. Which is yet another kind of responsibility/liability that is offloaded to machines that by definition can't be.
@map exactly. In a way accepting responsibility for the code one puts in front of people is accepting the connected care duties towards these people.
@tante @map LLMs are another way to avoid putting skin in the game. Which is the whole point, if you’re a sociopath (or play one on TV). Privatize gains, socialize losses.
[Link: "The Final Bottleneck", Armin Ronacher's Thoughts and Writings: AI speeds up writing code, but accountability and review capacity still impose hard limits.]
@tante @map That's just part of the truth. You can make wonderful, creative, unique software using AI. The thing is you have to specify what you want to achieve. If you don't give these goals to the AI then it will come up with some mediocre generic solution.
@gklka @tante @map
I've a bridge over the river Shannon you can buy.
"make wonderful, creative, unique software using AI."
No. An LLM can't create at all and if it actually works and meets the spec it's likely copied.
@raymaccarthy @tante @map AI can't create. You create, AI just implements it. I know it is hard to digest but this is everyday work for a lot of us now.
@gklka @tante @map
A compiler implements it. The LLM/Gen is a rubbish search engine, database and statistical engine. It regurgitates based on prompts, not formal specifications.
@raymaccarthy @tante @map Ok, feel free to think whatever you want.
@gklka @raymaccarthy @tante @map I agree with GK on this. Not all AI is the same, and it's definitely not black and white. With the right expertise and detailed specs, you can achieve great results while keeping the code maintainable and retaining ownership. I really dislike the mindset that everything has to be either absolutely good or 100% bad.

@tante And I cannot even begin to emphasize how much *it will cost about the same despite it being of lower quality*.

Once credit entities realize that GPUs get obsolete very fast and five years down the line the early-mover needs to buy again just as much new processing hardware as a late-comer, they will stop subsidizing today's AI as a gamble to capture the market for tomorrow.

And then your 45-minutes-saving boilerplate machine will cost $5 per run.

@tante Good take! But also, like "can you create software" is not really an accurate framing of what the hard part of software was.

Most people could "create software" by looking up a Hello World example. That wouldn't help them solve any real problems tho.

LLMs produce software that *looks more like* it solves problems... but security, integrity, legality were kind of always implied parts of the problem.

Like, it takes a weird subtle reframing of the goal to make LLMs look at all useful.

@tante By now I’m pretty convinced LLMs can make it easier to produce high quality code than writing high quality code manually. Particularly because the AI is willing to do all the tedious, boring tasks that most developers are often too lazy for. Yes, it also makes it much easier to produce shittier code as well. (1/2)
Right now we are seeing way more of the latter because most people haven't learned yet how to produce good AI code, and because the bad code sticks out while the good code blends in. But I'm convinced your underlying assumption “AI code = shitty” isn't correct. (2/2)
@tante

@343max @tante I think that's kind of the wrong question. Skill degradation and the moral implications (crawling of copyrighted material, climate, etc.) don't go away just because the generated code is good.

But I'm pretty sure you are aware 🙂

@cjk @tante Honestly I'm not sure about the skill degradation. I think there is a very high chance this is the same “new technology will make the youth stupid” panic that we have seen for centuries with every new technology. Also, I really would like to see a deep analysis of how much AI is hurting the environment. I don't trust Sam Altman's numbers, but I also don’t buy the “every prompt is burning down a small forest” hyperbole.
@343max @cjk @tante Given the huge investment in data centres I can imagine that the environmental impact is not negligible. But since only players with a lot of money can afford this nowadays, a push for more efficient (and possibly restricted) implementations from others (Chinese?) is not unlikely.
@stralau @cjk We will see how much of this will actually become a reality. Right now I think the discussion should be how we can democratize this technology instead of fighting it. I’m worried about a future where all of this tech is owned by only a very few companies and we all depend on them. I sure hope we will all be able to run something good on personal hardware, or at least make sure AI becomes a commodity.
@343max @tante
Then, Max, you have no understanding of LLM/Gen AI, or maybe of specifying requirements, designing systems (modules, APIs etc.) and then writing, testing & debugging the code. If it's any size of project you need a team & management.
There is also documentation.
Actually writing the code is the easiest bit & the only bit the current LLM/Gen AI does, and it does that badly, as it relies on code scraped from elsewhere & statistical shuffling of fragments.
Can't work. It's a technological dead end.
@raymaccarthy @tante Oh the “someone disagrees with me so they must be stupid" argument! Amazing. Please go away now.
@343max @tante
I've designed & written SW for decades and done physical AI courses as well as studying it.
What's your qualification for your amazing claims Max?
Expert systems were AI in the 1980s and relied on good design and curation of the knowledge of experts. They were too expensive to build, and fragile.
I forecast the idea of LLMs 20+ years ago. Chatbots then had data encoded in the program (Eliza, ALICE etc). I suggested a statistical engine using the Internet as data. A toy.
@raymaccarthy You are absolutely right, I really shouldn’t trust my own day to day experience and the experience of all the people that I trust over your 20 year old predictions. We are all wrong, our eyes betrayed us, please help us see! Oh please!
@raymaccarthy @343max @tante AI has, in theory, the same potential as a calculator. It can make tasks easier, but it does imply skill degradation in a certain field. Solving n-th order differential equations back in Blaise Pascal's time was frustrating. So since Schickard, automating such tasks has helped humanity spend its time on the bigger picture instead of grinding through repetitive tasks. New technologies have always shifted human skills to a new domain.
@tante Yes, if I had Root Cause Analysis training - #BehavioralScience - in earlier education, I and more like me would've made more of a difference.
@tante No, truly, Germany has managed to give us great software over the past decades without LLMs.
@tante
Is that running or ruining?
@raymaccarthy that is the way I read it the first time through

@tante

Entropy is somebody else's problem - they'll be comfortable on their yacht. We can't expect sociopaths to care about others, else they wouldn't be sociopaths.

@tante Well, well done for admitting that demonstrably, the dog can play the piano. Now we are just talking about how well it plays.

FWIW these LLMs have no need to be consistent with what happened in a previous context. The same LLM, in a new context, will usefully critique, find, and fix flaws in what it itself did in the previous context.

The "slop" aspect of LLM output seems to come from blindly shipping what one context produced, when it could instead iterate as, e.g., a QA manager.

@tante

I mean, you can also "build a house" by using deck screws to connect some wet Doug fir 2x4s into a "frame" and then staple on some drywall and siding and drape the whole thing in a plastic tarp.

You will die when it falls on you, but for a time, it was a "house".

@tante The best engineers I know just became more ambitious, and so should all of us.

I'll keep repeating this, there's tonnes of proprietary binary blobs in all of our tech. You can shout from the rooftops about how much you love your /e/OS phone, if your phone modem relies on a proprietary driver, it's pretty much worthless as a "resistance against big tech". European digital sovereignty is equally worthless.

LLMs are good at staring at hexdumps, humans aren't. Use their advantage to build actually open tech.

> But you do pay a hefty price: In lowering quality (security issues, less maintainable), in skill decay in the people "guiding" the stochastic parrots, etc.

Skill issue, idk what more to say. I don't find it any different to managing juniors and reviewing their PRs. Bad code is bad code.

@tante

5 years retired from IT but I still remember CRUD

@tante Hot take: there should be no "software writing" involved in CRUD to begin with. Just some declarative stuff for your specific application and fully generic code.
@dalias @tante Some frameworks let you do this. Django lets you merrily write CRUD in like 12 lines of code (+ model definition). Rails can generate you entire (ugly as heck but functional) webpages for CRUD.
There's backend-as-a-service that also lets you just write frontend components and not care about CRUD stuff at all.
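The "declarative spec + fully generic code" idea from this subthread can be sketched in a few lines of Python. This is a hypothetical illustration, not Django's or Rails' actual API: the `Item` dataclass plays the role of the declarative model definition, and `CrudStore` is the generic engine that needs no per-model code.

```python
# Sketch: declarative model + one generic CRUD engine (hypothetical, no framework).
from dataclasses import dataclass, asdict
from itertools import count

@dataclass
class Item:          # the "declarative stuff for your specific application"
    name: str
    price: float = 0.0

class CrudStore:
    """Fully generic CRUD over any dataclass; an in-memory dict stands in for the DB."""
    def __init__(self, model):
        self.model = model
        self.rows = {}
        self.ids = count(1)

    def create(self, **fields):
        rid = next(self.ids)
        self.rows[rid] = self.model(**fields)
        return rid

    def read(self, rid):
        return asdict(self.rows[rid])

    def update(self, rid, **fields):
        for k, v in fields.items():
            setattr(self.rows[rid], k, v)

    def delete(self, rid):
        del self.rows[rid]

store = CrudStore(Item)
rid = store.create(name="widget", price=9.99)
store.update(rid, price=4.99)
```

A real framework adds persistence, validation, and generated views on top, but the shape is the same: the application author only declares the model.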
@tante Between development speed and software quality, the industry has chosen speed long before LLMs came to be. It would no doubt be better for all of us, if we all did not let LLMs write software. But whether not using them is viable for a specific company or a specific individual in a competitive environment stays open.
@Oytis @tante The point being: capitalism is not viable for us people.
@grymt @tante You can abolish competition in one particular country, but it only means you are going to be losing international competition - that's already been tested in practice.
@tante It used to be that companies said they were customer-first. Yes, most of the time it was PR, but by going AI-first they're not even pretending to care about what's best for the customer.
@tante Skill decay is the terminology I've been looking for!

@tante I think LLMs are only good for learning a programming language's basics or for getting yourself interested in some random topic. That's what I do with a local LLM. I do the actual research and learning beyond the basics myself.

Idk. I've never liked being spoonfed information anyway, and I learn best when my effort of learning pays off. I always liked reading and learning stuff on my own even before LLMs started popping up, but idk why people decided to become really lazy.

@tante so uhm, i rewrote some api of a midsize free software project a year ago. it works fine, except that some features in other parts of the project stopped working, because they depended on bugs in said api that i rewrote. It is such an overwhelming amount of work that i cannot motivate myself to do it. The idea of trying an llm for this is very tempting rn.

@tante I don't believe that to be universally true. I *wish* it was, because it'd be so much easier to argue against them.

Unfortunately, the "mere" fact that all currently existing incarnations are fundamentally evil does not mean they must lead to lower quality software.

A velocity-first mindset has *always* led to lower quality, regardless of GenAI. And they make that rush accessible to everyone, regardless of expertise/skill.

[1/3]

@tante I'm also unsure skill decay is real as such. I also would struggle for a few moments before I could do long division again, or implement a sorting algorithm.

We get the lower quality not because people use LLMs.

But because they are pressured to ever faster velocity by capitalism/fascism that wants to deregulate everything.

LLMs, used right, can be *useful*.

The problem is they are currently a) evil, b) used badly at scale.

One *can* use them for high quality results. [2/3]

@tante One can - and probably should - argue that one *shouldn't* with the current systems (see "evil, fascism" above), and also not make them so widely / forcibly available. That they need better regulation, oversight, ... And that the software produced must be held to the same if not higher quality standards.

Sure.

But that's a different take on "any software created with LLMs must be and will be lower quality."

[3/3]

@larsmb the skill decay thing has been shown over and over in studies, even by Microsoft. The canonical defense is: "Yeah but we have always lost skills, it's just normal"

@tante Yes, but is that actually untrue? I know that even Anthropic has shown that people learn less (of what they'd have learned via the traditional method) when completing a task using GenAI, sure.

But are they maybe learning *other* things? Is their use of that tool/method improving, for example? e.g., the Anthropic paper showed that this varied widely for different Usage Patterns.

IDK. I think it's simply too early to understand those mid- to long-term effects.

@larsmb People are so much worse at assembler ever since compilers came along.
@tante
Creativity will suffer in the long run. AI isn't creating anything new.
I could see a world where coders stop sharing data online, retreat, and lock their code into a new "Internet", making the old, open Internet dead.
@tante What's harder: writing code or doing a quality review?