AI for programming is just a worse version of what we already had

https://slrpnk.net/post/35609071

I worked as a software engineer. AI is supposed to replace programmers, or at least help you write code. But I never really wrote a lot of code in the first place?? I looked up libraries that do what I need and then wrote a bit of code in between to link our API or GUI to the right functions of the selected library. And these libraries were tested, functional, and most of all consistent and reliable. Now what do you want me to do? Ask a non-deterministic LLM to implement the code from scratch every time I need it in my project? That doesn't make sense at all. That's like building a car and every day you ask somebody else to make you a new wheel. And every wheel will be slightly different from the previous one. So your car will drive like shit. Instead, why not just ask a reputable wheel manufacturer to make you 4 wheels? You know they will work. And in the case of programming, people are literally giving away good, reliable wheels for free! (free libraries and APIs) Why use LLMs at all?
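To make the "glue code" point concrete, here's a minimal sketch of the working style OP describes (the function name and CSV layout are my own illustration, not from the post): the hard parts are handled by well-tested standard-library code, and the engineer just wires them together.

```python
# Glue code: connect the app's need (median price from a CSV export)
# to vetted, deterministic library functions instead of asking an LLM
# to reimplement parsing and statistics from scratch.
import csv
import io
import statistics

def median_price(csv_text: str) -> float:
    """Parse rows with csv, aggregate with statistics: pure glue."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return statistics.median(float(row["price"]) for row in rows)

print(median_price("item,price\na,10\nb,30\nc,20\n"))  # 20.0
```

The "wheels" here (`csv`, `statistics`) are free, maintained, and behave the same way on every run, which is exactly the consistency OP is contrasting with per-request LLM output.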

Very little of what we’re told to use AI for makes any sense, because people were already trained and very good at doing those things. Using AI to do them instead not only costs money to implement, it cost many people their jobs through layoffs, and the end result is usually crap. Then there’s a huge investment of money and time in fixing the crap so the end result is finally a usable product.

The reason why AI is being pushed so hard is because of all the money that’s been invested in it and the lie that it’s actually worthwhile. No one wants to admit that it isn’t, at least not the people who invested so much money and time into it.

Eventually, the bubble is going to pop, and our entire economy will probably crash as a result.

there is one area where it excels though: bullshitting. that’s why c-levels and aspirational middle management are so impressed, because their roles are all about bullshit.
I’d argue it’s just that people who operate at those levels are terrible at detecting AI bullshit. If you spend more than the bare minimum of effort (or intelligence) trying, it’s pretty obvious when you’re reading AI slop.

yeah some people seem extremely susceptible.

i will admit that my detection skill has been improved by using local models, because i studied machine learning at uni twelve years ago and jumped at the opportunity when the hype cycle began. but it just hasn’t gotten good at anything concrete. it improves marginally at certain tasks, only to fail in more subtle ways every time. it’s getting better not at being a tool, but at disguising itself as one.

Yeah, it all seemed so very promising back then, but those promises really never seemed to materialize… I’m just so disappointed.

i mean it still could lead to something

not by the current big actors, but sometime in the future hopefully.

Oh, I’m sure that’s true, but probably something quite different than what we are being promised and much further down the road. Like how VR was hyped a lot in the early 90s, but we really didn’t get anything like that until quite recently, and it’s not quite the same.

yeah, the tech just wasn’t there for vr. just like how llms aren’t the be-all and end-all of generative machine learning models. agents are getting close, but with the tech we currently have there’s no way it could reach the promised agi status.

i actually protested to my professor about this when we were working with neural networks in 2014. we were doing handwriting recognition and i told him “this isn’t ai”. he shot back “oh really? then write me a paper on why” and i couldn’t do it, because while i could describe what ai is not, i could not define what it actually is. that feels like the main question we want to be solving for, rather than “how to get statistical text generators to seem clever”.

Even this is disappointing. LLM bullshit is only impressively fluent compared to older generative systems. (It is very impressive compared to them. It just should have stayed in academia longer and its components could develop into useful things. Instead everyone’s falling over themselves about a kick-ass demo.)
yeah it’s the middle-management thing again. “wow it can answer emails” “wow it can shit out demos” “wow it can follow an api spec”. as internet hippo so aptly put it, they saw that it could do the job of a manager and concluded that it was sentient rather than coming to the correct conclusion that managers aren’t.
To be fair, it doesn’t have to be computer code. You could ask it to write a letter to your boss, demanding a raise. Or apply for a new stapler for the office?! Draw an astronaut on a horse as a logo for the new interconnection between the astronomy and horses libraries? 😂

Or apply for a new stapler for the office?!

^^^^^imgonnaburnthisplacetotheground

Ijustwantedmystaplerbuthewouldntgivememystaplerbutmystaplerwasjustthereijustwantedmystapler.
i usually write a lot of code because i have to develop novel algorithms or bespoke drivers for proprietary hardware. llms are completely unable to help with that because that’s not in the training data.

Yeah I usually have the opposite problem op has.

I don’t write as much code as people think engineers do; most of what I do is translating what the user says they want into what they actually want.

But the code I do write is so novel and tailor-made to our implementation of standards that AI isn’t useful at all.

What’s weird is that the few times I’m doing the kind of work OP is talking about is the only time it’s actually useful, because it can generate boilerplate code that interacts with common libraries and tools, which it has a lot of training data on.

Hey! That’s not true!

…it’s also more expensive!

Now what do you want me to do? Ask a non-deterministic LLM to implement the code from scratch every time I need it in my project?

I have some coworkers who are excited about exactly this and I don’t get it at all.

Imagine if we could completely do away with standardization! Think of the security implications!

Literally today I saw a YouTube video demonstrating an AI that responds to your instructions for game inputs. It maybe worked 50% of the time when prompted, required a second GPU to run the AI, had a terrible UI (looked like mostly a coding data stream, I dunno, I’m not a programmer), and was slow when it did work. People in the chat: “the future of gaming!”.

I was running Game Commander on a 486 (iirc), and Voice Attack a decade ago. We are doomed.

I’m far from a proponent of AI, but you can ask it to use libraries.
yep, i was just ranting to my girlfriend about this. It’s really depressing that folks are convinced this is going to solve real problems.

…wat?

LLMs are completely capable of invoking npm install, the fuck you smoking lol.

They’re perfectly able to read docs and use existing libraries fine.

Arguably I find it better, because when I bump into a bug in a lib, I can fork it, fix the bug, get my patch up, and use the patch in my project in like 15 min flat, no longer having to even worry about “ugh, is it worth the effort to fix it”.

Yes, it’s so easy now. I just do it.

Furthermore, agents can look up documentation in mere seconds and read it (better than I’d say a lot of junior devs tend to approach problems, ngl).

If the agent is equipped with adequate tools and instructions, it’s extremely productive under strict guidance.

i agree. i think of AI as the fossil fuel of technology: incredibly wasteful.

i have had some success with it getting examples of things i know are common but can’t find. so i’m using it like a search engine for ideas or templates.