I used AI. It worked. I hated it.

https://lemmy.world/post/45089084

cross-posted from: https://programming.dev/post/48191305

> Or maybe that’s just me. I’ve been writing code for a good chunk of my life now. I find deep joy in the struggle of creation. I want to keep doing it, even if it’s slower. Even if it’s worse. I want to keep writing code. But I suspect not everyone feels that way about it. Are they wrong? Or can different people find different value in the same task? And what does society owe to those who enjoy an older way of doing things?
>
> If I could disinvent this technology, I would. My experiences, while enlightening as to models’ capabilities, have not altered my belief that they cause more harm than good. And yet, I have no plan on how to destroy generative AI. I don’t think this is a technology we can put back in the box. It may not take the same form a year from now; it may not be as ubiquitous or as celebrated, but it will remain.
>
> And in the realm of software development, its presence fundamentally changes the nature of the trade. We must learn how to exist in a world where some will choose to use these tools, whether responsibly or not. Is it possible to distinguish one from the other? Is it possible to renounce all code not written by human hands?
>
> https://taggart-tech.com/reckoning [archived: https://web.archive.org/web/20260402210313/https://taggart-tech.com/reckoning/]

> I want to keep doing it, even if it’s slower. Even if it’s worse.

You haven’t tried AI for long enough. It’s faster than you, and the code you create with it is way better.

AI is only superficially good. If you ask it to spit out some code that does something, sure, it’ll do it. The code will even be readable, well-formatted, and decently correct. Where AI fails is when the code lives in a particular environment, has compatibility constraints, or solves a particular problem in a complex setting… Then AI will fail you spectacularly, and if you get lulled into thinking it’s cleverer than the idiot savant it truly is, you’ll get bitten hard.

Not sure it will be worse than my coding specifically, but any dev who knows what they’re doing really shouldn’t be relying on AI and will get frustrated a lot when vibe coding. It’s only because I sort of know what I’m doing that I’m finding out the LLMs take my logs and my context as explanation and just start ‘patching’ by writing conversion functions out the wazoo. At some point you’ll have 16 conversion layers, and yeah, it works, but it’s a serious design flaw.

Yeah, an LLM is like a fancy code-completion tool. It’s useful for repetitive, tedious things you can’t be bothered with, or for looking at final code to see if there’s anything obvious you overlooked or a smarter way to do it. That’s about it. It will hallucinate and fuck up about 1 in 5 times and contradict itself constantly. It’ll also argue a lot, insisting that non-existent functions work and so on.

This can’t ever be fully fixed because it’s baked into the algorithm at the core. We’ll only ever be able to stack guardrail on guardrail and get creative within the already-established limits.

Yeah, that’s cope. You can use it poorly, sure. Vibing will absolutely bite you in the ass.

But you can also feed it smaller, manageable chunks that you know are possible and straightforward, and then you can review those results and tweak and revise as needed. This is not vibe coding, but it is using AI in a way that will make you faster and more efficient.

For 95% of applications, it’s appropriate and efficient to use some level of AI. Maybe things like the Linux kernel don’t need it, but those applications are few and far between.

I’m not worried about being replaced. I’ve seen what it can do, for better and worse, and someone who knows what they’re doing needs to be in the loop. I don’t see that going anywhere anytime soon.

Really what I see happening is that projects that would take two months now take six weeks, but also end up more polished and feature complete than they would have otherwise. Maybe with that kind of efficiency gain you can cut a few people, but not that many. And most places can just use the extra 20% productivity.

If you’re doing it right and not vibing, it’s not the 2000% gain the techbros promise, but it is some gain.

I’ll agree with the assessment: moderately useful depending on context, but “vibe” coding is a recipe for failure.

It also tends to skip identifying a suitable library and just embeds the code directly, which for one means you get no external maintenance, and for another edges into lifting someone’s work without attribution.

So if I want to make a CLI utility, sure, I might prompt up the argv parsing, since it’s tedious and obvious and not going to be knocking off a viable off-the-shelf option. But the tech has to be applied very carefully.
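The argv parsing mentioned above is exactly the kind of rote scaffolding people delegate. As a rough sketch using Python’s stdlib argparse (the utility and its flags here are hypothetical), it’s mechanical but tedious to type out by hand:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical flags for an imaginary utility -- the sort of
    # boilerplate one might hand off rather than write manually.
    parser = argparse.ArgumentParser(description="example utility")
    parser.add_argument("input", help="input file to process")
    parser.add_argument("-o", "--output", default="out.txt",
                        help="where to write results")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="enable chatty logging")
    return parser

# Parse a sample argv instead of sys.argv so this runs anywhere.
args = build_parser().parse_args(["data.csv", "-o", "result.txt", "-v"])
print(args.input, args.output, args.verbose)
```

Even for boilerplate like this, the thread’s caveat applies: it’s still worth reviewing the generated parser line by line before trusting it.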

I’m excited about using fewer third party dependencies, personally. npm is a hot mess.

Problem is you end up using the third party code indirectly, but without maintenance or a mechanism to notify about discovered flaws.

I wouldn’t touch npm with a 20-foot pole if I could help it. That’s hard to avoid with a SPA, but even then I am very careful and don’t pull in dependencies lightly.

If not running in a browser, then it’s probably Golang or C for me, depending on the application. Python if it’s got to be worked on together with some other people, though I’m very wary of pip, just like npm (but Python’s core library is richer, and it’s easy to use C libraries).
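On the “easy to use C libraries” point: Python’s stdlib ctypes can call into a shared C library directly, with no pip dependency involved. A minimal sketch (library name resolution varies by platform, so this is illustrative rather than portable):

```python
import ctypes
import ctypes.util

# Locate and load the platform's C standard library
# (e.g. libc.so.6 on glibc Linux).
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the C function's signature before calling it,
# so ctypes converts arguments and return values correctly.
libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]

print(libc.abs(-42))  # prints 42
```

For anything beyond toy calls, declaring `argtypes`/`restype` up front matters: without them, ctypes defaults to `int`-sized guesses and can silently corrupt data.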

> Really what I see happening is that projects that would take two months now take six weeks, but also end up more polished and feature complete than they would have otherwise.

Which is not at all what we’re seeing in reality. Stuff is actively getting worse where AI code is used heavily (take GitHub, for example), and companies who use AI for code aren’t producing any productivity gains or increased profits.

There is no sign of benefit on either side of the equation.

Well, those companies are pushing AI and vibe coding so hard it’s actually stupid. CEOs are on this bandwagon that AI is going to replace 80% of their labor costs.

It ain’t gonna happen. And trying it that way is going to do a lot of damage, both to people and to the companies.

It’s a lot like the dotcom bubble: not a perfect match, but close enough that we should’ve learned more from it. There’s a lot of absolutely dumb shit going around, but eventually some of it is going to stick. I expect the stuff that sticks won’t be from people humping the hype train.

The problem is that you can do it yourself in 8 weeks, AI assisted equivalent in 6 weeks… Or pure AI YOLO in a few hours.

Management prefers to take the risk, especially when they are already hiring the cheapest people they can find, who won’t be great anyway.

My mindset: understand the task, prompt in detailed steps, validate each change, test thoroughly.

> Yeah, that’s cope

Sorry if I sound hopeless, but out of curiosity, what does “cope” mean?
Inquiring mind from the 70’s wants to know.

As in “coping mechanism”, or rationalizing with a bias to make yourself feel better about something.

Almost related, www.youtube.com/shorts/azAu98AIN_I

> As in “coping mechanism”

Oh right, so it was about coping. I thought maybe it was like “groovy” or “cool” which have nothing to do with records or heat.

> Almost related, https://www.youtube.com/shorts/azAu98AIN_I

Well hot damn, I didn’t get half of it. That made me feel old. Thanks 🙂


It depends on the situation.

If the situation is that you’re playing in a very well-trodden area and you can be flexible in accepting the LLM’s product even when it doesn’t fit what you had in mind, it can likely do “ok”. “Make me a Super Mario Brothers style game”: the output will not be what you probably would have wanted, and further it will be a soul-crushingly pointless “game” compared to just playing an existing platformer, but it will crank out something vaguely like you would have guessed. These are the sorts of projects I have generally avoided, because they usually reinvent the wheel for pointless reasons and that’s very unrewarding for me. However, it’s fairly common for big businesses to make stupid internal applications like this. Very depressingly, I expect Steam to be flooded with AI slop just like it has been flooded with stock-asset slop.

If you are making something more novel and/or cannot tolerate deviations from a very specific vision, the LLM becomes more pointless.

I agree that the code AI produces is incoherent garbage, but the problem is that 95% of human-written code in circulation is also incoherent garbage with zero structure or documentation. Unless you are writing very important software, the bar is hilariously low. I personally think the solution here is to treat software with more respect in general, but that would require companies thinking beyond the next quarter, and an end to the rat race that leaves employees with no incentive to care about efficient and well-structured software.

This reads like it was written by a paid bot

really? it read to me that the vigilance the writer had to exert to keep this project under control while the LLM worked on it was exhausting. did it work? yea. i kinda see it the same way he does, except you have to really REALLY know what you are doing and be hyper-vigilant to make sure the ai does not hallucinate and mess up. and these tech bros do not sell the llm’s this way.

LLM’s have their uses; whether the juice is worth the squeeze, i am not sure. but i find it’s much better to use the LLM’s as a method of teaching and not doing. the danger lies in the fact that the LLM, if it can not find a correct answer, will use outdated data, even if it knows it’s outdated, or will straight up lie, and refuse to admit it does not know unless it is called out on it. again, you have to know when and where to do that.

Nice article! What I’d like to know is the cost of a project this size. Like, what was the AI bill for that? And for how long did they sit in front of the computer and press 1 repeatedly?

Why was this article so difficult to read?? The language was heavy on strange metaphors that didn’t fit or have a point. I lost interest after the second paragraph. I still don’t have a clear picture of the author’s story.
It reads like AI slop.

I think there’s a missing point from the “Brain drain” part of the post:

Yes, brain drain is real when using these tools. But it’s the brain drain plus the fact that this gravy-train bubble will pop soon. And once it does? Prices for these tools will skyrocket.

Even if the bubble doesn’t pop, they’ll have to increase prices to remain viable. At which point your dev infrastructure will be entirely dependent on AI slop, and you won’t have developed any of the skills to cope without it. This is a slow-motion car crash waiting to happen. If you outsource intelligence to private corps, you WILL get screwed.