I used AI. It worked. I hated it.

https://lemmy.world/post/45089084


cross-posted from: https://programming.dev/post/48191305

> Or maybe that’s just me. I’ve been writing code for a good chunk of my life now. I find deep joy in the struggle of creation. I want to keep doing it, even if it’s slower. Even if it’s worse. I want to keep writing code. But I suspect not everyone feels that way about it. Are they wrong? Or can different people find different value in the same task? And what does society owe to those who enjoy an older way of doing things?
>
> If I could disinvent this technology, I would. My experiences, while enlightening as to models’ capabilities, have not altered my belief that they cause more harm than good. And yet, I have no plan on how to destroy generative AI. I don’t think this is a technology we can put back in the box. It may not take the same form a year from now; it may not be as ubiquitous or as celebrated, but it will remain.
>
> And in the realm of software development, its presence fundamentally changes the nature of the trade. We must learn how to exist in a world where some will choose to use these tools, whether responsibly or not. Is it possible to distinguish one from the other? Is it possible to renounce all code not written by human hands?
>
> https://taggart-tech.com/reckoning (archived: https://web.archive.org/web/20260402210313/https://taggart-tech.com/reckoning/)

> I want to keep doing it, even if it’s slower. Even if it’s worse.

You haven’t tried AI long enough. It’s faster than you, but the code you create is way better.

AI is only superficially good. If you ask it to spit out some code that does something, sure, it’ll do it. The code will even be readable, well-formatted, and decently correct. Where AI fails is when the code has to live in a particular environment: when it has compatibility constraints, or solves a specific problem within a complex system. Then AI will fail you spectacularly, and if you get lulled into thinking it’s cleverer than the idiot savant it truly is, you’ll get bitten hard.

It depends on the situation.

If the situation is that you are playing in a very well-trodden area, and you can be flexible about accepting the LLM’s product even when it doesn’t fit what you had in mind, it can likely do “ok”. “Make me a Super Mario Brothers style game”: the output will probably not be what you wanted, and it will be a soul-crushingly pointless “game” compared to just playing an existing platformer, but it will crank out something vaguely like what you would have guessed. These are the sorts of projects I have generally avoided, because they usually reinvent the wheel for pointless reasons and are very unrewarding for me. That said, stupid internal applications like this are fairly common in big businesses. Very depressingly, I expect Steam to be flooded with AI slop just as it has been flooded with stock-asset slop.

If you are making something more novel, and/or cannot tolerate deviations from a very specific vision, the LLM becomes more pointless still.