I used AI. It worked. I hated it.

https://lemmy.world/post/45089084

cross-posted from: https://programming.dev/post/48191305

> Or maybe that’s just me. I’ve been writing code for a good chunk of my life now. I find deep joy in the struggle of creation. I want to keep doing it, even if it’s slower. Even if it’s worse. I want to keep writing code. But I suspect not everyone feels that way about it. Are they wrong? Or can different people find different value in the same task? And what does society owe to those who enjoy an older way of doing things?
>
> If I could disinvent this technology, I would. My experiences, while enlightening as to models’ capabilities, have not altered my belief that they cause more harm than good. And yet, I have no plan on how to destroy generative AI. I don’t think this is a technology we can put back in the box. It may not take the same form a year from now; it may not be as ubiquitous or as celebrated, but it will remain.
>
> And in the realm of software development, its presence fundamentally changes the nature of the trade. We must learn how to exist in a world where some will choose to use these tools, whether responsibly or not. Is it possible to distinguish one from the other? Is it possible to renounce all code not written by human hands?
>
> https://taggart-tech.com/reckoning [https://web.archive.org/web/20260402210313/https://taggart-tech.com/reckoning/] [web-archive]

This reads like it was written by a paid bot

really? it read to me like the vigilance the writer had to maintain to keep this project under the care of the LLM was exhausting. did it work? yeah. i kinda see it the same way he does, except you have to really REALLY know what you are doing and be hyper vigilant to make sure the AI does not hallucinate and mess things up. and these tech bros do not sell LLMs this way.

LLMs have their uses; whether the juice is worth the squeeze, i am not sure. but i find it’s much better to use LLMs as a method of teaching, not doing. the danger lies in the fact that an LLM, if it cannot find a correct answer, will use outdated data, even when it knows it’s outdated, or will straight up lie and refuse to admit it does not know unless it is called out on it. again, you have to know when and where to do that.