'AI' Sucks the Joy Out of Programming

https://programming.dev/post/39805616

I’m having the opposite experience: it’s been super fun! It can be frustrating when the AI can’t figure things out, but overall I’ve found it quite pleasant using Claude Code (and ollama gpt-oss:120b for when I run out of credits, haha). The codex extension and the entire range of OpenAI GPT-5 models don’t provide the same level of “wow, that just worked!” or “wow, this code is actually well-documented and readable.”

Seriously: If you haven’t tried Claude Code (in VS Code, via the extension of the same name), you’re missing out. It’s really a full generation or two ahead of the other coding assistant models. It’s that good.

Spend $20 and give it a try. Then join the rest of us bitching that $20 doesn’t give you enough credits and the gap between $20/month and $100/month is too large 😁

I just hate that they stole all that licensed code.

It feels so wrong that people are paying to get access to code…that others put out there as open source. You can sometimes see the GPL violations when it outputs code from Doom or other such projects: a function written with the express purpose of serving that library, only to be used to make Microsoft shareholders richer, and to eventually remove the developer from the development. It’s really sad and makes me not want to code on GitHub, and I’ve been on the platform for 15+ years.

And there’s been an uptick in malware libraries propagating via Claude. One such example: https://www.greenbot.com/ai-malware-hunt-github-accounts/

At least with the open source models, you are helping propagate actual free (as in freedom) LLMs and info.

AI Turned Against Developers: Malware Prompts Claude And Gemini In GitHub Supply Chain Attack

Unknown attackers weaponized artificial intelligence (AI) command-line tools to automatically hunt for sensitive data, compromising over 2,180 GitHub accounts

stole all that licensed code.

Stealing is when the owner of a thing doesn’t have it anymore because it was stolen.

LLMs aren’t “stealing” anything… yet! Soon we’ll have them hooked up to robots, and then they’ll be stealing¹ 👍

¹ Because a user instructed it to do so.

    I think I get what you’re saying. LOL, LLM bots stealing all the things.

    You may note, I’m not arguing the ethical concerns of LLMs, just the way the data was pulled. It’s why open-source models that pull data and let others have full access to said data could be argued to be more ethical. For practical purposes, it means we can just pull them off Hugging Face and use them on our home setups, and reproduce them with the “correct” datasets. As always: garbage in, garbage out. I wish my work would allow me to put all the SQL over a 30(?) year period into a custom LLM just for our proprietary BS. That’s something I would have NO ethical concerns about at all.

    For reference, every AI image model uses ImageNet (as far as I know), which is just a big database of publicly accessible URLs and metadata (classification info like “bird” plus the coordinates of the object in the image).
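To make that concrete, here’s a minimal sketch of the kind of record such a dataset holds: a publicly accessible URL plus classification metadata. The field names and record shape here are illustrative, not ImageNet’s actual schema (real ImageNet annotations use WordNet synset IDs and PASCAL VOC-style XML).

```python
# Illustrative sketch: an ImageNet-style dataset is essentially a list of
# records like this -- the dataset distributes URLs and labels, and the
# images themselves are fetched only when training actually needs them.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ImageRecord:
    url: str                # where the image actually lives (publicly accessible)
    label: str              # human-readable class, e.g. "bird"
    bbox: Optional[Tuple[int, int, int, int]] = None  # (x_min, y_min, x_max, y_max)

    def bbox_area(self) -> int:
        """Pixel area of the annotated region; 0 if no box was given."""
        if self.bbox is None:
            return 0
        x_min, y_min, x_max, y_max = self.bbox
        return max(0, x_max - x_min) * max(0, y_max - y_min)

# Hypothetical entry -- the URL is a placeholder, not a real dataset item.
record = ImageRecord(
    url="https://example.com/photos/1234.jpg",
    label="bird",
    bbox=(40, 25, 200, 180),
)
print(record.label, record.bbox_area())  # → bird 24800
```

The point is that the dataset itself contains no image bytes at all, only pointers and annotations, which is why it could be published openly in the first place.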

    The “big AI” companies like Meta, Google, and OpenAI/Microsoft have access to additional image datasets that are 100% proprietary. But what’s interesting is that the image models constructed from just ImageNet (and other open sources) are better! They’re superior in just about every way!

    Compare what you get from, say, ChatGPT (DALL-E 3) with a FLUX model you can download from civit.ai… you’ll get such superior results it’s like night and day! Not only that, but you have an enormous plethora of LoRAs to choose from to get exactly the type of image you want.

    What we’re missing is the same sort of open datasets for LLMs. Universities have access to some stuff, but even that is licensed.