'AI' Sucks the Joy Out of Programming
(1) boilerplate code that is so predictable a machine can do it
The thing I hate most about it is that we should be putting effort into removing the need for boilerplate. Generating it with a non-deterministic 3rd party black box is insane.
Because it’s not worth inventing a whole tool for a one-time use. Maybe you’re the kind of person who has to spin up 20 similar Django projects a year and it would be valuable to you.
But for the average person, it’s far more efficient to just have an LLM kick out the first 90% of the boilerplate and code up the last 10% themself.
I just use https://github.com/cookiecutter/cookiecutter and call it a day. No AI required. Probably saves me a good 4 hours at the beginning of each project.
Almost all my projects have the same kind of setup nowadays. But that's just work. For personal projects, I use a subset-ish.
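The idea behind template-driven scaffolding can be sketched with nothing but the standard library. This is a toy illustration only: cookiecutter itself uses Jinja2 templates and a `cookiecutter.json` file, and the file names and template strings below are made up for the example.

```python
from pathlib import Path
from string import Template

# Toy project template: relative file path -> file contents.
# Both paths and contents may contain $placeholders.
# (cookiecutter uses Jinja2 and a full template directory; this is a sketch.)
TEMPLATE = {
    "pyproject.toml": '[project]\nname = "$name"\nversion = "0.1.0"\n',
    "$name/__init__.py": '"""$description"""\n',
    "README.md": "# $name\n\n$description\n",
}

def scaffold(target: Path, **context: str) -> None:
    """Render every template file into the target directory."""
    for rel_path, contents in TEMPLATE.items():
        out = target / Template(rel_path).substitute(context)
        out.parent.mkdir(parents=True, exist_ok=True)
        out.write_text(Template(contents).substitute(context))

scaffold(Path("demo_project"), name="mytool", description="A demo package.")
```

The point is that the template is deterministic: the same inputs produce the same project skeleton every time, which is exactly the property an LLM can't give you.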
Back in the day, I used CakePHP to build websites, and it had a tool that could “bake” all the boilerplate code.
You could use a snippet engine or templates with your editor, but unless you get a lot of reuse out of them, it’s probably easier and quicker to use an LLM for the boilerplate.
All of that can be automated with tools built for the task. None of this is actually that hard to solve at all. We should automate away pain points instead of boiling the world in the hopes that a linguistic, stochastic model can just so happen to accurately predictively generate the tokens you want in order to save a few fucking hours.
The hubris around this whole topic is astounding to me.
LLMs do not understand anything. There is no semantic understanding whatsoever. It is merely stochastic generation of tokens according to a probability distribution derived from linguistic correlations in its training data.
Also, it is incredibly common for businesses to have their engineers write code to automate away boilerplate and otherwise inefficient processes. Nowhere did I say that automation must always be done via open source tooling (though that is certainly preferable when possible, of course).
What do you think people and businesses were doing before all of this LLM insanity? Exactly what I’m describing. It’s hardly novel or even interesting.
OK sure if you want to be pedantic. The point is that LLMs can do things traditional code generators can’t.
You don’t have to like it or use it. I myself am very vocal about the weaknesses and existential dangers of AI code. It’s going to cause the worst security nightmares in humanity’s recorded history. I recommend to companies that they DON’T trust LLMs for their coding because it creates unmaintainable nightmares of spaghetti code.
But pretending that they have NO advantages over traditional code generators is utter silliness perpetuated by people who refuse to argue in good faith.
It will if you explicitly ask it to. Otherwise it will either make stuff up or use some really outdated patterns.
I usually start by asking Claude code to search the Internet for current best practices of whatever framework. Then if I ask it to build something using that framework while that summary is in the context window, it’ll actually follow it
I’m having the opposite experience: It’s been super fun! It can be frustrating though when the AI can’t figure things out but overall I’ve found it quite pleasant when using Claude Code (and ollama gpt-oss:120b for when I run out of credits haha). The codex extension and the entire range of OpenAI gpt5 models don’t provide the same level of “wow, that just worked!” Or “wow, this code is actually well-documented and readable.”
Seriously: If you haven’t tried Claude Code (in VS Code via that extension of the same name), you’re missing out. It’s really a full generation or two ahead of the other coding assistant models. It’s that good.
Spend $20 and give it a try. Then join the rest of us bitching that $20 doesn’t give you enough credits and the gap between $20/month and $100/month is too large 😁
A pet project… A web novel publishing platform. It’s very fancy: Uses yjs (CRDTs) for collaborative editing, GSAP for special effects (that authors can use in their novels), and it’s built on Vue 3 (with Vueuse and PrimeVue) and Python 3.13 on the backend using FastAPI.
The editor is TipTap with a handful of custom extensions that the AI helped me write. I used AI for two reasons: I don't know TipTap all that well, and I really want to see what AI code assist tools are capable of.
I’ve evaluated Claude Code (Sonnet 4.5), gpt5, gpt5-codex, gpt5-mini, Gemini 2.5 (it’s such shit; don’t even bother), qwen3-coder:480b, glm-4.6, gpt-oss:120b, and gpt-oss:20b (running locally on my 4060 Ti 16GB). My findings thus far:
SOMEVAR="$BASE_PATH/etc/somepath/somefile" and it changed it to SOMEVAR="/etc/somepath/somefile" for no fucking reason. That change had nothing at all to do with the prompt! So when I say, "You have to be careful" I mean it!

For reference, ALL the models are great with Python. For whatever reason, that language is king when it comes to AI code assist.
I just hate that they stole all that licensed code.
It feels so wrong that people are paying to get access to code…that others put out there as open source. You can see the GPL violations sometimes when it outputs code from Doom or other such projects. Some function written expressly for that library, only to be used to make Microsoft shareholders richer. And to eventually remove the developer from the development. It's really sad and makes me not want to code on GitHub. And I've been on the platform for 15+ years.
And there's been an uptick in malware libraries propagating via Claude. One such example: https://www.greenbot.com/ai-malware-hunt-github-accounts/
At least with the open source models, you are helping propagate actual free (as in freedom) LLMs and info.
Unknown attackers weaponized artificial intelligence (AI) command-line tools to automatically hunt for sensitive data, compromising over 2,180 GitHub accounts
It feels so wrong that people are paying to get access to code
We pay for access to a high performance magic pattern machine. Not for direct access to code, which we could search ourselves if we wanted.
I disagree.
There's nothing magical about copying code, throwing it into a database, and creating an LLM based on mass data. Moreover, it's not ethical given the amount of data they had to pull and the licenses Microsoft had to ignore in order to make this work. Heck, my little server got hit by the AI web crawlers a while back and they DDoSed my tiny little site. You can look up their IP addresses; some of them look at robots.txt, but a vast majority did not.
There are a metric ton of lawsuits hitting the AI companies, and they are not winning in all countries: https://sustainabletechpartner.com/topics/ai/generative-ai-lawsuit-timeline/

A list of copyright lawsuits vs. ChatGPT maker OpenAI, Microsoft, Nvidia, Anthropic, Google, Midjourney, Perplexity, Salesforce, Stability AI, DeviantArt & generative AI software businesses. Plus, a timeline of content licensing deals involving generative AI training models.
I’m simply saying that I’m not paying for access to the code. I’m paying for access to the high performance magic pattern machine.
I can and have browsed code all day for 35 years. Magic pattern machine is worth paying for to save time.
To be clear, stackoverflow and similar sites have also been worth paying for. Now this is the latest thing worth paying for.
I understand you have ethical concerns. But that doesn’t negate the usefulness of magic pattern machine.
stole all that licensed code.
Stealing is when the owner of a thing doesn't have it anymore, because it was stolen.
LLMs aren’t “stealing” anything… yet! Soon we’ll have them hooked up to robots then they’ll be stealing¹ 👍
I think I get what you're saying. LOL LLM bots stealing all the things.
You may note, I'm not arguing the ethical concerns of LLMs, just the way the data was pulled. It's why open source models that pull data and let others have full access to said data could be argued to be more ethical. For practical purposes, it means we can just pull them off Hugging Face and use them on our home setups. And reproduce them with the "correct" datasets. As always: garbage in, garbage out. I wish my work would allow me to put all the SQL from a 30(?) year period into a custom LLM just for our proprietary BS. That's something I would have NO ethical concerns about at all.
For reference, every AI image model uses ImageNet (as far as I know), which is just a big database of publicly accessible URLs and metadata (classification info like "bird" plus coordinates in the image).
The “big AI” companies like Meta, Google, and OpenAI/Microsoft have access to additional image data sets that are 100% proprietary. But what’s interesting is that the image models constructed from just ImageNet (and other open sources) are better! They’re superior in just about every way!
Compare what you get from say, ChatGPT (DALL-E 3) with a FLUX model you can download from civit.ai… you’ll get such superior results it’s like night and day! Not only that, but you have an enormous plethora of LoRAs to choose from to get exactly the type of image you want.
What we’re missing is the same sort of open data sets for LLMs. Universities have access to some stuff but even that is licensed.
Used Claude 4 for something at work (not much of a choice here and that team said they generate all their code). It’s sycophantic af. Between “you’re absolutely right” and it confidently making stuff up, I’ve wasted 20 minutes and an unknown number of tokens on it generating a non-functional unit test and then failing to solve the type errors and eslint errors.
There are some times it was faster to use, sure, but only because I don’t have the time to learn the APIs myself due to having to deliver an entire feature in a week by myself (rest of the team doesn’t know frontend) and other shitty high level management decisions.
At the end of the day, I learned nothing by using it, the tests pass but I have no clue if they test the right edge cases, and I guess I get to merge my code and never work on this project again.
I guess I get to merge my code and never work on this project again.
This is the way.
I’ve tried vibe coding two scripts before, and it’s honestly brain-fog-inducing.
Llm coding won’t be a thing after 2027.
I would agree that the interest will wane in some domains where they aren't aiding in productivity.
But LLMs for coding are productive right now in other domains and people aren’t going to want to give that up.
Inference is already financially viable.
Now, I think what could crush the SOTA models is if they get sued into bankruptcy for copyright violations. Which is a related but separate thread.
…regular coding, again. We’ve been doing this for decades now and this LLM bullshit is wholly unnecessary and extremely detrimental.
The AI bubble will pop. Shit will get even more expensive or nonexistent (as these companies go bust, because they are ludicrously unprofitable), because the endless supply of speculative and circular investments will dry up, much like the dotcom crash.
It’s such an incredibly stupid thing to not only bet on, but to become dependent on to function. Absolute lunacy.
I would bet on LLMs being around and continuing to be useful for some subset of coding in 10 years.
I would not bet my retirement funds on current AI related companies.
They may not be useful to you… but you can’t speak for everyone.
You are incorrect on inference costs. But yes training models is expensive and the economics are concerning.
We’re replacing that journey and all the learning, with a dialogue with an inconsistent idiot.
I like this about it, because it gets me to write down and organize my thoughts on what I’m trying to do and how, where otherwise I would just be writing code and trying to maintain the higher level outline of it in my head, which will usually have big gaps I don’t notice until spending way too long spinning my wheels, or otherwise fail to hold together. Sometimes a LLM will do things better than you would have, in which case you can just use that code. When it gives you code that is wrong, you don’t have to use it, you can write it yourself at that point, after having thought about what’s wrong with the AI approach and how what you requested should be done instead.
I oppose AI in its current incarnation for almost everything, but you have a great point. Most of us are familiar with Rubber Duck Programming, which originated with R. Feynman, who’d recount how he learned the value of reframing problems in terms of how you’d describe them to other people. IIRC, the story he’d tell is that at one place, he was separated from a colleague by several floors, and had to take an elevator. He’d be thinking about how he was going to explain the problem to the colleague while waiting for and riding in the elevator, and in the process would come to the answer himself. I’ve never seen Rubber Duck Programming give credit to Feynman, but that’s the first place I heard about the practice.
Digression aside, AI is probably as good as, or better than, a rubber duck for this. Maybe it won’t give you any great insights, but having an active listener is probably beneficial. Þat said, you could probably get as much value out of Eliza while burning far less rainforest.
It’s not only coding.
Idiocracy incoming in 3, 2, 1
I use AI for my docker compose services. I basically just point it at a repo and ask it to start the service for me. It creates docker compose files, tries to run them, reads logs, and troubleshoots without intervention.
When I need to update an image i just ask it to do so.
AI also controls my git workflow. I tell it to create a branch and push or revert or do whatever. Super nice.
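For context, the kind of file the assistant produces is just an ordinary Compose file. A hedged sketch of what that might look like (the service name, image, ports, and volume path here are all hypothetical placeholders, not from the original post):

```yaml
# Hypothetical docker-compose.yml an assistant might generate for a web service.
services:
  app:
    image: ghcr.io/example/app:latest   # placeholder image reference
    ports:
      - "8080:8080"                     # host:container
    volumes:
      - ./data:/var/lib/app             # persist service data
    restart: unless-stopped
```

From there the loop is `docker compose up -d` to start it and `docker compose logs` to read the output it troubleshoots from.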