If you don’t have the resources to write and understand the code yourself, you don’t have the resources to maintain it either.

Any monkey with a keyboard can write code. Writing code has never been hard. People were churning out crappy code en masse way before generative AI and LLMs. I know because I’ve seen it, I’ve had to work with it, and I no doubt wrote (and continue to write) my share of it.

What’s never been easy, and what remains difficult, is figuring out the right problem to solve, solving it elegantly, and doing so in a way that’s maintainable and sustainable given your means.

Code is not an artefact, code is a machine. Code is either a living thing or it is dead and decaying. You don’t just write code and you’re done. It’s a perpetual first draft that you constantly iterate on, and, depending on what it does and how much of that has to do with meeting the evolving needs of the people it serves, it may never be done. With occasional exceptions (perhaps? maybe?) for well-defined and narrowly-scoped tools, done code is dead code.

So much of what we call “writing” code is actually changing, iterating on, investigating issues with, fixing, and improving code. And to do that you must not only understand the problem you’re solving but also how you’re solving it (or how you thought you were solving it) through the code you’ve already written and the code you still have to write.

So it should come as no surprise that one of the hardest things in development is understanding someone else’s code, let alone fixing it when something doesn’t work as it should. Because it’s not about knowing this programming language or that (learning a programming language is the easiest part of coding), or this framework or that, or even knowing this design pattern or that (although all of these are important prerequisites for comprehension) but understanding what was going on in someone else’s head when they wrote the code the way they wrote it to solve a particular problem.

It frankly boggles my mind that some people are advocating for automating the easy part (writing code) by exponentially scaling the difficult part (understanding how exactly someone else – in this case, a junior dev who knows all the hows of things but none of the whys – decided to solve the problem). It is, to borrow a technical term, ass-backwards.

They might as well call vibe coding duct-tape-driven development or technical debt as a service.

🤷‍♂️

#AI #LLMs #vibeCoding #softwareDevelopment #design #craft

@aral vibe coders churn code for real coders to fix. sheesh!
@aral TLDR Hey claude, summarise that for me.

@aral I agree, and I am absolutely *not* against AI tools. But as wise people say — code is read much more often than it is written. This is why it should be easy to read. That’s one thing.

The other is: I believe a lot depends on *how* you use the tools and on your literacy in how they work and what to expect. Reading is exactly where AI is amazing at helping — *reading* code: analyzing it, travelling all those paths that would take me hours to travel, and finding out how things work, what the dependencies are and what the data flow is.

@ikari @aral you... know that the LLM when asked to summarize code is just giving you the most likely summary based on the closest matches it could find in the training database, right?

I mean, you clearly don't understand the implications of that, so maybe you didn't know that? But your comment about the importance of understanding the tools' capabilities implied that you did?

LLMs are a *terrible* tool for explaining or summarizing code, precisely because they will get it right some high percentage of the time and imperceptibly but potentially disastrously wrong some low percentage of the time.

To give a very concrete example: say I find a function with 17 if/else cases and hard-to-follow juggling of multiple state variables, and I just want to know: will it always return a string? If I prompt an LLM with that question, it doesn't answer by doing static analysis of the code paths to prove or disprove the answer (a type checker would do that). Instead it is effectively searching its training data for instances where other people asked the same question about different code, with some bias towards code more similar to the code you're looking at, and then predicting what an answer would look like based on that search. The answer will sound plausible and might even include a plausible-sounding explanation, but the relationship between the answer you get and the code you asked about is extremely tenuous.

In this example, if it gives you a false positive and you use the function assuming it always returns a string, you're in for some fun later when a non-string gets returned. The worst part is that you won't remember what the likely culprit is: when the LLM-generated answer assured you the function always returned a string, you mentally dropped your suspicions about whether that code would always return a string, so you don't even have a lingering "if it crashes due to a non-string value in a place a string is needed, I should double-check function X" mental note.
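A toy illustration of the point above (the function is hypothetical, not from any real codebase): a static checker such as mypy would mechanically flag the branch that falls off the end of this function, because that is a provable property of the code paths, while an LLM asked "does this always return a string?" can only predict a plausible-sounding answer.

```python
def classify(code: int) -> str:
    """Looks like it always returns a string... but does it?"""
    if code == 0:
        return "ok"
    elif code < 0:
        return "error"
    elif code < 100:
        return "warning"
    # No final return: Python implicitly returns None on this path.
    # `mypy` reports "Missing return statement" here by analyzing the
    # control flow; an LLM summary may confidently say "always a string".

assert classify(0) == "ok"
assert classify(500) is None  # the surprise non-string return
```

The bug only bites at runtime for inputs >= 100, which is exactly the "imperceptibly but potentially disastrously wrong" case described above.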

@aral six months ago I would have agreed completely. And I still do for anything that goes into production, or is to be used by anyone but myself. But single-use little apps have become my best friend...

I work a lot with databases at the moment, and one of the most hellish things is switching modes (mentally) between automatic sorting using regex and whatnot, manual tweaking of things that don't get caught by regex and other scripts, different data views to see what's going on where, etc.

By vibe coding small Textual apps, using code I could never write or maintain myself, I can read from a live database, and write to a live database. The app can do any changes I want, using any keyboard shortcuts I want.

So when I inspect entries in a database and go "oh crap, there are a hundred mangled replicas of the same entry here, but no columns are the same!!" - instead of spending 20 minutes figuring out how a script can detect duplicate content across different columns, I can spend 3 minutes vibing a new version of my app, which is basically just Tinder for databases. Right arrow = keep, left arrow = discard, up arrow = good data. I can now sort 100 entries in less than a minute.

I would never claim the app is good, I would never sell it, heck I probably wouldn't even give it to anyone because it would as you rightly say be hell (or impossible) to maintain or improve. It crashes sometimes, but it doesn't matter because it writes directly to db, so all changes are saved.

There *are* things AI can be used for that involve a clueless monkey just asking it to do things. Like writing good SEO metadata for a website. Like looking through a complicated repo and giving me different sorts of maps and flowcharts depending on what I need to look for and understand. Like reading a 40 MB plain text file and figuring out what sort of CSV it was originally supposed to be, before someone garbled it through 15 different encodings. In that last example, of course, I would never use LLMs to recreate the data structure (slow, unnecessary use of energy, lots and lots of errors, etc.). But having a machine show me the structure, visually in a TUI, so I can get the regex right the fourteenth time instead of the 140th time, dude I'm here for it.
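The triage loop described a few paragraphs up (right arrow = keep, left = discard, up = good data) can be sketched without any TUI framework at all; the names and key mapping below are illustrative, not the commenter's actual Textual app:

```python
# Hypothetical "Tinder for databases" core: map a stream of arrow-key
# presses onto rows, sorting each row into a bucket. A real app would
# read rows from a live database and render them in a TUI.

ACTIONS = {"right": "keep", "left": "discard", "up": "good"}

def triage(rows, keys):
    """Pair each row with a keypress and sort it into the matching bucket."""
    buckets = {"keep": [], "discard": [], "good": []}
    for row, key in zip(rows, keys):
        buckets[ACTIONS[key]].append(row)
    return buckets

result = triage(
    ["dup entry A", "mangled copy", "clean entry"],
    ["right", "left", "up"],
)
# result == {"keep": ["dup entry A"], "discard": ["mangled copy"],
#            "good": ["clean entry"]}
```

The point of the anecdote stands either way: the decision logic is trivial; the value is in wiring it to live data faster than writing a one-off dedup script.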

@me @aral

Your example reminds me of those flimsy plastic shopping bags.

Made for single use, really handy if you forgot to bring a bag, firm chance of falling apart before you reach the kitchen, but(!) at that moment the cheap and quick way to get stuff home...

... and causing immeasurable damage for centuries on a global scale.

The right acronym for AI should be EC. Externalized Cost.

PS: I claim no moral high ground to stand on; not about plastic bags, nor AI usage. I too live in this world.

@avuko @me @aral No, AI == Idiot Assistant.
@jaypeach53 @avuko @aral Which "AI" are you talking about? BERT? Scikit-learn? Whisper? .....Gaussian splatting?

Do you think all subtitles on social media videos should be transcribed by humans rather than a Whisper-model, for example? Or humans should manually detect spam mail instead of scikit? Come on...
@avuko @aral that's a fair analogy. Especially since what I do, which is not vibe-coding entire sites, solutions, backends etc, but ~10 prompts that gives me a machine I can use a lot could be comparable to using a couple of plastic bags 100 times before throwing them away.
@me @aral you’re a lousy coder if you need sleaze machines to do little apps. And your logic that if it screws up, the database will save you is an example of your incompetence.
@jaypeach53 @aral ok, then write me a Textual app that does the 15 different things my stupid little apps do, if you mean you can do it more efficiently. It took me 15 minutes to vibe the first one, and about 3 minutes to adjust each of the secondary iterations = ~60 minutes.

If it does the job, it does the job. What you're basically saying is that no one should use a non-stick pan, because if you know how to do it properly, all pans are non-stick.
@jaypeach53 @aral Lol you also think I'm an incompetent coder. I am not a coder. I suppose you're one of those people who have their heads so far up their own ego that there are no other competences than their own.

@aral

Programs must be written for people to read, and only incidentally for machines to execute

- Harold Abelson

@aral Yes indeed. I worked with several companies over the years that had sunk lots of money into developing lots of software the old-fashioned way -- human time and effort, various levels of quality and results. A considerable fraction of those firms treated their code like the crown jewels and surrounded it with a royal guard of lawyers, contracts, rules and threats. I always found this amusing. Once removed from the environment on which it was so dependent, even if by hook or crook, the code would have been worthless. I don't work with AI, but I suppose that much of the code produced nowadays by AI, or by gig economy, offshore, or otherwise alienated workers, arrives on the scene with shortcomings -- sort of like a teenager from outer space.

@aral if you do not understand what is being generated, it is called technical debt.

Much less typing. 🙃

This applies to stackoverflow plagiarism, right up to... this... and beyond.

A programmer thinks.

@knowprose @aral He understands. Believe me, he understands,

@holdenweb @aral I am a firm advocate of brevity, so you will forgive me as I will forgive the long scroll.

That is the negotiation.

@aral The accurate technical term is "back asswards" though.
@ska @aral I've always used "bass ackwards" myself.

@aral I completely agree.

It's like ditching a language interpreter and asking an online service to do it. You might get the basics but you will miss the details and eventually you will misunderstand entirely.

All this AI code is going to eventually expose us to massive data breaches and failures.

@daj @aral to agree pedantically here: it is *already* causing failures. Amazon admitted it outright, instituting new rules for code review (hah, not a good solution) after LLM-"assisted" changes led to several outages. Looking at GitHub's abysmal uptime recently, it's hard not to imagine they're suffering too.

Plus there's the spectacular *lack* of any significant new proudly-LLM-coded apps entering the market. If the claims of the boosters were true, we'd be drowning in new software right now. Instead, besides apps for AI itself, I haven't heard of a single new app with a significant userbase recently. The reality seems to be: the existing apps that have pushed LLMs onto their programmers are now struggling, and new companies that attempt to use it don't succeed enough for anyone to hear about them.

@aral "technical debt as a service" yes I love this :)

@aral
> Any monkey with a keyboard can write code.

This describes my current state of learning how to code perfectly. :D

@Nephrite Ssh, don’t tell anyone but it’s how we all started out ;)

@aral Agreed. Also, coding is the one thing I think we have lots of curricula and training on. The other parts of shipping good software aren’t taught well or often. Lots of people get very little training in design, decomposition, project management, reliability, etc.

So we have shifted the emphasis really hard onto things we don’t teach, and don’t do well, as an industry.

@aral
It's also using the right code for the right job. Not using one flavour for everything.

@aral I think this applies to tools, but I also think one of those "narrowly-scoped" fields is games, as far as "done code" goes... Sometimes software is done: it's all it needs to be, and that's OK. Games don't need to evolve forever. They're often an artistic vision that just needs to continue to be supported by the OS.

Saying code needs to evolve forever is throwing the baby out with the bathwater and leads to Apple not being able to run PPC games and having even Steam be basically broken.

@indigoparadox Perhaps. But even games have bugs and updates, etc. My reticence is mostly from a security viewpoint: even tightly-scoped libraries, etc., might have to be updated due to newly-discovered threats (either within their own code bases or the greater environment in which they operate).

And, of course, the alternative to evolution is replacement or abandonment where library A is retired and replaced with library B. Or the game gets a separate sequel, or the author halts further fixes and makes the next game…

@aral Bugs and exploits give a single-player game charm, and a sequel is a different game, not a replacement. Riven wasn't a replacement for Myst!

You could argue that games can be ported to new platforms, but I'm wary of any thinking that lets old platforms get away with shifting out from under the feet of an already ported game. Someday the author will die, and not every game is FOSS! A platform that relies on a living author is a lousy platform!

Anyway this is a big niche but important! IMO

@aral this is stupid and wrong
@Profpatsch Yeah, well you have a poopy face.

@aral

Wrote my reply in a short blog post.

https://daveverse.org/2026/04/03/can-ai-bots-write-maintainable-code/

It didn't fit within the character limit.

@aral this is, seriously, something to take away from the Claude Code leak. It reads as a mess that has been developed over decades by a constantly changing team where no one really cared about maintenance, just slapping in whatever crap they cut and pasted from Stack Overflow as many times as they needed to get it running.

But of course it's not that, it's what, a couple years old, max?

@ricci @aral Claude Code is like the portrait of Dorian Gray, but for software

@aral

"What’s never been easy, and what remains difficult, is figuring out the right problem to solve, solving it elegantly, and doing so in a way that’s maintainable and sustainable given your means."

Well put! 👍

Reminds me of this quote from the #book "Are Your Lights On?" by Donald Gause & Gerald Weinberg way back in the 1990s...

"You can never be sure you have a correct problem definition, but don't ever stop trying to get one."

#ProblemSolving #ProblemDefinition #Quote

@aral
It's much easier to maintain and modify code you've written yourself. You can much more easily find your way around.

I'd think trying to modify AI written code would be even harder than maintaining another person's code.

@aral

This is so perfectly poignant it almost brings a tear to my eye. You captured this so beautifully. Good job!

@aral Well, old-skool game console programmers would disagree - that code was permanent, forever! 😆
@aral agree 100%. One issue in a commercial dev context is that accruing tech debt has long been part of the gig. "What's a bit more convoluted rubbish code? We've invented rubbish code" is a pointed way I'd summarize the push for LLMs.
@aral my org is "solving" this problem by forcing engineers to maintain the code that others write with llms. i fear that may be our fate as the new status quo