If you don’t have the resources to write and understand the code yourself, you don’t have the resources to maintain it either.

Any monkey with a keyboard can write code. Writing code has never been hard. People were churning out crappy code en masse way before generative AI and LLMs. I know because I’ve seen it, I’ve had to work with it, and I no doubt wrote (and continue to write) my share of it.

What’s never been easy, and what remains difficult, is figuring out the right problem to solve, solving it elegantly, and doing so in a way that’s maintainable and sustainable given your means.

Code is not an artefact, code is a machine. Code is either a living thing or it is dead and decaying. You don’t just write code and you’re done. It’s a perpetual first draft that you constantly iterate on, and, depending on what it does and how much of that has to do with meeting the evolving needs of the people it serves, it may never be done. With occasional exceptions (perhaps? maybe?) for well-defined and narrowly-scoped tools, done code is dead code.

So much of what we call “writing” code is actually changing, iterating on, investigating issues with, fixing, and improving code. And to do that you must not only understand the problem you’re solving but also how you’re solving it (or how you thought you were solving it) through the code you’ve already written and the code you still have to write.

So it should come as no surprise that one of the hardest things in development is understanding someone else’s code, let alone fixing it when something doesn’t work as it should. Because it’s not about knowing this programming language or that (learning a programming language is the easiest part of coding), or this framework or that, or even knowing this design pattern or that (although all of these are important prerequisites for comprehension) but understanding what was going on in someone else’s head when they wrote the code the way they wrote it to solve a particular problem.

It frankly boggles my mind that some people are advocating for automating the easy part (writing code) by exponentially scaling the difficult part (understanding how exactly someone else – in this case, a junior dev who knows all the hows of things but none of the whys – decided to solve the problem). It is, to borrow a technical term, ass-backwards.

They might as well call vibe coding duct-tape-driven development or technical debt as a service.

🤷‍♂️

#AI #LLMs #vibeCoding #softwareDevelopment #design #craft

@aral I agree, and I am absolutely *not* against AI tools. But as wise people say — code is read far more often than it is written. This is why it should be easy to read. That’s one thing.

The other is: I believe a lot depends on *how* you use the tools and on your literacy in how they work and what to expect. AI is amazing at helping to *read* code: analysing it, travelling all those paths that would take me hours to travel, and finding out how things work, what the dependencies are, and what the data flow is.

@ikari @aral you... know that the LLM when asked to summarize code is just giving you the most likely summary based on the closest matches it could find in the training database, right?

I mean, you clearly don't understand the implications of that, so maybe you didn't know it? But your comment about the importance of understanding the tools' capabilities implied that you did?

LLMs are a *terrible* tool for explaining or summarizing code, precisely because they will get it right some high percentage of the time and imperceptibly but potentially disastrously wrong some low percentage of the time.

To give a very concrete example: say I find a function with 17 if/else cases and hard-to-follow juggling of multiple state variables, and I just want to know: will it always return a string? If I prompt an LLM with that question, it doesn't answer it by doing static analysis of the code paths to try to prove or disprove the answer (a type checker would do that). Instead, it is effectively searching through its training data for instances where other people asked the same question about different code, with some bias towards code that resembles the code you're looking at, and then predicting what an answer would look like based on that search. The answer will sound plausible and might even include a plausible-sounding explanation, but the relationship between the answer you get and the code you asked about is extremely tenuous.

In this example, if it gives you a false positive and you use the function assuming it does always return a string, you're in for some fun later when a non-string gets returned. The worst part is that you won't remember later what the likely culprit is: when the LLM-generated answer assured you the function always returned a string, you mentally dropped your suspicions about that code, so you don't even have a lingering "if it crashes due to a non-string value in a place a string is needed I should double-check function X" mental note.
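To make the contrast concrete, here's a minimal Python sketch (the function name, branches, and values are all invented for illustration) of the kind of code being described: the annotation claims it returns a string, but one rarely-taken branch doesn't. A static type checker like mypy proves the claim wrong by analysing every path; an LLM asked the same question is only predicting a plausible-sounding answer.

```python
# Hypothetical example: many branches, tangled state, and one rare
# path that does NOT return a string despite the annotation.
def describe(code: int, verbose: bool) -> str:
    label = None
    if code == 0:
        label = "ok"
    elif code == 1:
        label = "warning" if verbose else "warn"
    elif code == 2:
        label = "error"
    elif code < 0:
        # Bug: label is still None on this path, so the function
        # returns None here, not a string. mypy flags this line
        # ("Incompatible return value type"); an LLM may or may not.
        return label
    else:
        label = f"unknown({code})"
    return label

print(type(describe(0, False)).__name__)   # str
print(type(describe(-1, False)).__name__)  # NoneType — the false positive bites
```

The point isn't that this bug is hard to find — it's that the type checker's answer is derived from the code, while the LLM's answer is derived from text that resembles the question.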