It's clear that AI-assisted coding is dividing developers (welcome to the culture wars!). I've seen a few blog posts now that talk about how some people just "love the craft", "delight in making something just right, like knitting", etc., as opposed to people who just "want to make it work". As if that explains the divide.

How about this, some people resent the notion of being a babysitter to a stochastic token machine, hastening their own cognitive decline. Some people resent paying rent to a handful of US companies, all coming directly out of the TESCREAL human extinction cult, to be able to write software. Some people resent the "worse is better" steady decline of software quality over the past two decades, now supercharged. Some people resent that the hegemonic computing ecosystem is entirely shaped by the logic of venture capital. Some people hate that the digital commons is walled off and sold back to us. Oh and I guess some people also don't like the thought of making coding several orders of magnitude more energy intensive during a climate emergency.

But sure, no, it's really because we mourn the loss of our hobby.

@plexus In the end, software engineering is about creating solutions to problems other people have. The solutions are not a byproduct, but the primary purpose. To the majority of users, the inner workings and the creation process of software are opaque. The qualities that software exposes on the outside are largely independent of its inner workings.

This means that for most people in the software industry, adapting to the new tooling that makes the creation process more efficient is 1/

@hanshuebner @plexus Did you ever read the toot you replied to before arguing with standard AI propaganda points? 🙄
@dalias @plexus I'm just a software developer. What I write comes from my personal experience writing software with Claude Code. Do you have any experience you can share? What are your credentials?

@hanshuebner You are replying to Rich Felker, primary developer of the musl C library for Linux, a shining example of software at a low layer of the stack developed with meticulous attention to quality. True, quality that business people probably don't appreciate, but if software at all layers were developed with this attention to quality, I think users would feel the difference.

@matt @dalias @plexus Isn't the reality that not all software is developed with meticulous attention to quality? In my experience, most software is primarily written with the intent to solve a problem. The engineering challenge is to make it maintainable as requirements evolve. Success is when the software fulfills its purpose.

I love writing beautiful code, but don't expect anyone to pay me for it - not only because beauty is in the eye of the beholder, but also because users don't care.

@hanshuebner @matt @plexus Software written without any concern that it's doing the wrong thing does not "solve any problem" except "how to line venture capitalists' pockets".

Unless it's just being written for fun and not actual deployment to real-world applications, software is responsible for people's safety.

It controls deadly machines like cars and airplanes. It's used to design buildings and bridges. It guards people's communications against abusive partners, stalkers, and governments. It controls people's money. It controls who gets need-based benefits. It decides whether people will be wrongly accused of embezzlement and driven to suicide.

@dalias @matt @plexus In my experience as a software developer, there is no difference between a program written by a human and one written by an LLM. Both can be bad or dangerous, or good and correct. It is just that LLMs are faster at cranking out code.

@hanshuebner @dalias (Dropping the original author as they already warned you that they're in no mood for your arguments.)

IMO, code is not something to be cranked out en masse. Every detail matters; as such, we should write every line deliberately, with care, as the clearest, most direct expression of our understanding of how to solve the problem, certainly clearer and more precise than a natural-language prompt.

@matt @dalias "Our understanding" is often incomplete, so code ends up being a reflection of the process of understanding the task at hand. Code often suffers because the person working on it learned faster than they could or would refactor. The resulting reality is that code, by and large, is messy.

Not everyone works the same way, but it is certainly true that not everyone is a genius. Thus, bad human code prevails.

@hanshuebner @matt "Capitalism is already producing bad things so we should just accelerate that" 🙄

@dalias @matt I live under capitalism as a software developer. I don't get to choose what tools I use; I'm getting paid to do the work. I can change my profession, or I can pick up what I need to know in order to sustain myself. This is me personally.

Then: LLMs create code that is comparable to human-written code in that frame of reference. There is better code, but there is also much worse.

Finally: LLMs create shitty prose, shitty images and shitty music. I hate all of that.

@hanshuebner @dalias If LLMs create shitty prose, images, and music, why is code the exception? Simply because that's the area that we work in and we're afraid of losing our jobs? (I admit I'm not immune to that fear.)
@matt @dalias Code is different because it has a function that goes beyond human perception.
@hanshuebner @dalias The details still matter, though. The same lack of attention to detail that makes LLM prose, images, and music shitty will come back to bite us, or the people affected by our work, sooner or later, in the form of defects. So I'd rather give each detail the attention it deserves, by writing the code myself, than roll the dice and find out later that some detail in that mass of LLM-extruded code was wrong -- possibly subtly wrong, in a way that's easy to miss in review.

@matt @dalias You are absolutely right, but here's the thing: Code review also does not prevent subtle bugs from creeping into the code base when humans wrote the code. Review is just one of the tools that ensure software quality.

This is to say that code written by LLMs and code written by humans suffer from similar issues, require similar care and review, and can fail in similar ways. There is more LLM code, though, and there are new challenges because scaling with LLMs works differently than with humans.

@hanshuebner @dalias Isn't it obvious, though, that the risks are higher when you have an LLM generate code statistically from a natural-language prompt, as opposed to writing the code and paying attention to every detail yourself?

@matt @dalias Statistically, you will have more bugs because you have more software. But you can also easily create tests, refactor, and turn requirements into executable form.

Making good software with LLM support is hard work and takes time. If you look at the stuff that people make with three prompts and then post to LinkedIn, you know what I mean.

A good program requires attention to detail, no matter what the tool does for you.

@hanshuebner @dalias So then why do it with an LLM as opposed to the hard work of writing the code directly? Is it just to appease capital's irrational demands?
@matt @dalias You use an LLM because it makes the code writing part take radically less time.
@hanshuebner @dalias But then you have to spend time putting guardrails in place (e.g. comprehensive tests) to make sure the LLM doesn't do something wrong; using an LLM is rolling the dice, after all. Now, if you believe that one should always put maximal guardrails in place anyway even for human-written code, then I suppose the faster code generation could still be a net gain. But I'm not sure there's one correct answer to how much one should invest in guardrails (tests, types, lints, etc.).
@hanshuebner For example, I write in Rust. I find that I never again want to do without the strong static typing, the controlled mutability, and the borrow checker. But @dalias writes excellent C code without these things. Would I trust an LLM to write C code like Rich does? Never. My point is that if the code is written by skilled humans, you don't necessarily need guardrails to the extent that you do for LLM-extruded code. So do LLMs really save time, *for high-quality code*? I'm skeptical.

@matt @dalias One part of the conversation is, of course, craftsmanship - you write high-quality code as a matter of your ethos, and you employ the tools that you believe help you do that best. While other developers can be the judge of that, your users really cannot. To them, it is the external behavior of your code that matters.

Now, you can argue that user satisfaction is possible only with high-quality code, but that'd be mostly a theoretical discussion because most code in existence 1/

@matt @dalias is not of high quality.

So we have an internal and an external view on quality that are not necessarily the same. At the same time, we have the external force for functionality, and I'd argue that to users, that force is more important than the internal quality of the code, which matters (only) to us.

The realization that, with LLM help, people can create something that satisfies the desires of users in a short amount of time will create more pull towards meeting those desires. 2/

@matt @dalias Saying that the desire can't be met because we can't create the software with the internal quality we'd like won't be successful in the long run.

Sure, some users will use bad software written with LLM help, blame it on LLMs and then ask for a handcrafted solution, if they can afford it. But that will be the exception, not the norm.

This is why I believe that as a software developer, I need to know how to work with LLMs rather than avoid them. YMMV. 3/3

@hanshuebner @dalias I'm sympathetic to the argument that creating more software and implementing more features faster isn't just about making money, but about solving problems, including problems that are causing suffering because a solution hasn't yet been implemented. I'm thinking in particular of the field that I work in, accessibility for disabled people. But programming via gambling, as one does when using an LLM, isn't the only way to address that urgency.
@matt @dalias It can't hurt to try what the tools can do today, even if only to reinforce your preconceptions. :)
@hanshuebner @dalias I'm apprehensive about trying a full agent-based workflow because I'm afraid I'll be so dazzled by what it can do (unreliably) via brute force that I'll let my guard down in terms of evaluating it critically.
@matt @dalias Someone else will try and boast about it.
@hanshuebner @matt You realize you sound exactly like a drug dealer, right?

@hanshuebner @matt Yes it can hurt to try them. They are cognitohazards and are designed to make you think they're doing things they're not. This works on a lot of people, even people who think themselves very intelligent and thereby immune.

This is how we end up with folks praising them while putting out clearly worse writing, code, etc. that nobody wants.

@hanshuebner

Correct me if I'm wrong, I think the argument you're making is: If software does useful things as measured by user experience and testing, then it doesn't matter, or shouldn't matter, if the code is messy and/or incomprehensible-to-humans "on its inside".

If so, I can see how that makes sense in some circumstances - and presumably you're currently in those circumstances.

I'm intuitively wary of that kind of position, though, because of the cans which it kicks down the road.

• The gigantic financial subsidies could go away.
• One day there could be a bug which can't be solved by repeatedly re-rolling the LLM dice, at which point a human does need to get stuck in to the code. And if that _did_ happen, then statistically-assembled code which no human understands would be a difficult place to start.
• On a bigger scale, climate change / energy efficiency / water shortages.

Points 1 and 2 seem to me like they _could_ come back to bite you on a relatively short timescale.

In comparison, elegant code and hands-on coding skill are stable investments.

So for me, taking that path would feel like intentionally leaving the "on the safe side" path, and instead embracing some inherent fragility.

#LLMs #coding

@unchartedworlds I'm making the point that internal and external qualities are independent. It is possible to have messy programs that users perceive as having high quality, or the other way around. High internal quality is not a prerequisite.

This is not to say that internal quality does not matter - this is true both for human and for LLM coders. A messy program is hard for LLM assistants to change, and it requires effort to ensure maintainability in either case.

1/

@unchartedworlds I think it is a misconception that LLM based development tooling is dependent on the success of AI at large. Coding assistance is one of the actual use cases that make sense to many people, and it goes beyond the "wow, computers can do this now" point because it is instrumental to the actual delivery of value over time.

As in other contexts, one should bear in mind that costs are not always borne at the point of consumption.

2/

@hanshuebner

Well I don't disagree that elegant code and friendly user interface are separate variables, that's for sure :-)