It's clear that AI-assisted coding is dividing developers (welcome to the culture wars!). I've seen a few blog posts now that talk about how some people just "love the craft", "delight in making something just right, like knitting", etc., as opposed to people who just "want to make it work". As if that explains the divide.

How about this: some people resent the notion of being a babysitter to a stochastic token machine, hastening their own cognitive decline. Some people resent paying rent to a handful of US companies, all coming directly out of the TESCREAL human extinction cult, to be able to write software. Some people resent the "worse is better" steady decline of software quality over the past two decades, now supercharged. Some people resent that the hegemonic computing ecosystem is entirely shaped by the logic of venture capital. Some people hate that the digital commons is walled off and sold back to us. Oh, and I guess some people also don't like the thought of making coding several orders of magnitude more energy intensive during a climate emergency.

But sure, no, it's really because we mourn the loss of our hobby.

@plexus In the end, software engineering is about creating solutions to problems other people have. The solutions are not a byproduct, but the primary purpose. To the majority of users, the inner workings and the creation process of software are opaque. The qualities that software exposes on the outside are largely independent of its inner workings.

This means that for most people in the software industry, adapting to the new tooling that makes the creation process more efficient is 1/

@hanshuebner @plexus Did you ever read the toot you replied to before arguing with standard AI propaganda points? 🙄
@dalias @plexus I'm just a software developer. What I write comes from my personal experience writing software with Claude Code. Do you have any experience you can share? What are your credentials?

@hanshuebner You are replying to Rich Felker, primary developer of the musl C library for Linux, a shining example of software at a low layer of the stack developed with meticulous attention to quality. True, quality that business people probably don't appreciate, but if software at all layers were developed with this attention to quality, I think users would feel the difference.

@dalias @plexus

@matt @dalias @plexus Is the reality not that not all software is developed with meticulous attention to quality? In my experience, most software is primarily written with the intent to solve a problem. The engineering challenge is to make it maintainable as requirements evolve. Success is when the software fulfills its purpose.

I love writing beautiful code, but don't expect anyone to pay me for it - not only because beauty is in the eye of the beholder, but also because users don't care.

@hanshuebner @matt @plexus Software written without any concern that it's doing the wrong thing does not "solve any problem" except "how to line venture capitalists' pockets".

Unless it's just being written for fun and not actual deployment to real-world applications, software is responsible for people's safety.

It controls deadly machines like cars and airplanes. It's used to design buildings and bridges. It guards people's communications against abusive partners, stalkers, governments. It controls people's money. It controls who gets need-based benefits. It decides whether people will be wrongly accused of embezzlement and driven to suicide.

@dalias @matt @plexus In my experience as a software developer, there is no difference between a program written by a human and one written by an LLM. Both can be bad or dangerous, or good, or right. It is just that LLMs are faster at cranking out code.

@hanshuebner @dalias (Dropping the original author as they already warned you that they're in no mood for your arguments.)

IMO, code is not something to be cranked out en masse. Every detail matters; as such, we should write every line deliberately, with care, as the clearest, most direct expression of our understanding of how to solve the problem, certainly clearer and more precise than a natural-language prompt.

@matt @dalias "Our understanding" is often incomplete, leading to code that is just a reflection of the process of understanding the task at hand. Code often suffers because the person working on it learned faster than they could, or would, refactor. The resulting reality is that code, by and large, is messy.

Not everyone is working the same way, but it is certainly true that not everyone is a genius. Thus, bad, human code prevails.

@hanshuebner @matt "Capitalism is already producing bad things so we should just accelerate that" 🙄

@dalias @matt I live in capitalism as a software developer. I don't get to choose what tools I use, I'm getting paid to do the work. I can change my profession, or I can pick up what I need to know in order to sustain myself. This is me personally.

Then: LLMs create code that is comparable to human written code in that frame of reference. There is better code, but there is also much worse.

Finally: LLMs create shitty prose, shitty images and shitty music. I hate all of that.

@hanshuebner @dalias If LLMs create shitty prose, images, and music, why is code the exception? Simply because that's the area that we work in and we're afraid of losing our jobs? (I admit I'm not immune to that fear.)
@matt @dalias Code is different because it has a function that is beyond human reception.
@hanshuebner @dalias The details still matter though. The same lack of attention to detail that makes LLM prose, images, and music shitty, will come back to bite us, or the people affected by our work, sooner or later, in the form of defects. So I'd rather give each detail the attention it deserves, by writing the code myself, than roll the dice and find out later that some detail in that mass of LLM-extruded code was wrong -- possibly subtly wrong, in a way that's easy to miss in review.

@matt @dalias You are absolutely right, but here's the thing: Code review also does not prevent subtle bugs from creeping into the code base when humans wrote the code. Review is just one of the tools that ensure software quality.

This is to say that code written by LLMs and by humans suffers from similar issues, requires similar care and review, and can fail in similar ways. There is more LLM code, though, and there are new challenges because scaling with LLMs works differently than with humans.

@hanshuebner @dalias Isn't it obvious, though, that the risks are higher when you have an LLM generate code statistically from a natural-language prompt, as opposed to writing the code and paying attention to every detail yourself?

@matt @dalias Statistically, you will have more bugs because you have more software. But also, you can easily create tests, refactor, and make requirements executable.

Making good software with LLM support is hard work and takes time. If you look at the stuff that people make with three prompts and then post to LinkedIn, you know what I mean.

A good program requires attention to detail, no matter what the tool does for you.

@hanshuebner @dalias So then why do it with an LLM as opposed to the hard work of writing the code directly? Is it just to appease capital's irrational demands?
@matt @dalias You use an LLM because it makes the code writing part take radically less time.
@hanshuebner @dalias But then you have to spend time putting guardrails in place (e.g. comprehensive tests) to make sure the LLM doesn't do something wrong; using an LLM is rolling the dice, after all. Now, if you believe that one should always put maximal guardrails in place anyway even for human-written code, then I suppose the faster code generation could still be a net gain. But I'm not sure there's one correct answer to how much one should invest in guardrails (tests, types, lints, etc.).
@hanshuebner For example, I write in Rust. I find that I never again want to do without the strong static typing, the controlled mutability, and the borrow checker. But @dalias writes excellent C code without these things. Would I trust an LLM to write C code like Rich does? Never. My point is that if the code is written by skilled humans, you don't necessarily need guardrails to the extent that you do for LLM-extruded code. So do LLMs really save time, *for high-quality code*? I'm skeptical.

@matt @dalias One part of the conversation is of course the craftsmanship - you write high-quality code as a matter of your ethos, and you employ the tools that you believe help you do that best. While other developers can be the judge of that, your users really cannot. To them, it is the external behavior of your code that matters.

Now, you can argue that user satisfaction is possible only with high-quality code, but that'd be mostly a theoretical discussion because most code in existence 1/

@matt @dalias is not of high quality.

So we have an internal and an external view on quality that are not necessarily the same. At the same time, we have the external force for functionality, and I'd argue that to users, that force is more important than the internal quality of the code, which matters (only) to us.

The realization that, with LLM help, people can create something that satisfies the desire of users in a short amount of time will create more pull towards meeting those desires. 2/

@matt @dalias Saying that the desire can't be met because we can't create the software with the internal quality that we desire won't be successful in the long run.

Sure, some users will use bad software written with LLM help, blame it on LLMs and then ask for a handcrafted solution, if they can afford it. But that will be the exception, not the norm.

This is why I believe that as a software developer, I need to know how to work with LLMs rather than avoid them. YMMV. 3/3

@hanshuebner @dalias I'm sympathetic to the argument that creating more software and implementing more features faster isn't just about making money, but about solving problems, including problems that are causing suffering because a solution hasn't yet been implemented. I'm thinking in particular of the field that I work in, accessibility for disabled people. But programming via gambling, as one does when using an LLM, isn't the only way to address that urgency.
@matt @dalias It can't hurt to try what the tools can do today, even if it'd be just to reinforce your preconceptions. :)
@hanshuebner @dalias I'm apprehensive about trying a full agent-based workflow because I'm afraid I'll be so dazzled by what it can do (unreliably) via brute force that I'll let my guard down in terms of evaluating it critically.
@matt @dalias Someone else will try and boast about it.
@hanshuebner @matt You realize you sound exactly like a drug dealer, right?

@hanshuebner @matt Yes it can hurt to try them. They are cognitohazards and are designed to make you think they're doing things they're not. This works on a lot of people, even people who think themselves very intelligent and thereby immune.

This is how we end up with folks praising them while putting out clearly worse writing, code, etc. that nobody wants.

@hanshuebner

Correct me if I'm wrong, I think the argument you're making is: If software does useful things as measured by user experience and testing, then it doesn't matter, or shouldn't matter, if the code is messy and/or incomprehensible-to-humans "on its inside".

If so, I can see how that makes sense in some circumstances - and presumably you're currently in those circumstances.

I'm intuitively wary of that kind of position, though, because of the cans which it kicks down the road.

• The gigantic financial subsidies could go away.
• One day there could be a bug which can't be solved by repeatedly re-rolling the LLM dice, at which point a human does need to get stuck in to the code. And if that _did_ happen, then statistically-assembled code which no human understands would be a difficult place to start.
• On a bigger scale, climate change / energy efficiency / water shortages.

Points 1 and 2 seem to me like they _could_ come back to bite you on a relatively short timescale.

In comparison, elegant code and hands-on coding skill are stable investments.

So for me, taking that path would feel like intentionally leaving the "on the safe side" path, and instead embracing some inherent fragility.

#LLMs #coding

@unchartedworlds I'm making the point that internal and external qualities are independent. It is possible to have messy programs that users perceive as having high quality, or the other way around. High internal quality is not a prerequisite.

This is not to say that internal quality does not matter - this is true for both human and LLM coders. A messy program is hard for LLM assistants to change, and it requires effort to ensure maintainability in either case.

1/

@unchartedworlds I think it is a misconception that LLM-based development tooling is dependent on the success of AI at large. Coding assistance is one of the actual use cases that make sense to many people, and it goes beyond the "wow, computers can do this now" point because it is instrumental to the actual delivery of value over time.

As in other contexts, one should bear in mind that costs are not always borne at the point of consumption.

2/

@hanshuebner

Well I don't disagree that elegant code and friendly user interface are separate variables, that's for sure :-)

@matt @dalias In my own anecdotal and personal experience, LLM code gets enough things right to be competitive, but that experience is just a couple of months old. I can say, though, that the systems I created with LLM help very much fulfilled a real purpose, did not break randomly and are maintainable with LLM help.
@hanshuebner @matt Your anecdotes are contrary to empirical evidence and the mechanisms of how the thing works. I think at this point we can say you've bought into the parlor tricks and there's not much point in having this conversation.
@hanshuebner @matt @dalias
Would you clarify this mapping: written by an LLM -> maintained with LLM [tooling]?
Or will you maintain such an artifact manually?
Also, can another LLM be used for maintenance?

@mikalai If I create an application program that mostly fulfills end-user requirements, I am less concerned with the inner workings and more with steering the architecture in the right direction. When working on systems that are maintained by humans, I put more emphasis on controlling source level structure.

I rarely see the need to write code myself recently. I still do it for fun, sometimes.

1/

@mikalai LLM coding agents can work on any code base, no matter whether it was created by the same LLM, a different LLM, or by humans.

Someone made the point that they don't want to try LLM assisted coding because they fear that their judgement would be influenced by the experience. I can certainly say that this was the case for me. When I first used an LLM to write me a special purpose, non-trivial tool that I needed and it took just a few minutes to complete, it impressed me thoroughly.

2/2

@hanshuebner
I "can" write in any language. But, in practice, are you bound to the initial LLM, and thus the vendor, if it wasn't local tooling?

@mikalai No, not at all. The LLM based tools use their "world knowledge" and agent rules to examine your code base and generate new code based on that and your prompts. It does not matter how the code was initially created.

The fundamental limitation is the context window of the LLM - it cannot just include all of your code, so it needs to find the relevant declarations, schema definitions, etc. The tooling is responsible for keeping enough information in the context so it can keep going.
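That "keep enough information in the context" step can be sketched very roughly. The following is a hypothetical toy, not any real tool's API: rank candidate source snippets by some relevance score and greedily keep whatever fits a token budget. Real agents use embeddings, symbol indexes, or grep-style search for relevance, and a proper tokenizer rather than character length.

```python
# Hypothetical sketch of a context packer for an LLM coding agent. All
# names here are illustrative; no real tool is being quoted. Character
# length (len) stands in for a real token counter.

def pack_context(snippets, relevance, token_budget, count_tokens=len):
    """Greedily select the highest-relevance snippets within the budget."""
    ranked = sorted(snippets, key=relevance, reverse=True)
    kept, used = [], 0
    for snippet in ranked:
        cost = count_tokens(snippet)
        if used + cost <= token_budget:
            kept.append(snippet)
            used += cost
    return kept

# Toy usage: with length as a stand-in for relevance, the short
# "# README" snippet is dropped once the budget is exhausted.
files = ["def schema(): ...", "class User: ...", "# README"]
kept = pack_context(files, relevance=len, token_budget=32)
# kept == ["def schema(): ...", "class User: ..."]
```

The point of the sketch is only that selection is lossy: whatever the ranking misses, the model never sees, which is why agent tooling invests so much in search and retrieval.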

@matt @hanshuebner @dalias Just jumping into this thread to point out the elephant in the room: empirically, using LLMs to write code makes the process slower, not faster. Once you wade through the anecdotes and the puff-pieces and the LLM-developer-sponsored "research" using metrics like "lines of code" you'll find solid academic articles observing slower time-to-task-completion despite (incorrect) subjective perception of a speedup:

https://arxiv.org/abs/2507.09089

The entire premise of "well we've got to adopt it to keep up" is just plain not supported by a preponderance of credible evidence. "This is the future" is a pure marketing line and seems likely to be an outright lie. Accepting it as the framing of the argument cedes *way* too much ground to the AI pushers.

Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity

Despite widespread adoption, the impact of AI tools on software development in the wild remains understudied. We conduct a randomized controlled trial (RCT) to understand how AI tools at the February-June 2025 frontier affect the productivity of experienced open-source developers. 16 developers with moderate AI experience complete 246 tasks in mature projects on which they have an average of 5 years of prior experience. Each task is randomly assigned to allow or disallow usage of early 2025 AI tools. When AI tools are allowed, developers primarily use Cursor Pro, a popular code editor, and Claude 3.5/3.7 Sonnet. Before starting tasks, developers forecast that allowing AI will reduce completion time by 24%. After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%--AI tooling slowed developers down. This slowdown also contradicts predictions from experts in economics (39% shorter) and ML (38% shorter). To understand this result, we collect and evaluate evidence for 20 properties of our setting that a priori could contribute to the observed slowdown effect--for example, the size and quality standards of projects, or prior developer experience with AI tooling. Although the influence of experimental artifacts cannot be entirely ruled out, the robustness of the slowdown effect across our analyses suggests it is unlikely to primarily be a function of our experimental design.


@tiotasram @matt @hanshuebner @dalias Honestly, I don't think even the AI labs are ignoring these issues. At the very least Anthropic has been fairly up front about these concerns and reporting them in their research and surveywork.

Their most recent one is pretty comprehensive: https://www.anthropic.com/features/81k-interviews

What 81,000 people want from AI

Last December, tens of thousands of Claude users around the world had a conversation with our AI interviewer to share how they use AI, what they dream it could make possible, and what they fear it might do.

@neal @matt @hanshuebner @dalias this is... A "study" conducted by an AI company who used their own tool to summarize and classify the results. The quotes pulled at the top and even the design of the study are designed to take as an unspoken assumption "AI is the future" and their recruitment method selects for people who likely believe that too. The quotes they pull are split between AI boosterism & doomerism, both of which help feed the hype around AI capabilities that helps sell their product.

I'll bet that some of the Anthropic employees involved think this is honest self-reflection, but I'll bet twice as much that a marketing team signed off on the copy. It puts serious harms like psychological dependence next to questionable benefits, and makes this seem like a good thing.

It mentions productivity as a category and shows that more respondents *feel* they are getting productivity gains than feel their productivity decreased (37% vs 17%). Impossible with this design to quantify how much productivity actually increased or decreased, but of course that wasn't a study goal. If the results in the study I linked about overestimation of productivity with LLM "help" hold up, they're consistent with this data.

Not mentioned anywhere in this report (by design): opinions of people who don't want to use AI at all and who want harmful AI development processes like those at Anthropic to stop.

Finally, one main thrust of the report is that individual users will directly benefit from the supposed productivity increases LLMs are delivering. I've already written about this misconception, which amounts to a trap designed to prevent the masses of workers who stand to lose by the machinations of big AI companies from actually opposing them:

https://cs.wellesley.edu/~pmwh/advice/aiProductivity.html

TL;DR: Anthropic briefly gives out free cookies to everyone and says "see, don't you love having more cookies?" In a few years, either your boss fires you because they can get cookies from Anthropic directly or they demand you hand over the extra cookies to them.


@tiotasram @matt @hanshuebner @dalias So you're saying you'd prefer slanted research as long as it favors your point of view that "AI is bad"? That wall of text you wrote basically oversimplifies everything to a negative bias.

Even Anthropic acknowledged that this is from Claude users, but you discount the weight of the opinions of people simply because of that?

And the sample size is more than sufficiently large to be considered rigorous.

@neal @tiotasram @matt @hanshuebner Slanted research? The "slanted research" is that which was funded by a party with a financial interest in a particular outcome.

All of the real researchers have been defunded/fired for publishing things that reflected poorly on the industry, going all the way back to Timnit Gebru.

Do better.

@dalias @tiotasram @matt @hanshuebner You do better, too. So far I've seen nothing from this thread that attempts to connect with people in a way that would make them want to consider your position. I have been wary of this tech. But for an "ordinary" freelance software engineer like me, the pressure to be as good as a genius is real.

I've already stated this once before in another thread, and I'll say it again here: your way of communicating this is making people consider the opposite as reasonable.

@dalias @tiotasram @matt @hanshuebner I would prefer if we as software engineers were valued for our labor and capabilities. I would prefer if software engineers were required to understand the ethics of their decision-making. I would prefer if every software developer was required to justify their efforts contextually with a full software development and maintenance lifecycle.

But alas, we don't. Working incrementally to make those things a reality is all I can do.

@neal @tiotasram @matt @hanshuebner When you cite clearly biased industry-sponsored "research" as a credible source, it makes it hard to believe you have the same goals and values on this as I do.

And a belief that we do have shared goals and values is a necessary prerequisite for taking serious any advice you might give on how to achieve those goals.

Without that it comes across as concern trolling.

@dalias @tiotasram @matt @hanshuebner My *point* is that it isn't a topic the AI labs are ignoring, because they would be absolutely stupid to ignore it. But just saying "no AI" isn't going to work anymore. There needs to be a framework to push things in a direction that improves the value for society and the world.

Even the environmental angle is something that people are looking at: https://arxiv.org/abs/2603.21419

We're at the beginning of something, and everything sucks at the beginning, sadly.

Is the future of AI green? What can innovation diffusion models say about generative AI's environmental impact?

The rise of generative artificial intelligence (GAI) has led to alarming predictions about its environmental impact. However, these predictions often overlook the fact that the diffusion of innovation is accompanied by the evolution of products and the optimization of their performance, primarily for economic reasons. This can also reduce their environmental impact. By analyzing the GAI ecosystem using the classic A-U innovation diffusion model, we can forecast this industry's structure and how its environmental impact will evolve. While GAI will never be green, its impact may not be as problematic as is sometimes claimed. However, this depends on which business model becomes dominant.

@neal @tiotasram @matt @hanshuebner I think we have fundamentally different perspectives and values on this and that yours is not helpful to me.

@dalias @tiotasram @matt @hanshuebner I'm saying that yours isn't helping *you*. Whether your perspective and my perspective line up is somewhat immaterial.

In public discourse and for the successful debate of ideas, if you want yours to win (which I think you do for your own objectives), you need to do better to get people to come to your point of view.

@dalias @tiotasram @matt @hanshuebner But to make my stance clearer, I have *rarely* seen quality code submitted to me from people using predominantly AI-based tooling. It is frustrating, annoying, and it wastes my very precious time.

I don't like slopware, I didn't like it before AI and I don't like it now. What I *hate* is that it's so easy now to create it that I am barraged with it.

I want people that *care* about the code they deliver.

@neal @tiotasram @matt @hanshuebner I'm sorry but you're wrong about that.

Your premise is that my goal is to convince people who have bought into the AI propaganda to change their minds. I don't actually care what they think.

My goal is to validate and build community with the people who have rejected the AI cult's worldview, and to maintain and build the software infrastructure we need.

@hanshuebner @matt @dalias No, it doesn't. Every study on this has shown the same thing. It makes developers *think* they are "solving problems" faster, but they are actually *slower*.

If work requires LLM use, I'll do it, as I have to survive in capitalism. But, for my own projects I'll continue to do it the fast way and avoid generated code.

(Even if it were faster, I try to avoid engaging in unethical practice for mere expediency and I find most models to be ethically questionable at best.)