It's clear that AI-assisted coding is dividing developers (welcome to the culture wars!). I've seen a few blog posts now that talk about how some people just "love the craft", "delight in making something just right, like knitting", etc., as opposed to people who just "want to make it work". As if that explains the divide.

How about this: some people resent the notion of being a babysitter to a stochastic token machine, hastening their own cognitive decline. Some people resent paying rent to a handful of US companies, all coming directly out of the TESCREAL human extinction cult, to be able to write software. Some people resent the "worse is better" steady decline of software quality over the past two decades, now supercharged. Some people resent that the hegemonic computing ecosystem is entirely shaped by the logic of venture capital. Some people hate that the digital commons is walled off and sold back to us. Oh, and I guess some people also don't like the thought of making coding several orders of magnitude more energy intensive during a climate emergency.

But sure, no, it's really because we mourn the loss of our hobby.

@plexus In the end, software engineering is about creating solutions to problems other people have. The solutions are not a byproduct, but the primary purpose. To the majority of users, the inner workings and the creation process of software is opaque. The qualities that software exposes on the outside are largely independent of its inner workings.

This means that for most people in the software industry, adapting to the new tooling that makes the creation process more efficient is 1/

@hanshuebner @plexus Did you ever read the toot you replied to before arguing with standard AI propaganda points? 🙄
@dalias @plexus I'm just a software developer. What I write comes from my personal experience writing software with Claude Code. Do you have any experience you can share? What are your credentials?

@hanshuebner You are replying to Rich Felker, primary developer of the musl C library for Linux, a shining example of software at a low layer of the stack developed with meticulous attention to quality. True, quality that business people probably don't appreciate, but if software at all layers were developed with this attention to quality, I think users would feel the difference.

@matt @dalias @plexus Is the reality not that not all software is developed with meticulous attention to quality? In my experience, most software is primarily written with the intent to solve a problem. The engineering challenge is to make it maintainable as requirements evolve. Success is when the software fulfills its purpose.

I love writing beautiful code, but don't expect anyone to pay me for it - not only because beauty is in the eye of the beholder, but also because users don't care.

@hanshuebner @matt @plexus Software written without any concern that it's doing the wrong thing does not "solve any problem" except "how to line venture capitalists' pockets".

Unless it's written just for fun and not for actual deployment in real-world applications, software is responsible for people's safety.

It controls deadly machines like cars and airplanes. It's used to design buildings and bridges. It guards people's communications against abusive partners, stalkers, governments. It controls people's money. It controls who gets need-based benefits. It decides whether people will be wrongly accused of embezzlement and driven to suicide.

@dalias @matt @plexus In my experience as a software developer, there is no difference between a program written by a human and one written by an LLM. Both can be bad or dangerous, or good and correct. It's just that LLMs are faster at cranking out code.

@hanshuebner @dalias (Dropping the original author as they already warned you that they're in no mood for your arguments.)

IMO, code is not something to be cranked out en masse. Every detail matters; as such, we should write every line deliberately, with care, as the clearest, most direct expression of our understanding of how to solve the problem, certainly clearer and more precise than a natural-language prompt.

@matt @dalias "Our understanding" is often incomplete, so code ends up being a reflection of the process of understanding the task at hand. Code often suffers because the person working on it learned faster than they could, or would, refactor. The resulting reality is that code, by and large, is messy.

Not everyone works the same way, but it is certainly true that not everyone is a genius. Thus, bad human code prevails.

@hanshuebner @matt "Capitalism is already producing bad things so we should just accelerate that" 🙄

@dalias @matt I live in capitalism as a software developer. I don't get to choose what tools I use, I'm getting paid to do the work. I can change my profession, or I can pick up what I need to know in order to sustain myself. This is me personally.

Then: LLMs create code that is comparable to human-written code in that frame of reference. There is better code, but there is also much worse.

Finally: LLMs create shitty prose, shitty images and shitty music. I hate all of that.

@hanshuebner @dalias If LLMs create shitty prose, images, and music, why is code the exception? Simply because that's the area that we work in and we're afraid of losing our jobs? (I admit I'm not immune to that fear.)
@matt @dalias Code is different because it has a function beyond human perception: it is executed by machines, not merely read by people.
@hanshuebner @dalias The details still matter though. The same lack of attention to detail that makes LLM prose, images, and music shitty, will come back to bite us, or the people affected by our work, sooner or later, in the form of defects. So I'd rather give each detail the attention it deserves, by writing the code myself, than roll the dice and find out later that some detail in that mass of LLM-extruded code was wrong -- possibly subtly wrong, in a way that's easy to miss in review.

@matt @dalias You are absolutely right, but here's the thing: code review also does not prevent subtle bugs from creeping into the code base when humans write the code. Review is just one of the tools that ensure software quality.

This is to say that code written by LLMs and humans suffers from similar issues, requires similar care and review, and can fail in similar ways. There is more LLM code, though, and there are new challenges because scaling with LLMs works differently than scaling with humans.

@hanshuebner @dalias Isn't it obvious, though, that the risks are higher when you have an LLM generate code statistically from a natural-language prompt, as opposed to writing the code and paying attention to every detail yourself?

@matt @dalias Statistically, you will have more bugs because you have more software. But you can also easily create tests, refactor, and make requirements executable.

Making good software with LLM support is hard work and takes time. If you look at the stuff that people make with three prompts and then post to LinkedIn, you know what I mean.

A good program requires attention to detail, no matter what the tool does for you.

@hanshuebner @dalias So then why do it with an LLM as opposed to the hard work of writing the code directly? Is it just to appease capital's irrational demands?
@matt @dalias You use an LLM because it makes the code writing part take radically less time.
@hanshuebner @dalias But then you have to spend time putting guardrails in place (e.g. comprehensive tests) to make sure the LLM doesn't do something wrong; using an LLM is rolling the dice, after all. Now, if you believe that one should always put maximal guardrails in place anyway even for human-written code, then I suppose the faster code generation could still be a net gain. But I'm not sure there's one correct answer to how much one should invest in guardrails (tests, types, lints, etc.).
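(To make the "guardrails" and "executable requirements" mentioned above concrete: here is a minimal sketch of a property-style check that any implementation, human- or LLM-written, would have to pass before being trusted. The `slugify` function and its three rules are invented for illustration; they are not anything from this thread.)

```python
def slugify(title: str) -> str:
    """Toy implementation under test (could equally be LLM-generated)."""
    cleaned = "".join(c if c.isalnum() else " " for c in title.lower())
    return "-".join(cleaned.split())

def check_slug_properties(titles):
    """Executable requirement: properties every slug must satisfy."""
    for title in titles:
        slug = slugify(title)
        # Requirement 1: output is lowercase.
        assert slug == slug.lower(), title
        # Requirement 2: no leading, trailing, or doubled separators.
        assert not slug.startswith("-") and not slug.endswith("-"), title
        assert "--" not in slug, title
        # Requirement 3: only alphanumerics and hyphens survive.
        assert all(c.isalnum() or c == "-" for c in slug), title

check_slug_properties(["Hello, World!", "  spaces  ", "A--B"])
```

The point of such a check is that it constrains the output regardless of who or what wrote the implementation; it is one of the guardrails whose cost has to be weighed against the time saved on code generation.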

@matt @hanshuebner @dalias Just jumping into this thread to point out the elephant in the room: empirically, using LLMs to write code makes the process slower, not faster. Once you wade through the anecdotes and the puff-pieces and the LLM-developer-sponsored "research" using metrics like "lines of code" you'll find solid academic articles observing slower time-to-task-completion despite (incorrect) subjective perception of a speedup:

https://arxiv.org/abs/2507.09089

The entire premise of "well we've got to adopt it to keep up" is just plain not supported by a preponderance of credible evidence. "This is the future" is a pure marketing line and seems likely to be an outright lie. Accepting it as the framing of the argument cedes *way* too much ground to the AI pushers.

Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity

Despite widespread adoption, the impact of AI tools on software development in the wild remains understudied. We conduct a randomized controlled trial (RCT) to understand how AI tools at the February-June 2025 frontier affect the productivity of experienced open-source developers. 16 developers with moderate AI experience complete 246 tasks in mature projects on which they have an average of 5 years of prior experience. Each task is randomly assigned to allow or disallow usage of early 2025 AI tools. When AI tools are allowed, developers primarily use Cursor Pro, a popular code editor, and Claude 3.5/3.7 Sonnet. Before starting tasks, developers forecast that allowing AI will reduce completion time by 24%. After completing the study, developers estimate that allowing AI reduced completion time by 20%. Surprisingly, we find that allowing AI actually increases completion time by 19%--AI tooling slowed developers down. This slowdown also contradicts predictions from experts in economics (39% shorter) and ML (38% shorter). To understand this result, we collect and evaluate evidence for 20 properties of our setting that a priori could contribute to the observed slowdown effect--for example, the size and quality standards of projects, or prior developer experience with AI tooling. Although the influence of experimental artifacts cannot be entirely ruled out, the robustness of the slowdown effect across our analyses suggests it is unlikely to primarily be a function of our experimental design.

@tiotasram @matt @hanshuebner @dalias Honestly, I don't think even the AI labs are ignoring these issues. At the very least, Anthropic has been fairly up front about these concerns, reporting them in their research and survey work.

Their most recent one is pretty comprehensive: https://www.anthropic.com/features/81k-interviews

What 81,000 people want from AI

Last December, tens of thousands of Claude users around the world had a conversation with our AI interviewer to share how they use AI, what they dream it could make possible, and what they fear it might do.

@neal @matt @hanshuebner @dalias this is... a "study" conducted by an AI company that used its own tool to summarize and classify the results. The quotes pulled at the top, and even the design of the study, take "AI is the future" as an unspoken assumption, and their recruitment method selects for people who likely believe that too. The quotes they pull are split between AI boosterism & doomerism, both of which feed the hype around AI capabilities that helps sell their product.

I'll bet that some of the Anthropic employees involved think this is honest self-reflection, but I'll bet twice as much that a marketing team signed off on the copy. It puts serious harms like psychological dependence next to questionable benefits, and makes this seem like a good thing.

It mentions productivity as a category and shows that more respondents *feel* they are getting productivity gains than feel their productivity decreased (37% vs 17%). Impossible with this design to quantify how much productivity actually increased or decreased, but of course that wasn't a study goal. If the results in the study I linked about overestimation of productivity with LLM "help" hold up, they're consistent with this data.

Not mentioned anywhere in this report (by design): opinions of people who don't want to use AI at all and who want harmful AI development processes like those at Anthropic to stop.

Finally, one main thrust of the report is that individual users will directly benefit from the supposed productivity increases LLMs are delivering. I've already written about this misconception, which amounts to a trap designed to prevent the masses of workers who stand to lose by the machinations of big AI companies from actually opposing them:

https://cs.wellesley.edu/~pmwh/advice/aiProductivity.html

TL;DR: Anthropic briefly gives out free cookies to everyone and says "see, don't you love having more cookies?" In a few years, either your boss fires you because they can get cookies from Anthropic directly or they demand you hand over the extra cookies to them.

@tiotasram @matt @hanshuebner @dalias So you're saying you'd prefer slanted research as long as it favors your point of view that "AI is bad"? That wall of text you wrote basically oversimplifies everything to a negative bias.

Even Anthropic acknowledged that this is from Claude users, but you discount the weight of the opinions of people simply because of that?

And the sample size is more than sufficiently large to be considered rigorous.

@neal @tiotasram @matt @hanshuebner Slanted research? The "slanted research" is that which was funded by a party with a financial interest in a particular outcome.

All of the real researchers have been defunded/fired for publishing things that reflected poorly on the industry, going all the way back to Timnit Gebru.

Do better.

@dalias @tiotasram @matt @hanshuebner You do better, too. So far I've seen nothing from this thread that attempts to connect with people in a way that would make them want to consider your position. I have been wary of this tech. But as an "ordinary" freelance software engineer, the pressure to be as good as a genius is real.

I've already stated this once before in another thread, and I'll say it again here: your way of communicating this is making people consider the opposite as reasonable.

@neal @tiotasram @matt @hanshuebner When you cite clearly biased industry-sponsored "research" as a credible source, it makes it hard to believe you have the same goals and values on this as I do.

And a belief that we do have shared goals and values is a necessary prerequisite for taking seriously any advice you might give on how to achieve those goals.

Without that it comes across as concern trolling.

@dalias @tiotasram @matt @hanshuebner My *point* is that it isn't a topic the AI labs are ignoring, because they would be absolutely stupid to ignore it. But just saying "no AI" isn't going to work anymore. There needs to be a framework to push things in a direction that improves the value for society and the world.

Even the environmental angle is something that people are looking at: https://arxiv.org/abs/2603.21419

We're at the beginning of something, and everything sucks at the beginning, sadly.

Is the future of AI green? What can innovation diffusion models say about generative AI's environmental impact?

The rise of generative artificial intelligence (GAI) has led to alarming predictions about its environmental impact. However, these predictions often overlook the fact that the diffusion of innovation is accompanied by the evolution of products and the optimization of their performance, primarily for economic reasons. This can also reduce their environmental impact. By analyzing the GAI ecosystem using the classic A-U innovation diffusion model, we can forecast this industry's structure and how its environmental impact will evolve. While GAI will never be green, its impact may not be as problematic as is sometimes claimed. However, this depends on which business model becomes dominant.

@neal @tiotasram @matt @hanshuebner I think we have fundamentally different perspectives and values on this and that yours is not helpful to me.

@dalias @tiotasram @matt @hanshuebner I'm saying that yours isn't helping *you*. Whether your perspective and my perspective line up is somewhat immaterial.

In public discourse and for the successful debate of ideas, if you want yours to win (which I think you do for your own objectives), you need to do better to get people to come to your point of view.

@dalias @tiotasram @matt @hanshuebner But to make my stance clearer, I have *rarely* seen quality code submitted to me from people using predominantly AI-based tooling. It is frustrating, annoying, and it wastes my very precious time.

I don't like slopware, I didn't like it before AI and I don't like it now. What I *hate* is that it's so easy now to create it that I am barraged with it.

I want people that *care* about the code they deliver.

@neal @tiotasram @matt @hanshuebner I'm sorry but you're wrong about that.

Your premise is that my goal is to convince people who have bought into the AI propaganda to change their minds. I don't actually care what they think.

My goal is to validate and build community with the people who have rejected the AI cult's worldview, and to maintain and build the software infrastructure we need.

@dalias @neal @matt @hanshuebner

To chime in with my own reply: actually, for my purposes "just saying no AI" is working just fine, and I expect it to continue to do so. Obviously I'm also doing things like following research closely about its successes & especially failures so I can provide solid evidence-based arguments for *why* "no AI" should be a popular position. It's interesting that I've yet to meet a single AI advocate interested in the evidence, but plenty of people on the fence or just ignorant of the whole debate have been convinced.

If as you claim you want to avoid a worst-case scenario of AI harms, there's something concrete you can work on: promote unionization efforts in the software industry, including workplace-agnostic unions.

In the long run, there will always be a community of open-source devs who reject LLM coding "assistance." I'm going to be one of them, and I think being loud and clear about the harms involved is a good strategy for growing that community. If you study persuasion, I think you'll find that convincing people to change their behavior by public shaming is orders of magnitude more effective than by "winning arguments" or "toning down your call-outs to save the feelings of those who are pushing for harmful systems," especially when it comes to how many bystanders get influenced either way by watching the exchange.

One of my main goals here is to make sure that LLM promoters feel appropriately uncomfortable and embarrassed by their position, and that those who dabble feel like they're doing something wrong which needs to be justified somehow. It seems to have been successful in this case.