It's clear that AI-assisted coding is dividing developers (welcome to the culture wars!). I've seen a few blog posts now that talk about how some people just "love the craft", "delight in making something just right, like knitting", etc., as opposed to people who just "want to make it work". As if that explains the divide.

How about this, some people resent the notion of being a babysitter to a stochastic token machine, hastening their own cognitive decline. Some people resent paying rent to a handful of US companies, all coming directly out of the TESCREAL human extinction cult, to be able to write software. Some people resent the "worse is better" steady decline of software quality over the past two decades, now supercharged. Some people resent that the hegemonic computing ecosystem is entirely shaped by the logic of venture capital. Some people hate that the digital commons is walled off and sold back to us. Oh and I guess some people also don't like the thought of making coding several orders of magnitude more energy intensive during a climate emergency.

But sure, no, it's really because we mourn the loss of our hobby.

@plexus In the end, software engineering is about creating solutions to problems other people have. The solutions are not a byproduct, but the primary purpose. To the majority of users, the inner workings and the creation process of software are opaque. The qualities that software exposes on the outside are largely independent of its inner workings.

This means that for most people in the software industry, adapting to the new tooling that makes the creation process more efficient is 1/

@hanshuebner @plexus
"The qualities that software exposes on the outside are largely independent of its inner workings." Sorry, but this couldn't be further from the truth. Our 70+ year pile of empirical evidence says otherwise. The whole history of software engineering is about how to manage and improve internal quality in order to achieve good external quality.
@flooper @hanshuebner @plexus This. The fact that so many people believe otherwise doesn't make it true, and we will suffer the consequences of that stupid ideology.
@jmax @flooper @plexus I don't believe that "getting stuff done" is an ideology, but rather the reality under which every worker lives in capitalism. We're not getting paid for doing the right or the good thing, we're paid for getting the work done that the man wants us to do.

@hanshuebner @flooper @plexus And if your view of the world begins and ends with making money, as I admit is capitalist dogma, fair enough.

But producing code with LLMs - or using them for anything which needs to be correct - is deception (whether you're deceiving yourself or others) on a massive scale, on a par with crypto, Ponzi schemes, climate denial, etc.

(1/2)

@hanshuebner @flooper

Anthropomorphizing them (as many do, but I don't think you are) is a flawed view, but does provide one useful insight.

If one treats an LLM as a person, then the fundamental issue is:

They are a bullshit artist with a huge library. They do not have competence at anything except bullshitting, at which they are superb.

I agree that it's amazing that we can build a mechanical bullshit generator that's good enough to bypass most people's defenses.

@jmax @flooper I think I'm with you. The difficult part of LLMs for code generation for me is that the bullshit is executable. I can and do dismiss AI "prose", "art" and "music" easily because it is devoid of what makes me want to consume the thing in the first place. Code is primarily consumed by machines, however, and its primary purpose is the functionality that it provides. That sets it apart from other slop.

@hanshuebner @jmax We can agree that the primary purpose of a software system is the functionality it provides, but "Code is primarily consumed by machines" is not fully true.

Code is RUN by machines, but it is still, and will be for a long time, primarily consumed (that is, written, read, and changed) by humans (https://www.goodreads.com/quotes/6341736-any-fool-can-write-code-that-a-computer-can-understand)

And code that cannot be changed will eventually stop being useful (Lehman's 1st law of software evolution: https://en.wikipedia.org/wiki/Lehman%27s_laws_of_software_evolution)

A quote by Martin Fowler

Any fool can write code that a computer can understand. Good programmers write code that humans can understand.

@flooper I don't see how Fowler's quote asserts that code "will still be consumed [...] by humans". Nothing in that quote says that it would, and from my experience, there is no need to actually read LLM-written code in order to ensure that it works.

You seem to want to imply that LLM-written code can only function if a human has read and approved it, which is not true. But maybe I misunderstood.

You also seem to want to imply that LLM code is not maintainable. What makes you think so?

AI Coding Assistants Boost Velocity, But Worsen Technical Debt (Hao He on LinkedIn)

Excited to share our new research accepted at MSR'26! With the hype around AI coding assistants claiming 10x productivity gains, we wanted to know: What does the empirical evidence actually show?

What we did:
📊 Analyzed ~800 GitHub projects adopting Cursor vs. matched controls
📈 Used difference-in-differences + panel GMM for causal inference
⏱️ Tracked velocity and quality metrics over 18+ months

What we found:
✅ The good: Cursor adoption leads to a significant velocity spike, up to 3-5x more lines added in the first month
⚠️ The catch: Gains dissipate after ~2 months
📉 The concern: Persistent increases in technical debt (+30% static analysis warnings) and code complexity (+41%), and this accumulated debt slows down future development

The takeaway: AI coding tools deliver real short-term gains, but without scaling quality assurance alongside velocity, teams risk a self-reinforcing cycle of technical debt. This doesn't mean we should stop using AI, but we should use AI wisely. Quality assurance needs to be a first-class citizen in AI-driven development workflows.

💾 Replication package: https://lnkd.in/ebrZ8eTz
📄 Preprint: https://lnkd.in/eQNg2XJD

Grateful to my amazing co-authors Courtney Miller, Shyam Agarwal, Christian Kästner, and Bogdan Vasilescu!


@flooper Now that you could not explain why what you wrote before (Fowler saying that good programmers write code humans can understand) proves your claim that only code read by humans can be good, you come up with one study claiming that LLM coding agents are harmful because Cursor did not work well in a sample set.

I'm not going to be able to challenge your beliefs, but gladly I don't need to.

@hanshuebner What I meant is that, for now and with the evidence we have, our ability to maintain and evolve a software system still depends on humans being able to do it.

You are right, a single paper means nothing, but evidence is built up by the accumulated findings of many studies like this.

I can pull up others, but the paper I referenced is relevant because: a) it's quite recent, and b) it's published in the current top conference on empirical software engineering

@flooper I'm going off my own experience, which is some 40 years as a programmer and 1.5 years of working with LLMs. From that, I can only conclude that code writing is over. I don't do it unless I want to, but it is not economical and there is no tangible benefit to doing so.

This does not mean that studies which find LLM projects to have quality issues are invalid. The reason, though, is not that LLMs cannot write code, but that software engineering with LLMs requires different processes.

@hanshuebner Indeed. Your experience checks out and is aligned with what we are seeing as the bigger picture. Good programmers and high-performing organisations, which were already doing great without LLM-based gen AI, are seeing some performance boost. But for the rest, using these tools is resulting in no impact, or a negative one. Funny, this means that widening the adoption of proven, effective software engineering practices has more positive impact than using gen AI

@flooper In the end, it is the outcomes that matter. Who cares whether human teams would have created better outcomes if they had adopted good software engineering practices, when machine-generated solutions fit the requirements and do the work cheaper? Who, other than the people that are out of a job now, of course.

We programmers thought we were special and needed, and now we are not. Or maybe it is just a matter of us adapting to the new tools.