It's clear that AI assisted coding is dividing developers (welcome to the culture wars!). I've seen a few blog posts now that talk about how some people just "love the craft", "delight in making something just right, like knitting", etc, as opposed to people who just "want to make it work". As if that explains the divide.

How about this, some people resent the notion of being a babysitter to a stochastic token machine, hastening their own cognitive decline. Some people resent paying rent to a handful of US companies, all coming directly out of the TESCREAL human extinction cult, to be able to write software. Some people resent the "worse is better" steady decline of software quality over the past two decades, now supercharged. Some people resent that the hegemonic computing ecosystem is entirely shaped by the logic of venture capital. Some people hate that the digital commons is walled off and sold back to us. Oh and I guess some people also don't like the thought of making coding several orders of magnitude more energy intensive during a climate emergency.

But sure, no, it's really because we mourn the loss of our hobby.

@plexus In the end, software engineering is about creating solutions to problems other people have. The solutions are not a byproduct, but the primary purpose. To the majority of users, the inner workings and the creation process of software are opaque. The qualities that software exposes on the outside are largely independent of its inner workings.

This means that for most people in the software industry, adapting to the new tooling that makes the creation process more efficient is 1/

@hanshuebner @plexus
"The qualities that software exposes on the outside are largely independent of its inner workings." Sorry, but this couldn't be further from the truth. Our 70+ years of accumulated empirical evidence say otherwise. The whole history of software engineering is about how to manage and improve internal quality in order to achieve good external quality.

@flooper @plexus You can certainly define "quality" so that what you wrote is true. I know of enough "successful" software that was "successful" without having "good quality" on the inside. "Success" is something that many people would associate with "quality", so there you have the definition that I was talking about.

I believe that discussions around quality that don't consider users are worthless. The connection between external and internal quality is less tight than some make it appear.

@hanshuebner @flooper I explicitly called out Worse is Better, which is exactly what you are talking about. The original formulation was that Unix "won" because it was "worse", it was simpler, easier to port, etc. That whole dogma has morphed over time. During the SaaS boom worse-is-better meant ship MVPs to capture market and lock in users. Now that we're in the enshittify stage it means "drop quality and raise prices as much as the user will bear before churning", enabled by platform lock in. So yes, for some capitalist notion this is winning, it's certainly extracting value. It's a notion I wholeheartedly reject.

@plexus @flooper "Worse is better" is not a dogma, it is a thesis and an interpretation of history, which can be read in different ways. It was originally framed in the context of Unix and how it was worse than other systems. These other systems were, e.g., Multics, VAX/VMS, VM/370 or Genera, and much of the resentment of the applauding audience came from habit, arrogance and hubris.

In that context, it can also be argued that Unix was better than these other systems, strictly because of its 1/

@plexus @flooper simplicity. And simplicity has become a primary quality in recent years, as you know.

This teaches us that resentment toward technology within the technology field is very much bound to the time period in which it occurs, and to common habits.

It is tempting to interleave social and technological critique, but I'd argue that doing so often does not lead to a very focused conversation. 2/2

@hanshuebner @plexus @flooper

Yes, »worse is better« morphed from /description/ to /prescription/. (There is a nice talk by Romeu Moura about this fallacy: https://www.youtube.com/watch?v=92Pq4-e0QyI)

In short: people erroneously move from »it's like this« to »it should be like this« or »it's inevitable like this«, and then enshrine it as a given fact, assumption or axiom instead of asking what can be done about it.

Why do hotel bathrooms lack toothpaste - Romeu Moura - DDD Europe 2019

@flooper @hanshuebner @plexus This. The fact that so many people believe otherwise doesn't make it true, and we will suffer the consequences of that stupid ideology.
@jmax @flooper @plexus I don't believe that "getting stuff done" is an ideology, but rather the reality under which every worker lives in capitalism. We're not getting paid for doing the right or the good thing, we're paid for getting the work done that the man wants us to do.
@hanshuebner @jmax @flooper can I please be untagged from this thread? thanks!

@hanshuebner @flooper @plexus And if your view of the world begins and ends with making money, as I admit is capitalist dogma, fair enough.

But producing code with LLMs - or using them for anything which needs to be correct - is deception (whether you're deceiving yourself or others) on a massive scale, on a par with crypto, Ponzi schemes, climate denial, etc.

(1/2)

@jmax @flooper @plexus I'm not sure how you feed yourself and your kids. Maybe you are rich and don't have to worry about that. I'm not all that privileged.

@hanshuebner @flooper @plexus I work for a living and try to avoid dishonesty while doing so.

Since I understand that LLMs are fundamentally and inherently dishonest, that doesn't leave much wiggle room for me.

@jmax @flooper Machines don't have a concept of honesty, but I think I know what you mean. Thank you for participating in this exchange!
@hanshuebner @flooper Yes. But useful tools are those machines which do have honesty, in a mechanical sense.

@hanshuebner @flooper

Anthropomorphizing them (as many do, but I don't think you are) is a flawed view, but does provide one useful insight.

If one treats an LLM as a person, then the fundamental issue is:

They are a bullshit artist with a huge library. They do not have competence at anything except bullshitting, at which they are superb.

I agree that it's amazing that we can build a mechanical bullshit generator that's good enough to bypass most people's defenses.

@jmax @flooper I think I'm with you. The difficult part of LLMs for code generation for me is that the bullshit is executable. I can and do dismiss AI "prose", "art" and "music" easily because it is devoid of what makes me want to consume the thing in the first place. Code is primarily consumed by machines, however, and its primary purpose is the functionality that it provides. That sets it apart from other slop.

@hanshuebner @flooper And the assumption that it's OK to build high rise apartments from paper mache, which is what I'm being asked to swallow, is not OK.

And the fact that we have a sophisticated machine for patching together buildings from recycled concrete slabs patched together with paper mache - carefully concealed where possible, or skillfully painted with stucco where necessary - makes it worse, not better.

Even if they do stand up for a little while before they collapse.

@jmax @flooper To stay in that analogy: If you, the developer, ask the LLM to create a high-rise out of paper mache, it'll gladly do so. It is your job as the software developer to create the architecture.

As the old adage goes: You can write bad FORTRAN in any language.

@hanshuebner @flooper Your analogy is inaccurate.

A similar, much better one, would be if I asked for a high rise in compliance with building codes and good civil engineering practice, and I got a building crafted from rubble and paper mache, carefully designed to only one standard:
Meet and pass the precise tests, in the precise locations, that will be used to certify the building for occupancy.

Those tests assume competence and skill on the part of the designing engineers.

@hanshuebner @flooper They are not designed to catch engineers who simply don't give a shit.

@hanshuebner @flooper And if you think your job (when using LLMs) is anything other than taking the blame, you're delusional.

I will accept blame for my mistakes. I will not accept blame for the problems introduced by a crappy tool forced on me.

@hanshuebner @jmax We can agree that the primary purpose of a software system is the functionality it provides, but "Code is primarily consumed by machines" is not fully true.

Code is RUN by machines, but it is still, and will be for a long time, primarily consumed, as in written, read and changed, by humans (https://www.goodreads.com/quotes/6341736-any-fool-can-write-code-that-a-computer-can-understand)

And code that cannot be changed will eventually stop being useful (Lehman's 1st law of software evolution: https://en.wikipedia.org/wiki/Lehman%27s_laws_of_software_evolution)

A quote by Martin Fowler

Any fool can write code that a computer can understand. Good programmers write code that humans can understand.
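Fowler's point can be made concrete with a small sketch. Both functions below (names invented for illustration) compute the same result, so a machine "understands" them equally well; only the second tells a human reader what it is for.

```python
def f(x):
    # Opaque: correct, but the intent is invisible to a human reader.
    return sum(i * i for i in x if not i % 2)

def sum_of_squares_of_evens(numbers):
    """Return the sum of the squares of the even numbers in `numbers`."""
    evens = [n for n in numbers if n % 2 == 0]
    return sum(e * e for e in evens)

# Identical behavior: 2*2 + 4*4 = 20
assert f([1, 2, 3, 4]) == sum_of_squares_of_evens([1, 2, 3, 4]) == 20
```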

@flooper I don't see how Fowler's quote asserts that code "will still be consumed [...] by humans". Nothing in that quote says that it would, and from my experience, there is no need to actually read LLM-written code in order to ensure that it works.

You seem to want to imply that LLM written code can only function if a human has read and approved it, which is not true. But maybe I misunderstood.

You also seem to want to imply that LLM code is not maintainable. What makes you think so?

AI Coding Assistants Boost Velocity, But Worsen Technical Debt | Hao He posted on the topic | LinkedIn

Excited to share our new research accepted at MSR'26! With the hype around AI coding assistants claiming 10x productivity gains, we wanted to know: what does the empirical evidence actually show?

What we did:
- Analyzed ~800 GitHub projects adopting Cursor vs. matched controls
- Used difference-in-differences + panel GMM for causal inference
- Tracked velocity and quality metrics over 18+ months

What we found:
- The good: Cursor adoption leads to a significant velocity spike, up to 3-5x more lines added in the first month
- The catch: gains dissipate after ~2 months
- The concern: persistent increases in technical debt (+30% static analysis warnings) and code complexity (+41%), and this accumulated debt slows down future development

The takeaway: AI coding tools deliver real short-term gains, but without scaling quality assurance alongside velocity, teams risk a self-reinforcing cycle of technical debt. This doesn't mean we should stop using AI, but we should use AI wisely. Quality assurance needs to be a first-class citizen in AI-driven development workflows.

Replication package: https://lnkd.in/ebrZ8eTz
Preprint: https://lnkd.in/eQNg2XJD

Grateful to my amazing co-authors Courtney Miller, Shyam Agarwal, Christian Kästner, and Bogdan Vasilescu!
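Difference-in-differences, the causal-inference method the study mentions, can be sketched in a few lines. The numbers here are synthetic and purely illustrative, not taken from the paper: the idea is to compare the before/after change in a treated group against the same change in a control group, and attribute the gap to the treatment (under the parallel-trends assumption).

```python
# Synthetic example of a difference-in-differences estimate.
# Suppose we measure some velocity metric before and after tool adoption.
treated_before, treated_after = 100.0, 180.0  # projects that adopted the tool
control_before, control_after = 100.0, 120.0  # matched projects that did not

# The control group's change estimates the trend that would have happened
# anyway; subtracting it isolates the effect attributable to adoption.
did_estimate = (treated_after - treated_before) - (control_after - control_before)
assert did_estimate == 60.0
```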


@flooper Now that you could not explain why what you wrote before (Fowler says writing code well for humans to read is hard) proves your claim that only code read by humans can be good, you come up with a single study that claims LLM coding agents are harmful because Cursor did not work well in one sample set.

I'm not going to be able to challenge your beliefs, but fortunately I don't need to.

@hanshuebner What I meant is that, for now and with the evidence we have, our ability to maintain and evolve a software system still depends on humans being able to do it.

You are right, a single paper means nothing, but evidence is built up by the accumulated findings of many studies like this one.

I can pull up others, but the paper I referenced is relevant because (a) it's quite recent, and (b) it's published in the current top conference on empirical software engineering.

@flooper I'm going off my own experience, which is some 40 years as a programmer and 1.5 years of working with LLMs. From that, I can only conclude that code writing is over. I don't do it unless I want to, but it is not economical and there is no tangible benefit to doing so.

This does not mean that studies which find LLM projects to have quality issues are invalid. The reason, though, is not that LLMs cannot write code, but that software engineering with LLMs requires different processes.

@hanshuebner Indeed. Your experience checks out and is aligned with the bigger picture we are seeing. Good programmers and high-performing organisations, those that were already doing great without LLM-based gen AI, are seeing some performance boost. But for the rest, using these tools is resulting in no, or even negative, impact. Funny, this means that widening the adoption of proven, effective software engineering practices is more positively impactful than using gen AI.

@flooper In the end, it is the outcomes that matter. Who cares whether human teams would have created better outcomes if they had adopted good software engineering practices, when machine-generated solutions fit the requirements and do the work cheaper? Who, other than the people that are out of a job now, of course.

We programmers thought we were special and needed, and now we are not. Or maybe it is just a matter of us adapting to the new tools.