It's clear that AI assisted coding is dividing developers (welcome to the culture wars!). I've seen a few blog posts now that talk about how some people just "love the craft", "delight in making something just right, like knitting", etc, as opposed to people who just "want to make it work". As if that explains the divide.

How about this, some people resent the notion of being a babysitter to a stochastic token machine, hastening their own cognitive decline. Some people resent paying rent to a handful of US companies, all coming directly out of the TESCREAL human extinction cult, to be able to write software. Some people resent the "worse is better" steady decline of software quality over the past two decades, now supercharged. Some people resent that the hegemonic computing ecosystem is entirely shaped by the logic of venture capital. Some people hate that the digital commons is walled off and sold back to us. Oh and I guess some people also don't like the thought of making coding several orders of magnitude more energy intensive during a climate emergency.

But sure, no, it's really because we mourn the loss of our hobby.

@plexus In the end, software engineering is about creating solutions to problems other people have. The solutions are not a byproduct, but the primary purpose. To the majority of users, the inner workings and the creation process of software is opaque. The qualities that software exposes on the outside are largely independent of its inner workings.

This means that for most people in the software industry, adapting to the new tooling that makes the creation process more efficient is 1/

@plexus not a matter of choice, or resentment. The market for "human-crafted" software will be small, much smaller than the market for software that is cheap and does what users "want".

It is clear that the hidden costs of LLM generated software are huge, but these costs are not going to be realised at the point of creation.

This mechanism is the same for many aspects of capitalism. Opting out of one thing won't fix the system; it is just a gesture. 2/2

@hanshuebner @plexus yes and no. It's a systemic problem that needs a fix on a regulatory scale. But enough individual devs opting out can also make a difference. Furthermore, regulation is done by politics, which in the end is the sum of the votes and voices of the people.

@can @plexus It is a personal choice to frame it that way, if you can afford it. For the majority of developers, it is a question of adapting or dropping out of the industry.

Humanity went through this process a couple of times now, and every industrial cycle left those who were made redundant by new technologies with the same choice.

Social change is possible, but our class - workers of the software industry - is not going to spark the next revolution, I fear.

@hanshuebner @plexus if you are against how big tech is pushing LLMs, but you can't make any changes because you will lose your job and don't have any alternative, you can keep using it at your job but e.g. pressure your elected officials to improve regulation of big tech with regard to LLMs.
@hanshuebner @can seriously Hans, I am in no mood for this. Yes, the force of capital is overwhelming, and there's little that a bubble of old timers on the fediverse is going to do about it. We're all going to have to reckon with that and figure out what choices we have left. That's life under capitalism. The least we can do is speak our truth, and call things out for what they really are. At least we won't feel like we're the only ones who think this shit sucks, or who see it for what it really is. There's a reason I talk about hegemony. The defeatism only hastens the process.

@plexus @can I don't actually think this shit sucks. Things are not that easy. Mind you, computers are themselves a product of the military-industrial complex, and we were just lucky to be far enough away from WWII and the Manhattan Project that we could ignore and forget how all this stuff came to fruition in the first place.

There is no alternative to taking the world as it is when working on social change, though. It does not seem like a successful strategy to opt out of the technology everyone 1/

@plexus @can else and our enemies use. You don't go to a gun fight with a knife as a weapon, even if you believe that guns should not exist in the first place.
@hanshuebner @plexus I don't know what your point is?
@can @plexus Sorry. I'm not great at words.
@hanshuebner @can @plexus Actually, you are doing great putting my exact feelings into words. Thanks for that!
@hanshuebner @plexus I think we all agree that this shit sucks and many of us are familiar with the history of modern computing. I disagree that workers of the software industry can't spark change. We are probably the most privileged of the working class. So I would even argue it's our duty to do something with this privilege...

@can @hanshuebner @plexus
> on a regulatory scale

You can't "regulate" anymore.

The one-hour income of the business to be regulated surpasses the sum* of the whole-life earnings of the fewer than a thousand politicians who might regulate it.

*Sans the side-channel payments from the unregulated.

@ohir @can @plexus Is it worth fighting for a world where regulation is possible? Or do we just need to succumb to the all-encompassing power of the economy and capitalism?
@hanshuebner @can @plexus
> Is it worth fighting for a [just] world
https://www.youtube.com/watch?v=QFu0o8NB5Io
Daniel Kahn & The Painted Bird - "Freedom Is A Verb" (official video)

@hanshuebner I didn't say anything about fixing the system, I only talked about the resentment, which is real.
@plexus I can relate to that.

@hanshuebner @plexus

You are stating a lot of assumptions:

- That the qualities that software exposes on the outside are largely independent of its inner workings.

- That LLMs make the creation process more efficient.

- That LLM-generated software is cheap and does what users “want.”

- That fixing one thing is not worthwhile while other things are not fixed.

But:

- Inner quality does matter a lot. E.g., JIRA receives a lot of complaints because it is not well designed internally.

@hanshuebner @plexus


- LLMs generate straw-fire software. It seems to burn at first, but it's not even hot enough to start a real fire.

- This seems cheap in a very short-term view, and it might satisfy short-term “wants”, but it's not sustainable.

- We need to start fixing somewhere. Two holes in a bucket are not a dilemma, but two tasks.

@Ardubal @hanshuebner @plexus "Move fast and break things" has been one of the worst motivators of our time.
@Ardubal @plexus I am not stating "assumptions", but "opinions" and you are entitled to yours, which I don't agree with.
@hanshuebner @plexus
"The qualities that software exposes on the outside are largely independent of its inner workings." Sorry, but this couldn't be further from the truth. Our 70+ years' pile of empirical evidence says otherwise. The whole history of software engineering is about how to manage and improve internal quality in order to achieve good external quality.

@flooper @plexus You can certainly define "quality" so that what you wrote is true. I know of enough "successful" software that was "successful" without having "good quality" on the inside. "Success" is something that many people would associate with "quality", so there you have the definition that I was talking about.

I believe that discussions around quality that don't consider users are worthless. The connection between external and internal quality is less tight than some make it appear.

@hanshuebner @flooper I explicitly called out Worse is Better, which is exactly what you are talking about. The original formulation was that Unix "won" because it was "worse", it was simpler, easier to port, etc. That whole dogma has morphed over time. During the SaaS boom worse-is-better meant ship MVPs to capture market and lock in users. Now that we're in the enshittify stage it means "drop quality and raise prices as much as the user will bear before churning", enabled by platform lock in. So yes, for some capitalist notion this is winning, it's certainly extracting value. It's a notion I wholeheartedly reject.

@plexus @flooper "Worse is better" is not a dogma, it is a thesis and an interpretation of history, which can be read in different ways. It was originally framed in the context of Unix and how it was worse than other systems. These other systems were, e.g., Multics, VAX/VMS, VM/370 or Genera, and much of the resentment of the applauding audience came from habit, arrogance and hubris.

In that context, it can also be argued that Unix was better than these other systems, strictly because of its 1/

@plexus @flooper simplicity. And simplicity has become a primary quality in recent years, as you know.

This teaches us that resentment toward technology within the technology field is very much bound to the time period in which it occurs, and to common habits.

It is tempting to interleave social and technological critique, but I'd argue that it often does not lead to a very focused conversation. 2/2

@hanshuebner @plexus @flooper

Yes, »worse is better« morphed from /description/ to /prescription/. (There is a nice talk by Romeu Moura about this fallacy: https://www.youtube.com/watch?v=92Pq4-e0QyI)

In short: people erroneously move from »it's like this« to »it should be like this« or »it's inevitable like this«, and then enshrine it as a given fact, assumption or axiom instead of asking what can be done about it.

Why do hotel bathrooms lack toothpaste - Romeu Moura - DDD Europe 2019

@flooper @hanshuebner @plexus This. The fact that so many people believe otherwise doesn't make it true, and we will suffer the consequences of that stupid ideology.
@jmax @flooper @plexus I don't believe that "getting stuff done" is an ideology, but rather the reality under which every worker lives in capitalism. We're not getting paid for doing the right or the good thing, we're paid for getting the work done that the man wants us to do.
@hanshuebner @jmax @flooper can I please be untagged from this thread? thanks!

@hanshuebner @flooper @plexus And if your view of the world begins and ends with making money, as I admit is capitalist dogma, fair enough.

But producing code with LLMs - or using them for anything which needs to be correct - is deception (whether you're deceiving yourself or others) on a massive scale, on a par with crypto, Ponzi schemes, climate denial, etc.

(1/2)

@jmax @flooper @plexus I'm not sure how you feed yourself and your kids. Maybe you are rich and don't have to worry about that. I'm not all that privileged.

@hanshuebner @flooper @plexus I work for a living and try to avoid dishonesty while doing so.

Since I understand that LLMs are fundamentally and inherently dishonest, that doesn't leave much wiggle room for me.

@jmax @flooper Machines don't have a concept of honesty, but I think I know what you mean. Thank you for participating in this exchange!
@hanshuebner @flooper Yes. But useful tools are those machines which do have honesty, in a mechanical sense.

@hanshuebner @flooper

Anthropomorphizing them (as many do, but I don't think you are) is a flawed view, but does provide one useful insight.

If one treats an LLM as a person, then the fundamental issue is:

They are a bullshit artist with a huge library. They do not have competence at anything except bullshitting, at which they are superb.

I agree that it's amazing that we can build a mechanical bullshit generator that's good enough to bypass most people's defenses.

@jmax @flooper I think I'm with you. The difficult part of LLMs for code generation for me is that the bullshit is executable. I can and do dismiss AI "prose", "art" and "music" easily because it is devoid of what makes me want to consume the thing in the first place. Code is primarily consumed by machines, however, and its primary purpose is the functionality that it provides. That sets it apart from other slop.

@hanshuebner @flooper And the assumption that it's OK to build high rise apartments from paper mache, which is what I'm being asked to swallow, is not OK.

And the fact that we have a sophisticated machine for patching together buildings from recycled concrete slabs patched together with paper mache - carefully concealed where possible, or skillfully painted with stucco where necessary - makes it worse, not better.

Even if they do stand up for a little while before they collapse.

@jmax @flooper To stay in that analogy: If you, the developer, ask the LLM to create a high-rise out of papier-mâché, it'll gladly do so. It is your job as the software developer to create the architecture.

As the old adage goes: You can write bad FORTRAN in any language.

Hans, except in the modern software industry, the problems that are being solved by software products are not those of the end users, but instead those of the company that makes it or its investors. You can't explain all the humiliatingly hostile UX decisions of the last decade of software otherwise. No user problems are being solved by onboardings that get in your damn way when you want to use the app for its one and only purpose in a hurry.

@grishka Right on, and then consider that with the traditional mode of writing software, the cost of creating something that is good is very high.

I'd argue that with faster (machine-assisted) software creation, it is easier to meet the needs of users because the cost of change is drastically reduced. I'm experiencing that with those systems that I'm currently writing that way.

The whole argument that software written by humans is better carries no merit for me.

@grishka It is basically the same argument that old-school programmers have been making for decades whenever a new tool comes to market.

@hanshuebner @grishka

"the cost of change is drastically reduced"

Only because the true costs are either being externalised or hidden by vast amounts of circular investments.

When the bubble pops and the bill comes due, we'll see how much the costs were actually reduced.

Oh. No, we won't. Because the too-big-to-fail companies will get bailed out by the tax payers. Again.

@rogerlipscombe @grishka I'm not trying to debate that.

Look at the Internet: its costs were never paid by its users directly, but rather by ad revenue. It happened with networking, and it will happen with compute as well.

@hanshuebner What does "software is better" even mean in this context?

I wonder if this entire "LLM-generated code is good enough and its creation is much more efficient" argument will stand the test of time when a lot of code is generated on the same product / project by many people. We do not know the answer to this yet.
@grishka

Holger, as far as I understand the capabilities of LLMs, they only really produce a passable result when given a blank slate and the task at hand is some variation of gluing some libraries and/or REST APIs together.
@grishka @schaueho @plexus Just try it yourself on something that you think it cannot succeed at. Happy to share a €10 Claude Code pass if you need one.
@hanshuebner Thanks but I guess I will see the effects of using LLMs over a longer period of time on a bigger codebase at work anyway. I'm actually more concerned about the effects on the developers, which brings us back to Arne's original toot.

@schaueho @grishka @plexus I believe it is mostly a learning challenge. It was always possible and common to write bad and good software, and with LLMs generating code, new ways will need to be developed to ensure quality. This is the systemic part.

The personal part is that for some developers, their development activity changes. Merely writing code will not be a very common job for humans. Focus will be more on architecture, feature definition, requirements engineering etc.

Hans, thanks but I'm not looking to change my workflow at this time. I'm fully satisfied with it as is now.
@grishka OK, but then be aware that your opinions will just be based on propaganda. I'd rather know what I'm talking about.

Hans, I understand the working principles of LLMs. I don't need to have used one for writing code (I did poke at ChatGPT and DeepSeek a bit out of curiosity) to know that I don't need it. I don't have the problems that they claim to solve. My bottleneck isn't typing the code into the editor, it's the very kind of abstract thinking that LLMs are incapable of by virtue of what they are. I ask a lot of questions, both to myself and to other people, before I write a single line of code.

Besides, I prefer my tools to be 100% deterministic, predictable, and knowable. LLMs are anything but. They are designed to give varied, statistically likely output; there's a step at the end that deliberately applies a bit of randomness when picking which of the most likely next tokens is used for output.
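That final randomness step can be sketched in a few lines. This is a generic illustration of temperature sampling, not any particular model's implementation; the logits, vocabulary size, and temperature values are made up:

```python
# Minimal sketch of temperature sampling -- the "bit of randomness"
# at the end of LLM decoding. Purely illustrative: the logits and
# temperature values here are invented for the example.
import math
import random

def sample_next_token(logits, temperature=0.8, rng=random):
    """Pick a token index from raw logits using temperature sampling."""
    # Lower temperature sharpens the distribution (more deterministic);
    # higher temperature flattens it (more varied output).
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax turns scaled logits into probabilities.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index at random, weighted by those probabilities.
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

random.seed(0)                     # fixed seed only so the demo is repeatable
logits = [2.0, 1.0, 0.1]           # pretend scores for a 3-token vocabulary
counts = [0, 0, 0]
for _ in range(1000):
    counts[sample_next_token(logits, temperature=1.0)] += 1
# The most likely token wins most often, but the others get picked too --
# which is exactly why the same prompt can yield different code each run.
```

As temperature approaches zero the sampler collapses to always picking the single most likely token; real deployments usually keep it above zero precisely to get that varied output.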

@grishka @plexus Just keep watching then.
Hans, of course I will. There will be industrial amounts of gloat coming from me when the AI bubble pops.

@hanshuebner @grishka

I have, and it failed to complete the task AND "lied" to me at the same time.

Boyd Stephen Smith Jr.:
you: your shit still doesn't work after 5 attempts
AI: You're absolutely right! *Proceeds to delete your entire home directory*

@BoydStephenSmithJr @grishka If you have the expectation that it should complete the task flawlessly and point out that it "lied", it seems that you have achieved your goal of showing that it did not work for you.

I've had many successes, and none of the things that I created magically collapsed or failed to work except under narrow circumstances. I had to spend time creating and improving them, but I would not have started them if I'd needed to write the code myself.

@schaueho @hanshuebner @grishka

1/2

> "LLM-generated code is good enough and its creation is much more efficient" argument will stand the test of time when a lot of code is generated on the same product / project by many people. We do not know the answer to this yet.

I do suspect that some of the divide we see in the debate on Mastodon relates to the fact that some of the people arguing against it have not used LLMs to assist in writing Very Good Software At Scale using the methodologies available today.

I ship software to ~6 million monthly active users, with confidence, using Claude Code. I haven't written code by hand in ~10 months.

So, to the "We do not know the answer to this yet.", I think that we do.

We know that LLMs, used naively, make mistakes. And a craftsperson who knows the limitation of their tools (LLMs) can mitigate and verify in a number of ways.

Concession: LLMs were not ethically trained. Data centers are having an awful impact on the energy grid + water use. I will never begrudge a person's choice to boycott.

Counterpoint: today, LLMs running on e.g. Apple Silicon approach the performance of the SOTA models. We're gonna see more of this, which will mitigate the individual's environmental impact as well as the need to pay forever-rent to big tech.

@schaueho @hanshuebner @grishka

I will never begrudge a person's decision to boycott LLM usage.

But I do grow weary of folk on Mastodon earnestly insisting that "the flaws in LLMs will somehow all be laid bare, and handcrafted, artisanal code is somehow inherently superior"

Y'all cheering for John Henry without understanding that this is a job that's actually very well suited for a machine.

https://en.wikipedia.org/wiki/John_Henry_(folklore)


@dusk
OT, but cool John Henry video: https://www.youtube.com/watch?v=kt9NSMZR0dM
JOHN HENRY AND THE RAILROAD | Omeleto
