It's clear that AI-assisted coding is dividing developers (welcome to the culture wars!). I've seen a few blog posts now arguing that some people just "love the craft", "delight in making something just right, like knitting", etc., as opposed to people who just "want to make it work". As if that explains the divide.

How about this: some people resent the notion of being a babysitter to a stochastic token machine, hastening their own cognitive decline. Some people resent paying rent to a handful of US companies, all coming directly out of the TESCREAL human extinction cult, to be able to write software. Some people resent the "worse is better" steady decline of software quality over the past two decades, now supercharged. Some people resent that the hegemonic computing ecosystem is entirely shaped by the logic of venture capital. Some people hate that the digital commons is walled off and sold back to us. Oh, and I guess some people also don't like the thought of making coding several orders of magnitude more energy-intensive during a climate emergency.

But sure, no, it's really because we mourn the loss of our hobby.

@plexus In the end, software engineering is about creating solutions to problems other people have. The solutions are not a byproduct, but the primary purpose. To the majority of users, the inner workings and the creation process of software are opaque. The qualities that software exposes on the outside are largely independent of its inner workings.

This means that for most people in the software industry, adapting to the new tooling that makes the creation process more efficient is 1/

Hans, except in the modern software industry, the problems that are being solved by software products are not those of the end users, but instead those of the company that makes it or its investors. You can't explain all the humiliatingly hostile UX decisions of the last decade of software otherwise. No user problems are being solved by onboardings that get in your damn way when you want to use the app for its one and only purpose in a hurry.

@grishka Right on. And then consider that with the traditional mode of writing software, the cost of creating something good is very high.

I'd argue that with faster (machine-assisted) software creation, it is easier to meet the needs of users because the cost of change is drastically reduced. I'm experiencing that with the systems that I'm currently writing that way.

The whole argument that software written by humans is better carries no weight with me.

@hanshuebner What does "software is better" even mean in this context?

I wonder if this entire "LLM-generated code is good enough and its creation is much more efficient" argument will stand the test of time once a lot of code is generated on the same product/project by many people. We do not know the answer to this yet.
@grishka

Holger, as far as I understand the capabilities of LLMs, they only really produce a passable result when given a blank slate and when the task at hand is some variation of gluing libraries and/or REST APIs together.
@grishka @schaueho @plexus Just try it yourself on something that you think it cannot succeed at. Happy to share a €10 Claude Code pass if you need one.
Hans, thanks, but I'm not looking to change my workflow at this time. I'm fully satisfied with it as it is now.
@grishka OK, but then be aware that your opinions will just be based on propaganda. I'd rather know what I'm talking about.

Hans, I understand the working principles of LLMs. I don't need to have used one for writing code (I did poke at ChatGPT and DeepSeek a bit out of curiosity) to know that I don't need it. I don't have the problems that they claim to solve. My bottleneck isn't typing the code into the editor, it's the very kind of abstract thinking that LLMs are incapable of by virtue of what they are. I ask a lot of questions, both to myself and to other people, before I write a single line of code.

Besides, I prefer my tools to be 100% deterministic, predictable, and knowable. LLMs are anything but. They are designed to give varied, statistically likely output; there's a sampling step at the end that deliberately applies a bit of randomness when picking which of the most likely next tokens is used for output.
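(For readers unfamiliar with that final sampling step: here is a minimal sketch of temperature sampling, one common form of the randomized token selection described above. The vocabulary and scores are made up for illustration; real models sample over tens of thousands of tokens and often add further tricks like top-k or nucleus filtering.)

```python
import math
import random

def sample_token(logits, temperature=0.8, seed=None):
    """Pick the next token from raw model scores (logits).

    Dividing by temperature before the softmax sharpens (T < 1) or
    flattens (T > 1) the distribution; the final weighted random draw
    is the deliberate randomness referred to above. As temperature
    approaches 0, this approaches a deterministic argmax.
    """
    rng = random.Random(seed)
    scaled = [score / temperature for score in logits.values()]
    m = max(scaled)  # subtract max before exp() for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    tokens = list(logits.keys())
    return rng.choices(tokens, weights=weights, k=1)[0]

# Hypothetical scores, purely for illustration -- not real model output.
logits = {"the": 4.0, "a": 3.5, "banana": 0.1}
print(sample_token(logits, temperature=0.8, seed=1))
```

Run it twice with different seeds (or none) and you can get different tokens from identical input, which is exactly the non-determinism at issue.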

@grishka @plexus Just keep watching then.
Hans, of course I will. There will be industrial amounts of gloat coming from me when the AI bubble pops.