It's clear that AI-assisted coding is dividing developers (welcome to the culture wars!). I've seen a few blog posts now that talk about how some people just "love the craft", "delight in making something just right, like knitting", etc, as opposed to people who just "want to make it work". As if that explains the divide.

How about this, some people resent the notion of being a babysitter to a stochastic token machine, hastening their own cognitive decline. Some people resent paying rent to a handful of US companies, all coming directly out of the TESCREAL human extinction cult, to be able to write software. Some people resent the "worse is better" steady decline of software quality over the past two decades, now supercharged. Some people resent that the hegemonic computing ecosystem is entirely shaped by the logic of venture capital. Some people hate that the digital commons is walled off and sold back to us. Oh and I guess some people also don't like the thought of making coding several orders of magnitude more energy intensive during a climate emergency.

But sure, no, it's really because we mourn the loss of our hobby.

@plexus In the end, software engineering is about creating solutions to problems other people have. The solutions are not a byproduct, but the primary purpose. To the majority of users, the inner workings and the creation process of software are opaque. The qualities that software exposes on the outside are largely independent of its inner workings.

This means that for most people in the software industry, adapting to the new tooling that makes the creation process more efficient is 1/

Hans, except in the modern software industry, the problems that are being solved by software products are not those of the end users, but instead those of the company that makes it or its investors. You can't explain all the humiliatingly hostile UX decisions of the last decade of software otherwise. No user problems are being solved by onboardings that get in your damn way when you want to use the app for its one and only purpose in a hurry.

@grishka Right on, and then consider that with the traditional mode of writing software, the cost of creating something that is good is very high.

I'd argue that with faster (machine-assisted) software creation, it is easier to meet the needs of users because the cost of change is drastically reduced. I'm experiencing that with the systems that I'm currently writing that way.

The whole argument that software written by humans is better carries no weight with me.

@hanshuebner What does "software is better" even mean in this context?

I wonder if this entire "LLM-generated code is good enough and its creation is much more efficient" argument will stand the test of time once a lot of code is generated on the same product / project by many people. We do not know the answer to this yet.
@grishka

Holger, as far as I understand the capabilities of LLMs, they only really produce a passable result when given a blank slate and the task at hand is some variation of gluing some libraries and/or REST APIs together.
@grishka @schaueho @plexus Just try it yourself on something that you think it cannot succeed at. Happy to share a €10 Claude Code pass if you need one.
@hanshuebner Thanks but I guess I will see the effects of using LLMs over a longer period of time on a bigger codebase at work anyway. I'm actually more concerned about the effects on the developers, which brings us back to Arne's original toot.
@grishka @plexus

@schaueho @grishka @plexus I believe it is mostly a learning challenge. It has always been possible and common to write both good and bad software, and with LLMs generating code, new ways will need to be developed to ensure quality. This is the systemic part.

The personal part is that for some developers, their development activity changes. Merely writing code will not be a very common job for humans. Focus will shift more to architecture, feature definition, requirements engineering, etc.

Hans, thanks, but I'm not looking to change my workflow at this time. I'm fully satisfied with it as it is now.
@grishka OK, but then be aware that your opinions will just be based on propaganda. I'd rather know what I'm talking about.

Hans, I understand the working principles of LLMs. I don't need to have used one for writing code (I did poke at ChatGPT and DeepSeek a bit out of curiosity) to know that I don't need it. I don't have the problems that they claim to solve. My bottleneck isn't typing the code into the editor, it's the very kind of abstract thinking that LLMs are incapable of by virtue of what they are. I ask a lot of questions, both to myself and to other people, before I write a single line of code.

Besides, I prefer my tools to be 100% deterministic, predictable, and knowable. LLMs are anything but. They are designed to give varied, statistically likely output: there's a sampling step at the end that deliberately applies a bit of randomness when picking which of the most likely next tokens is used for output.
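That sampling step can be sketched in a few lines. This is a minimal illustration of temperature-based sampling in general, not any particular model's implementation; the function name and the toy scores are made up for the example:

```python
import math
import random

def sample_next_token(logits, temperature=0.8):
    """Pick the index of the next token from raw model scores ("logits").

    The scores are rescaled by the temperature and turned into a
    probability distribution (softmax); the final random draw is the
    deliberate randomness described above: repeated calls with the
    same input can return different tokens.
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numeric stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random draw over the token indices
    return random.choices(range(len(logits)), weights=probs, k=1)[0]
```

With a temperature close to zero the most likely token almost always wins, so output looks deterministic; at higher temperatures the distribution flattens and the same prompt yields varied continuations.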

@grishka @plexus Just keep watching then.
Hans, of course I will. There will be industrial amounts of gloat coming from me when the AI bubble pops.

@hanshuebner @grishka

I have, and it failed to complete the task AND "lied" to me at the same time.

Boyd Stephen Smith Jr., you: "your shit still doesn't work after 5 attempts"
AI: "You're absolutely right!" Proceeds to delete your entire home directory

@BoydStephenSmithJr @grishka If you have the expectation that it should complete the task flawlessly and point out that it "lied", it seems that you have achieved your goal of showing that it did not work for you.

I've had many successes, and none of the things that I created magically collapsed or failed to work except under narrow circumstances. I had to spend time creating and improving them, but I would not have started them if I'd needed to write the code myself.

@hanshuebner You really shouldn't let your experience overwhelm evidence collected more scientifically. That's especially true of generative AI where the training is almost guaranteed to produce a system that exploits and amplifies the reader's biases. (Science is one of the ways we attempt to control our biases.)

Also, my experience is far from unique: https://www.grumpygamer.com/my_dinner_with_ai/ (my actual experience was shorter, but with worse results)

Also, I'm not glad you required a system to steal from authors in order to start something. It would have been better if you collaborated with authors in your community. That might involve payment, but my experience is that a lot of good programmers enjoy one-on-one "mentoring" / "apprenticeship" collaborations enough to do them for free. (Designing a curriculum and dealing with a room "full" of students is very different.)
