It's clear that AI-assisted coding is dividing developers (welcome to the culture wars!). I've seen a few blog posts now that talk about how some people just "love the craft", "delight in making something just right, like knitting", etc., as opposed to people who just "want to make it work". As if that explains the divide.

How about this, some people resent the notion of being a babysitter to a stochastic token machine, hastening their own cognitive decline. Some people resent paying rent to a handful of US companies, all coming directly out of the TESCREAL human extinction cult, to be able to write software. Some people resent the "worse is better" steady decline of software quality over the past two decades, now supercharged. Some people resent that the hegemonic computing ecosystem is entirely shaped by the logic of venture capital. Some people hate that the digital commons is walled off and sold back to us. Oh and I guess some people also don't like the thought of making coding several orders of magnitude more energy intensive during a climate emergency.

But sure, no, it's really because we mourn the loss of our hobby.

@plexus In the end, software engineering is about creating solutions to problems other people have. The solutions are not a byproduct, but the primary purpose. To the majority of users, the inner workings and the creation process of software is opaque. The qualities that software exposes on the outside are largely independent of its inner workings.

This means that for most people in the software industry, adapting to the new tooling that makes the creation process more efficient is 1/

@plexus not a matter of choice, or resentment. The market for "human crafted" software will be small, much smaller than the market for software that is cheap and does what users "want".

It is clear that the hidden costs of LLM generated software are huge, but these costs are not going to be realised at the point of creation.

This mechanism is the same for many aspects of capitalism. Opting out of one thing won't fix the system; it is just a gesture. 2/2

@hanshuebner @plexus

You are stating a lot of assumptions:

- That the qualities that software exposes on the outside are largely independent of its inner workings.

- That LLMs make the creation process more efficient.

- That LLM-generated software is cheap and does what users “want.”

- That fixing one thing is not worthwhile while other things are not fixed.

But:

- Inner quality does matter a lot. E.g., JIRA receives a lot of complaints because it is not well designed internally.

@hanshuebner @plexus


- LLMs generate straw-fire software. It flares up at first, but it isn't even hot enough to start a real fire.

- It seems cheap in a very short-term view, and it might satisfy short-term "wants", but it's not sustainable.

- We need to start fixing somewhere. Two holes in a bucket are not a dilemma, but two tasks.

@Ardubal @hanshuebner @plexus "Move fast and break things" has been one of the worst motivators of our time.

@toerror @Ardubal @hanshuebner @plexus

> adapting to the new tooling that makes the creation process more efficient

That's a HELL of an assumption, indeed.

And a main talking point among AI propagandists, actually.

An "efficiency" that is never clearly defined or measured, but always presumed.

See, I can produce a lot of shit! In a fast manner!!
Without using my hands, or my brain!
Won't you eat it? Too bad, you're gonna be left behind.

Please, I implore, leave me behind.

@gregrorio What is your point? What am I "assuming" when I write that in my experience, using LLMs to write software is more efficient than me writing it myself? Do you have any contribution to the conversation to make, or are you just venting?

@hanshuebner Oh, I do. If you are interested, Pavel has some very good points about the complexity and common misconceptions in understanding "efficiency" and "productivity", you may find it interesting: https://productpicnic.beehiiv.com/p/checking-an-llm-s-work-is-a-systemic-not-an-individual-problem

Basically, we usually think about it in terms of velocity of output, neglecting aspects like maintainability and purpose.

Sum this with the ethical, social and environmental implications and we have very good reasons to refuse the slop agenda. Not just a gesture.


@gregrorio Your criticism of LLMs seems to be rooted in how they were created and who runs them. Those are valid points, but claiming that, as a result, the output of LLMs is unfit for any purpose, is just nonsense. Repeating that, making such claims in an angry or declarative tone, or telling anyone who disagrees that they're peddling propaganda is intellectual garbage.

If you can reject using LLM technology because you are not in a field of intellectual work, consider yourself lucky.

@hanshuebner
> LLMs is unfit for any purpose

Yes, you got it right.

But I and others in this same thread have also argued that the AI bros' claims that these tools are powerful, efficient, productive, and inevitable are full of shit.

If you see yourself obligated to use it, then use it, I won't judge you. I may see myself in that position in the future as well.

But what you are doing here is more than that: you're making a call for surrender. No thank you.

@gregrorio I don't call for surrender, but in any case: it is difficult to make ethical choices when you live in a world in which virtually all products are created in an unethical fashion. From that perspective, dismissing LLMs is an understandable choice.

Claiming, however, that what is being created with their help has inherent flaws, is "worse" than what humans create, and will eventually stop working is just not a viable position.