The most surreal thing about AI coding shit taking off is the revelation that so many people who do this thing that I love seem to have no care for the craft at all. Even people who I would have pointed at, years ago, as those who clearly care. And I know it has always been Just A Job for many people, but holy shit, do you even care a little bit?
We humans are not merely bad at it; some people have been doing the work with no desire to be good at it in the first place.
If you don’t like programming, I invite you to do something else with your time instead of promoting the machine that lets you not do it. Some people actually like it, you jerks.
I know a non-trivial number of people who cannot code, but who use AI to make programs. They get excited about being able to create. And I find that great.

Their annoying mistake is assuming that, since AI enabled them to go from being unable to being able to make programs, it will make a developer who already can code better at making programs.

It's like offering a wheelchair to somebody who walks: just because it allows somebody with broken legs to move, it does not make someone who can walk better at walking. But it has a place in aiding those who cannot move without it.

@michael Thing is, it doesn't make you "able to create."
It provides a convenient way of getting the thing.

This, to me, is the second most telling delusion around these things, after seeing them as sentient.
@pikhq

It absolutely does. It does not allow you to create the same way a developer does, but it allows creating anyway. Coding is a small part of software development: design and analysis are a much larger part for most projects, and while LLMs can be used to spar about that (to some more or less meaningful extent), you still need to do it.

Saying code made using an LLM isn't legitimate is just as meaningful as saying that code made using a high-level language is less legitimate than code made in assembler.

@michael @pikhq Sorry, but I'm not buying this argument.

"Coding is a small part of software development: design and analysis is a much larger part for most projects" - with this, I do agree.

However, if you sub out your coding to another person by saying "I need a chunk of code here that fits these parameters," surely you wouldn't argue that you personally created it.
The difference is only that you're subbing out to a (deeply problematic in an impressive range of ways) software service. You still didn't create it; you commissioned it. It's just that you commissioned it from a mechanised service instead of from a person.

It's qualitatively not the same thing as writing code in a higher-level language, or even using a Lisp macro to generate the code for you. If it really were equivalent, then it would increase your level of skill over time, rather than causing it to atrophy.

I also didn't say anything about the resulting code being "legitimate," so I'm merely pointing that out as a straw man, and not otherwise engaging with it.

I didn't mean the code being legitimate or not, but that making programs using an LLM can be a legitimate way of making programs.

Using an LLM definitely doesn't teach you programming. But it does increase your skill in making programs using LLMs. They are different skills that accomplish something similar.

Using a high-level language allows you to write programs without having to worry about register allocation, calling conventions, or memory models. Many modern developers don't even know what a register is, what a calling convention is, or the difference between the stack and the heap. But that is ok: the compiler takes care of it, and you're just delegating the details that don't matter at your level.

For non-developers, the LLM can serve some of the same roles. It can write the code they don't understand, and they can create by describing it, just as you or I create programs by describing them in our programming language of choice. It's fair if you don't see describing the program to an LLM as creating, but I see it as creating, just on another level. And I hate using LLMs for coding, or when people try pushing me to use them.

The comparison with a compiler is a little flawed. It should be a comparison with compilers of the 1960s: back then, compilers were pretty garbage and frequently made mistakes. The fact that compilers eventually became great, and better than humans, should not be taken as an argument that the same will happen for LLMs. But either way, both allow making programs at a higher abstraction level.
@michael @KatS @pikhq just wait until LLMs are writing code for life support devices or for things like plane systems etc. Would you ever set foot on a plane again? Imagine an LLM in a surgeon robot: would you let that thing perform a kidney transplant on you? Imagine an LLM that can design buildings: would you live in that skyscraper, or near it? Software made with LLMs, as much as it seems to work, is not legit at all.

Quoting myself from another reply elsewhere in the thread (made after your post):

I completely agree that somebody creating using an LLM doesn’t understand the program and that is a problem in many cases. To expand on the wheelchair analogy, a wheelchair might be faster for going downhill, just as using an LLM might be faster for writing certain code. That still doesn’t make it great to YOLO downhill in thick fog towards a busy road. And a wheelchair is still unable to go up stairs. There are absolutely places where LLMs are not appropriate and cannot do the task.

The problem is that non-developers have a hard time telling the difference, and try to use unqualified people armed with an LLM to replace developers where it is inappropriate. Those areas definitely include important logic and critical systems. I am on the fence about whether they can be used for writing GUIs: my experience says no, but people I respect find it works for them.

I completely agree there are places, many, where LLMs are not the right tool. That’s why I said it is annoying when people claim that since an LLM made them able to make programs, it will make me a better or faster coder.

And you are right, it's a huge risk, and it is already happening on a less serious but wider scale. I see many pieces of software I use getting worse: not just the regular enshittification, but introducing more regressions than they used to.