Quoting myself from another reply elsewhere in the thread (made after your post):
I completely agree that somebody creating a program using an LLM doesn’t understand that program, and that is a problem in many cases. To expand on the wheelchair analogy: a wheelchair might be faster for going downhill, just as using an LLM might be faster for writing certain code. That still doesn’t make it a good idea to YOLO downhill in thick fog towards a busy road. And a wheelchair is still unable to go up stairs. There are absolutely places where LLMs are not appropriate and cannot do the task.
The problem is that non-developers have a hard time telling the difference, and try to replace developers with unqualified people armed with an LLM where that is inappropriate. Those areas definitely include important logic and critical systems. I am on the fence about whether LLMs can be used for writing GUIs – my experience says no, but people I respect find it works for them.
I completely agree there are places, many of them, where LLMs are not the right tool. That’s why I said it is annoying when people claim that, because an LLM enabled them to write programs, it will make me a better or faster coder.
And you are right, it’s a huge risk – one that is already playing out on a less serious but wider scale. I see many pieces of software I use getting worse: not just the usual enshittification, but shipping more regressions than they used to.