My impression is that the first field that AI will massively disrupt is not medicine, not writing, and not education, but software engineering. The thing about software engineering is that the computer can often check its own answer, and iterate to a better one. Not so easy in other fields. So progress in AI writing software will be very, very fast.

I'm guessing there will still very much be a job for software engineers, but it's going to change fast.

@ben Yes and no. From what I see now, AI can handle tactical questions ("implement quicksort" or "write a regular expression"), but I doubt very much it can come up with a higher-level architecture, because there won't be such patterns out there on the Internet for it to learn from. And testing? It's not always easy even for humans to convert a requirement into a set of test cases.
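(Worth noting that the "tactical" case is also exactly where the self-checking loop mentioned upthread works best: a generated quicksort can be verified automatically against a trusted oracle. A minimal sketch, assuming Python and using the built-in `sorted()` as the oracle; all names here are illustrative:)

```python
import random

def quicksort(xs):
    """Textbook quicksort -- the kind of 'tactical' task an LLM handles well."""
    if len(xs) <= 1:
        return xs
    pivot = xs[len(xs) // 2]
    left = [x for x in xs if x < pivot]
    middle = [x for x in xs if x == pivot]
    right = [x for x in xs if x > pivot]
    return quicksort(left) + middle + quicksort(right)

# The machine checks its own answer: compare against sorted() on random inputs,
# so a wrong candidate implementation is caught without any human in the loop.
for _ in range(100):
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 30))]
    assert quicksort(data) == sorted(data)
```

(The same loop gives the AI an iteration signal: regenerate until the oracle stops complaining.)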
@SteveBellovin @ben This and the fact that it will never be able to debug anything are exactly why I think its main use in programming will effectively be to augment Stack Overflow.
@vathpela @SteveBellovin never be able to debug anything? I'm not so sure.
@ben @SteveBellovin I think it'll be able to spot typos and even find copy-paste errors, and that may be nice but it's not debugging in any real sense. I haven't seen anything that makes me think it'll ever have a sense of what the task at hand is and where it's going wrong.

@vathpela @ben @SteveBellovin
The main problem remains: either a task/problem is described exactly, in which case all we need is a code generator.
Or there are vague requirements, inconsistencies, etc., which requires a kind of "understanding" to resolve the inconsistencies and create solutions that fit the requirements.

Current LLMs are just not able to do the second thing.

And checking code for internal inconsistencies is something that belongs more in the area of formal verification.

These models might replace "copy&paste" programmers, so they might accelerate a trend of "writing code that doesn't work and nobody knows why".

@wakame @vathpela @SteveBellovin maybe, but my sense is that the ability to iterate, tweak, tell the LLM "no, not quite, more like that", and have that iteration be effortless... could be hugely powerful.

@ben @vathpela @SteveBellovin
I think that's a very good point. Maybe in concert with more "old-school" refactoring tools and unit tests.

Ideally, over time, the "very bad" cases (bugs that take a week or more to find and fix) could be removed (or at least accounted for). We would (in a sense) be working on the same "super software repository", applying a bug fix not to a single application/library, but to a body of knowledge.

@wakame @ben @SteveBellovin we *already have that*, and it's not through ML: https://lwn.net/Articles/315686/
Semantic patching with Coccinelle [LWN.net]
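(For anyone who hasn't seen it: Coccinelle works from "semantic patches" that describe a transformation once and apply it across an entire source tree, which is what "fixing a bug in a body of knowledge rather than one application" looks like in practice. A rough sketch in its SmPL notation, based on the well-known kernel example of collapsing kmalloc+memset into kzalloc:)

```cocci
// Match any expression bound to a kmalloc followed by a zeroing memset,
// and rewrite the pair into a single kzalloc across the whole tree.
@@
expression x, size, flags;
@@
- x = kmalloc(size, flags);
- memset(x, 0, size);
+ x = kzalloc(size, flags);
```

(No ML involved: the pattern is exact, so every match is a guaranteed-correct rewrite rather than a statistical guess.)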