I think a lot of people who don't really understand what they're doing obsess over LLMs for the same reason that they might have obsessed over visual programming or plain-English programming a generation ago.

In their mind, programming works like this:

1) A clever person designs the system/app/website/game in their mind.

2) The person uses whatever tools are available to wrangle the computer into reproducing that vision.


In this model, the bottleneck is (2) and anything that isn't the native tongue of the designer is actively getting in the way of the manifestation of that vision.

THIS IS NOT HOW PROGRAMMING WORKS.

In reality, programming is an eclectic back-and-forth conversation between developer and machine, where the former explores the possibility space and the latter pushes back by unveiling its constraints.

No 'vision' survives this process unscathed, and this is a good thing.

Those who obsess over LLMs like to believe that Plain English sits at the top of the abstraction pile, that it is the thing that a programming environment should seek to model. From this point of view, an LLM seems perfect: type in words, program comes out.

But Plain English is not the top of the pile, not even close. It's an imprecise and clumsy lingo. The process of development is about throwing away that imprecision and engaging with the reality of the possibility space.

It can be hard for those who don't do a lot of programming to understand, but programmers do not think in Plain English (or whatever their native tongue is). They do not, for the most part, spend their time wrangling with and getting frustrated by their tools.

Instead, programmers think in abstractions that sit beyond the realm of natural language, and those abstractions are carved through dialectic with the machine. The machine pushes back, the chisel strikes the marble, and the abstraction evolves.

LLMs promise something enticing, but ultimately hollow: the ability to skip the dialectic and impose one's will on the machine by force. They are appealing because they leave no space for their user to be wrong, or to be confronted with the consequences of their unrefined ideas.

This is why code written by LLMs is often buggy, insecure, and aimless: it is written to appease a master who understands neither the conflicts between their ideas nor the compromises necessary to resolve them.

If you're an LLM fan, it might initially seem confusing that a programmer would choose a statically typed language, and more confusing still that those with experience *yearn* for static typing. Why limit yourself?

But the reality is that the development of good software requires the dialectic between developer and machine to take place, and type systems accelerate this process by allowing a skilled programmer to refine their mental model much earlier in the development process.
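As a sketch of that early pushback (a hypothetical example, not from the thread): in TypeScript, modelling a value as a discriminated union means the compiler rejects any code that forgets a case, surfacing the gap in the design before the program ever runs.

```typescript
// Hypothetical domain: a payment modelled as a discriminated union.
// The set of legal states is spelled out in the type itself.
type Payment =
  | { status: "pending" }
  | { status: "settled"; settledAt: Date }
  | { status: "refunded"; settledAt: Date; refundedAt: Date };

function describe(p: Payment): string {
  switch (p.status) {
    case "pending":
      return "awaiting settlement";
    case "settled":
      return `settled at ${p.settledAt.toISOString()}`;
    case "refunded":
      return `refunded at ${p.refundedAt.toISOString()}`;
    // Delete a case above and the compiler complains that not all
    // code paths return a string: the machine pushes back at compile
    // time, forcing the author to refine their mental model early.
  }
}
```

The point isn't the syntax; it's that the type checker turns a vague idea ("a payment has a status") into an explicit, machine-checked conversation about which states exist and what each one carries.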

I think this is all I have to say on this topic.
@jsbarretto you forgot the part where they're trained on average code, with the average code being shit and buggy
@dysfun Well, the less said about that the better. But in all honesty, I think that's less and less a problem and grows to be a weaker argument each day.
@jsbarretto why do you think that? they're now training on AI generated code.
@dysfun Perhaps you're right. Still, I think there is *a version of* the world in which LLMs are extremely competent programmers in their own right, but still fall short of being even adequate software architects, and that's the thing I'm interested in focusing on here.
@jsbarretto i mean fair enough on the focusing, but i don't buy they are competent.
@dysfun @jsbarretto I agree LLMs aren't and structurally cannot be competent, but on the other hand that they wouldn't be useful for programming even if they were is a strong argument, and it preempts the inevitable "we just need bigger/better LLMs". ​
@airtower @dysfun @jsbarretto They already produce useful code. "Translate this to javascript" is quite useful tool.
@pavel @jsbarretto @airtower if you already know javascript so you can find the subtle bugs it creates.
@dysfun @jsbarretto @airtower Maybe, maybe not. It worked for me. Plus, I read faster than I type, so if it is close enough, it is a win.

@pavel @jsbarretto @dysfun @airtower

And yet, that's a very minor use case. If it were made to work perfectly, it'd still be a very inefficient compiler.

But if you need to understand the problem at hand, an LLM can't help you. We learn things by doing them; anything the machine does for you, you do not learn to do. If it makes all the choices of how to do something, you don't learn what those choices mean or even know there were choices to be made. It's an intellectual dead end.