I think the enthusiasm for natural language programming - especially with the advent of LLMs - conflates two different claims, one of which is rooted in a misconception:

1) natural language is better for describing high-level ideas like "I want an app for tracking my spending"

2) natural language makes it easier for non-technical users to describe program requirements

The former is true, because no existing DSL has the scope to describe software at such a high level

English can describe systems that span multiple programming languages and platforms, but DSLs are, by their nature, *domain-specific*

LLMs are general-purpose. If I want an instruction interface for an LLM then clearly no existing DSL is going to cut it

(meta-languages like JSON, XML or Markdown don't count - these only formalize the structure; they don't specify how to interpret the content)

But the idea that natural language makes it easier for non-programmers to program is a misunderstanding

Programming is not about transcribing English into code; it's the art of turning vague requirements into concrete ones - identifying and filling in the blanks so that an imprecise spec becomes precise

When you "vibe code" you are asking an LLM to do that work for you - and LLMs are remarkably good at it - but that is not *programming*, because you are not the one identifying and eliminating those ambiguities

DSLs are good for programming because they help you to eliminate ambiguities in two ways:

1) they use a restricted grammar that simply rules out many possible ambiguities entirely. The higher-level the language, the fewer ambiguities it allows for (at a cost in flexibility)

2) they come with a parser or interpreter that identifies any remaining ambiguities and turns them into errors that are reported back to you to be fixed
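To make that concrete, here's a toy Python sketch (my own example, not from the thread) showing both mechanisms at once: a date format string is a miniature DSL whose restricted grammar rules out the ambiguity of "01/02/2023", and whose parser turns non-conforming input into an error.

```python
from datetime import datetime

# "01/02/2023" is ambiguous in English: January 2nd (US convention)
# or February 1st (UK convention)? The format string is a tiny DSL
# whose grammar forces that decision to be made up front.
date = datetime.strptime("01/02/2023", "%d/%m/%Y")
print(date.month)  # 2 -- the format spec eliminated the ambiguity

# And the parser turns any input that doesn't fit the grammar into
# an explicit error, rather than silently guessing what was meant.
try:
    datetime.strptime("2023-02-01", "%d/%m/%Y")
except ValueError as error:
    print("rejected:", error)
```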

In programming we tend to think of high-level languages like Python or JavaScript as being "easier" than low-level languages like C, and that's true, but it's not because they are closer to English

It's because they eliminate (or otherwise smooth over) more ambiguities at the syntax level, instead of letting the programmer write ambiguous or erroneous code and then deal with the resulting errors at runtime
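As an illustration (my own, with Python standing in for the high-level language and C for the low-level one): the same out-of-bounds mistake is undefined behaviour at runtime in C, but Python's semantics surface it immediately as a well-defined error.

```python
# In C, reading past the end of an array is undefined behaviour: the
# mistake compiles fine and silently yields garbage (or worse) at
# runtime. Python smooths over the same class of error by defining
# it away: the mistake becomes an immediate, explicit exception.
items = [1, 2, 3]
try:
    print(items[5])
except IndexError:
    print("out of bounds - caught by the language, not by luck")
```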

You could be forgiven for thinking that English is the ultimate high-level language, but in many ways it's more like a *low-level* language because it doesn't impose any structure to guide you, and provides infinite ways to make mistakes or miscommunicate your intentions

*Everything* in English is ambiguous and open to interpretation. Almost no errors or ambiguities are prevented by the "syntax", and so they must be dealt with by the "interpreter" (which traditionally has always been a human)

This is why, up until the advent of LLMs, natural language programming - and natural language interfaces in general - remained a fairly unsuccessful niche: it was impractical to write an interpreter program that could handle and resolve all the possible ambiguities of arbitrary English input and produce any sort of reliable output without human supervision

But now we have LLMs, and they actually are (mostly) capable of taking unstructured English text and divining, if not the *correct* meaning, then at least a plausible meaning

But therein lies the problem. Traditional parsers are very good at detecting ambiguous or invalid input, but very bad at guessing what might have been meant

LLMs, on the other hand, are great at guessing what you might have meant, but bad at detecting ambiguity - and as a consequence they tend to mask any bugs in the input
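Python's own interpreter is a neat illustration of the parser side of this contrast (again my example, not from the thread): faced with an expression that has two plausible meanings, it refuses to guess and reports an error instead.

```python
# Is 1 + "2" meant to be 3 (numeric addition) or "12" (string
# concatenation)? Both are plausible guesses - which is exactly why
# Python reports the ambiguity as an error rather than picking one,
# the way an LLM would.
try:
    result = 1 + "2"
except TypeError as error:
    print("ambiguity surfaced as an error:", error)
```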

The job of a programmer is to identify ambiguities and make decisions about the correct thing to do

High level DSLs make programming easier because they eliminate ambiguity by baking in sensible, deterministic, domain-appropriate decisions, so the programmer doesn't have to make them

LLMs make programming easier by YOLOing all those decisions in a semi-random fashion so the programmer isn't even aware that a decision had to be made at all

This is generally fine if the programmer is capable of reviewing the (DSL) output of the LLM to verify that its decisions made sense, but this requires that the DSL still exists and that the programmer is capable of reading and understanding it

Which is why I currently don't see a future in which LLMs will eliminate the need for DSLs, or for programmers to need to learn them.

@nicklockwood
Exactly. An LLM will always give *an* answer, but you abdicate control over *which* answer you get. You don't know which other answers were possible, or what your own question was taken to mean by omission of detail.