Just had a wild thought:

Have we made our last major programming languages?

@marcoarment I actually have a friend who has spent the last month writing a new language that more deeply embeds the ideas of MCP into its core. It's fascinating to see him take what he's learned and bring it to a domain -- I think we may actually see lots of purpose-built micro-languages.
@jsonbecker @marcoarment There's something to this. I'm working on an app that has a little purpose-built scripting language and LLMs can work really well with that when provided with a reference markdown file of the syntax and supported functions, plus a CLI to evaluate the code. It's pretty remarkable seeing that in action.
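That pattern (a language small enough that its whole reference fits in one markdown file, plus an evaluator the model can call) can be sketched in a few dozen lines of Python. Everything below, the function names, the syntax, and the `evaluate()` entry point, is invented for illustration and is not the poster's actual language:

```python
import math
import re

# Toy "supported functions" table; a real app would expose domain operations.
FUNCTIONS = {
    "add": lambda *args: sum(args),
    "mul": lambda *args: math.prod(args),
    "upper": lambda s: s.upper(),
}

def tokenize(src):
    # Integers, double-quoted strings, identifiers, and punctuation.
    # (Whitespace and anything unrecognized is silently skipped; a real
    # evaluator would report it.)
    return re.findall(r'\d+|"[^"]*"|\w+|[(),]', src)

def parse(tokens):
    # Recursive descent over calls of the form name(arg, arg, ...).
    tok = tokens.pop(0)
    if tok.isdigit():
        return int(tok)
    if tok.startswith('"'):
        return tok[1:-1]
    if tok not in FUNCTIONS:
        raise NameError(f"unknown function: {tok}")
    if tokens.pop(0) != "(":
        raise SyntaxError(f"expected '(' after {tok}")
    args = []
    while tokens[0] != ")":
        args.append(parse(tokens))
        if tokens[0] == ",":
            tokens.pop(0)
    tokens.pop(0)  # consume ")"
    return FUNCTIONS[tok](*args)

def evaluate(src):
    # The CLI-style entry point an LLM would call to check its own output.
    return parse(tokenize(src))
```

With a surface area this small, the whole spec plus a handful of examples fits comfortably in a model's context window, which is presumably why the feedback loop works so well.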
@marcoarment Feels like in some ways, prompt engineering (or whatever it morphs into) is the new programming language.

@marcoarment To be able to think about that, we first need to define what a programming language is.

From my perspective, I think there will be many more structured ways of specifying algorithms. Maybe tailored for use with LLMs.

@marcoarment last major language for humans?
@marcoarment Yes. The next programming language will be LLM-friendly, possibly even created by an LLM.
@marcoarment The best analogy I have is that our current abstraction programming languages are likely to become the next “assembly language”. Something that you can drop to if things are really hairy, but you are likely working at a higher-level abstraction nearly all the time.
@nebhale @marcoarment I'm curious how this LLM will be trained to use an LLM-friendly language.
@troldann we’re already using LLMs to generate the training data that’s passed in to train the frontier models. What if we asked that first LLM to generate a language that’d be easier for it to work with in the future and passed that in as training data? It’s basically a compiler bootstrapping pattern for LLMs.
@marcoarment why would people stop making those?
@marcoarment if we think about it - programming languages have largely been about enabling yet another layer of abstraction and becoming more human friendly. From flipping switches, to assembly/machine language to C to Java, Perl, etc … (except Perl, that was an abomination). Natural language to an LLM that will use whatever language is appropriate seems like the natural next step.
@marcoarment The idea that everyone turning to LLMs to generate their code means the code it generates doesn't matter is silly. Will there be wide swaths of slop where it doesn't matter? Sure. But in the places where quality matters (life safety, mission critical, in theory places like Apple), there will still be a need to review, tweak, and smooth the source code that an engineer first generates with an LLM tool.
@marcoarment I suspect we’ll have a new round of languages that are designed to be used by LLMs more efficiently, but still readable/understandable by humans.

@emc @marcoarment actually I think this is even less likely. One reason LLMs are *so* good at writing code is because there is an insane amount of material for them to train on

What about current languages is inefficient for LLMs? What aspects of LLMs would you take advantage of with a new language?

How would you train all the edge cases and best practices if there's not a huge number of humans out there generating the content LLMs have trained on so far?

@mixdup @emc @marcoarment tokens for language primitives are an obvious point to pick at. LLMs are trained to replicate human work, but if you optimise and train the LLM for it, you'd get stuff that would be difficult for a human to follow but could be understood by a language model. Heck, it might look closer to assembly.
@karanj @emc @marcoarment but can you come up with enough synthetic training data for the LLM to learn this optimized language as well as it has learned other languages?

@mixdup @karanj @marcoarment If the LLM creates the optimized language based on learnings with other languages and incorporates tests during the learning stage. Maybe?*

* please note: Erik can make mistakes. Please double-check responses.

@emc @karanj @marcoarment this is where I'm highly skeptical. LLMs are really good at remixing things that already exist. I'm not really convinced they are going to synthesize something brand new and truly understand what they're creating

@mixdup @karanj @marcoarment not truly understanding. This is where automated tests come in. Synthesizing tests to define an expected result and performing tests to validate whether the code successfully delivers that result.

The hardest part of this is likely defining the tests… and I completely agree there are many, many areas where this can go wrong. I can only imagine the vulnerabilities that will later be discovered. This is not a good idea. Just one I wouldn't be surprised to see pushed out into the world.
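The loop described here, synthesize the tests first and then let the model iterate until its code passes them, can be sketched roughly like this. `llm_propose` is a canned stub standing in for a real model call, and every name below is invented for illustration:

```python
def llm_propose(spec, feedback=None):
    # Stub for a real model call: a real system would send `spec` (and any
    # test failures in `feedback`) to an LLM. The stub "fixes" its first try.
    if feedback is None:
        return "def add(a, b):\n    return a - b"  # deliberately wrong
    return "def add(a, b):\n    return a + b"

# Synthesized tests define the expected behaviour independently of the
# generated code: (function name, args) -> expected result.
TESTS = [(("add", 2, 3), 5), (("add", -1, 1), 0)]

def validate(source):
    # Run the candidate code and report every failing test case.
    # (Plain exec, no sandboxing -- acceptable only in a sketch.)
    namespace = {}
    exec(source, namespace)
    failures = []
    for (name, *args), expected in TESTS:
        got = namespace[name](*args)
        if got != expected:
            failures.append(f"{name}{tuple(args)} -> {got}, expected {expected}")
    return failures

def generate_with_tests(spec, max_rounds=3):
    # Generate, test, and feed failures back until everything passes.
    feedback = None
    for _ in range(max_rounds):
        source = llm_propose(spec, feedback)
        feedback = validate(source)
        if not feedback:
            return source  # passed every synthesized test
    raise RuntimeError(f"gave up after {max_rounds} rounds: {feedback}")
```

The weak point is exactly the one conceded above: if the synthesized tests are wrong or incomplete, the loop happily converges on wrong or vulnerable code.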

@marcoarment i'm thinking: probably not. all of our existing languages have been for human understanding.

vibe coding still writes in languages we've been using for years. what if we threw away the part where you get to audit it, for the sake of a more efficient language?

that sounds crazy and reckless and totally on point with 2026

@marcoarment Probably. And I actually think it has very little to do with LLMs. I feel like we’ve reached peak programming language.

@marcoarment all these real engineers with free time?

You know they're making languages.

@marcoarment Haskell proves that people will make programming languages that no one wants or needs, so I say no.
@marcoarment
Definitely no, but "You've made your last major programming language, {Villain_Name}!" makes for a great line.
@marcoarment Hell no. There’s usually a new version of Swift to learn every year.
@marcoarment I seriously doubt it. I’m not sure we’ll ever see another C++, but another Python? Absolutely.

@gormster @marcoarment Rust is an actual valid choice for the same things as C/C++ and it's even being considered for building operating systems.

Go is a fun one. It's got automatic garbage collection, but in terms of speed it comes relatively close to C/C++, with a much easier learning curve that's not too far from Python in terms of difficulty.

I know there are haters out there, but I really do appreciate Go's perfect blend of raw power and ease of use.

Oh and then there's Zig.

@Chertograd mmm I kinda meant in terms of ubiquity - like every OS has a C++ compiler essentially built in, virtually every programmer can at least kinda read its code even if they wouldn’t be confident writing it. Rust might get there but I doubt it.
@gormster Ahh I gotcha! I misinterpreted your message.
@marcoarment I asked someone who worked on the first GitHub Copilot whether we need "a programming language for LLMs"; his answer was that the existing ones are already perfect for it. So it's unlikely we'll see another big one.
@marcoarment I should think that large-scale concurrency has been so solidly, intuitively, easily, and safely handled that there should not be any more languages needed. :-)
@marcoarment I feel like some variant of Lisp will end up being a universal language when machines are doing most of the raw writing and reading. But maybe that’s just me (as a former professional lisp coder)
@marcoarment We as in humans? Yes. I bet we see an LLM-created (or inspired) programming language. The key is data, and reasoning will matter here.
@marcoarment My totally unscientific take: yes for the „user space“ (app development), no for kernel space.

@marcoarment probably. End of an era, I guess.

I get that LLMs are the future. But it has just killed my interest in tech/programming and I’m actively switching to a different career.

@vmachiel @marcoarment The worst part about it is it’ll only happen to the good ones. We’re not going to see bad developers quit, only disillusioned good ones. The ongoing trend of declining software quality is accelerating, and I wish proponents understood they will get bitten in the ass too.

We shouldn’t give a rat’s ass about being able to make the billionth to-do/notes/reminders app with one prompt, when the reality is we’ll be significantly more impacted when our hospitals/travel/government leaks our data and makes preventable mistakes that’ll maim our lives. And that’s without even mentioning the accelerating harm to society, which is worse than social media (look for AI psychosis and AI boyfriends).

But hey, at least we can now flood social media with the same boring takes and apps that no one will care about, because they’re too busy making their own boring copies, while enriching the worst of the man-children. So, worth it?

Tom Toro’s cartoon gets more likely by the day.

@marcoarment
If “major” means “used for a significant portion of code written/generated”, I would want to leave room for one or two new languages, authored by LLM for LLMs

Otherwise, with access to LLMs themselves, my guess is that we will see a Cambrian explosion of small, purpose-built languages

@marcoarment
Good question - I have been worrying about innovation, and where the incentive is to innovate now.

If LLMs had become popular 10 years earlier, would Apple have made SwiftUI, when it’s easier to have a machine churn out UIKit code forever?

@marcoarment absolutely not. There’s still a need for languages that reduce the amount of boilerplate, increase safety without performance overhead, and with no reliance on the random-code generators. Especially when Anthropic/OpenAI will need to start earning profits from LLMs

@oskar @marcoarment I honestly believe that Go comes close in terms of being fast (faster than C# or Java, but not as fast as C/C++), being relatively easy to pick up (similar to Python), and not being too verbose but instead pretty concise (despite its approach to error handling).

I think it's the perfect blend, but still has to take its time to mature in terms of libraries and frameworks. For example there aren't any top-tier GUI toolkits for it, currently.

@marcoarment Of course not. It's just that an LLM will make the next one, and it will probably read like Brainfuck.
@marcoarment probably not, but I do think the norm is going to move towards another low level wrapper where many aren’t bothered about the Swift / Java / PHP / whatever that the prompts or platform has generated
@marcoarment
In what format would you express requirements?
There will be a need to standardize formal language types, also for generative compilers.
Even though tokens could already be used as a sort of bytecode (I bet someone does this), the step from human to token would need to be structured, and humans would need to learn the basics of that structure to be efficient.
(But the reproducibility and energy efficiency of autoencoders is not exactly what computer engineers seek)
@marcoarment Yeah, maybe. The last few have not gotten the kind of widespread adoption that C++ or Java have, and things are getting more fragmented. I'd say the last one was TypeScript (behind most of the web, and the desktop via Electron).
@stevex @marcoarment I think you could easily say TypeScript is responsible for more apps / LoC than C++ was at the equivalent age... but if you're talking % of developers using or familiar with it, yeah. The thing with new languages is they rarely fully displace the old, so "fragmentation" is inevitable.
@marcoarment The other day I was trying to think what the two or three newest languages were that were even decently adopted. It seems there's a new JS framework popping up every week, but setting that aside: could it be Rust, Go, and Swift?
That said, I hear FORTRAN is OOP now.

@marcoarment I’ve been seeing a lot of Zig and Rust talk over on my end, but that’s a question I’ve been asking a lot about.

College is still behind by light years.

The programming language after Kotlin – with the creator of Kotlin

Andrey Breslav, creator of Kotlin and founder of CodeSpeak, shares lessons from designing Kotlin and why he’s building a new language to keep humans in control in the age of AI.

The Pragmatic Engineer

@marcoarment If we made another, where would the documentation, community activity come from to teach an LLM how to use it?

For that matter, have we made our last new framework? Our last major new version of existing languages/frameworks to be broadly adopted?

@marcoarment https://youtu.be/uMqx8NNT4xY?si=P0gs-bTuRk7OMcHn I just watched this and then saw your post. Interesting thoughts about how difficult it is to get a new language started.
The history of C# and TypeScript with Anders Hejlsberg | GitHub

@marcoarment Just waiting for the next paradigm shift …

@marcoarment @caseyliss @siracusa I was going to write a big email about this to you guys, and I might yet, but I'll condense it here.

Dijkstra himself wrote about the futility of natural language programming: https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667.html

Even knowing what we know now about LLMs, I believe this to be true. The problem is that human languages are leaky and imprecise, so you'll always need something more symbolic or at least more precise. Even if we assume that LLMs are how we'll do our programming from now on, if you discover that 'frobnicate' produces better results than the verb 'create', you'll start using it, and so will other programmers. Programming languages may become pidgin languages, or dense jargon, or even legalese--anything to reduce the verbosity and ambiguity.

There's a lot more to say about it, but that's the basis of it. To get the best results out of LLMs, we'll create new languages that produce more predictable results.


@marcoarment I would say unlikely. For all that AI can do, the one thing it doesn’t do well is run in resource-limited environments. Edge computing is still a thing and needs to be super efficient. But for the vast majority of software engineering, maybe there is little point in making languages easier for human coders to work with.
@marcoarment What if this is peak AI? It runs out of stuff to “steal” and goes stagnant. Good enough to help, and to pull way too many people off programming. This could be the business model: push developers to be borderline extinct, then charge the surviving ones whatever you want to stay relevant.
Wonderful but dangerous times: people getting lazy, taking AI hints at face value, losing critical-thinking ability. Not pointing fingers nor excluding myself, just an observation.