One thing I haven't heard discussed much about LLM-based programming:

Are we at the end of any serious programming language evolution?

It seems to me that if you believe (a) coding assistants are going to be the dominant way software is written, and (b) the only way to get reasonably good coding assistants is to train on tons of existing code, then it follows that (c) new programming languages will be at a significant disadvantage because there's just no corpus to train on.

The strong version of this hypothesis is that even new features of existing programming languages will be hard to roll out, because the bots won't use them!

I'm sure there are researchers who are looking at "programming languages that are easy for coding assistants to write" (Darklang went this route) but I think this is futile as long as (b) is our best bet. Maybe "AI" will deliver general reasoning capabilities good enough to transfer to any language, but that's not what is shaking out right now.

Are there any studies comparing coding-assistant success rates across existing languages? That would be interesting to look at.

#CodingAssistants #ProgrammingLanguages

Encoding Team Standards

martinfowler.com
AI Coding Assistants Haven’t Sped up Delivery Because Coding Was Never the Bottleneck

Agoda recently published an observation arguing that while AI coding tools have measurably raised individual developer output, the resulting velocity gains at the project level have been surprisingly …

InfoQ
#OpenAI launched #GPT54 #mini and #nano, smaller and faster versions of their flagship GPT-5.4 model. These models are designed for efficient, high-volume #AIworkloads, offering near-flagship performance at a lower cost. They excel in coding, reasoning, multimodal understanding, and tool use, making them ideal for applications like #codingassistants. https://www.zdnet.com/article/gpt-5-4-mini-and-nano/?eicker.news #tech #media #news
OpenAI's GPT-5.4 mini and nano launch - with near flagship performance at much lower cost

The latest GPT-5.4 mini model delivers benchmark results surprisingly close to the full GPT-5.4 model while running much faster, signaling a shift toward smaller AI models powering real-world applications.

ZDNET
Fragments: March 16

fragments 16 Mar 2026

martinfowler.com
Besieged

As the last digit on the calendar rolled over from five to six, it took less than a month to realize the coming year was going to be different from the year that preceded it. Arguably the stage was set late last year with the November “inflection point”, but with open source AI projects becoming …

tecosystems

Only now am I starting to leverage the true potential of #codingassistants. Until now, I crossposted my blog posts by first converting them from #Asciidoc to #Markdown by hand. Feasible, but boring.

I have created a #ClaudeCode command and my life is so much better.
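For anyone curious: a custom #ClaudeCode command is just a Markdown prompt file dropped into `.claude/commands/`, invoked as a slash command with `$ARGUMENTS` standing in for what you type after it. A minimal sketch of what such a crosspost command might look like — the file name, wording, and output convention here are my own assumptions, not the exact command from this post:

```
<!-- .claude/commands/crosspost.md (hypothetical name and path) -->
Convert the AsciiDoc blog post at $ARGUMENTS to Markdown.

- Translate AsciiDoc syntax (= / == headings, [source] blocks, xref links)
  into their CommonMark equivalents.
- Keep front matter, image references, and code samples intact.
- Write the result next to the source file with a .md extension.
```

Inside a Claude Code session you would then run something like `/crosspost posts/my-article.adoc` instead of converting by hand.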

When I first looked at hooks in the context of #codingassistants, I wasn't impressed. But today I watched a video where they implemented a hook that used #ClaudeCode itself 🤯🤯🤯

It opens so many doors that I'll need a couple of weeks to think them all through.
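For context: Claude Code hooks are shell commands registered in settings that fire on lifecycle events like `PostToolUse`. A rough sketch of the "hook that calls Claude Code itself" idea, using `claude -p` (non-interactive print mode) — treat the matcher, prompt, and log path as illustrative assumptions, not the setup from the video:

```
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "claude -p 'Briefly review the most recently changed file for obvious mistakes' >> .claude/review.log"
          }
        ]
      }
    ]
  }
}
```

Every edit then spawns a second, headless Claude instance to review the change — which is exactly the recursive trick that makes this feel mind-blowing.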

#NewIdeasComing

AI coding assistants and agents are changing the way software is created, but developers have been expected to learn how to use them without any guidance. As managers push developers into adopting these tools, how are they expected to avoid the dangers and pitfalls of vibe coding? In this post I take a high-level look at the tools and practices developers can use to move quickly with AI without wanting to set their code bases on fire.

https://blog.tedivm.com/guides/2026/03/beyond-the-vibes-coding-assistants-and-agents/