"The specification language gets more precise over time, because natural language is ambiguous and different models interpret the same prompt differently. You add more structure. You define exact function signatures. You specify return types. You nail down error handling behavior with enough precision that two different models should produce interchangeable output. The specification starts looking less like English prose and more like a programming language."
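As an illustration of the quote's point (my example, not from the post): a prompt like "parse a date string" that has been pinned down far enough for two models to produce interchangeable output ends up reading like a typed signature with specified error behavior, i.e. code:

```python
# Hypothetical end-state of such a "spec": the exact signature, return
# type, and error handling are nailed down -- at which point the spec
# is effectively a program. This example is illustrative, not from the post.
from datetime import date

def parse_iso_date(s: str) -> date:
    """Parse a 'YYYY-MM-DD' string.

    Raises ValueError for any input not in that exact three-part form.
    """
    year, month, day = map(int, s.split("-"))
    return date(year, month, day)
```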

https://nesbitt.io/2026/01/30/will-ai-make-package-managers-redundant.html

Will AI Make Package Managers Redundant?

Following the prompt registry idea to its logical conclusion.

Andrew Nesbitt
This is obviously a thought experiment, but I can genuinely see a lot of these spec-driven projects going this way: at some point you're trying to do something that would have been easier using an existing high-level programming language.
Which might be an indictment of how badly we've taught these programming languages, tbh lol.
Honestly, I think there is a lot to this: when I see some of the guides to using LLMs for folk without coding skills, I think I could more easily just teach them to code. The mystification of coding is also a huge part of the appeal of this stuff for lots of people.
Part of what makes me think this is that teaching folk to build using LLMs means they'd get inconsistent results from their inputs, which would be disastrous when learning to code: you can't build a foundational skillset without seeing consistent results for the same actions.
@sue Indeed. It gets even more 'fun' when one uses a cloud-based model and a self-updating coding harness: I had one update to a major new model while preparing a demo. I could roll back the model version, but not (easily) the coding harness.
I'm discussing this with @emilybache and others. One exercise could be to build a small coding agent using a local model. That removes some of the mystery.
I'm playing with a small bit of messy code and a tiny agent: find smells, do refactoring.
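A tiny agent along those lines can be sketched as a loop: show the code to a model, get back a list of smells, apply a matching refactoring, repeat until clean. A minimal Python sketch, with the local-model call stubbed out (the `ask_model` function below is a rule-based placeholder; in the real exercise it would prompt a locally served model instead):

```python
# Tiny "find smells, do refactoring" agent loop.
# ask_model is a stub standing in for a local LLM call -- it just flags
# leftover debug prints -- so the loop structure is visible without a model.

MESSY = """\
x = 1
print(x)  # debug
y = x + 1
print(y)  # debug
"""

def ask_model(code: str) -> list[int]:
    """Stub 'model': return line indices that look like debug prints."""
    return [i for i, line in enumerate(code.splitlines())
            if line.strip().startswith("print(") and "# debug" in line]

def refactor(code: str, smell_lines: list[int]) -> str:
    """Apply the refactoring: drop the flagged lines."""
    kept = [line for i, line in enumerate(code.splitlines())
            if i not in smell_lines]
    return "\n".join(kept) + "\n"

def agent_loop(code: str, max_steps: int = 5) -> str:
    """Alternate smell detection and refactoring until no smells remain."""
    for _ in range(max_steps):
        smells = ask_model(code)
        if not smells:
            break
        code = refactor(code, smells)
    return code

if __name__ == "__main__":
    print(agent_loop(MESSY))
```

Swapping the stub for a real model call (and parsing its reply into line numbers) is where the interesting part of the exercise starts.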
@mostalive @emilybache This sounds really interesting!
@sue This was on the Samman discord (I am a guest there; I've been on it for a while). Since you and someone on LinkedIn responded, I might develop this in the open and make a mini-series of blog posts about it. It is easier for me to be coherent with code than in writing, though ;-) it might take a while.
@emilybache
@mostalive @sue I think building your own coding agent seems like a great project for understanding what they do. Gaining control of a coding environment for exercises is a good idea too.
@emilybache @mostalive @anthrocypher, Ray Myers (who I don't think is on masto) and I recently used an agent-building exercise to test Ana's theory that introducing human pair programming into learning and building with LLMs shifts the dynamic in a healthier direction.