I hereby coin the term "Ptolemaic Code" to refer to software that appears functional but is based on a fundamentally incorrect model of the problem domain. As more code is generated by AI, the prevalence of such code is likely to increase.
1/7
This code passes all its tests and satisfies its specifications, yet it is built on fundamentally flawed logic.
2/7
AI code generation, which relies on examples, will likely produce significant amounts of Ptolemaic Code.
3/7
This is exactly how I have described the limitations of black box models to others.
A good scientific model should do two things:
1) Provide accurate predictions of outcomes given certain inputs, and
2) Enable greater understanding of how a system works.
Simpler machine learning models, like logistic regression or decision trees, can sometimes do both, at least for simpler phenomena. These models are explainable and their decisions are interpretable. For those reasons, among others, applied machine learning researchers still use these simpler approaches wherever they can be made to work.
But in our haste to increase accuracy for more complex phenomena, we've created models that merely provide semi-accurate predictions at the expense of explainability and interpretability. Like the Ptolemaic model of the solar system, these models mostly work well in predicting outcomes within the narrow areas in which they've been trained. But they do absolutely nothing to enable understanding of the underlying phenomenon. Or worse, they mislead us into fundamentally wrong understandings. And because they are overfit to the limits of their training data, their accuracy falls apart unpredictably on tasks outside the distribution of their training. Computational linguists and other experts who might celebrate these models instead lament the benighted ignorance left in their wake.
Or, as it was more eloquently stated in the great philosophical film Billy Madison:
"Mr. Madison, what you've just said is one of the most insanely idiotic things I have ever heard. At no point in your rambling, incoherent response were you even close to anything that could be considered a rational thought. Everyone in this room is now dumber for having listened to it. I award you no points, and may God have mercy on your soul."
#pluralistic describes the technical debt of these AI coding models as asbestos in the walls.
A hazard we'll be digging out of the walls for decades to come.
It remains a fact that when petrostate despots are this desperate to impose user adoption, alarm bells should be ringing. Fossil-fuel-funded cyberwarfare.
https://fortune.com/2025/11/20/saudi-visit-kennedy-center-trump-mbs-huang-musk-1-trillion/
When anti-democracy billionaires are spending this kind of cash on a boondoggle...
https://www.forbes.com/sites/mattdurot/2025/07/17/bill-gates-charles-koch-and-three-other-billionaires-are-giving-1-billion-to-enhance-economic-mobility-in-the-us/
@dk @DaveMWilburn @pluralistic
How hard will billionaires work to impose the AI worldview on the globe?
To the same degree as religious zealots?
Centuries of dispute over irrelevant issues like "how many angels can dance on the head of a pin?"
A worldview that says human expertise is dead.
The worldview that says "Democracy is dead. The CEO of Koch Industries & Palantir own you."
The worldview of "might makes right" and "power & wealth have no goal, only their self-perpetuation".
1/
Ptolemy launched a global practice of merging serfdom, godhood, & government; the pharaonic worldview of Thiel should be a major red flag.
The billionaires behind AI have no vision for its future aside from technofeudalism and mass-immiseration policies.
Power for the sake of power.
https://www.counterpunch.org/2025/11/07/make-aristocracy-great-again-lost-roots-of-techno-feudalism/
https://www.politico.com/news/magazine/2025/01/30/curtis-yarvins-ideas-00201552
No one wants an AI that promotes obsolete ideas like the divine right of kings, God-King theocratic kleptocracy, and "gods, grain, & government" dictatorship.
2/

It is difficult to interpret the Trump administration's wholesale attacks on governmental programs as anything other than accelerationist efforts to destroy basic features of the American political and economic systems. From DOGE's Artificial Intelligence rampage through federal bureaus, to the destruction of agencies like the Department of Education, to Trump's expansion of ICE into a well-funded private domestic army occupying Democrat-governed cities, the destruction of old standards of normalcy is clear. While documents like Project 2025 reveal elements of Trump's game plan, there are serious open questions about the administration's long game and exactly how far the oligarchs influencing Trump want to take this antidemocratic movement.
@Npars01 @dk @DaveMWilburn @pluralistic
Easier to unplug the computer you can't afford, which you only use to look at products you also can't afford, while shivering because your energy has been shut off. Their world vision eliminates the consumer economy, which is the only thing that keeps them rich.
@Npars01 @DaveMWilburn @pluralistic e.g., this underappreciated joke from July
Excellent points. This concept also applies to physical mechanical systems. It brings to mind my attempts to get inexperienced engineers to understand that no machine build will run with the same output as another "identical" copy. When you build a process for a specified output, no matter how precisely it matches the original, it needs to be flexible/adjustable enough to be managed by the operator. Thus you build in the capability to measure critical output parameters, and the capability to adjust those parameters to the precision, and within the variability, that the output specification requires. Setting up and running maker lines, for example a multilayer thin-film line, requires skill and a degree of artistry that management rarely gives engineers and operators credit for. No machine has ever run to product quality spec with the same adjustments as the machine next to it, or the same adjustments as last month.
Unfortunately, popular Western science and engineering university courses lead one to believe that all variables are controllable or inconsequential, and produce people who believe they can reproduce complexity absolutely (except in experimental agriculture studies, which still follow the brilliant statistical thinking of a couple of Scottish statisticians who understood chaos before it had that name).
Machines are very useful. So is mechanistic science theory, which gives us a hammer model for breaking apart the "how does it work" questions. But assuming that we can divorce the machine and the model from the complexity of the universe leads to bad jokes and real costs. AI is, after all, a complex mechanical machine.
The Ptolemaic mental model of the solar system was used as a pretext for centuries of religious warfare.
Entrenched interests fought wars to keep it, according to the first episode of James Burke's "The Day the Universe Changed".
Why? Because it fed a narrative of an unchanging "natural order".
Will today's mental model of AI feed an equally self-serving set of narratives?
That's a good analogy.
It even extends further:
In order to make technological progress, we needed to abandon the incorrect model of our solar system. We would probably not have made it to the moon if we'd stuck to the Ptolemaic model.
Similarly, in order to meaningfully advance our software ecosystem, we need to abandon code produced using poor software engineering practices – such as LLM codegen.
@[email protected] Jumping back a level of analysis or two (and therefore maybe no longer being valid): I'm thinking of the tweaks LLM masters demand their engineers make to LLM output, usually (from what I've seen) for two reasons:
To reduce antisocial behavior (e.g., LLMs producing fascist, misogynist, racist, anti-queer, etc. content, or to stop them from encouraging people to commit #suicide)
To increase the happiness of rich-people-who-own-the-LLMs (e.g., increase profit, decrease Grok saying Elon is an asshole, etc.)
The fact that both of these need to (apparently) be done regularly suggests a mismatch with "reality." Arguably, that is not objective external reality but the internal reality of the LLM vis-a-vis its constantly-updating training corpus. The combination of the LLM code and its training corpus seems to make LLMs regularly say awful things and also fail to generate maximum profit for the owners/shareholders.
I won't be the first (or 10,000th) to say there is a significant mismatch between what LLMs (currently) are and what their masters want them to do.