I hereby coin the term "Ptolemaic Code" to refer to software that appears functional but is based on a fundamentally incorrect model of the problem domain. As more code is generated by AI, the prevalence of such code is likely to increase.
1/7

#TheGeneralTheoryOfSlop

Like the ancient Ptolemaic model of the solar system, which tried to overcome its fundamentally incorrect, Earth-centred understanding by adding complex "epicycles" to force its predictions to match observed reality,
2/7

this code passes all its tests and satisfies its specifications, yet is built on fundamentally flawed logic.

AI code generation, which relies on examples, will likely produce significant amounts of Ptolemaic Code.
3/7

It attempts to fit solutions from potentially very different domains together with corrective code "epicycles" until it satisfies its context, the user, and tests.
4/7
This is a trap: while the code works within its parameters, its internal model is incoherent. When it inevitably fails, the incoherent basis for debugging or correction will lead to additional 'epicycles' being added. This process increases system complexity and brittleness.
5/7
The Ptolemaic model of the solar system, though completely incorrect, allowed sailors to navigate the globe for centuries. Its failure was not practical but explanatory, similar to how Ptolemaic Code works until deeper issues arise.
6/7
Ptolemaic code works, but when it breaks, it can only be patched, not made correct.
7/7
@dk but more epicycles means more lines of code which means more productivity which means better "programmers" which means "better" software, more money, etc, etc...
@eruwero value adding epicycles!

@dk

This is exactly how I have described the limitations of black box models to others.

A good scientific model should do two things:

1) Provide accurate predictions of outcomes given certain inputs, and
2) Enable greater understanding of how a system works.

Simpler machine learning models, like logistic regression or decision trees, can sometimes do both, at least for simpler phenomena. The models are explainable and their decisions are interpretable. For those reasons among others, applied machine learning researchers still use these simpler approaches wherever they can be made to work.

But in our haste to increase accuracy for more complex phenomena, we've created models that merely provide semi-accurate predictions at the expense of explainability and interpretability. Like the Ptolemaic model of the solar system, these models mostly work well in predicting outcomes within the narrow areas in which they've been trained. But they do absolutely nothing to enable understanding of the underlying phenomenon. Or worse, they mislead us into fundamentally wrong understandings. And because their training is overfit onto the limits of their training data, their accuracy falls apart unpredictably when used for tasks outside the distribution of their training. Computational linguists and other experts who might celebrate these models instead lament the benighted ignorance left in their wake.

Or how it was more eloquently stated in the great philosophical film Billy Madison:

"Mr. Madison, what you've just said is one of the most insanely idiotic things I have ever heard. At no point in your rambling, incoherent response were you even close to anything that could be considered a rational thought. Everyone in this room is now dumber for having listened to it. I award you no points, and may God have mercy on your soul."

@DaveMWilburn @dk

#pluralistic describes the technical debt of these AI coding models as asbestos in the walls.

A hazard we'll be digging out of the walls for decades to come.

It remains a fact that when petrostate despots are this desperate to force user adoption, alarm bells should be ringing. Fossil-fuel-funded cyberwarfare.

https://www.reuters.com/technology/artificial-intelligencer-trump-mbs-meeting-brings-ai-money-2025-11-20/

https://fortune.com/2025/11/20/saudi-visit-kennedy-center-trump-mbs-huang-musk-1-trillion/

When anti-democracy billionaires are spending this kind of cash on a boondoggle...
https://www.forbes.com/sites/mattdurot/2025/07/17/bill-gates-charles-koch-and-three-other-billionaires-are-giving-1-billion-to-enhance-economic-mobility-in-the-us/

@Npars01 @DaveMWilburn yeah, I've used the technical debt point many times. @pluralistic is right

@dk @DaveMWilburn @pluralistic

How hard will billionaires work to impose the AI world view on the globe?

To the same degree as religious zealots?
Centuries of dispute over irrelevant issues like "how many angels can dance on the head of a pin?"

A worldview that says human expertise is dead.

The worldview that says "Democracy is dead. The CEO of Koch Industries & Palantir own you."

The world view of "might makes right" and "power & wealth has no goal, only its self-perpetuation".

1/

2/

Ptolemy launched a global practice of merging serfdom, godhood, & government; the pharaonic worldview of Thiel should be a major red flag.

The billionaires behind AI have no vision for its future aside from technofeudalism and mass immiseration policies.

Power for the sake of power.
https://www.counterpunch.org/2025/11/07/make-aristocracy-great-again-lost-roots-of-techno-feudalism/

https://www.politico.com/news/magazine/2025/01/30/curtis-yarvins-ideas-00201552

No one wants an AI that promotes obsolete ideas like the divine right of kings, God-King theocratic kleptocracy, and "gods, grain, & government" dictatorship.


@Npars01 @dk @DaveMWilburn @pluralistic

Easier to unplug the computer you can't afford which you only use to look at their products you can't afford while shivering with your energy turned off. Their world vision eliminates the consumer economy which is the only thing that keeps them rich.

@DaveMWilburn @dk Great quote. I’ll use it (plenty of opportunities, sadly).

@DaveMWilburn @dk

Excellent points. This concept also effectively addresses physical mechanical systems. It brings to mind my attempts to make inexperienced engineers understand that no machine build will run with the same output as another "identical" copy. When you build a process for a specified output, no matter how precisely it matches the original, it needs to be flexible/adjustable enough to be managed by the operator. Thus you build in the capability to take measurements of critical output parameters, and the capability to adjust those parameters to the precision, and within the variability, that the output specification requires. Setting up and running maker lines, for example a multilayer thin film line, requires skill and a degree of artistry that management rarely gives engineers and operators credit for. No machine has ever run to product quality spec with the same adjustments as the machine next to it, or the same adjustments as last month.

Unfortunately, popular western science and engineering university courses lead one to believe that all variables are controllable or inconsequential, and produce people who believe they can reproduce complexity absolutely (except in experimental agriculture studies, which still follow the brilliant statistical thinking of a couple of Scottish statisticians who understood chaos before it was named that).

Machines are very useful. So is mechanistic science theory, which gives us a hammer model to break apart the "how does it work" questions. But assuming that we can divorce the machine and the model from the complexity of the universe leads to bad jokes and costs. AI is, after all, a complex mechanical machine.

@dk

The Ptolemaic mental model of the solar system was used as a pretext for centuries of religious warfare.

Entrenched interests fought wars to keep it according to James Burke's 1st episode "The Day the Universe Changed".

Why? Because it fed a narrative of an unchanging "natural order".

Will today's mental model of AI feed an equally self-serving set of narratives?

@dk As an astronomer myself, I cannot thank you enough for the analogy with astronomy you just made. Brilliant.
@dk And the chatbot keeps tweaking the wrong epicycles whenever asked for an adjustment somewhere.
@dk This way of doing things happens not only in coding (as in the example above) but in many disciplines, including science itself (when theoretical knowledge is incorrect or not fully understood, small empirical models are established until a bigger paradigm explains everything naturally), and also in industry. For example, in the US, cows are fed industrial corn in feedlots instead of natural grasslands. This causes an increase in the Escherichia coli bacteria that naturally exist in the cow's rumen and end up in ground burger meat. Industry invested in chemists and biochemists to develop a chemical process, using ammonia and other substances, that kills the E. coli bacteria but also leaves the meat without color (grey). So another process is developed to artificially color the meat back... Of course, everything could be fixed by allowing cows to eat in grasslands, because the cellulose of the grass naturally controls the level of E. coli bacteria. Artificial epicycles for fixing something fundamentally wrong that should be fixed easily.
@dk It's like overfitting a high degree polynomial to some data - each new data point requires a significant change to the overfitted formula (for software, substitute "use case" for "data point").
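That overfitting failure mode is easy to demonstrate. Below is a minimal sketch (data, numbers, and function names all invented for illustration), using exact-fit Lagrange interpolation as a stand-in for an overfitted model: one extra noisy data point reshapes the prediction far away from the data.

```javascript
// Evaluate the Lagrange polynomial that passes exactly through `pts` at x.
// An exact-degree fit through n points is the purest form of overfitting.
function interp(pts, x) {
  let y = 0;
  for (let i = 0; i < pts.length; i++) {
    let term = pts[i][1];
    for (let j = 0; j < pts.length; j++) {
      if (j !== i) term *= (x - pts[j][0]) / (pts[i][0] - pts[j][0]);
    }
    y += term;
  }
  return y;
}

// Underlying truth is y = x; each sample carries a little noise.
const noisy = [[0, 0.1], [1, 0.9], [2, 2.1], [3, 2.9], [4, 4.1]];
const before = interp(noisy, 5); // predict past the data: ~8.1 (truth: 5)

noisy.push([4.5, 4.0]); // one more noisy "use case"
const after = interp(noisy, 5); // now ~1.85 -- the whole formula shifted

console.log(before.toFixed(2), after.toFixed(2));
```

A model with the right shape (here, a straight line) would barely move; the exact-fit polynomial's prediction at x = 5 swings by more than 6 because every coefficient depends on every point.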
@dk I’m also thinking about how Ptolemy was foisted, unwanted, upon the people of Egypt by a bro who was just going around smashing up the world and taking what he wanted.
@dk and this system lasted 14 centuries or something? pretty bleak ...

@dk

That's a good analogy.

It even extends further:

In order to make technological progress, we needed to abandon the incorrect model of our solar system. We would probably not have made it to the moon if we'd stuck to the Ptolemaic model of our solar system.

Similarly, in order to meaningfully advance our software ecosystem, we need to abandon code produced using poor software engineering practices – such as LLM codegen.

@[email protected] Jumping back a level of analysis or two (and therefore maybe no longer being valid): I'm thinking of the tweaks LLM masters demand their engineers make to LLM output, usually (from what I've seen) for two reasons:

1) To reduce antisocial behavior (e.g., LLMs producing fascist, misogynist, racist, anti-queer, etc. content, or to stop them from encouraging people to commit #suicide)
2) To increase the happiness of rich-people-who-own-the-LLMs (e.g., increase profit, decrease Grok saying Elon is an asshole, etc.)

The fact that both of these need to (apparently) be done regularly suggests a mismatch with "reality." Arguably, that is not objective external reality but the internal reality of the LLM vis-a-vis its constantly-updating training corpus. The combination of the LLM code and its training corpus seems to make LLMs regularly say awful things and also fail to generate maximum profit for the owners/shareholders.

I won't be the first (or 10,000th) to say there is a significant mismatch between what LLMs (currently) are and what their masters want them to do.

@dk gem after gem from you. Thanks!

@dk *Epicycles* would be a great term to describe code that works up to some *n* but which ultimately fails to generalize to arbitrary size.

You can always improve by adding just one more epicycle, but that never gets at the heart of the problem.

I remember some old JavaScript that allowed reflective invocation of constructors, but because the language at the time lacked *Construct* equivalents to some *Apply* operators, we had to hack around it in a way that only worked up to a certain size. Something like:
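The snippet itself isn't shown, but a minimal sketch of that style of hack might look like the following (all names invented; the real pain point was that older JavaScript had `Function.prototype.apply` but no construct-with-an-argument-array equivalent):

```javascript
// Hypothetical reconstruction: "construct with an argument array" faked
// with one case per arity. It works only up to the largest case written;
// every new arity needs another hand-added epicycle.
function constructWith(Ctor, args) {
  switch (args.length) {
    case 0: return new Ctor();
    case 1: return new Ctor(args[0]);
    case 2: return new Ctor(args[0], args[1]);
    case 3: return new Ctor(args[0], args[1], args[2]);
    default:
      throw new Error("unsupported arity: add another case");
  }
}

function Point(x, y) { this.x = x; this.y = y; }
const p = constructWith(Point, [3, 4]); // p.x === 3, p.y === 4
```

Modern JavaScript removed the need for this epicycle entirely: `Reflect.construct(Point, [3, 4])` handles any arity.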

@dk Oh, sorry, I see you mention epicycles yourself in 5/7.

@dk

I work on a 20+ year old Ptolemaic Code system.

@dk funny that… I have been calling "Ptolemaic security" the current security model since about 1998 when teaching SANS courses… I am not pleased that this has now expanded to AI and programming.

/cc @aristot73

@dk I think this prediction is on target. It is a more general syndrome that results from multiple failures to practice good software engineering. Reuse of code can be really great or a nightmare. I’d put AI-code reuse together with other common software failures in the category of faulty design. That affects everything downstream if you don’t catch it. Your formulation also highlights the failure to create complete requirements as well as the limitations of testing. It’s why we have brittle systems. I’ll wager most human-generated code of any significant complexity is “Ptolemaic” by your definition. No one has proved to my satisfaction that the current crop of AI-developed code is any better quality than what people produce, and that’s generous. But it is fast and probably costs less (initially) even if it doesn’t work right!
@dk I love the idea! That said, how is traditional software development not prone to this either? Factoring out the economic aspect (as in: the problem scaling out quickly), what do we do to prevent fundamentally incorrect models, other than verifying the resulting artifacts against more or less distant interpretations of the user's intentions, which an AI could not do?
@odrotbohm for sure, AI just makes devs more productive in Ptolemaic code production
Hi @dk, makes me think of this interview with Oswald Wiener, which is luckily still on archive.org:
https://web.archive.org/web/20160315182028/https://www.spikeartmagazine.com/de/artikel/oswald-wiener-wissenschaft-und-barbarei-gehen-sehr-gut-zusammen
Quote (translated from the German):
"Two strands of thought were the precursors to the Bioadapter. One was the notion of society as a homeostat. I noticed that cybernetics has this trait of functioning as a novelty-prevention mechanism. I also used all sorts of analogies, for instance that Copernicus had modern computers at his disposal. Then the Ptolemaic world-picture could have been carried on endlessly; after all, it was abandoned first and foremost because the epicycles kept multiplying and the calculations kept getting more complicated. But by increasing computing power, the Copernican world-picture could have been prevented. Perhaps not forever, but for 100 years. If the leap to a new quality, a different conception, happens because the contradictions can no longer be administered, the computer would be a means of prolonging the old state.
The other strand was epistemological difficulties. One can hardly overlook that we only have representations of reality in our heads, which get improved, worsened, adapted. ..."
@dk this is, to be fair, all of life
@dk I am going to use this so much in stopping people from throwing DNN classifiers at things with well known physics.

@dk
god, I have needed a word like this!

Like face generation. People either wear glasses or they don't; it's a binary attribute. But generation via diffusion starts with a continuous feature space and is acted upon by a continuous function.
This code will function in most cases but it is fundamentally incorrect.

If you take two faces, one with glasses and one without, interpolating between them will get you weird glasses melded with the face, and this is an artefact of that.

@dk It's inevitable because current LLMs copy without true understanding, and try to do what they are asked, even when they shouldn't.

@dk That's a bit unfair to Ptolemy and friends. Their model was purely descriptive. Epicycles are much like Fourier analysis: you can describe any periodic orbit with them. There is no "world model" implied. Observe, fit, extrapolate, that's all. Hardly surprising, given how little people knew about the universe beyond Earth in those days.

Building software on an incorrect domain model is much worse, because today's techies *should* know better!

@dk My background is clinical psychology. You've given me a term for something I've thought about for years: cognitive-behavioral therapy (CBT) is Ptolemaic psychotherapy. It works pretty well for several things, but it's based on a fundamentally wrong model of the relationships between human thinking, feeling, behavior, and the external stimuli influencing these. It is, however, based on a model of these things that is easily understood by clients seeking help. Huh.
@guyjantic interesting, I don't know much about psychology, but from what little I do know, Jungian stuff like "shadow work" with its fixed archetypes feels a lot more Ptolemaic than CBT, which from my limited understanding is "you are not your thoughts" type exercises. Would love to know more about your view here..
@dk @guyjantic Ditto, please say more about this

@[email protected] @wendynather Jungian psychotherapy, as interesting as it is, doesn't (AFAIK) have any serious evidence for effectiveness; I didn't study at a school focused on Freud or post-Freudian theories, so it's not my area.

CBT definitely (often) goes in the "you are not your thoughts" direction. Its core operating model, however, is a relatively direct relationship:

Stimuli (external or internal) --> Thoughts --> Feelings --> Behavior

With the possibility of some feedback loops, etc. A lot of the initial work is in identifying "automatic thoughts" triggered quickly and (at first) without the client's conscious action by stimuli (e.g., an event in the client's life, something someone says, etc.).

Example: a client suffers from anxiety and depression. Observation shows that, often, the client's spouse says neutral-sounding things that the client experiences as criticism, like "Is that the shirt you wore yesterday?" or "I finished the dishes."

There is work on identifying the problematic links between stimuli and those automatic thoughts, which lead to feelings (focusing on the negative feelings, which brought the client to therapy). Later, there is exploration of the more global, generalized beliefs that give rise to the automatic thoughts.

In the example above, perhaps a few weeks of work would lead to understanding that the client has a generalized view of themselves as worthless, leading to an automatic thought that many possibly-innocent comments are criticisms (i.e., observations of their general worthlessness), which leads to feelings of depression and anxiety.

The CBT process is more involved than I've explained here, though I think that's a good basic intro. It provides significant relief for many people with certain kinds of difficulties. However, the underlying theory is very much in conflict with what cognitive science has shown about thoughts, feelings, and behavior. Off the top of my head, we have significant research findings suggesting:

  • Thoughts often come after feelings
  • Behavioral intention (or even the behaviors themselves) frequently happen before thoughts, with the thoughts we think motivated the behavior being tacked on by our brains as something like a soothing explanation for the behavior
  • Behaviors change thoughts and feelings, often more powerfully and lastingly than thoughts and feelings change behaviors (I think this is allowed in the CBT model, but is usually framed as a minor effect)

There are a few more, I think, if I can remember them. Fundamentally, what we ask clients to do in CBT makes sense to clients and maybe to us--it is our "layperson's theory" of how thoughts, feelings, and behavior work--but it is not how they actually work. Nevertheless, the therapy is helpful for many people.

@guyjantic This was awesome, thank you.
@guyjantic @dk Interesting to see this. I've avoided CBT because, from the description, it sounded to me like the sort of power-through self-help that made my issues worse in the past. I want to be understood, not reprogrammed. And maybe I've misunderstood CBT, but that's how it sounded to me.

@superflippy @[email protected] I'm not exactly a CBT evangelist, but I do think it's a helpful process for many people. It is definitely not "power-through self help." One aspect of CBT that many clients find helpful (and I would, too) is that the therapist tends to take a collaborative, rather than mystical, role. The approach I've seen (and this is what it should be, on paper) is that the CBT specialist is a consultant to you, the scientist trying to figure out your own life. The therapist has some knowledge of processes, etc. that might be helpful. There is a lot of talk (if all goes according to the playbook) of observation and experimenting to test your assumptions and thoughts about yourself and others, and a good therapist is flexible, incorporating what you bring back from your observations into the approach taken.

I suppose it is technically self-help (there are CBT self-help books that seem pretty decent, for instance), but certainly not the "white knuckle" "just stay positive" kind.

Last, it's generally pretty short in duration, compared to previous therapies: six to twelve weeks is fairly common, I think. That's once-per-week sessions. I like the fact that if it isn't helping, you're not committed to some multi-year (or usually not even multi-month) process. Of course you're never really committed to anything--keeping people coming back to therapy is always the challenge, no matter the type of therapy--anyone can stop any time, and many do. :shrug:

@guyjantic @dk Having been on the receiving end of CBT, and not feeling much lasting effect apart from the relief of doing something, anything, I'd really like to hear more.

PS: CBT is an Italian bike brand too, and can also sometimes mean "Cock Ball Torture" — which you will now never forget. Sorry.

@liebach @dk LOL. I hadn't connected CBT with cock and ball torture until now. ha.

CBT is an interesting therapy: there was a lot of buzz about it in the late 80s and in the 90s, largely (IMO) because it was the first psychotherapy modality that showed *any* systematic benefit with *any* clients, after some very big (and, to me, pretty convincing) multi-site studies of therapies up to that point. I think maybe it was Glass and ... someone... (maybe Lambert was involved?) in the late 70s, who summarized their research about a bunch of different psychotherapies with the Alice in Wonderland phrase, "All have won and all must have prizes," because several popular therapies at the time all showed no effect relative to the multiple studies' placebo condition (which was "talking to a 'natural helper' doing active listening" which, to be fair, is a difficult condition to beat).

In other words, nothing worked like it claimed, except meds, and those worked in specific situations with frequent side effects. Psychotherapy wasn't doing well. Notably, I believe this line of research is ultimately one reason we have HMOs in charge of psychological care in the US, now. CBT actually did seem to work, mostly for a subset of depression and anxiety concerns, as well as for several issues that don't reach the "disorder" threshold (these are a lot more common than disorders, unsurprisingly).

The "experts" at the time (Freudians, Object Relations therapists and other post-Freudians, Rogerians, et al.) were vocal about hating CBT. It feels to me like they were upset that this upstart wasn't showing the Elders the Proper Respect, but often the criticism was that CBT was mind-numbingly simple. Someone with a bachelor's degree in pretty much anything can be trained in a few months to do CBT. No PhD or MD required. This simplicity, and the transparent collaborative nature of CBT, were very popular with clients who had been talking to experts of various sorts for decades and being told they could not understand the ultra-complicated techniques being used. CBT was a democratizing therapy.

The 90s saw a slew of studies showing CBT was great for lots of conditions and lots of clients. Then in the early 2000s research started to appear questioning that hegemony. I think part of the issue was that many of the CBT studies (and this is still sometimes a problem) used "rule-out" criteria so aggressive that the only people permitted to enter the therapy effectiveness studies were a small, nonrepresentative sample of the people CBT was supposed to help.

Since then, claims about CBT's effectiveness have been walked back quite a bit. It does seem to help many people, but others don't find it helpful. At this point, I'm at the end of my active reading of that literature (i.e., maybe 2005-2010) so I'm quite out of date. My sense, however, is that CBT continues to show mixed results in real-world studies, which--IMO--are the most important ones.

@dk "Ptolemaic" versus "Epicyclic"? We need a word for this, for sure.