I hereby coin the term "Ptolemaic Code" to refer to software that appears functional but is based on a fundamentally incorrect model of the problem domain. As more code is generated by AI, the prevalence of such code is likely to increase.
1/7

#TheGeneralTheoryOfSlop

Like the ancient Ptolemaic model of the solar system, which compensated for its fundamentally incorrect Earth-centred cosmology by adding complex "epicycles" to force the model to match observed reality,
2/7
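A minimal sketch of what "Ptolemaic Code" might look like in practice (not from the thread; every name and rule here is invented): a module built on the wrong core model "every month has 30 days", kept matching observations by stacking special-case corrections instead of fixing the model.

```python
# Hypothetical "Ptolemaic Code": the core model is wrong, and each
# observed failure gets its own "epicycle" patch instead of a rethink.
def days_in_month(year: int, month: int) -> int:
    days = 30                              # incorrect core model: every month has 30 days
    if month in (1, 3, 5, 7, 8, 10, 12):
        days += 1                          # epicycle 1: some months were observed to run long
    if month == 2:
        days -= 2                          # epicycle 2: February kept breaking in QA
        if year % 4 == 0:
            days += 1                      # epicycle 3: a leap-year bug report
            # still wrong for 1900, 2100, ...: the next epicycle is already due
    return days
```

Every patch matches an observation, all the tests pass, and the underlying model remains wrong.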

@[email protected] Jumping back a level of analysis or two (and therefore maybe no longer valid): I'm thinking of the tweaks LLM masters demand their engineers make to LLM output, usually (from what I've seen) for two reasons:

1. To reduce antisocial behavior (e.g., to stop LLMs from producing fascist, misogynist, racist, anti-queer, etc. content, or from encouraging people to commit #suicide)
2. To increase the happiness of rich-people-who-own-the-LLMs (e.g., increase profit, decrease Grok saying Elon is an asshole, etc.)

The fact that both of these apparently need to be done regularly suggests a mismatch with "reality." Arguably, that is not objective external reality but the internal reality of the LLM vis-à-vis its constantly updating training corpus. The combination of the LLM code and its training corpus seems to make LLMs regularly say awful things and also fail to generate maximum profit for the owners/shareholders.

I won't be the first (or 10,000th) to say there is a significant mismatch between what LLMs (currently) are and what their masters want them to do.
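Neither post shows code, but the pattern the reply describes (post-hoc tweaks bolted onto model output rather than changes to the model itself) has a recognizable shape. A minimal sketch under that assumption, with every name, pattern, and rule invented for illustration:

```python
import re

# Hypothetical post-hoc "epicycles" applied to a model's output instead of
# fixing the model itself. Every rule here is invented for illustration.
OUTPUT_PATCHES = [
    (re.compile(r"elon is an asshole", re.IGNORECASE),
     "[redacted]"),                            # owner-happiness epicycle
    (re.compile(r"\bkill yourself\b", re.IGNORECASE),
     "please contact a crisis helpline"),      # harm-reduction epicycle
]

def patched_generate(model_generate, prompt: str) -> str:
    """Wrap the raw model with output filters; each newly observed bad
    output tends to add another entry to OUTPUT_PATCHES."""
    text = model_generate(prompt)
    for pattern, replacement in OUTPUT_PATCHES:
        text = pattern.sub(replacement, text)
    return text
```

The model keeps producing the unwanted output; the patch table just grows to keep observations in line, which is the epicycle dynamic the thread describes.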