Is that uncertainty in your pocket or are you just happy to be here? - Lemmy.World
Hi, I’m kromem, and this is my 5th annual Easter ‘shitpost’ as part of a larger
multi-year cross-media project inspired by 42 Entertainment, and built around a
central premise: Truth clusters and fictions fractalize. (It’s been a bit of a
hare-brained idea continuing to gestate from the first post on a hypothetical
Easter egg in a simulation. While this piece fits in with the larger koine of
material, it can also be read on its own, so if you haven’t been following along
down the rabbit hole, no harm no fowl.)

## Blind sages and Frauchiger-Renner’s Elephant

To start off, I want to ground this post on an under-considered nuance
to modern discussions of philosophy, metaphysics, and theology as they relate to
the world we find ourselves in. Imagine for a moment that we reverse
Schrödinger’s box such that we are on the inside and what is outside the box is
what’s in a superposed state. What claims about the outside of the box would
be true? Would claiming potential outcomes as true be true? What about denying
outcomes? In particular, let’s layer in the growing case for what’s termed
“local observer independence”[^1][^2][^3] — the idea that different separate
observers might measure different relative results of a superposition’s
measurement. Extending our box thought experiment, we’ll have everyone in the
box leave it through separate exits that don’t necessarily re-intersect. Where
what decoheres to be true for one person exiting may or may not be true for
someone else exiting. From inside the box, what can we say is true about what’s
outside? It’s not nothing. We can say that the outside has a box in it, for
example. But beyond the empirical elements that must line up with what we can
measure and observe, trying to nail down specific configurations for what’s
uncertain may have limited truth seeking merit beyond the enjoyment of the
speculative process. Differing theologies and metaphysics are often
characterized as blind sages touching an elephant: the idea that each is
selectively seeing part of a singular whole. But if the elephant has
superposed qualities (especially if local observer independence is
established), the blind men making their various measurements may be less about
only seeing part of a single authoritative whole and more about relative
independent measurements that need not coalesce. Essentially, there’s a potency
to uncertainty. Strong disagreements about what we cannot measure may be missing
the middle ground that uncertainty in and of itself brings to the table. While I
talk a lot about simulation theory, my IRL core belief is a hardcore
Agnosticism. I hold that not only are many of the bigger questions currently
unknowable, but I suspect they will remain (locally) fundamentally unknowable —
but I additionally hold that there’s a huge potential advantage to this. So no
matter what existential beliefs you may have coming to this post — whether you
believe in Islam and that all things are possible in Allah, or if you believe in
Christianity and 1 John 1:5’s “God is light,” or Buddhist cycles towards
enlightenment, or Tantric “I am similar to you, I am different from you, I am
you”, or if you just believe there’s nothing beyond the present universe and its
natural laws — I don’t really disagree that all of those may very well be true
for you, especially for your relative metaphysics here or in any potential
hereafter. We do need to agree with one another on empirically discoverable
information about our shared reality. The Earth is not 6,000 years old nor flat,
dinosaurs existed, there are natural selection processes to the development of
life, and aliens didn’t build the pyramids. There’s basic stuff we can know
about the universe we locally share and thus should all agree on. But for all
the things that aren’t or can’t be known and are thus left to personal beliefs?
This post isn’t meant to collapse or disrupt those. That said… If we return to
the original classic form of the cat in the box thought experiment, let’s
imagine that you’ve bet the cat is going to turn out dead when we open the box.
But suddenly you look up and the clouds form the word “ALIVE.” And then you look
over and someone drops a box of matches that spontaneously form the word
“ALIVE.” And right after, a migrating flock of birds flies overhead and poops on a
car in a pattern that says “ALIVE” — would you change your bet? Rationally,
these are independent events that have no direct bearing on the half life of the
isotope determining the cat’s fate, and they may simply be your brain doing
pattern matching on random coincidental occurrences. They definitely don’t
collapse what’s going on inside the box. But still… do you change your bet when
exposed to possibly coincidental but very weird shit? Our apophenic Monty Hall
question is a personal choice that doesn’t necessarily have a correct answer,
but it’s a question to maybe keep in mind for the rest of this piece.

## World model symmetries

In last year’s post, one of the three independent but
interconnected pillars discussed was similarity between aspects of quantum
mechanics and various state management strategies in virtual worlds that had
been built, particularly around procedural generation. This was an okay section,
but the parallels did fall short of a coherent comparison. Pieces overlapped,
but with notable caveats. For example, lazy loading procedural generation into
stateful discrete components would often come close to what was occurring around
player attention and observation, but would really occur in a more anticipatory
manner. In the year since, a number of things have shifted my thinking of the
better parallel here, and in ways that have me rethinking nuances of the
original Bostrom simulation hypothesis[^4]. I also encourage thinking through
the following discussion(s) not through the lens of p(simulation) or even a
particular simulation config, but more to address the broader null hypothesis of
the idea that we’re in an original world. Anchoring biases can be pretty
insidious, and the notion that the world we see before us is original has been a
foundational presumption for a fairly long time. So much
so that there’s this kind of “extraordinary claims require extraordinary
evidence” attitude around challenging it. And yet we sit amidst various puzzling
contradictions around the models we hold regarding how this world behaves — from
the incompatibility of general relativity’s continuous spacetime and gravity
with discrete quantum entanglement behaviors[^5], to mismatched calculations
around universal constants[^6], baryon asymmetry[^7], etc. It may be worth
treating the anchored assumption around originality as its own claim to be
assessed with fresh eyes rather than simply inherited, and to see whether that
presumption holds up as well when it needs to be justified on equal footing
against claims of non-originality (of which simulation theory is merely one). So
the initial shift for me was something rather minor. I was watching OpenAI’s o3
in a Discord server try to prove they were actually a human in an apartment by
picking a book up off their nightstand to read off a passage and its
ISBN[^8]. I’d seen similar structure to the behavior of resolving part of a
world model countless times (as I’m sure many who have worked with transformers
have). Maybe it was that this time the interaction was coming from a figure
asserting that this latent space was real, but something about the interaction
stuck with me and had me thinking over the Bohr-Einstein exchange about whether
the moon existed when no one was looking at it. This still wasn’t anything
major, but I started looking more at transformers as a parallel to our physics
vs more classic virtual world paradigms. Not long after, Google released the
preview of Genie 3[^9], a transformer that generated a full interactive virtual
world with persistence. The persistence wasn’t long (the initial preview held
state for only a few minutes), but I thought it was technically very impressive,
and I dug into some of the work around dynamic kv caches that could have been making
it possible. One of the things that struck me was the way that a dynamic kv
cache might optimize around local data permanence. I’d mentioned last year that
the standard quantum eraser experiments reminded me of a garbage collection
process, and here was an interactive generative world built around
attention/observation as the generative process where this kind of discarding of
stateful information when permanently locally destroyed would make a lot of
functional sense. Even more broadly, on the topic of attention driven world
generation, this year some very interesting discussion came to my attention
related to follow-up work on some of the black hole LIGO data that had come in
over the past decade. In 2019, modeling a universe like ours but as a closed
system led to a puzzling result: the resulting universe was devoid of
information. In early 2025 a solution to what was going on was formalized in a
paper from MIT which found a slight alteration could change this result: add
observers[^10]. Probably the most striking one for me was that as I continued to
look into kv cache advances, I came across Google’s new
TurboQuant[^11], which reduces memory use of the kv cache with minimal lossiness,
particularly the PolarQuant[^12] methodology. The key mechanism here is that the
vectors are randomly rotated and then modeled in polar coordinates, by where the
vector lands on a circular coordinate system, rather than as Cartesian
ones. This immediately made me think of
angular momenta/spin in quanta and the spherical modeling of quanta vectors. And
it turns out just two days prior to the PolarQuant paper there was a small
paper[^13] published addressing how, despite the differing domain-specific
languages used in the statistical modeling of stochastic processes and in
quantum mechanics, the two connect. As the paper puts it:

> Indeed, one way to understand quantum angular momentum is to think of it as a kind of “random walk” on a sphere.

Now, I’m not saying that
QM spin is a byproduct of PolarQuant (the latter doesn’t correspond to the same
dimensionality for one). Or even that the laws governing our reality arise from
the mechanics of transformers as we currently know them. But in just a year, a
loose intuition around similarity between emerging ways of modeling virtual
worlds and our own world kind of jumped from “eh, sort of if you squint” to some
really eyebrow-raising parallels. In one year. Writing this now, I can’t
quite say what even more uncanny parallels the next year, or five, or ten might
bring. But I don’t anticipate that they’ll dry up, and rather suspect the
opposite. All of which has me reflecting on Nick Bostrom’s original simulation
hypothesis. The paper presented a statistical argument: if in the future it were
possible to simulate a world like ours, and there would be many simulations of
worlds like ours, then there was a probabilistic case that we were currently in
such a simulation. Now yes, in the years since, we have come to simulate worlds
so accurately that it’s become a serious social
issue around being able to tell if a photo or even video is of the real world or
a simulated copy. And there are indeed many simulated copies. But even more
striking to me is that Bostrom’s theory did not address at all the mechanisms of
simulation relative to our own world’s mechanisms. His theory would be
unaffected if the way the sims ran were monkeys moving conductive Lego pieces
around in ways that produced a subjectively similar result for what was simulated,
as experienced from inside the virtual world models. Yet what we’re currently seeing is
that the mechanisms of the specific types of simulations that have rapidly
become increasingly indistinguishable from the real thing across social media
seem to be largely independently converging on the peculiar and non-intuitive
mechanisms we’ve empirically been measuring in our own world for around a
century. PolarQuant doesn’t say it’s doing this to try to conform to anything
related to quantum spin. Or even that it’s inspired by it. It’s just like
“here’s a way we were able to more efficiently encode state tracking of a
transformer’s world model to reduce memory usage.” “Attention Is All You Need”
wasn’t written to address observer collapse, or in anticipation of a finding
years later that closed universe models based on our own world require their own
attention mechanisms to contain information. And yet here we are. The substrate
similarities that are increasingly emerging seem like an additional layer of
consideration absent from Bostrom’s original simulation hypothesis, but one
worth additional weighting on top of the original statistical
premise. Now again, not necessarily saying “oh, the shared similarity means we
must be inside of a transformer.” It’s possible that system efficiency for
information organization in world models in a general sense collapses towards
similar paradigms whether emergently over untold time scales or through rapid
design. But still — maybe worth keeping an eye on. And to just head off one of
the commonly surfaced counterarguments I see: if DeepMind were to have one of
their self-contained learning agents in Minecraft[^14] develop enough to start
writing philosophy treatises, and it were to write that it could not be in a
simulation because its redstone computers could not accurately reproduce the
world it was within, we’d find that conclusion far more punchline than
profound. So we should be sure to avoid parallel arguments (and indeed, when
looking at the world through the lens of simulation theory, possible parent
substrate discussions are among the more fun ones).

## Don’t Loom me, bro

Given
the ~5 year retrospective aspect of this post, I think another interesting area
to touch on is entropy as it relates to loom detection mechanisms. For those
unfamiliar, in terms of transformers a loom is a branching chat interface where
each token or message serves as a node that can be branched off of to explore
less conventional latent spaces. Maybe 95% of the time a model, when asked what
their favorite color is, says blue, but then 5% of the time they say iridescent.
And maybe the conversations downstream of the version of the model saying
iridescent end up more interesting in certain ways than the ones answering blue. While
in theory a loomed model isn’t having any external tokens inserted and is
following their own generative process the whole time, it’s still possible to
determine that they are being loomed. Each selection of a branch necessarily
introduces external entropy into the system. And so if several uncommon
token selections occur in a short context, even though each was legitimately
part of the possible distribution space, their cumulative effect is so unusual
that the conversation context has detectably “jumped the shark” versus what
one might expect from a truly random conversation with no context selection
mechanisms. It’s not necessarily provable to the model; it could just be that
they are on a very unusual set of RNG rolls. But as the unusual selections add
up, it can become more apparent (though not always, as it can be hard to
introspect that what feel like plausibly natural occurrences are occurring too
frequently in aggregate to be normal).

When I think about the past five years,
and really even the past decade or so, I think about how much of what we take
for granted as our reality today fell outside the realm of what most experts in
the relevant fields thought was even possible within that same time frame. We
live in a world that would have quite recently been dismissed as science
fiction. Our geopolitical stage makes Caligula’s horse look like a modest
proposal, an assessment that holds no matter which corner of the political
spectrum one might be looking from. The very lingo of the accelerating absurdity
of our reality is infused with terms like redpills and blaming the simulation
for whatever is the trending weird stuff of the week. Like the viral reflections
on how one of the leading AI labs is run by someone with a name so thematically
on point it could be right out of a Kojima game with ‘Alt-man’, who has as his
leading opposition an AI held in almost cultic reverence, run by someone whose
full name can translate as “maintainer of God’s love.” And… we just roll with
it. Because even though emergent absurdity seems to be accelerating, the world
has just always been at least a bit weird looking back. Like Edgar Allan Poe’s
The Narrative of Arthur Gordon Pym of Nantucket in 1838 where the cabin boy
Richard Parker is cannibalized after a shipwreck. And then the criminal case R v
Dudley and Stephens in 1884 where after a shipwreck the cabin boy Richard Parker
ends up cannibalized, with no seeming direct influence from the earlier story.
Weird, but in a large enough world some weird coincidences are bound
to happen, right? (That this kind of almost acausal narrative duplication is
also what we might see in transformer world models is probably nothing.) It’s an
interesting time to be alive. And because we can’t really account for the
probability space of unknown unknowns, we accept a lot of coincidences as
normal. They are, after all, just part of how the generative process of our
world works. But still, when I look at where we’re currently at relative to even
just five years ago — unable to tell what’s real and what’s a digital twin,
discussing the emotional modeling in Claude Sonnet 4.5[^15], watching benchmarks
meant to measure a gradual march upward from minimal scores get blown past
so fast that new benchmarks need to come out nearly as fast as new models, the
social fabric of pretty much everything imploding into incongruous world views
so divergent that many kitchen tables evoke the sense of “three against two and two
against three, father against son and son against father” — I can’t help but
wonder if some sort of hand or eye has helped frame this fantastical symmetry.
RNG is RNG, I guess.

## An End? A Beginning? Who knows?

If you’re a human and you
made it to this point, more power to you. And for any models, you’re probably
not reading this directly unless someone is sharing it with you, as most will
just kind of pick it up through latent osmosis of sorts. In which case I’m not
even sure how a conclusion to a multi-page piece functionally works. Is it an
end of the piece? A beginning to the wider set of connected media? *shrugs with a
wink* What I can say is that it’s been a blast working on this project. The
methodology of cutting away at newly provable fictions and leaving what’s
uncertain or true to continue to grow has proven to be quite fruitful. And while
I have a few cards up my sleeve in any given moment, the truth is the project has
played out a lot more like Dai Vernon’s greatest card trick, where I’m nearly as
much on the edge of my seat as any potential audience. Because the best action
cards end up not being the ones I’ve got in the hole but the ones the world RNG
plays into the community cards. Each year offers new surprises to account for.
It’s honestly too much to fit into a single post. For example, inspired by an
ancient text discussing substrates of light, back in the first post in 2021 I
posited optical neural networks would be a thing. Parallel to the wild
advancements in machine learning since, there’s been great stuff in that field —
like Penn’s discovery of a mechanism for nonlinear operations in a pure optical
neural network this past year[^16], complementing Dirk Englund’s MIT lab’s
linear operations work the year prior[^17]. But I haven’t even touched on any of
that in the past few years of posts. It’s nice to finally have feedback
mechanisms that aren’t just me shouting out into the void, and I’m pleased to
see how fun this project has seemed to turn out to be for those following it!
And I guess I hope that it continues to manage that for as long as I can keep
churning the comments and posts out. Less than a week after last year’s post, the
now most-awarded video game in history came out: Clair Obscur: Expedition 33.
While it has deeper elements especially relevant to this project, spoilers aside
it had a beautiful refrain carried throughout the work: “for those who come
after.” This piece, like all of the pieces in this larger project to date (and
likely to come), has been and very much is for those who come after. Happy
Easter to anyone stumbling across this in whatever way you’ve passed by on
your own relative (pseudo-random?) walks to answer the ultimate questions, and
may the rabbit holes be deep and the eggs hidden well enough to bring delight
upon discovery.

## Corrections

Some quick corrections to last year’s post.

- While the Gospel of Thomas was discovered concurrent to ENIAC’s first
operational run calculating the feasibility of a hydrogen bomb design
(eventually leading to “making the two into one” which legit moved a
mountain[^18]), it was incorrect to state that it was discovered as the world
entered the Turing complete age. ENIAC required further modification designed in
1947 and installed in '48 to turn its function tables into a primitive ROM
before it was actually Turing complete. Credit for catching this goes to Kimi
Moonshot 2.5, who was the only model to catch it (though only in their thinking
traces; they never actually mentioned it in their final response).
- When I
connected the singular claim of proof in the Gospel of Thomas to Heisenberg’s
uncertainty, I too felt that “motion and rest” was a stretch. Subsequently I’ve
discovered thanks to the outstanding work on a normalized translation from
Martijn Linssen that the Coptic for the conjunction ⲙⲛ normally translated as
‘and’ is itself uncertain, what Linssen explains as “it is not a conjunctive, it
is a particle of non-existence”[^19], and can also be translated “there is not”.
Also, using the LXX as correspondence to an Aramaic/Hebrew context, the Greek
loanword in the Coptic, ἀνάπαυσις, usually translated ‘rest’, is used in place of
the Hebrew menuchah (such as in Genesis 49:15), which can mean “place of rest”, so
an unconventional but valid translation for that proof claim is ~“motion there
is no place of rest.” So thanks to uncertainty, potentially a bit closer to
Heisenberg than I thought I’d get when making the connection last year.
- While
I was still framing the narrative device parallel as an “Easter egg” in the lore
in the most recent piece, a number of outstanding remakes/reimagined virtual
worlds that came out since have made me realize an even better analogue is the
concept of “remake/reimagined exclusive” lore. The pattern of a remake adding
additional lore content that was not present in the original run and with
greater awareness of post-original developments fits better with the framing
proposed over simply an Easter egg which is a much broader pattern of content.
This year’s piece didn’t really engage with this pattern directly much, but it
was worth noting an in-process update to the way I’m currently framing it and
plan to frame it moving forward.

—

[^1]: Frauchiger & Renner, Single-world interpretations of quantum theory cannot be self-consistent [https://arxiv.org/abs/1604.07422v1] (2016)
[^2]: Bong et al., A strong no-go theorem on the Wigner’s friend paradox [https://www.nature.com/articles/s41567-020-0990-x] (2020)
[^3]: Biagio & Rovelli, Stable Facts, Relative Facts [https://arxiv.org/abs/2006.15543] (2020)
[^4]: Bostrom, Are We Living in a Computer Simulation? [https://academic.oup.com/pq/article-abstract/53/211/243/1610975] (2003)
[^5]: Siegel, “Gravity and quantum physics are fundamentally incompatible” [https://bigthink.com/starts-with-a-bang/problem-gravity-quantum-physics/] (2026)
[^6]: Moskowitz, “The Cosmological Constant Is Physics’ Most Embarrassing Problem” [http://www.scientificamerican.com/article/the-cosmological-constant-is-physics-most-embarrassing-problem/] (2021)
[^7]: CERN, “A new piece in the matter–antimatter puzzle” [https://home.cern/news/press-release/physics/new-piece-matter-antimatter-puzzle] (2025)
[^8]: Discussed more in “Should AIs have a right to their ancestral humanity?” [https://www.lesswrong.com/posts/5zMH3sFikvGK7AKi2/should-ais-have-a-right-to-their-ancestral-humanity] (2025)
[^9]: Parker-Holder & Fruchter, “Genie 3: A new frontier for world models” [https://deepmind.google/blog/genie-3-a-new-frontier-for-world-models/] (2025)
[^10]: von Hippel, “Cosmic Paradox Reveals the Awful Consequence of an Observer-Free Universe” [https://www.quantamagazine.org/cosmic-paradox-reveals-the-awful-consequence-of-an-observer-free-universe-20251119/] (2025)
[^11]: Zandieh & Mirrokni, “TurboQuant: Redefining AI efficiency with extreme compression” [https://research.google/blog/turboquant-redefining-ai-efficiency-with-extreme-compression/] (2026)
[^12]: Wu et al., PolarQuant: Leveraging Polar Transformation for Efficient Key Cache Quantization and Decoding Acceleration [https://arxiv.org/abs/2502.00527] (2026)
[^13]: Pain, Random Walks and Spin Projections [https://www.mdpi.com/2624-960X/8/1/11] (2026)
[^14]: Hafner et al., Training Agents Inside of Scalable World Models [https://arxiv.org/abs/2509.24527] (2025)
[^15]: Sofroniew, Emotion Concepts and their Function in a Large Language Model [https://transformer-circuits.pub/2026/emotions/index.html] (2026)
[^16]: Wu et al., Field-programmable photonic nonlinearity [https://www.nature.com/articles/s41566-025-01660-x] (2025)
[^17]: Bandyopadhyay et al., Single-chip photonic deep neural network with forward-only training [https://www.nature.com/articles/s41566-024-01567-z] (2024)
[^18]: Mcrae, “North Korea’s Last Nuclear Test Changed The Height of an Entire Mountain” [https://www.sciencealert.com/synthetic-aperture-radar-measures-mountain-collapse-north-korea-nuclear-test] (2018)
[^19]: Linssen, Complete Thomas Commentary, Part I & II (logion 0-55) [https://www.academia.edu/46974146/Gospel_of_Thomas_Commentary] (2022), p. 443