Does reality require observers?

Amanda Gefter has an article at Nautilus that a couple of you have asked me about: Reality Exists Without Observers? Boooo! The title is an accurate summary of her thesis. Gefter is responding to a book by Vlatko Vedral, where he reportedly argues for a reality that doesn’t require observers. In terms of quantum mechanics, Vedral is an Everettian, although he seems to downplay the many-worlds aspect, focusing on the physics. He touts what’s often taken to be an advantage of the interpretation over Copenhagen: that it doesn’t require any special role for the observer.

Gefter’s stance is that this can’t be done, that any attempt to do it inevitably sneaks the observer back in. She also implies that most discussions about Niels Bohr, Werner Heisenberg, and the other thinkers behind Copenhagen strawman their positions. Since “the Copenhagen interpretation” is actually a diversity of views held by many early quantum scientists, she’s probably not entirely wrong. But arguing about what the Copenhagen interpretation is or isn’t strikes me as just as problematic.

Even if we restrict ourselves to just Bohr’s ideas, the man’s writings are infamously opaque. When trying to decipher his views, it seems possible to come away with a variety of ideas. When I’ve read about them, he comes across to me as an instrumentalist, although one who doesn’t want to admit it. Many science historians argue he was a Kantian, or maybe a neo-Kantian. Within the bounds of this discussion, I’m not sure how significant these distinctions are. What seems clear is he doesn’t take quantum theory to be telling us about reality, only our interactions with it.

When someone starts arguing for an observer-centric reality, they can mean at least a couple of different things. One is epistemic, that our knowledge of reality only comes through observers. Bohr reportedly included non-conscious observers in this. But with that stipulation, the view doesn’t seem particularly radical. Another way of saying it is that all information must come through information-gathering systems, which doesn’t sound nearly as profound.

But an observer-centric reality can also be a stronger claim, an ontic one, that says conscious observers construct reality. The strongest claim of all is that reality doesn’t exist until we consciously observe it, that the observation itself brings it into existence. This is either idealist or solipsist territory.

It’s important to understand that even under the epistemic view, observation has effects on quantum systems. We can observe a supernova twelve million light years away without affecting it. Or we can observe the flow of a river without meaningfully affecting it. In these cases, there’s already enough interaction happening between the system and its environment that we can learn from the effects of those interactions that reach us.

However, we can’t observe a quantum system without interacting with it, and interaction entangles the observer and the observed, changing both. For the quantum system, that typically results in at least decoherence, and in many interpretations, all but one of the possible outcomes disappearing.

Which one of the above is Gefter claiming? I’m not sure, and that, unfortunately, is all too common when trying to parse arguments for this view. If forced to guess, I’d say she’s agnostic on the distinction, as this discussion about the moon seems to show.

Let’s put this moon thing to rest. It’s true. We can’t say the moon is there if no one’s observing it. Neither can we say that the moon’s not there if no one’s observing it. It’s not as if the sky is empty until someone gazes upward and a moon suddenly pops into existence. It’s that we can’t say anything about the moon as an independent object, because quantum theory doesn’t grant us independent objects, only measurements that we can slice into moons.

Not that Copenhagen is Gefter’s preferred interpretation. She actually favors QBism. Historically the name meant quantum Bayesianism, in the sense that quantum theory provides degrees of credence in various outcomes. It’s a view which focuses on the subjective experience of the experimenter. Or at least that’s how I understand it.

But again, the question becomes, is this just an epistemic view, or an ontic one? It seems to depend on which QBist you ask. On the one hand, it could be seen as a straight instrumentalist approach to quantum theory, a reification of the “Shut up and calculate!” attitude. Interestingly, the originator of that phrase, David Mermin (not Richard Feynman), signed on to QBism at some point.

Many QBist proponents resist the instrumentalist label. Which in turn often leads to accusations of solipsism, which they also resist. As I noted above, this starts to sound a lot like idealism to me, although it’s not clear that’s what they mean either. In the end, most physicists seem to regard QBism as an epistemic interpretation. (I’ve already done a post on why epistemic approaches don’t work for me.)

But what about the ontic view of an observer-centric universe? If you already lean toward some form of ontological idealism, then this may well be a natural conclusion. But I don’t think there’s anything in the physics that drives it. A lot of this type of discussion seems to ignore the lessons from quantum computing, where engineers struggle with systems that decohere all the time, much earlier than they would prefer, and with nothing we’d normally call an “observer” driving it. (Unless we say the environment is the observer, but then “observer” seems to lose all distinctive meaning.)

If you think about it, this is no different from the classic double-slit experiment. If we put a polarizing filter at one of the slits, no conscious agent gets the information on which slit the particle goes through any sooner. What conscious agents do see is a change in the results on the back screen, from which they infer what happened at the slits.

So the epistemic point about observers seems valid enough. I haven’t read Vedral at length, but I’d be surprised if he disagreed. But the ontic one doesn’t seem particularly well motivated, at least unless your metaphysics already push you in that direction.

But maybe I’m missing something? Is there an in-between ground between the options I listed? Or evidence for the ontic version I’m overlooking?

#consciousness #interpretationsOfQuantumMechanics #manyWorldsInterpretation #philosophy #physics #qbism #quantumMechanics #science

By the way, if you follow that URL to qrng.anu.edu.au, according to the many-worlds interpretation, 2¹⁰²⁴ different versions of you will read 2¹⁰²⁴ different bit strings (all possible 1024-bit strings).

#physics #ManyWorldsInterpretation

If qrandom.io is too busy for you, there's also this:

https://qrng.anu.edu.au/wp-content/plugins/colours-plugin/get_block_binary.php

Just take the first bit from the random bit string to measure your quantum coin flip.

#physics #ManyWorldsInterpretation

In some portion of the universal wave function, it is Hugh Everett III's birthday.

If you're having trouble deciding whether or not to celebrate, use this quantum random number generator and do both! (Set Min to 1 and Max to 2 for the equivalent of a coin flip.)

https://qrandom.io

If you use Apple platforms, there's this fun app (not free): https://apps.apple.com/us/app/universe-splitter/id329233299

cf: https://en.wikipedia.org/wiki/Many-worlds_interpretation

#physics #ManyWorldsInterpretation

Quantum Random Number Generator, true random number and data generator based on quantum physics - qrandom.io

Easily access a quantum random number generator and generate random values for a variety of applications, including generating random numbers, arrays, strings, shuffled cards, dice rolls, and more up to 100 Mb per day. - qrandom.io

Is quantum immortality a real thing?

In discussions about the Everett interpretation of quantum mechanics, one of the concerns I often see expressed is for the perverse low probability outcomes that would exist in the quantum multiverse. For example, if every quantum outcome is reality, then in some branches of the wave function, entropy has never increased. In some branches, quantum computing doesn’t work because every attempt at it has produced the wrong result and people have concluded it doesn’t work. In other branches, you as a macroscopic object might quantum tunnel through a wall.

Of course, for enthusiasts, this comes with a hopeful aspect. Because in some branches, you would go on living indefinitely, no matter how improbable it might be. Hugh Everett himself was reportedly a believer in quantum immortality and so had little concern about the unhealthy lifestyle that led to his early demise in this branch. The idea is that if every outcome happens, then there are versions of you reading this that will live until the heat death of the universe.

This is vividly illustrated in the infamous quantum suicide thought experiment. One version described by Max Tegmark involves rigging up a gun to fire if a certain quantum event happens. Say the quantum event has a 50% chance of happening in any one second. You then put your head in front of the gun and begin the experiment. In half of all worlds where you begin the experiment, you die in the first second, but you go on living in the other half. In half of that remaining half you die in the next second, but go on living in the other half.

For you as the experimenter this goes on indefinitely with increasingly improbable outcomes leading to your survival. Of course, in virtually all worlds you leave behind grieving friends and family who are less convinced. But for you subjectively, if many-worlds is reality, you continue living until the experiment ends.
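The branch arithmetic here is simple enough to sketch. This toy calculation (my own illustration, not Tegmark’s) just computes the fraction of branches in which the experimenter is still alive after a given number of seconds:

```python
# Each second, the quantum event fires in half of all branches, so the
# fraction of branches with a surviving experimenter halves each second.
def surviving_fraction(seconds: int) -> float:
    """Fraction of branches where the experimenter is still alive."""
    return 0.5 ** seconds

for t in (1, 10, 60, 300):
    print(f"after {t:>3} s: {surviving_fraction(t):.3e}")
```

After five minutes the surviving fraction is already below 10⁻⁹⁰, which is the sense in which survival becomes increasingly improbable while never reaching zero in the model.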

(Before getting too comforted by the possibility of quantum immortality, it’s important to remember that this is more of a side-life than an afterlife. Most of the versions of you will still experience an approaching death. It’s also worth noting that a you a million years from now would likely have evolved into something utterly strange and unrecognizable to the you of today. And there’s no guarantee this ongoing existence would be pleasant. Indeed, under many-worlds, some would inevitably be hellish.)

One question that often comes up in discussions about this is whether reality allows for these infinitesimally low probability outcomes, or whether there is some inherent minimal discreteness at the base of reality that prevents it. There’s nothing in the math to indicate it, but of course the math, at least the math we have today, is a description of reality that is likely only an approximation.

However, in a recent interview with Curt Jaimungal, David Wallace, a proponent of the many-worlds interpretation, may have provided another reason to doubt these outcomes: quantum interference. (Note: if the embed doesn’t work right, the relevant remarks are at around the 1:21:41 mark. Also, you don’t have to watch the interview to understand this post, but it is an interesting discussion.)

https://www.youtube.com/watch?v=4MjNuJK5RzM&t=4901s

To understand Wallace’s point, it helps to grasp a few points about how quantum decoherence works. Decoherence is the process of a quantum particle losing its wave-like nature and becoming more particle-like. This happens because, as it interacts with the environment, the phase relations which keep the wave coherent become disrupted. The wave becomes fragmented. We call the fragments “particles”. This leads to the famous (infamous?) quantum interference effects disappearing, as shown by the double-slit experiment.

But the word “disappearing” here in reference to the interference effects should be understood to mean “become undetectable”, not that they cease to exist entirely. Under decoherence the interference never goes away entirely. Like the wave overall, it becomes fragmented, and settles into an underlying “noise”. (Note: this is actually a difference in predictions between collapse and non-collapse interpretations that should, in principle, be testable. Of course, figuring out a way to do the test is another matter.)
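As a rough illustration of that “undetectable, not gone” point (my own toy model, with an arbitrary decay timescale, not anything from Wallace), we can track the off-diagonal “coherence” term of a qubit’s density matrix, whose magnitude sets the visibility of interference. Decoherence suppresses it, so interference shrinks into the noise without ever reaching exactly zero:

```python
import math

# Toy model: a qubit in an equal superposition starts with off-diagonal
# density matrix terms of magnitude 0.5. Entanglement with the environment
# suppresses them (here, exponentially with an assumed timescale tau), so
# interference visibility decays but never becomes exactly zero.
def coherence(t: float, tau: float = 1.0) -> float:
    """Magnitude of the off-diagonal density matrix term after time t."""
    return 0.5 * math.exp(-t / tau)

for t in (0.0, 1.0, 5.0, 20.0):
    print(f"t={t:>4}: interference visibility = {2 * coherence(t):.2e}")
```

The exponential form is just the standard toy assumption; the real decay law depends on the environment.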

Wallace’s point is that infinitesimally low probability outcomes should be swamped out by this remnant interference from higher probability outcomes, meaning that they should be prevented from existing. If so, the branches where entropy never increased, where quantum computing never works, or to use his example, where he as a macroscopic object quantum tunnels through a wall, shouldn’t exist.

What does this mean for quantum immortality? I don’t know that it wipes it out entirely. Many of the initial survival scenarios may be very low probability, but not profoundly low, and so may not be swamped by interference from the other branches. But it does seem like it shortens the duration and overall makes it less certain, even once someone accepts the existence of the other worlds. So there may be versions of you reading this that live for decades or centuries beyond the normal lifespan, maybe even millennia, but probably not until the end of the universe.

Still, the implications are interesting and fun to speculate about. If there is a version of me alive in the far future, I wonder if he (it?) will remember these speculations.

What do you think of Wallace’s point? If we assume many-worlds is reality, does the idea of quantum immortality seem plausible? Or are there other reasons to doubt it?

#InterpretationsOfQuantumMechanics #ManyWorldsInterpretation #Philosophy #QuantumImmortality #QuantumMechanics

Did the Double-Slit Experiment Disprove the Many-Worlds Interpretation?

YouTube

Many-worlds without necessarily many worlds?

IAI has a brief interview of David Deutsch on his advocacy for the many-worlds interpretation of quantum mechanics. (Warning: possible paywall.) Deutsch has a history of showing little patience with other interpretations, and this interview is no different. A lot of the discussion centers around his advocacy for scientific realism, the idea that science is actually telling us about the world, rather than just providing instrumental prediction frameworks.

Quick reminder. The central mystery of quantum mechanics is that quantum systems seem to evolve as waves, superpositions of many states, with the different states interfering with each other, all tracked by a mathematical model called the wave function. But when measured, these systems behave as localized particles, with the model only able to provide probabilities for the measurement result, although the measurement results as a population show the interference patterns from the wave function. This apparent transition is often called the “wave function collapse”.

Various interpretations attempt to make sense of this situation. Many deny the reality of what the wave function models. Others accept it, but posit the wave function collapse as a real objective event. Some posit both a wave and particle existing throughout. The Everett approach rejects wave function collapse and argues that if we just keep following the mathematical model, we get decoherence and eventually the same observations. But that implies that quantum physics applies at all scales, meaning it’s not just particles in superpositions of many states, but measuring equipment, labs, people, planets, and the entire universe.

Reading Deutsch’s interview, it occurred to me that my own structural realist outlook, a more cautious take on scientific realism, is reflected in my more cautious acceptance of Everettian quantum mechanics. People like Deutsch are pretty confident that there is a quantum multiverse. I can see the reasoning steps that get them there, and I follow them, to a point. But my own view is that the other worlds remain a possibility, but far from a certainty.

I think this is because we can break apart the Everettian proposition into three questions.

1. Does the mathematical structure of quantum theory provide everything necessary to fit the current data?
2. If so, can we be confident that there won’t be new data in the future that drives theorists to make revisions or add additional variables?
3. What effect would any additions or changes have on the broader predictions of the current bare theory?

My answer to 1 is yes, with a moderately high credence, maybe around 80%. I know people like Deutsch and Sean Carroll have this much higher. (I think Carroll says his is around 95% somewhere on his podcast.) And I think they have defensible reasons for it. Experimentalists have been stress testing bare quantum theory for decades, with no sign of a physical wave function collapse, or additional (hidden) variables. Quantum computing seems to have taken it to a new level.

But there remain doubts, notably about how to explain probabilities. I personally don’t see this as that big an issue. The probabilities reflect the proportion of outcomes in the wave function. But I acknowledge that a lot of physicists do. I’m not a physicist, and very aware of the limitations of my very basic understanding of the math, so it’s entirely possible I’m missing something, which is why I’m only at 80%.

(Often when I make the point about the mathematical structures, it’s noted that there are multiple mathematical formalisms: wave mechanics, matrices, path integrals, etc. But while these are distinct mental frameworks, they reportedly always reconcile. These formalisms are equivalent, not just empirically, but mathematically. They always provide the same answer. If they didn’t, we’d see experimental physicists trying to test where they diverge. We don’t because there aren’t any divergences.)

If our answer to 1 is yes, it’s tempting to jump from that to the broader implications, the quantum multiverse. (Or one universe with a much larger ontology. Some people find that a less objectionable description.)

But then there are questions 2 and 3. I have to say no to 2. The history of science seems to show that any claim that we’ve found the final theory of anything is dubious, a point Deutsch acknowledges in the interview. All scientific theories are provisional. And we don’t know what we don’t know. And there are the gaps we do know about, such as how to bring gravity into the quantum paradigm. It seems rational to wonder what kinds of revisions they may eventually require.

Of course, 3 is difficult to answer until we get there. I do doubt any new discoveries would drive things toward the other interpretations people currently talk about, or overall be less bonkers than the current predictions. Again, given the history of science, it seems more likely they would replace the other worlds with something even stranger and more disconcerting.

So as things stand, there’s no current evidence for adding anything to the structure of raw quantum theory. That does imply other worlds, but the worlds remain untestable for the foreseeable future.

To be clear, I don’t buy that they’re forever untestable. We can’t rule out that some clever experimentalist in the future will find a way to detect interference between decohered branches, to recohere them (which has been done, but only very early in the process), or some other way we haven’t imagined yet.

My take is that the untestability of the other worlds means that Everettian quantum mechanics, in the sense of pure wave mechanics, shouldn’t be accepted because we like the worlds, or rejected because we dislike them. For now, the worlds should be irrelevant for a scientific assessment. The only question is whether anything needs to be added to the bare theory, a question, it should be noted, we can ask regardless of whether we’re being realist or antirealist about any of this.

All of which means that while my credence in austere quantum mechanics is 80%, my credence for the other worlds vacillates somewhere around 50%. In other words, I’m agnostic. This resonates with the views I’ve seen from a number of physicists, such as Stephen Hawking, Sidney Coleman, John Preskill, and most recently, Brian Cox, who accept the Everett view but downplay the other worlds. Even Sean Carroll notes in one of his AMAs that he doesn’t really care so much about the other worlds as about the physics at the core of the theory.

But maybe I’m missing something. Are the questions I raised above as easy to separate as I’m thinking? Or are there problems with pure wave mechanics I’m overlooking?

#InterpretationsOfQuantumMechanics #ManyWorldsInterpretation #Philosophy #PhilosophyOfScience #QuantumMechanics #Science

David Deutsch | There is only one interpretation of quantum mechanics

The many-worlds interpretation of quantum mechanics says that all possible outcomes of quantum measurements are physically realised in different worlds. David Deutsch explains the philosophy behind the many-worlds interpretation and argues that not only is it the best interpretation of quantum mechanics – it is the only interpretation.

IAI TV - Changing how the world thinks

I can't believe no one else mentioned it (at least in my feed) so I'll have to do it. OTD in 1930, Hugh Everett III was born (at least in this neighborhood of the wave function). He originated the Many-Worlds interpretation of Quantum Mechanics (at least in this neighborhood of the wave function). Let's all (including our alternate selves) wish him a happy 94th birthday (at least in the branches where he's still alive—unlike this one).

#QuantumMechanics #ManyWorldsInterpretation

The Everett theory of quantum mechanics is testable in ways most people don’t realize.

Before getting into how or why, I think it’s important to deal with a long-standing issue. Everettian theory is more commonly known as the “many worlds interpretation”, a name I use myself all the time. But what’s often lost in the discussion is that “world” here is a metaphor for describing something very complex. Bryce DeWitt knew what he was doing when he gave it that name. It quickly and vividly conveys an approximation of the overall idea.

But that comes at a cost. People lose track of the idea that “worlds”, or “universes” in some descriptions, are just a conceptual crutch, a metaphor. This leads them to glom onto details of the metaphor and have questions and concerns that are really more about the metaphor than the actual theory. For example, concern about whole universes “springing into being” is really an issue with the metaphor.

Chad Orzel actually wrote an article about this some years ago when discussing Sean Carroll’s book on the subject. At the time I misunderstood his point, then later thought he was wrong, that the metaphor could be clarified sufficiently. Well, I’ve come full circle and now fully agree with him. So I’m going to try to do the rest of this post without invoking the metaphor. Of course, it’s difficult to talk about stuff like this without using some metaphors, but hopefully avoiding ones loaded with baggage will help.

So what then is Hugh Everett’s theory really about? It’s about trying to understand the ontology of quantum mechanics, with a motivation, at the time it was formulated, of getting closer to a reconciliation with general relativity.

The conventional understanding in collapse interpretations is that there are two processes at play.

One is what happens for an isolated quantum system. It’s the continuous, linear, and deterministic evolution of pure wave mechanics, including interference between the various states in superposition, which is tracked by a mathematical tool called the wave function. Think of it as an accounting of everything that happens in the double-slit experiment up until the location of the particle is known.

The second is what happens on measurement. It’s an abrupt, instant, discontinuous change, where all but one of the states disappear, resulting in a particular location, spin state, or whatever is being measured. The result is random and unpredictable. The wave function can be used to derive the probability of each possible value, but not what the actual answer will be in any individual measurement. Today this is typically called the collapse of the wave function, since all but one of the states it’s tracking, and their interference effects, disappear.

The second process is very mysterious. Like any mysterious process, there are people who insist it’s just fundamental and we need to get over it. And a hard-core instrumentalist might insist it gives us what we need. But a laboratory recipe isn’t always helpful when attempting to understand the implications for gravity and cosmology. Everett wanted to get at an improved ontology.

His solution is counter-intuitive. He saw the mistake as assuming that the second process, the wave function collapse, is real, instead of just being a gap in our accounting. Everett advocated removing it from the ontology, to rely only on the first process, the continuous and deterministic evolution of the wave function. His argument is that doing so explains the same observations, but with a leaner, more parsimonious set of rules.

To see how requires a simple understanding of quantum entanglement. Consider if we have two particles, both in a superposition of spin up and spin down. We might write the state of each particle (in a very simplified manner, omitting amplitudes and other formal notation) as:

particle state = (up) + (down)

The plus sign just indicates that in a wave function, we’d add the two states together, with any overlap leading to interference. Now, what happens if we have these two particles interact in the right manner? If we do, they become correlated in certain ways, that is, entangled. (Quantum computing leans heavily on this effect.) So they now have an overall combined wave function state, an overall superposition with four elements.

combined state = (up)(up) + (up)(down) + (down)(up) + (down)(down)

If we add a third particle into the mix, we end up with eight elements in the overall superposition:

combined state = (up)(up)(up) + (down)(up)(up) + (down)(down)(up) + (down)(down)(down) + (up)(down)(up) + (up)(up)(down) + (down)(up)(down) + (up)(down)(down)

Notice that each addition to the entanglement multiplies the states of the overall entangled set by the number of states brought in by the new particle. Again, nothing controversial here. This is used heavily by quantum computing. If we conduct a measurement on any of the three entangled particles above, we see the entire group apparently collapse into just one of those eight states.
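The bookkeeping above is easy to verify mechanically. This quick sketch (labels mine) builds the combined superposition for n spin-1/2 particles and confirms the doubling with each particle:

```python
from itertools import product

# Each particle contributes two basis states (up, down); entangling n of
# them yields every combination, i.e. 2**n terms in the superposition
# (amplitudes omitted, as in the states written out above).
def combined_states(n: int) -> list:
    return list(product(("up", "down"), repeat=n))

for n in (1, 2, 3):
    print(n, "particle(s):", len(combined_states(n)), "terms")
```

Ten entangled particles already give 1,024 terms, which is why the combined state grows so quickly as more of the environment joins the entanglement.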

But Everett is saying to do away with the wave function collapse as part of the ontology. So let’s back up to just one particle again and look at this. The conventional collapse interpretation, using the second process above, looks something like this when we introduce interaction with an observer.

combined state = ( (up) + (down) )(observer)

…which collapses to…

combined state = (up)(observer-sees-up)

or

combined state = (down)(observer-sees-down)

In other words, interaction with the observer has collapsed the states down to one, either spin up or spin down. However, if we do as Everett advises and do away with the second process, then we have to depend on the first process above, the wave function dynamics, to figure out what happens. So instead, we get something like:

combined state = ( (up) + (down) )(observer)

…leading to…

combined state = (up)(observer-sees-up) + (down)(observer-sees-down)

In other words, the observer, as a quantum system themselves, has become entangled with the particle, and so their state now includes seeing the particle spin up and seeing it spin down. Each element of the observer only sees one state because both they and the particle are also entangled with the surrounding environment. (The entropic jostling from that environment fragments any wave effects and makes them very hard to detect, in a process called decoherence.)

So, under Everett, the appearance of the wave function collapse is what a quantum system looks like to an observer that just became entangled with it. In other words, collapse can be thought of as entanglement from the inside.

This implies that the observer and their environment are in a superposition of an ever increasing number of states. Again, we get this by just applying the same rules we used for the individual particles above.

You might object that using the theory for something as large and complex as an observer is a big assumption. And it would be, if it didn’t lead to the same observations as the (now discarded) second process above.
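One way to see why nothing observational is lost: the no-collapse step can be written as a simple amplitude-preserving map. This sketch (my own notation, with amplitudes made explicit, which the states above omit) shows that the interaction just correlates observer states with particle states, and that the total weight across branches stays 1:

```python
import math

# Everett's first process applied to measurement: the interaction maps
#   (a*(up) + b*(down))(observer)
# to
#   a*(up)(observer-sees-up) + b*(down)(observer-sees-down).
# Nothing is discarded; the squared amplitudes still sum to 1.
def entangle(a: float, b: float) -> dict:
    return {
        ("up", "observer-sees-up"): a,
        ("down", "observer-sees-down"): b,
    }

a = b = 1 / math.sqrt(2)  # equal superposition
branches = entangle(a, b)
total_weight = sum(amp ** 2 for amp in branches.values())
print("branches:", len(branches), "| total weight:", round(total_weight, 9))
```

A physical collapse would delete one of the two dictionary entries; Everett’s move is simply to keep both and let decoherence explain why each branch only sees its own entry.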

So what does that mean for testing Everettian theory? Remember, Everett advocates dropping the second process above for understanding the ontology, and relying only on the first. So any falsification of the first process, of pure wave mechanics, would falsify Everettian theory. This might involve discovering the right hidden variables, including any kind of an actual physical state collapse. And a successful reconciliation with general relativity could falsify it as well, particularly the proposal that just recently came out.

Everett himself also saw the other unseen states of the environment as detectable in principle, although an understanding of modern decoherence theory shows just how challenging that would be. Still, “challenging” is different from “impossible”. This could someday adjudicate between Everett and Carlo Rovelli’s relational quantum mechanics.

So, some aspects of Everettian theory, arguably the most pivotal ones, are testable. Of course, some aren’t, at least not currently, but that’s true of just about any scientific theory. Under Popperian philosophy, theories are judged by their testable predictions, not their untestable ones, nor by any metaphysical implications we may find disturbing.

Unless of course I’m missing something?

(This post is a vast simplification (probably oversimplified). If you’re interested in the gory details, check out Hugh Everett’s original thesis online, or a more contemporary synthesis in a SEP article about it that distinguishes it from many of the later many-worlds variants.)


https://selfawarepatterns.com/2024/01/14/testing-everettian-quantum-mechanics/

#InterpretationsOfQuantumMechanics #ManyWorldsInterpretation #Philosophy #PhilosophyOfScience #Physics #Quantum #QuantumMechanics #Science

Many Worlds, But Too Much Metaphor

The way physicists talk about the Many-Worlds Interpretation makes vivid use of metaphor, but introduces confusion. Really, it's just a bookkeeping trick.

Forbes

In this video, Matt O’Dowd tackles the issue of probabilities in the many-worlds interpretation of quantum mechanics.

A quick reminder. The central mystery of quantum mechanics is that quantum particles move like waves of possible outcomes that interfere with each other, until a measurement happens, when they appear to collapse to one localized outcome, the famous wave-particle duality.

This is the measurement problem, which interpretations of quantum mechanics try to solve. One of the oldest and most popular, Copenhagen, asserts that this duality is fundamental, and that further investigation is misguided. Pilot-wave posits both a particle and a wave the entire time.

Many-worlds takes the structure of quantum theory as complete: quantum physics applies to us and the environment as much as to particles, resulting in a universe that is itself a wave of all possible outcomes. We only see one outcome of the measurement because we’re the version that sees that outcome, with a version of us seeing each possible outcome.

A longstanding objection to many-worlds is how to talk about probabilities. Probabilities seem reasonable in an interpretation where there’s only one outcome. But if every outcome happens, in what sense is it meaningful to talk about the probability of any one outcome? Aren’t they all 100% probable?

This objection has never bothered me, mostly because I see probabilities as relative to an observer and their limited knowledge. That’s easier to see when looking at something like the weather forecast, where probabilities more obviously reflect our limited knowledge.

As O’Dowd explains, we can see the probabilities in many-worlds as self-locating uncertainty, a view Sean Carroll champions. In the process of explaining this, O’Dowd discusses the nature of worlds in the theory, something I’ve tried to tackle before (here and here) but mostly failed at. Maybe his card stack metaphor works better for most people.
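Here is a minimal sketch of self-locating uncertainty as I understand it (my own toy code, not O’Dowd’s or Carroll’s): after branching, an observer who knows the wave function but not which branch they’re in assigns credences equal to the normalized squared amplitudes, recovering the Born weights:

```python
# Given the amplitude of each branch, a rational observer's credence for
# "which branch am I in?" is the branch's squared amplitude (its Born
# weight), normalized so the credences sum to 1.
def branch_credences(amplitudes: dict) -> dict:
    weights = {label: abs(a) ** 2 for label, a in amplitudes.items()}
    norm = sum(weights.values())
    return {label: w / norm for label, w in weights.items()}

# An unequal split: amplitudes 0.6 and 0.8 give credences of roughly
# 0.36 and 0.64, even though both branches exist.
creds = branch_credences({"sees-up": 0.6, "sees-down": 0.8})
print(creds)
```

The point of the toy: every branch is fully real, but the probabilities still do work, as the observer’s uncertainty about their own location in the wave function.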

The video runs about 19 minutes.

PBS Space Time: Can The Measurement Problem Be Solved?

(Here’s a link to the video in case the embed doesn’t display.)

In the end, this is a devilishly difficult concept to explain, which makes the video tough to follow. It might help if you have time to watch it multiple times.

It’s worth noting that there are other proposed solutions to the probability problem. But I think this one makes the most sense, although the others aren’t necessarily wrong. It comes down to your philosophy of probability. The claims of being able to derive the Born Rule in many-worlds are controversial. But at worst the theory has to simply accept the rule as a postulate, similar to the other interpretations.

What do you think? Did O’Dowd’s approach help? If not, any thoughts on where it fumbles? Or about where the explanation itself might be wrong?

https://selfawarepatterns.com/2023/12/02/many-worlds-probabilities-and-world-stacks/

#InterpretationsOfQuantumMechanics #ManyWorldInterpretation #manyWorlds #ManyWorldsInterpretation #Physics #Quantum #QuantumMechanics #Science

The nature of splitting worlds in the Everett interpretation

This post is about an aspect of the Everett many-worlds interpretation of quantum mechanics. I’ve given brief primers of the interpretation in earlier posts (see here or here), in case you ne…

SelfAwarePatterns