Everything is a quantum wave?

In the last post, I discussed Amanda Gefter’s critique of Vlatko Vedral’s view that observers have no special role in reality. Conveniently, Vedral published an article at IAI discussing his view: Everything in the universe is a quantum wave. (Warning: possible paywall.) Vedral puts his view forward as a radical new interpretation of quantum mechanics.

As a quick reminder, the central mystery of quantum mechanics is that quantum particles seem to act like waves, including portions of the wave interfering with itself, but when measured, behave like tiny localized balls. This is known as the measurement problem.

There are numerous interpretations of what’s happening here. But they seem to take one of three broad strategies. The first simply rejects that the waves are real, instead insisting that they are only probabilities, albeit probabilities which evolve deterministically and interfere with each other. In other words, it’s all happening in our mind. In its stronger incarnations, this has idealist or semi-idealist aspects, claiming that observation or interaction creates reality. These are the approaches in the epistemic versions of the Copenhagen Interpretation and its descendants, like QBism and RQM (relational quantum mechanics).

The second strategy is to add new structure to wave mechanics. Due to Bell’s theorem, these additions must be non-local in nature, that is, they must involve “spooky” action at a distance. The ontic version of Copenhagen takes this approach when it adds a physical collapse, as do its variations and descendants like consciousness-causing-the-collapse and other objective collapse theories. Another version of the second strategy is what are historically called “hidden variable” approaches, like Bohmian Mechanics (pilot-wave theory), where there is both a wave and a particle the entire time, with the wave guiding the particle.

The third strategy is to accept the mathematical structure of quantum theory as a full account, or one only requiring a few ancillary assumptions. This became easier with the development of decoherence theory in the 1970s, an extrapolation of quantum wave mechanics, in essence quantum entanglement en masse, that explains why quantum interference disappears at larger scales. It’s the approach Hugh Everett proposed, which eventually became known as the many-worlds interpretation.

And it’s the strategy Vedral uses for his interpretation, which he characterizes as “many-worlds on steroids.” He dislikes talking in terms of other worlds, though, noting that the classical worlds are only a small slice of the possibilities. He prefers to talk in terms of one world, but with quantum mechanics being universal, applying at all scales.

Vedral makes a point I made in the last post, that under this universal quantum waves approach, an observation is just two quantum systems becoming entangled, that is, becoming correlated in certain ways. A reminder: entanglement is when the superposed states of two quantum systems become correlated, so that for each state in the first system, there is a corresponding state in the second. The two systems are now part of the same wave function.

Vedral notes this could be characterized as the quantum particle observing the measuring device as much as the device is observing it. In this view, entanglement is what the apparent collapse looks like from the outside, and collapse is what entanglement looks like from the inside. So contra Gefter’s stance, there’s no special role for observers, at least unless by “observer” we mean everything.

As I noted in the last post, I like Vedral’s approach here of focusing on the physics rather than getting into multiverse language, which as I’ve noted before, often ends up being a distraction. But it’s hard for me to see how his view is radically different from the standard Everettian one. It’s worth noting that Everett’s original proposal was a theory of the universal wave function, essentially the “everything is a quantum wave” view Vedral is advocating. Everett didn’t talk in terms of a multiverse. It was Bryce DeWitt in the 1970s who characterized it that way, although Everett saw it as just an alternate way of describing his view.

One difference from contemporary many-worlds views, one Vedral shares with Everett, is that the quantum nature of macroscopic objects is not beyond testability. Everett reportedly maintained that the quantum states of macroscopic objects were in principle detectable. I haven’t read Vedral’s book, but it sounds like a large part of it is finding ways to test his view.

This seems resonant with the progress being made in experimental research, where tiny macroscopic objects can now be held in a quantum superposition, which is putting increasing pressure on ontic collapse theories. And Vedral mentions the ongoing efforts in quantum computing, which is stress-testing quantum theory in ways scientists of earlier decades could only dream of. In the end, we need data, and these efforts are providing more of it.

As a minimalist Everettian myself, I find a lot in Vedral’s discussion compelling. But as he notes in his article, the various interpretation camps are like entrenched armies in World War I, unlikely to be moved except by the strongest experimental results. Even then, I suspect Max Planck’s observation that science moves forward “one funeral at a time” will likely be true here as it always has.

What do you think of Vedral’s views? Does the idea of everything being a quantum wave make sense? Or are there difficulties both he and I are overlooking with this approach?

#InterpretationsOfQuantumMechanics #ManyWorldsInterpretation #MWI #Philosophy #PhilosophyOfScience #Physics #QuantumMechanics #Science

Does reality require observers?

Amanda Gefter has an article at Nautilus a couple of you have asked me about: Reality Exists Without Observers? Boooo! The title is an accurate summary of her thesis. Gefter is responding to a book by Vlatko Vedral, where he reportedly argues for a reality that doesn’t require observers. In terms of quantum mechanics, Vedral is an Everettian, although he seems to downplay the many-worlds aspect, focusing on the physics. He touts what’s often taken to be an advantage of the interpretation over Copenhagen, that it doesn’t require any special role for the observer.

Gefter’s stance is that this can’t be done, that any attempt to do it inevitably sneaks the observer back in. She also implies that most discussions about Niels Bohr, Werner Heisenberg, and the other thinkers behind Copenhagen strawman their positions. Since “the Copenhagen interpretation” is actually a diversity of views held by many early quantum scientists, she’s probably not entirely wrong. But arguing about what the Copenhagen interpretation is or isn’t strikes me as just as problematic.

Even if we restrict ourselves to just Bohr’s ideas, the man’s writings are infamously opaque. When trying to decipher his views, it seems possible to come away with a variety of ideas. When I’ve read about them, he comes across to me as an instrumentalist, although one who doesn’t want to admit it. Many science historians argue he was a Kantian, or maybe a neo-Kantian. Within the bounds of this discussion, I’m not sure how significant these distinctions are. What seems clear is he doesn’t take quantum theory to be telling us about reality, only our interactions with it.

When someone starts arguing for an observer-centric reality, they can mean at least a couple of different things. One is epistemic, that our knowledge of reality only comes through observers. Bohr reportedly included non-conscious observers in this. But with that stipulation, the view doesn’t seem particularly radical. Another way of saying it is that all information must come through information gathering systems, which doesn’t sound nearly as profound.

But an observer-centric reality can also be a stronger claim, an ontic one, that says conscious observers construct reality. The strongest claim of all is that reality doesn’t exist until we consciously observe it, that the observation itself brings it into existence. This is either idealist or solipsist territory.

It’s important to understand that even under the epistemic view, observation has effects on quantum systems. We can observe a supernova twelve million light years away without affecting it. Or we can observe the flow of a river without meaningfully affecting it. In these cases, there’s already enough interaction happening between the system and its environment that we can learn from the effects of those interactions that reach us.

However, we can’t observe a quantum system without interacting with it, and interaction entangles the observer and the observed, changing both. For the quantum system, that typically results in at least decoherence, and in many interpretations, all but one of the possible outcomes disappearing.

Which one of the above is Gefter claiming? I’m not sure, and that, unfortunately, is all too common when trying to parse arguments for this view. If forced to guess, I’d say she’s agnostic on the distinction, as this discussion about the moon seems to show.

Let’s put this moon thing to rest. It’s true. We can’t say the moon is there if no one’s observing it. Neither can we say that the moon’s not there if no one’s observing it. It’s not as if the sky is empty until someone gazes upward and a moon suddenly pops into existence. It’s that we can’t say anything about the moon as an independent object, because quantum theory doesn’t grant us independent objects, only measurements that we can slice into moons.

Not that Copenhagen is Gefter’s preferred interpretation. She actually favors QBism. Historically the name meant quantum Bayesianism, in the sense that quantum theory provides degrees of credence in various outcomes. It’s a view which focuses on the subjective experience of the experimenter. Or at least that’s how I understand it.

But again, the question becomes, is this just an epistemic view, or an ontic one? It seems to depend on which QBist you ask. On the one hand, it could be seen as a straight instrumentalist approach to quantum theory, a reification of the “Shut up and calculate!” attitude. Interestingly, the originator of that phrase, David Mermin (not Richard Feynman), signed on to QBism at some point.

Many QBist proponents resist the instrumentalist label. Which in turn often leads to accusations of solipsism, which they also resist. As I noted above, this starts to sound a lot like idealism to me, although it’s not clear that’s what they mean either. In the end, most physicists seem to regard QBism as an epistemic interpretation. (I’ve already done a post on why epistemic approaches don’t work for me.)

But what about the ontic view of an observer-centric universe? If you already lean toward some form of ontological idealism, then this may well be a natural conclusion. But I don’t think there’s anything in the physics that drives it. A lot of this type of discussion seems to ignore the lessons from quantum computing, where engineers struggle with systems that decohere all the time, much earlier than they would prefer, and with nothing we’d normally call an “observer” driving it. (Unless we say the environment is the observer, but then “observer” seems to lose all distinctive meaning.)

If you think about it, this is no different from the classic double-slit experiment. If we put a polarizing filter at one of the slits, no conscious agent gets the information on which slit the particle goes through any sooner. What conscious agents do see is a change in the results on the back screen, and then infer what happened at the slits.

So the epistemic point about observers seems valid enough. I haven’t read Vedral at length, but I’d be surprised if he disagreed. But the ontic one doesn’t seem particularly well motivated, at least unless your metaphysics already push you in that direction.

But maybe I’m missing something? Is there an in-between ground between the options I listed? Or evidence for the ontic version I’m overlooking?

#consciousness #interpretationsOfQuantumMechanics #manyWorldsInterpretation #philosophy #physics #qbism #quantumMechanics #science

What physicists believe about quantum mechanics

A few years ago David Bourget and David Chalmers did a follow-up survey to the 2009 one polling philosophers on what they believe about various questions. One of them was quantum mechanics, particularly the measurement problem and its various interpretations. Over the decades there have been surveys of physicists themselves on this question, but most, if not all, had very small sample sizes, usually only the attendees at a particular conference.

As part of the Quantum Centennial (the celebration of 100 years of quantum mechanics) Nature has done a fairly large survey of the community of quantum researchers with over 1100 respondents. The results are interesting, although not particularly surprising.

Copenhagen still comes out on top with 36%. It’s interesting that it’s stronger with experimentalists than with theorists (half vs a third). I suspect the experimentalists are hewing to a very pragmatic version of the interpretation. Which highlights a concern that the term “Copenhagen interpretation” means different things to different people. The article acknowledges this, noting that 29% of those who selected Copenhagen favored an ontic version of the wave function vs 63% who came down on the epistemic side.

15% are Everettians (or “consistent-history” advocates, who I suspect object to being lumped in with the many-worlders), 7% Pilot-wave, 4% Spontaneous collapse, 4% Relational Quantum Mechanics, and a smattering in other views.

Overall, 47% of respondents see the wave function as just a mathematical tool, with 36% taking a partial or complete realist view (my view), and 8% taking it to only represent subjective beliefs about experimental outcomes.

45% see a boundary between classical and quantum objects (5% see it as sharp) while 45% don’t (my view).

Just before the paywall, there is a question about the observer in quantum mechanics, with 9% saying it must be conscious. Another 56% said there had to be an observer, but that “observer” can just be interaction with a macroscopic environment, and 28% argued that no observer at all is needed. (I think interaction with the macroscopic environment and the resulting decoherence is key, but it seems misleading to call that environment an “observer”.)

All interesting. Of course, how popular or unpopular a view is has no real bearing on whether it reflects reality. Prior to Galileo’s telescopic observations in 1609, an Earth-centered universe was the most popular cosmology. Only a minuscule handful of astronomers accepted Copernicus’ view about the Earth orbiting the sun. Until the quantum-measurement equivalent of the telescope comes along, all we can do is reason as best as possible with the current data.

The results here are interesting to compare with what the philosophers thought on the Bourget-Chalmers survey. On quantum mechanics, philosophers were 24% agnostic, 22% hidden variable theories, 19% many-worlds, 17% collapse, and 13% epistemic. Once we take into account all the various forms of “Copenhagen interpretation”, these seem in a similar ballpark, except that philosophers are more open to hidden variable approaches. (It may be easier to favor hidden variables if you’re not the one who has to find them.)

My own view comes down to a preference for structural completeness (or at least more structurally complete models), which to me currently favors a cautious and minimalist take on the Everettian approach (as I described a few months ago). However, my credence in this conclusion is only 75-80%. That the survey indicates most physicists aren’t super confident in their own conclusions here makes me feel better.

This reminds me of a new approach that Jacob Barandes has been promoting on various podcasts (see this recent Sean Carroll episode as an example). Barandes calls it Indivisible Stochastic Quantum Mechanics. I won’t pretend to understand exactly what he’s trying to accomplish with it, but it involves rejecting the wave function completely, and replacing it with something more stochastic from the beginning. Which strikes me as less structurally complete than the wave function, and so a move in the wrong direction. But maybe I’ll turn out to be wrong.

Anyway, now we have a firmer idea of where the physics community currently stands on quantum interpretations, or at least a firmer one than we did before. How would you have answered the survey questions? (There’s actually a small quiz in the article which is worth taking to see the logic leading to particular interpretations.)

#InterpretationsOfQuantumMechanics #Philosophy #PhilosophyOfScience #Physics #QM #QuantumMechanics #Science

Is quantum immortality a real thing?

In discussions about the Everett interpretation of quantum mechanics, one of the concerns I often see expressed is for the perverse low probability outcomes that would exist in the quantum multiverse. For example, if every quantum outcome is reality, then in some branches of the wave function, entropy has never increased. In some branches, quantum computing doesn’t work because every attempt at it has produced the wrong result and people have concluded it doesn’t work. In other branches, you as a macroscopic object might quantum tunnel through a wall.

Of course, for enthusiasts, this comes with a hopeful aspect. Because in some branches, you would go on living indefinitely, no matter how improbable it might be. Hugh Everett himself was reportedly a believer in quantum immortality and so had little concern about the unhealthy lifestyle that led to his early demise in this branch. The idea is that if every outcome happens, then there are versions of you reading this that will live until the heat death of the universe.

This is vividly illustrated in the infamous quantum suicide thought experiment. One version described by Max Tegmark involves rigging up a gun to fire if a certain quantum event happens. Say the quantum event has a 50% chance of happening in any one second. You then put your head in front of the gun and begin the experiment. In half of all worlds where you begin the experiment, you die in the first second, but you go on living in the other half. In half of that remaining half you die in the next second, but go on living in the other half.

For you as the experimenter this goes on indefinitely with increasingly improbable outcomes leading to your survival. Of course, in virtually all worlds you leave behind grieving friends and family who are less convinced. But for you subjectively, if many-worlds is reality, you continue living until the experiment ends.
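
To make the arithmetic concrete, here’s a minimal sketch of the surviving fraction of branches, assuming the 50%-per-second trigger rate from Tegmark’s setup above:

    # Quantum suicide bookkeeping: with a 50% chance of the gun firing
    # each second, the fraction of branches where the experimenter is
    # still alive halves every second.
    for seconds in [1, 2, 10, 60, 600]:
        surviving_fraction = 0.5 ** seconds
        print(f"after {seconds:>3} seconds: {surviving_fraction:.3e} of branches")

After a minute the surviving fraction is down around 8.7e-19, but under the many-worlds assumption it never reaches exactly zero, which is the whole point of the thought experiment.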

(Before getting too comforted by the possibility of quantum immortality, it’s important to remember that this is more of a side-life than an afterlife. Most of the versions of you will still experience an approaching death. It’s also worth noting that a you a million years from now would likely have evolved into something utterly strange and unrecognizable to the you of today. And there’s no guarantee this ongoing existence would be pleasant. Indeed, under many-worlds, some would inevitably be hellish.)

One question that often comes up in discussions about this is whether reality allows for these infinitesimally low probability outcomes, or whether there is some inherent minimal discreteness at the base of reality that prevents it. There’s nothing in the math to indicate it, but of course the math, at least the math we have today, is a description of reality that is likely only an approximation.

However in a recent interview with Curt Jaimungal, David Wallace, a proponent of the many-worlds interpretation, may have provided another reason to doubt these outcomes: quantum interference. (Note: if the embed doesn’t work right, the relevant remarks are at around the 1:21 mark. Also you don’t have to watch the interview to understand this post, but it is an interesting discussion.)

https://www.youtube.com/watch?v=4MjNuJK5RzM&t=4901s

To understand Wallace’s point, it helps to realize some important points about how quantum decoherence works. Decoherence is the process of a quantum particle losing its wave-like nature and becoming more particle-like. This happens because as it interacts with the environment, the phase relations which keep the wave coherent become disrupted. The wave becomes fragmented. We call the fragments “particles”. This leads to the famous (infamous?) quantum interference effects disappearing, as shown by the double-slit experiment.

But the word “disappearing” here in reference to the interference effects should be understood to mean “become undetectable”, not that they cease to exist. Under decoherence the interference never goes away entirely. Like the wave overall, it becomes fragmented, and settles into an underlying “noise”. (Note: this is actually a difference in predictions between collapse and non-collapse interpretations that should, in principle, be testable. Of course, figuring out a way to do the test is another matter.)
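
A toy model may help make the “undetectable, not gone” point concrete. In this sketch, the per-interaction overlap factor is purely an illustrative assumption; each environmental interaction shrinks the coherence that produces visible interference, exponentially but never to exactly zero:

    # The off-diagonal ("coherence") term of a qubit's density matrix
    # is what produces visible interference. Each interaction with an
    # environment particle imprints partial which-path information,
    # multiplying the coherence by an overlap factor below 1.
    coherence = 0.5   # initial coherence of an equal superposition
    overlap = 0.9     # assumed per-interaction environment overlap
    for n in [1, 10, 100, 1000]:
        print(f"{n:>5} interactions: coherence ~ {coherence * overlap ** n:.3e}")

The coherence quickly drops below any practical detection threshold, which is the sense in which the interference fragments into noise. A true collapse would set it to exactly zero, which is the in-principle testable difference mentioned above.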

Wallace’s point is that infinitesimally low probability outcomes should be swamped by this remnant interference from higher probability outcomes, meaning that they should be prevented from existing. If so, the branches where entropy never increased, where quantum computing never works, or to use his example, where he as a macroscopic object quantum tunnels through a wall, shouldn’t exist.

What does this mean for quantum immortality? I don’t know that it wipes it out entirely. Many of the initial survival scenarios may be very low probability, but not profoundly low ones, and so may not be swamped by interference from the other branches. But it does seem like it shortens the duration and overall makes it less certain, even once someone accepts the existence of the other worlds. So there may be versions of you reading this that live for decades or centuries beyond the normal lifespan, maybe even millennia, but probably not until the end of the universe.

Still, the implications are interesting and fun to speculate about. If there is a version of me alive in the far future, I wonder if he (it?) will remember these speculations.

What do you think of Wallace’s point? If we assume many-worlds is reality, does the idea of quantum immortality seem plausible? Or are there other reasons to doubt it?

#InterpretationsOfQuantumMechanics #ManyWorldsInterpretation #Philosophy #QuantumImmortality #QuantumMechanics

Many-worlds without necessarily many worlds?

IAI has a brief interview with David Deutsch on his advocacy for the many-worlds interpretation of quantum mechanics. (Warning: possible paywall.) Deutsch has a history of showing little patience with other interpretations, and this interview is no different. A lot of the discussion centers around his advocacy for scientific realism, the idea that science is actually telling us about the world, rather than just providing instrumental prediction frameworks.

Quick reminder. The central mystery of quantum mechanics is that quantum systems seem to evolve as waves, superpositions of many states, with the different states interfering with each other, all tracked by a mathematical model called the wave function. But when measured, these systems behave as localized particles, with the model only able to provide probabilities on the measurement result. Although the measurement results as a population show the interference patterns from the wave function. This is often called the “wave function collapse”.

Various interpretations attempt to make sense of this situation. Many deny the reality of what the wave function models. Others accept it, but posit the wave function collapse as a real objective event. Some posit both a wave and particle existing throughout. The Everett approach rejects wave function collapse and argues that if we just keep following the mathematical model, we get decoherence and eventually the same observations. But that implies that quantum physics applies at all scales, meaning that it’s not just particles in superpositions of many states, but measuring equipment, labs, people, planets, and the entire universe.

Reading Deutsch’s interview, it occurred to me that my own structural realist outlook, a more cautious take on scientific realism, is reflected in the more cautious acceptance I have of Everettian quantum mechanics. People like Deutsch are pretty confident that there is a quantum multiverse. I can see the reasoning steps that get them there, and I follow them, to a point. But my own view is that the other worlds remain a possibility, but far from a certainty.

I think this is because we can break apart the Everettian proposition into three questions.

1. Does the mathematical structure of quantum theory provide everything necessary to fit the current data?
2. If so, can we be confident that there won’t be new data in the future that drives theorists to make revisions or add additional variables?
3. What effect would any additions or changes have on the broader predictions of the current bare theory?

My answer to 1 is yes, with a moderately high credence, maybe around 80%. I know people like Deutsch and Sean Carroll have this much higher. (I think Carroll says his is around 95% somewhere on his podcast.) And I think they have defensible reasons for it. Experimentalists have been stress-testing bare quantum theory for decades, with no sign of a physical wave function collapse, or additional (hidden) variables. Quantum computing seems to have taken it to a new level.

But there remain doubts, notably about how to explain probabilities. I personally don’t see this as that big an issue. The probabilities reflect the proportion of outcomes in the wave function. But I acknowledge that a lot of physicists do. I’m not a physicist, and am very aware of the limitations of my very basic understanding of the math, so it’s entirely possible I’m missing something, which is why I’m only at 80%.

    (Often when I make the point about the mathematical structures, it’s noted that there are multiple mathematical formalisms: wave mechanics, matrices, path integrals, etc. But while these are distinct mental frameworks, they reportedly always reconcile. These theories are equivalent, not just empirically, but mathematically. They always provide the same answer. If they didn’t, we’d see experimental physicists trying to test where they diverge. We don’t because there aren’t any divergences.)

    If our answer to 1 is yes, it’s tempting to jump from that to the broader implications, the quantum multiverse. (Or one universe with a much larger ontology. Some people find that a less objectionable description.)

But then there are questions 2 and 3. I have to say no to 2. The history of science seems to show that any claim that we’ve found the final theory of anything is dubious, a point Deutsch acknowledges in the interview. All scientific theories are provisional. And we don’t know what we don’t know. And there are the gaps we do know about, such as how to bring gravity into the quantum paradigm. It seems rational to wonder what kind of revisions they may eventually require.

    Of course 3 is difficult to answer until we get there. I do doubt any new discoveries would drive things toward the other interpretations people currently talk about, or overall be less bonkers than the current predictions. Again given the history of science, it seems more likely it would replace the other worlds with something even stranger and more disconcerting.

    So as things stand, there’s no current evidence for adding anything to the structure of raw quantum theory. That does imply other worlds, but the worlds remain untestable for the foreseeable future.

To be clear, I don’t buy that they’re forever untestable. We can’t rule out that some clever experimentalist in the future will find a way to detect interference between decohered branches, to recohere them (which has been done, but only very early in the process), or some other way we haven’t imagined yet.

    My take is the untestability of the other worlds means that Everettian quantum mechanics, in the sense of pure wave mechanics, shouldn’t be accepted because we like the worlds, or rejected because we dislike them. For now, the worlds should be irrelevant for a scientific assessment. The only question is whether anything needs to be added to the bare theory, a question, it should be noted, we can ask regardless of whether we’re being realist or antirealist about any of this.

All of which means that while my credence in austere quantum mechanics is 80%, the credence for the other worlds vacillates somewhere around 50%. In other words, I’m agnostic. This resonates with the views I’ve seen from a number of physicists, such as Stephen Hawking, Sidney Coleman, John Preskill, and most recently Brian Cox, who accept the Everett view but downplay the other worlds. Even Sean Carroll notes in one of his AMAs that he doesn’t really care so much about the other worlds, but the physics at the core of the theory.

    But maybe I’m missing something. Are the questions I raised above as easy to separate as I’m thinking? Or are there problems with pure wave mechanics I’m overlooking?

    #InterpretationsOfQuantumMechanics #ManyWorldsInterpretation #Philosophy #PhilosophyOfScience #QuantumMechanics #Science


    Avoiding the structural gaps

A long-standing debate in quantum physics is whether the wave function is real. A quick reminder: quantum entities appear to move like waves, including portions interfering with each other. These waves are modeled with the wave function. But once measured, quantum objects manifest as localized points or field excitations. The wave function can’t predict the measurement outcome, only probabilities on what the result will be.

A popular move here is to decide the wave function isn’t real, that it’s just a mathematical contrivance. Doing so seems to sidestep a lot of uncomfortable implications. But it leaves us trying to explain the statistical outcomes of measurements that show patterns from portions of the wave interfering with itself. Those effects, along with entanglement, are heavily used in quantum computing. If the wave function isn’t modeling something real, then its usefulness in technology starts to look like a magic incantation.

    Of course, accepting wave function realism leaves us with something that seems to operate in a higher dimensional “configuration space.” And we end up having to choose between unsettling options, like an objective wave function collapse on measurement, a pilot wave guiding the particle in a non-local manner, or just accepting pure wave mechanics despite its implications.

    Valia Allori has an article at IAI arguing against quantum wave function realism. (Warning: you might hit a paywall.) The main thrust of her argument, as I understand it, is that we shouldn’t allow ourselves to be lured farther away from the manifest image of the world (the world as it intuitively appears to us) when there are viable alternatives.

Her argument is in opposition to Alyssa Ney’s argument for wave function realism, which touts as one of the benefits that it reclaims locality. Allori argues that this is aiming to satisfy an intuition we develop in three-dimensional space, that there aren’t non-local effects, “spooky actions at a distance”. But wave function realism only preserves locality across configuration space, which Allori views as a Pyrrhic victory.

    Overall, Allori seems to view this as a conflict between two different sets of intuitions. On one side, we have views that are closer to the overall manifest image of reality, one with three dimensions, but at the cost of non-local phenomena. She doesn’t view this as ideal, but deems it preferable to the idea of a universal wave function existing in near infinite dimensions. In her view, embracing theories too far away from the manifest image puts us on the path that leads to runaway skepticism, where nothing we perceive can be trusted.

    But I think looking at this in terms of intuitions is a mistake. When it comes to models of reality, our intuitions have historically never been particularly useful. Instead they’ve often led us astray, causing us to insist the earth was the center of the universe, humans were separate from nature, or that time and space were absolute, all ideas that had to be abandoned in the face of empirical realities. The reason to prefer locality isn’t merely to privilege one intuition over others, but to prefer theories that provide a structurally complete accounting.

A while back I described this as a preference for causally complete theories. But causation is a relation across time, one made asymmetrical by the second law of thermodynamics, that entropy always increases. What’s more fundamental are the structural relations. A theory which can account for all (or at least more of) those relations should, I think, be preferred to theories that have larger gaps in their accounting.

    By that standard, I perceive wave function antirealism to have huge gaps, gaps which proponents of the idea seem comfortable with, but I suspect only because, as Allori does, they deem it a lesser evil than the alternative. Of course, objective collapse and pilot-wave theories also have gaps, but they seem smaller, albeit still weaknesses that I think should make them less viable.

Pure wave mechanics seems like the option with the fewest gaps. Many would argue that accounting for probabilities remains a crucial gap, but that seems like more of a philosophical issue than a scientific one, a matter of how best to talk about what probabilities mean. In many ways, it highlights issues that already exist in the philosophy of probability.

Overall then, my take is that the goal isn’t to preserve the manifest image of reality, but to account for it in our scientific image. Preferring theories that are closer to the manifest image just because they are closer, particularly when the theories have larger gaps than the alternatives, seems to amount to what is often called “the incredulous stare”, simply rejecting a proposition because it doesn’t comport with our preexisting biases.

    But maybe I’m overlooking something? Are there reasons to prefer theories closer to the manifest image? Is there a danger in excessive skepticism as Allori worries? Or is preferring a more complete accounting itself still privileging certain intuitions over others?

#InterpretationsOfQuantumMechanics #ManyWorldsInterpretation #Philosophy #PhilosophyOfScience #QuantumMechanics #Science


    The Everett theory of quantum mechanics is testable in ways most people don’t realize.

Before getting into how or why, I think it’s important to deal with a long-standing issue. Everettian theory is more commonly known as the “many worlds interpretation”, a name I use myself all the time. But what’s often lost in the discussion is that “world” here is a metaphor for describing something very complex. Bryce DeWitt knew what he was doing when he gave it that name. It quickly and vividly conveys an approximation of the overall idea.

    But that comes at a cost. People lose track of the idea that “worlds”, or “universes” in some descriptions, are just a conceptual crutch, a metaphor. This leads them to glom onto details of the metaphor and have questions and concerns that are really more about the metaphor than the actual theory. For example, concern about whole universes “springing into being” is really an issue with the metaphor.

Chad Orzel actually wrote an article about this some years ago when discussing Sean Carroll’s book on the subject. At the time I misunderstood his point, then later thought he was wrong, that the metaphor could be clarified sufficiently. Well, I’ve come full circle and now fully agree with him. So I’m going to try to do the rest of this post without invoking the metaphor. Of course, it’s difficult to talk about stuff like this without using some metaphors, but hopefully avoiding ones loaded with baggage will help.

    So what then is Hugh Everett’s theory really about? It’s about trying to understand the ontology of quantum mechanics, with a motivation, at the time it was formulated, of getting closer to a reconciliation with general relativity. 

The conventional understanding in collapse interpretations is that there are two processes at play.

One is what happens for an isolated quantum system. It’s the continuous, linear, and deterministic evolution of pure wave mechanics, including interference between the various states in superposition, which is tracked by a mathematical tool called the wave function. Think of it as an accounting of everything that happens in the double-slit experiment up until the location of the particle is known.

    The second is what happens on measurement. It’s an abrupt, instant, discontinuous change, where all but one of the states disappear, resulting in a particular location, spin state, or whatever is being measured. The result is random and unpredictable. The wave function can be used to derive the probability of each possible value, but not what the actual answer will be in any individual measurement. Today this is typically called the collapse of the wave function, since all but one of the states it’s tracking, and their interference effects, disappear.

    The second process is very mysterious. Like any mysterious process, there are people who insist it’s just fundamental and we need to get over it. And a hard core instrumentalist might insist it gives us what we need. But a laboratory recipe isn’t always helpful when attempting to understand the implications for gravity and cosmology. Everett wanted to get at an improved ontology.

    His solution is counter-intuitive. He saw the mistake as assuming that the second process, the wave function collapse, is real, instead of just being a gap in our accounting. Everett advocated removing it from the ontology, to only rely on the first process, the continuous and deterministic evolution of the wave function. His argument is that doing so explains the same observations, but with a leaner, more parsimonious set of rules.

    To see how requires a simple understanding of quantum entanglement. Consider if we have two particles, both in a superposition of spin up and spin down. We might write the state of each particle (in a very simplified manner omitting amplitudes and other formal notation) as:

    particle state = (up) + (down)

    The plus sign just indicates that in a wave function, we’d add the two states together, with any overlap leading to interference. Now, what happens if we have these two particles interact in the right manner? If we do, they become correlated in certain ways, that is, entangled. (Quantum computing leans heavily on this effect.) So they now have an overall combined wave function state, an overall superposition with four elements.

    combined state = (up)(up) + (up)(down) + (down)(down) + (down)(up)

    If we add a third particle into the mix, we end up with eight elements in the overall superposition:

    combined state = (up)(up)(up) + (down)(up)(up) + (down)(down)(up) + (down)(down)(down) + (up)(down)(up) + (up)(up)(down) + (down)(up)(down) + (up)(down)(down)

Notice that each addition into the entanglement multiplies the states of the overall entangled set by the number of states brought in by the new particle. Again, nothing controversial here. This is used heavily by quantum computing. If we conduct a measurement on any of the three entangled particles above, we see the entire group apparently collapse into just one of those eight states.
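
The state counting above can be checked with a few lines of code. Here’s a minimal sketch using the same unnormalized notation, with amplitudes omitted as in the post:

    import numpy as np

    up = np.array([1, 0])
    down = np.array([0, 1])
    particle = up + down  # the (up) + (down) superposition

    # Combining quantum systems is a tensor product, so each particle
    # added to the entangled set multiplies the number of joint states.
    combined = particle
    for n in [2, 3]:
        combined = np.kron(combined, particle)
        print(f"{n} particles: {combined.size} joint states")
    # prints 4 joint states for 2 particles, 8 for 3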

    But Everett is saying to do away with the wave function collapse as part of the ontology. So let’s back up to just one particle again and look at this. The conventional collapse interpretation, using the second process above, looks something like this when we introduce interaction with an observer.

    combined state = ( (up) + (down) )(observer)

    …which collapses to…

    combined state = (up)(observer-sees-up)

    or

    combined state = (down)(observer-sees-down)

    In other words, interaction with the observer has collapsed the states down to one, either spin up or spin down. However, if we do as Everett advises and do away with the second process, then we have to depend on the first process above, the wave function dynamics, to figure out what happens. So instead, we get something like:

    combined state = ( (up) + (down) )(observer)

    …leading to…

    combined state = (up)(observer-sees-up) + (down)(observer-sees-down)

    In other words, the observer, as a quantum system themselves, has become entangled with the particle, and so their state now includes seeing the particle spin up and seeing it spin down. Each element of the observer only sees one state because both they and the particle are also entangled with the surrounding environment. (The entropic jostling from that environment fragments any wave effects and makes them very hard to detect in a process called decoherence.) 
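
Here’s that same step as a minimal sketch, with a two-state “observer” standing in for a genuinely macroscopic system (the big simplification in this toy picture). The only rule applied is the first process, a unitary interaction that correlates the two systems:

    import numpy as np

    up, down = np.array([1, 0]), np.array([0, 1])
    ready = np.array([1, 0])  # observer before the interaction

    particle = (up + down) / np.sqrt(2)
    joint = np.kron(particle, ready)  # ( (up) + (down) )(observer)

    # A correlating interaction (a CNOT gate): the observer's state
    # flips only if the particle is down. No collapse rule anywhere.
    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])
    entangled = cnot @ joint

    # Basis order: (up)(sees-up), (up)(sees-down),
    #              (down)(sees-up), (down)(sees-down)
    print(entangled.round(3))  # [0.707 0.    0.    0.707]

Only (up)(observer-sees-up) and (down)(observer-sees-down) end up with amplitude, which is exactly the combined state in the last equation above: the observer has branched along with the particle.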

    So, under Everett, the appearance of the wave function collapse is what a quantum system looks like to an observer that just became entangled with it. In other words, collapse can be thought of as entanglement from the inside.

    This implies that the observer and their environment are in a superposition of an ever increasing number of states. Again, we get this by just applying the same rules we used for the individual particles above. 

    You might object that using the theory for something as large and complex as an observer is a big assumption. And it would be, if it didn’t lead to the same observations as the (now discarded) second process above.

So what does that mean for testing Everettian theory? Remember, Everett advocates dropping the second process above for understanding the ontology, and relying only on the first. So any falsification of the first process, of pure wave mechanics, would falsify Everettian theory. This might involve discovering the right hidden variables, or any kind of actual physical collapse of the quantum state. And a successful reconciliation with general relativity could falsify it as well, particularly the proposal that just recently came out.

Everett himself also saw the other unseen states of the environment as detectable in principle, although an understanding of modern decoherence theory shows just how challenging that would be. Still, “challenging” is different from “impossible”. This could someday adjudicate between Everett and Carlo Rovelli’s relational quantum mechanics.

    So, some aspects of Everettian theory, arguably the most pivotal ones, are testable. Of course, some aren’t, at least not currently, but that’s true of just about any scientific theory. Under Popperian philosophy, theories are judged by their testable predictions, not their untestable ones, nor by any metaphysical implications we may find disturbing. 

    Unless of course I’m missing something?

(This post is a vast simplification (probably oversimplified). If you’re interested in the gory details, check out Hugh Everett’s original thesis online, or a more contemporary synthesis in a SEP article about it that distinguishes it from many of the later many-worlds variants.)


    https://selfawarepatterns.com/2024/01/14/testing-everettian-quantum-mechanics/

    #InterpretationsOfQuantumMechanics #ManyWorldsInterpretation #Philosophy #PhilosophyOfScience #Physics #Quantum #QuantumMechanics #Science


    Are quantum states and the overall wave function real? Or merely a useful prediction tool?

    The mystery of quantum mechanics is that quantum objects, like electrons and photons, seem to move like waves, until they’re measured, then appear as localized particles. This is known as the measurement problem.

The wave function is a mathematical tool for modeling, well, something related to this situation. Exactly what that something is, is a matter of long-standing debate. Erwin Schrödinger developed the wave function after being inspired by Louis de Broglie’s hypothesis that matter has a wave-like nature, similar to light. Schrödinger’s original intention was to model the electron itself.

But Max Born took his equation and discovered that squaring the magnitude of the wave’s amplitude at any particular location gave the probability of finding the particle in that spot. Which converted Schrödinger’s equation about physical waves into a straight calculation tool. This might seem like a natural move. No one ever actually measures a quantum wave function, only particles. And the wave function is defined over a high dimensional abstract configuration space, which makes its relation to reality unclear.
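
Born’s move is easy to state precisely: the probability is the squared magnitude of the (generally complex) amplitude. A minimal sketch, with made-up amplitudes:

    import numpy as np

    # Complex amplitudes at three possible locations (illustrative numbers).
    psi = np.array([0.6, 0.8j, 0.0])

    # Born rule: the probability of finding the particle at each location
    # is the squared magnitude of the amplitude there.
    probabilities = np.abs(psi) ** 2
    print(probabilities)        # [0.36 0.64 0.  ]
    print(probabilities.sum())  # 1.0 for a normalized state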

    Still, Schrödinger wasn’t happy about the move, and continued arguing for some version of wave function realism. Which began the long standing debate between seeing the wave function and its quantum states as modeling something real, or just calculating the probabilities of future measurements.

In a recent conversation, someone compared my reasoning on this to my reasoning on consciousness, where I largely see the limitations of introspection as dissolving the hard problem of consciousness and the need for exotic solutions to it. They wondered why I don’t make a similar move for quantum mechanics and just go epistemic, a stance proponents of epistemic interpretations see as dissolving the measurement problem.

    I do occasionally review the arguments to see if I’ve overlooked anything about the epistemic view. Certainly it would appear to make the need for things like a physical collapse, non-local causality, a quantum multiverse, or other metaphysical “costs” unnecessary. The only “collapse” would be an informational one, an update in our state of knowledge. Definitely a grounded option to be taken if feasible.

    But my block on this remains the whole reason we talk about wave-particle duality in the first place, the wave interference patterns revealed in the double slit experiment or the Mach–Zehnder interferometer. Crucially, in these experiments, the apparatus can be set to send one particle at a time and the landing location of each particle recorded, with the result that the interference pattern still accumulates. 

    In other words, the one particle (which can be photons, electrons, or even large molecules) seems to interfere with itself. The only way that seems possible is if the particle goes through both slits at the same time. The question for people who assert the epistemic view is, how can they account for this evidence?
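
To make the explanatory burden concrete, here’s a toy simulation of the one-particle-at-a-time setup. The geometry and units are arbitrary choices for illustration; the point is that each particle lands at a single spot, sampled from a distribution given by the squared magnitude of the sum of the two path amplitudes:

    import numpy as np

    rng = np.random.default_rng(0)

    # Screen positions and a two-path amplitude: one wave from each slit,
    # with a position-dependent phase difference between the paths.
    x = np.linspace(-10, 10, 400)
    slit_sep, wavelength, screen_dist = 1.0, 0.1, 50.0
    phase_diff = 2 * np.pi * slit_sep * x / (wavelength * screen_dist)
    amplitude = 1 + np.exp(1j * phase_diff)  # both paths summed

    # Born rule gives the landing distribution for a single particle.
    prob = np.abs(amplitude) ** 2
    prob /= prob.sum()

    # Fire particles one at a time; each is detected at one point.
    hits = rng.choice(x, size=5000, p=prob)
    counts, _ = np.histogram(hits, bins=40)
    print(counts)  # fringes: alternating high and low counts

The histogram shows the fringes building up from individual detections. Any epistemic account has to explain why the statistics require summing amplitudes over both paths even though only one particle is ever in flight.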

    A frequent response over the years has been Robert Spekkens’ toy model, a hidden variable model showing a different physical reality that the wave function could model statistically, but not be an accurate description of. The argument is that this alternate model, and similar efforts, can successfully account for interference effects.

    Hidden variable theories, which either expand the structure of quantum theory or propose alternative structures, are constrained by various no-go theorems, the most famous being Bell’s theorem, which requires that they be causally non-local. This seems to complicate any efforts to reconcile them with special relativity, something that was done with straight quantum theory by 1930. An alternative theory with a smaller scope of usefulness doesn’t strike me as likely to be the more accurate description of reality.

    But the nail in the coffin for the toy model and similar approaches is the PBR theorem by Matthew Pusey, Jonathan Barrett, and Terry Rudolph. In short, this theorem demonstrates that pure quantum states in the wave function, if they are referencing any objective reality at all, must describe something real. Any different reality would lead to predictions incompatible with quantum theory.

The PBR theorem does have a couple of assumptions. One is that the preparation of two separate quantum systems can be independent of each other. This seems similar to the “free will” assumption in Bell’s theorem (which is actually about independent preparation of measurement choices). It seems like this assumption can be violated in the same way the Bell one can, with some version of retrocausality. So superdeterminism remains an option, albeit a long shot one in most physicists’ eyes.

    But the second assumption is more basic, that the wave function is referring to something physical and objective, even if it’s not an accurate description of it. When I was an instrumentalist toward quantum mechanics, this was largely my view. I never doubted that there was some physical reality there, just not the straightforward one described by quantum states with all its bonkers implications.

    The second assumption can be violated by going explicitly anti-realist, and simply asserting that there is nothing objective happening at all, that the measurement outcomes just happen, that they are fundamental interactions, brute facts of the world. In this view, the wave function is nothing but a prediction related to future measurements. Since the outcome is something fundamental, there’s nothing left to investigate. We just need to get used to it and stop asking questions.

    Of course, there are a lot of people who are willing and eager to bite this bullet. It’s one of the postulates of neo-Copenhagen interpretations like RQM (relational quantum mechanics) and QBism (quantum Bayesianism). It’s worth noting that these stances come with their own metaphysical “costs”, whether it be the semi-idealism of QBism’s participatory reality, or the sparse “flash” ontology of RQM.

    However, while the hidden objective reality view might have at least aspired to provide an answer to my interference question, the anti-real stance seems to outright ignore it, or assert that the question is meaningless. When you hear about the “shut up and calculate” phrase, this is where it comes from.

I find the incuriosity inherent in this position difficult to understand. My interference question remains. But I also now want to know why the wave function is so useful, particularly when it comes to something as complex as quantum computing. If there’s nothing going on prior to the measurement outcome, then why are there detectable patterns at all? It seems like the “no miracles” argument for scientific realism, or at least structural realism, applies here.

    So, at least for now, I remain in the quantum state realist camp.

    But maybe I’m missing something. Are there explanations for the interference effects in the epistemic view I’m overlooking? Or reasons to just dismiss the concern?


    https://selfawarepatterns.com/2024/01/06/those-inconvenient-quantum-interference-patterns/

    #InterpretationsOfQuantumMechanics #Philosophy #PhilosophyOfScience #Physics #QuantumMechanics #Science


    In this video, Matt O’Dowd tackles the issue of probabilities in the many-worlds interpretation of quantum mechanics.

    A quick reminder. The central mystery of quantum mechanics is that quantum particles move like waves of possible outcomes that interfere with each other, until a measurement happens, when they appear to collapse to one localized outcome, the famous wave-particle duality.

This is the measurement problem, which interpretations of quantum mechanics try to solve. One of the oldest and most popular, Copenhagen, asserts that this duality is fundamental, and that further investigation is misguided. Pilot-wave posits both a particle and a wave the entire time.

Many-worlds takes the structure of quantum theory as complete, holding that quantum physics applies to us and the environment as much as to particles, resulting in a universe that is itself a wave of all possible outcomes. We only see one outcome of the measurement because we’re the version that sees that outcome, with a version of us seeing each possible outcome.

    A longstanding objection to many-worlds is how to talk about probabilities. Probabilities seem reasonable in an interpretation where there’s only one outcome. But if every outcome happens, in what sense is it meaningful to talk about the probability of any one outcome? Aren’t they all 100% probable?

This objection has never bothered me, mostly because I see probabilities as relative to an observer and their limited knowledge. That’s easier to see when looking at something like the weather forecast, where probabilities more obviously reflect our limited knowledge.

    As O’Dowd explains, we can see the probabilities in many-worlds as self locating uncertainty, a view Sean Carroll champions. In the process of explaining this, O’Dowd discusses the nature of worlds in the theory, something I’ve tried to tackle before (here and here) but mostly failed at. Maybe his card stack metaphor works better for most people.
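
For a concrete sense of how self-locating uncertainty can reproduce ordinary statistics, here’s a minimal sketch (a simulation of the idea, not a derivation of the Born rule, which is the contested part):

    import numpy as np

    rng = np.random.default_rng(1)

    # A measurement with unequal branch weights (squared amplitudes).
    # Under many-worlds both outcomes occur in every run, but the
    # just-branched observer is uncertain which branch they are in.
    branch_weights = [0.8, 0.2]
    outcomes = rng.choice(["spin up", "spin down"], size=10_000, p=branch_weights)
    print((outcomes == "spin up").mean())  # ~0.8, matching the weight

If every observer’s credences track the branch weights, repeated experiments look statistically identical to single-outcome randomness, which is the sense in which probability talk survives the move to many worlds.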

    The video runs about 19 minutes.

    PBS Space Time: Can The Measurement Problem Be Solved?

    (Here’s a link to the video in case the embed doesn’t display.)

    In the end, this is a devilishly difficult concept to explain. Which makes the video tough to follow. It might help if you have time to watch it multiple times.

    It’s worth noting that there are other proposed solutions to the probability problem. But I think this one makes the most sense, although the others aren’t necessarily wrong. It comes down to your philosophy of probability. The claims of being able to derive the Born Rule in many-worlds are controversial. But at worst the theory has to simply accept the rule as a postulate, similar to the other interpretations.

    What do you think? Did O’Dowd’s approach help? If not, any thoughts on where it fumbles? Or about where the explanation itself might be wrong?

    https://selfawarepatterns.com/2023/12/02/many-worlds-probabilities-and-world-stacks/

#InterpretationsOfQuantumMechanics #ManyWorlds #ManyWorldsInterpretation #Physics #Quantum #QuantumMechanics #Science


It’s been a while, but I’ve occasionally mentioned on the blog that Cecil B. DeMille’s The Ten Commandments (the 1950s color version) is one of my favorite movies. And this has remained true even as I’ve come to see it as straight fantasy.

An interesting fact from when I first saw it as a very young boy. I initially thought Yul Brynner’s Ramses was two different characters. This was because there were several scenes with him outside in armor, and other scenes of him inside in more comfortable attire. To my five year old self, it looked like two different guys. Until the scene after Ramses’ son has just died, when he decides to go after the Israelites. Inside-Ramses calls for his armor, and in the process transforms onscreen into outside-Ramses, making me realize they were one and the same.

    Over the years, I’ve encountered many other entities which initially looked like separate things, but turned out to just be the same thing seen from different perspectives or in different contexts. Time and time again, I’ve learned to be on the lookout for underlying patterns that might indicate I’m looking at different aspects of the same thing. (I think this is why I have little trouble conceptualizing consciousness as functionality.)

Which is why I found this video from Matt O’Dowd interesting. He explores a proposition that David Deutsch has often expressed, that the pilot-wave interpretation of quantum mechanics is just a special case of the many-worlds interpretation.

    PBS Space Time: Are Many Worlds & Pilot Wave THE SAME Theory?

    O’Dowd, around the thirteen minute mark, does note one seemingly structural difference between the two, the guiding equation of pilot-wave, which tells the particle where to go. It’s not needed under many-worlds because under it, a version of the particle goes everywhere the wave function is non-zero. As he notes, many-worlds is pilot-wave minus any one version of the particle being the one true real one.

I don’t know much about the guiding equation. I do wonder if, under many-worlds, it could be seen as an expression of the relationship between particles in one particular world. Or if there’s simply no room for it in that theory.
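
For reference, the standard textbook form of the guiding equation for a single spinless particle of mass m, with ψ the wave function and Q the particle’s actual position, is:

    dQ/dt = (ħ/m) Im(∇ψ/ψ), evaluated at Q

The velocity field on the right is built entirely from the wave function, so it’s present in pure wave mechanics too; pilot-wave adds the one actual trajectory that follows it, which seems consonant with O’Dowd’s point above.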

    I think one reason Deutsch emphasizes the similarities between the two theories, and the one difference, is it seems to answer a common question for the idea of pure wave mechanics: waves of what? According to Deutsch, it’s waves of the different versions of the particle. This leads him to hold a particle first ontology, which seems like a minority view among Everettians (many-worlders).

    Although ultimately this may just be “a six of one, half a dozen of the other” type thing. Are waves, waves of particle versions? Or are particles just fragments of waves? Under any degree of wave function realism, the answer could just be “yes”.

    Unless of course I’m missing something?

    https://selfawarepatterns.com/2023/09/30/are-many-worlds-and-pilot-wave-the-same-theory/

#InterpretationsOfQuantumMechanics #ManyWorldsInterpretation #Physics #PilotWave #QuantumMechanics #quantumPhysics
