Class #9: Quantum Mechanics and Quantum Computing

Since we didn’t have time in Wednesday’s class to get to the “meaty” debate about quantum computing’s possible relevance to the interpretation of quantum mechanics, this week’s discussion topics will be unusually broad.

Does quantum mechanics need an “interpretation”?  If so, why?  Exactly what questions does an interpretation need to answer?  Do you find any of the currently-extant interpretations satisfactory?  Which ones, and why?

More specifically: is the Bayesian/neo-Copenhagen account really an “interpretation” at all, or is it more like a principled refusal even to discuss the measurement problem? What is the measurement problem?

If one accepts the Many-Worlds account, what meaning should one attach to probabilistic claims?  (Does “this detector has a 30% chance of registering a photon” mean it will occur in 30% of universes-weighted-by-mod-squared-amplitudes?)


52 Responses to Class #9: Quantum Mechanics and Quantum Computing

  1. bobthebayesian says:

    If we accept Occam’s razor, there may be several compelling features about Many-Worlds that make it preferable. It is a theory that is just as consistent with our observations (prediction-wise) as any of the others, and yet it gets to postulate only one rule (the Born probabilities) without any need to stipulate a second measurement rule. At first, this may not seem too controversial (and indeed it may not actually be), but the measurement rule is troubling in that it is the only thing about Quantum Mechanics that is not time-symmetric, unitary, or linear (there are probably a host of other “elegant” properties that it “ruins” but I am forgetting them). I think it is ironic that many people positively regard Einstein’s devotion to mathematical elegance as a tool for detecting correct theories, but find use of that idea in the case of Many-Worlds unsettling. Of course, this is not by any means a conclusive reason to believe in Many-Worlds, but I do think it shifts *extra* burden onto the other interpretations to justify why we specifically *require* an ontologically basic concept of “measurement” to obtain a correct theory. Since the QBayesian approach won’t really even discuss this, I do not consider it a real interpretation but just a fancy extension of the “shut-up-and-calculate” idea.
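
    To make that contrast concrete, here is a minimal numpy sketch (my own toy illustration, nothing rigorous): unitary evolution is linear and perfectly reversible, while the projective measurement update is neither.

    ```python
    import numpy as np

    # A qubit in superposition: |psi> = (|0> + |1>)/sqrt(2).
    psi = np.array([1, 1]) / np.sqrt(2)

    # Rule 1: unitary evolution (here, a Hadamard gate). Linear,
    # norm-preserving, and undone exactly by the inverse unitary.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    evolved = H @ psi
    assert np.allclose(H.conj().T @ evolved, psi)  # perfectly reversible

    # Rule 2: projective measurement. The Born rule gives probabilities
    # |amplitude|^2, and the state jumps to a single basis vector -- an
    # update that is neither linear, nor unitary, nor reversible.
    probs = np.abs(psi) ** 2                    # [0.5, 0.5]
    outcome = np.random.choice([0, 1], p=probs)
    collapsed = np.eye(2)[outcome]              # no unitary recovers psi from this
    print(outcome, probs)
    ```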

    I am also interested in what Many-Worlds says about personal identity and consciousness. I have a small “theory” that I like to think about which I call the “Alfred Hitchcock theory of consciousness.” For those who have not seen the old TV horror show called “Alfred Hitchcock Presents,” check out this brief YouTube clip of the show’s opening title screen:
    ( http://www.youtube.com/watch?v=Dj15EGFWgXA&feature=related ). Also, I fully agree that quantum effects are not needed for the biological act of consciousness to emerge. That’s not what I am talking about here.

    Specifically, I think Many-Worlds tells us that something like that Hitchcock silhouette is going on. As blobs of quantum amplitude (that significantly affect you) evolve, the particles in your brain are also evolving according to the Born probabilities. In this sense, at every “time instant” the thing that is “you” is just walking into a new silhouette, and there are infinitely many other versions of “you” that happened to step into all the different possible other silhouettes. I’m not saying this is the grand truth of the universe or anything — only that if we do accept Many-Worlds, it starts to have trippy side effects on what we think of as “me” or consciousness. The fact that I have certain memories is partly because “I” happen to reside on the Everett branch in which the necessary particles in my brain to “have those memories” all went where they were supposed to. That is, not enough decoherence happened to affect enough different particles to actually cause me to walk into a silhouette in which I would have different memories. But, in principle, if I am riding the crest of a quantum wave that is splitting all the time, there is no reason to think that at a given time instant I wasn’t in, say, a vegetative state for the last 26 years and then at the very last “instant” I happened to find myself in exactly the right splitting to impart memories into my brain structure as if I had been vivid, walking around, being a college student, etc.

    If what I just wrote seems completely ineffectual, it’s probably because I explained it poorly; thought through carefully, it really starts to make you feel trippy about your own consciousness and self. Of course, the reason why none of this affects us on a day-to-day basis is that there are so many particles in our brains that the probability of the decoherence we’d need to see is infinitesimally tinier than the probability that I would be killed by a simultaneous lightning strike and asteroid collision. So out of all of the Everett branches I could split into (all the silhouettes I could walk into), the proportion that leads to “weird” observations is surely less than 1/Graham’s number or something like that.

    What this means to me is that I should be very careful to remove all anthropocentric perceptions before asking questions about quantum interpretations. So to the posed question, “Does ‘this detector has a 30% chance of registering a photon’ mean it will occur in 30% of universes-weighted-by-mod-squared-amplitudes?,” I would say we’re already on thin ice because really, the probabilistic part of this issue is just “which Everett branch will ‘I’ happen to find myself in at the time I look at the detector?” In 30% of the Everett branches, “I” will see such a registering. In some Everett branches, I will accidentally go blind at that very instant and see nothing. In others, the machine will magically break due to a cosmic ray and fail to register when really it should, etc. Probably most of the amplitude is concentrated into two events and all other branches are less than noise. But even so, it’s wrong to impart probabilities onto the objects themselves, because asking what “I” will witness is just as much a part of the experiment as anything else.

    Lastly, I wanted to mention a comment that was in WPSCACC. Scott summarizes a particular counter to Many-Worlds by saying, “In other words, to whatever extent a collection of universes is useful for quantum computation, to that extent it is arguable whether we ought to call them “parallel universes” at all (as opposed to parts of one exponentially-large, self-interfering, quantum-mechanical blob).”

    I am a little confused by this because my understanding of Many-Worlds would say that the “parallel universes” *are* just one exponentially-large, self-interacting blob. It just happens to be the case that a lot of subspaces within that blob nicely factorize to make them more or less multiplicatively independent, except for these small “self-interfering” pieces. And the parts that more or less neatly factorize would (as a logical deduction from toy examples like the double slit experiment) seem to have basic, gross structure that we would immediately describe as a slightly altered copy of our own universe. It’s all just one amplitude blob, but happens to factorize into subspaces that we would anthropocentrically immediately see as “universes.” I guess I am confused because I do not see how this is an objection to the physical realism of Many-Worlds… I think that Deutsch would completely agree with the “one-blob-many-worlds” idea, but I am probably missing something.

    Of course, none of the above is meant to be decisive about Many-Worlds. Just a lot of interesting things to think about, and some real issues that do need to be addressed by any other interpretation. Also, and this in no way decides anything, I think it’s good to have some perspective. At the current time, a slight majority of physicists actually believe MW, including Hawking and Deutsch. Feynman also believed it, and Weinberg is often described as believing it with a few reservations that the others named didn’t share. Again, that doesn’t count as evidence for anything, but hopefully shows you don’t have to be crazy to believe it (haha, *if* you think Deutsch, Hawking, Feynman, and Weinberg aren’t/weren’t crazy!)

    • Scott says:

      Thanks for the interesting reaction, bobthebayesian (who’s apparently NOT bobtheQbayesian 🙂 )! A few responses, in no particular order:

      (1) As long as we’re making “arguments from authority” :-), the views of Hawking, Feynman, and Weinberg on QM are all somewhat complicated, so it would be best to rely on direct quotes from them if possible. I once read Weinberg saying that MWI is “like democracy, terrible except for the alternatives.” (For whatever it’s worth, I completely share that sentiment about MWI.) For his part, Feynman clearly had sympathy for MWI, but he also famously said, “I think it’s safe to say that nobody understands QM.” That’s very unlike the attitude of most modern MWI proponents, including Deutsch! The latter generally believe that they understand QM perfectly (or as well as they understand, say, Copernican astronomy), and that, just like in the Copernican case, it’s only parochial, anti-multiverse prejudice that prevents others from understanding it too.

      (2) Regarding WPSCACC: I completely agree that the “blob” aspect of quantum computing is perfectly understandable within MWI, and for exactly the reason you say. But I was addressing a different question: whether QC demonstrates MWI, i.e. whether it should persuade any reasonable person to think in MWI terms even if the person wasn’t previously doing so. And I was pointing out one almost-immediate difficulty there: that, to whatever extent we know that a quantum computation required an exponentially-large interference pattern, to that extent we also know that the “branches” of the computation never succeeded in establishing independent identities as “worlds.” (For a more careful development of this response, see this recent paper by Michael Cuffaro.)

      (3) Yes, I completely agree that MWI “really starts to make you feel trippy about your own consciousness and self” when you think about it carefully, and that that’s precisely the aspect of it that many people find troubling. (But then many people go further, and throw really bad anti-MWI arguments into the mix, for example that it’s “weird”—what important scientific discovery isn’t?—or that it “violates Occam’s Razor,” when a strong case can be made for exactly the opposite.) As I see it, the question then is whether we should be satisfied with MWI’s clear advantages in simplicity and elegance, or whether we should continue to search for a less “trippy” explanation. (After all, there are many simple, elegant theories whose “only” flaw is their failure to account for various aspects of our experience!)

      (4) Just a quibble, but 1/(Graham’s number) is overreaching! 🙂 Assuming Bousso’s cosmological entropy bound, the probability of matter rearranging itself in a given observable way will never be smaller than ~1/exp(10^122), provided it’s nonzero.

      • bobthebayesian says:

        Thanks! I see better what you’re saying in item (2) now and I will read the linked paper asap. The first thought that comes to my mind is this: what are we to believe about a quantum computation in the unlikely case that it gives us the wrong answer?

        If we expect it to factor our numbers and it doesn’t do it correctly, the MW interpretation suggests we should view this as though we happen to find ourselves in one of the unlikely Everett branches that leads to one of the low-amplitude outcomes (assuming we designed our algorithm to successfully shift most amplitude onto the correct outcomes). What would the other interpretations have us believe about it, and would they be reasonable alternatives to this MW idea? Clearly we can just repeat the computation until we have 1-epsilon certainty that we’ve seen the right answer. But the fact that the algorithm can ever possibly realize an incorrect output at all seems to be the key to Deutsch’s question “where was the integer factored?” (all of this is totally aside from noise in the device, cosmic rays, etc., which could all affect a classical computer too).

        I agree that if we were so good at quantum software development that we could arrange an algorithm to always cancel all amplitude on incorrect outputs, then running it would not help us believe we’d walked into a specific Everett branch. We could just as easily believe we’d engineered amplitude to knock out other amplitude as we move along in a single universe. But if we believe that different outcomes of the computation are possible before we execute the program, then doesn’t this mean we believe there are different realities we can find ourselves in depending on the Born probabilities?

        I’m very grateful to have the chance to discuss this, because this is perhaps the main thing I feel most confused about regarding quantum computing.

      • bobthebayesian says:

        (The main thing that confuses me as it relates to this debate, that is… there’s certainly a ton of confusing stuff in general.)

      • Scott says:

        Bob: Interesting, I’d never thought about the philosophical relevance of exact vs. bounded-error quantum algorithms!

        I guess the first, “technical” thing I can say is that there are quantum algorithms that plausibly achieve superpolynomial speedups and that succeed with probability 1, though of course they assume physically-preposterous error-free quantum gates. See for example this paper by Mosca and Zalka, which gives a zero-error version of Shor’s discrete log algorithm. Even the original Bernstein-Vazirani paper gave a black-box problem (Recursive Fourier Sampling) that’s solvable with n quantum queries exactly, but requires ~n^(log n) classical queries even with bounded probability of error.
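
        To give a concrete feel for “exact,” here is a toy numpy simulation (a sketch of mine, not the constructions from the papers above) of the basic Bernstein-Vazirani algorithm, which recovers a hidden string s from the oracle f(x) = s·x mod 2 using one query, with success probability exactly 1:

        ```python
        import numpy as np

        n = 4
        s = 0b1011                    # hidden string; the algorithm must find it
        dim = 2 ** n

        # Uniform superposition over all n-bit strings.
        state = np.ones(dim) / np.sqrt(dim)

        # One query to the phase oracle |x> -> (-1)^(s.x mod 2) |x>.
        for x in range(dim):
            if bin(x & s).count("1") % 2:
                state[x] *= -1

        # Apply H tensored n times: entry [y, x] = (-1)^(x.y) / sqrt(dim).
        Hn = np.array([[(-1) ** bin(x & y).count("1") for x in range(dim)]
                       for y in range(dim)]) / np.sqrt(dim)
        state = Hn @ state

        print(np.abs(state[s]) ** 2)  # ~1.0: measuring now yields s with certainty
        ```

        The bounded-error algorithms under discussion differ only in that the final distribution puts 1-ε of its mass, rather than all of it, on the right answer.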

        Now, on to your main question: suppose you were a neo-Copenhagenist or a QBayesian. Then why couldn’t you say the following (I’m just thinking out loud here):

        “I measured the QC’s state, and I observed a wrong answer. Observing the right answer, I simply regard as a hypothetical: something that could have happened, and in fact would have happened with probability 1-ε, but didn’t happen. It’s not something that ‘really happened in a 1-ε fraction of universes,’ whatever the hell that means. The situation is not much different from that of a classical coin that had a 1/2 probability of landing heads, but actually landed tails. If we don’t insist on parallel-universes language in the case of the classical coin, then why should we insist in the case of the quantum algorithm? Of course, one important difference is that, in the quantum case, I have to calculate the probability of a right answer by applying the Born rule to one self-interfering, quantum-mechanical blob, and the probability of a wrong answer by applying the Born rule to a different self-interfering blob. But then there’s a measurement—a primitive operation, according to my conception of science—and only one blob gets picked. From that point forward, the ‘other’ blob will only ever feature in my explanations of the universe around me as giving the probability for an unrealized hypothetical, so I’m perfectly justified in regarding it as such, just as I would in the classical case.”

      • bobthebayesian says:

        I have been thinking about this quite a bit. I am wondering if the problem doesn’t lie in this portion of what you say: “Observing the right answer, I simply regard as a hypothetical: something that could have happened, and in fact would have happened with probability 1-ε, but didn’t happen.”

        This makes an ontological claim about the amplitude that was assigned to all of the possibilities that I did not see. With the classical coin example, we don’t have that problem, because the fundamental reasons why the coin’s outcome is uncertain are in principle understandable. That is, we know specific things that, if measured with enough precision, would change the odds of the coin flip. We could build a little device with a camera and train classifiers to recognize Heads from Tails with > 50% accuracy, assuming our camera could compute certain already-understood physical quantities (like the force of the flip, air resistance, etc.).

        In fact, human beings can actually train themselves to flip a fair coin with targeted success rates. As Persi Diaconis said, “If you hit a coin with the same force in the same place, it always does the same thing.” (See also). Professional magicians can train themselves to flip a coin with >80% bias.

        The difference is that the outcome of a quantum computation is not like a coin flip in the sense that all we know are amplitudes. If we believe the amplitude is real, then some account must be given for where all the amplitude went for the events we don’t observe. It’s no ontological problem to say that all the probability “disappears” from other hypothetical events when we observe the outcome of a coin flip, because no one is claiming that that probability is ontologically basic. Unlike a coin flip, for a quantum outcome you cannot chalk it up to just not knowing this-or-that quantity with enough precision. As I understand it, this is precisely why there is a measurement problem in the first place. If we could just say, “oh, well, outcome X was just one hypothetical outcome like Heads or Tails on a coin” then we would be able to say, “Ah, here is quantity Z that, if we could measure it with much more fidelity, would tell us whether our quantum computation was more likely to come out as X or Y.”

        The problem I see with the traditional approaches is that they say things very much like your quote above: “I simply regard as a hypothetical: something that could have happened, and in fact would have happened with probability 1-ε, but didn’t happen.” This seems like perfectly innocuous language, but the “but didn’t happen” part is innocently concealing the whole problem. How do you *know* it didn’t happen? If it didn’t happen, then what happened to the amplitude? And why should we believe you? Isn’t it strictly simpler to just say that it *did* happen? This is very different from mere probabilities assigned to outcomes of a macroscopic process.

        Many Worlds allows this simplification, but it too suffers a problem here. I think Robin Hanson puts it well when he says:

        “The big problem with the many worlds view is that no one has really shown how the usual linear rule in disguise can reproduce Born probability rule evolution. Many worlders who try to derive the Born rule from symmetry assumptions often forget that there is no room for “choosing” a probability rule to go with the many worlds view; if all evolution is the usual linear deterministic rule in disguise, then aside from unknown initial or boundary conditions, all experimentally verifiable probabilities must be calculable from within the theory. So what do theory calculations say? After a world splits a finite number of times into a large but finite number of branch worlds, the vast majority of those worlds will not have seen frequencies of outcomes near that given by the Born rule, but will instead have seen frequencies near an equal probability rule. If the probability of an outcome is the fraction of worlds that see an outcome, then the many worlds view seems to predict equal probabilities, not Born probabilities. … We have done enough tests by now that if the many worlds view were right, the worlds where the tests were passed would constitute an infinitesimally tiny fraction of the set of all those worlds where the test was tried. So the key question is: how is it that we happen to be in one of those very rare worlds? Any classical statistical significance test would strongly reject the hypothesis that we are in a typical world.”

        I think for the traditional interpretations to describe bounded-error QC, they have to explain “where did the extra amplitude go.” Or, why should we believe, in the face of experiment, that quantum amplitude is not “really there” and is instead just a parochial calculational tool? Why not the strictly simpler approach that quantum amplitude is really there, and then grapple with why we see the Born probabilities when a typical world would not? To me, the latter is an improvement over the former, though not without its problems. And this really is why Many Worlds is “terrible, except for the alternatives.”
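
        Hanson’s branch-counting worry is easy to check numerically. A minimal sketch (my own toy calculation, with made-up numbers): repeat a measurement whose Born probability of outcome 1 is 0.9 a hundred times, and compare how much of the branch *count* versus the Born *weight* ends up near the Born frequency.

        ```python
        import math

        n, p = 100, 0.9
        window = range(85, 96)   # runs whose frequency of 1s is within 0.05 of 0.9

        count_near = sum(math.comb(n, k) for k in window)
        born_near = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in window)

        print(count_near / 2**n)  # ~1e-12: a vanishing fraction of the 2^n branches
        print(born_near)          # ~0.93: nearly all of the mod-squared weight
        ```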

      • Scott says:

        bobthebayesian: I’m glad you’ve stated your view with such precision and clarity, since I can now explain exactly where I disagree with you.

        Imagine that the laws of physics were completely classical, but they involved probability at a fundamental level. E.g., there were certain particles that under certain circumstances decayed with 50% probability, with no “hidden degrees of freedom” anywhere in the universe that determined whether the decay would happen (unlike in the case of the coin flip), but also no superposition or interference (unlike in the quantum case).

        In such a case, I would essentially shrug my shoulders, and say the universe was probabilistic. I wouldn’t be the slightest bit tempted to describe what was going on using Many-Worlds language (as I am the slightest bit tempted in the quantum case). Nor would I stay up at night wondering “what happened to the probability mass I didn’t observe.” What about you?

      • bobthebayesian says:

        This is a big difference. I’m not sure it is a valid thought experiment to envision a world where causeless probability happens at a fundamental level. By “causeless” I mean “we can tell a priori that there are no hidden degrees of freedom governing it.” This is different than the EPR results, because it is in fact amplitude (the physical thing) that gives rise to quantum probabilities (though we don’t know how and certainly could be wrong about this).

        The hard part to come to grips with is that it’s not properties of the local environment of particles, but rather what that local environment is made out of (i.e. amplitude) that gives rise to the probabilities. The Born rule is puzzling, but I don’t think a scientist could ever possibly be satisfied by shrugging shoulders and saying, “oh well, that just means amplitudes give rise to probabilities and the universe is inherently random,” or something along those lines.

        The other thing is that we can observe real, non-trivial objects in superposition, like buckyballs. I don’t understand how we can see a mixture of the different possible outcomes and believe anything other than that the amplitude for different outcomes is physically real. I’m not saying that I’m right, only that, truly, I do not understand how one could interpret that kind of observation in any other way. I would be happy to read more serious accounts of how the other interpretations reconcile that.

        I’m not convinced that there could be any sort of observation that leads me to believe the universe is inherently random: that there is no physical quantity that, when measured or computed, gives rise to the observed probabilities. For me, this more or less removes your suggested world from consideration. If I could prove there was not a hidden variable explanation for some uncertainty, and also that there was not some pool of outcomes that could be superposed to yield uncertainty, then I would merely be asking, “well, then what does explain this uncertainty?” To me, that’s the job of a scientist. Maybe we have to go back a little into the philosophy of statistics to decide if such a “causeless probability” is really compatible with a scientific worldview.

        Again, I want to be clear that these are things that I fail to understand, which is not at all to claim that I am correct about them.

      • I am not a physicist, but it seems to me that we generally take things like electric charge to be fundamental, because we haven’t found any mechanism that somehow grants it to particles. We can try unifying electromagnetism with the other forces, but there’s still this quantity attached to each particle that we just have to measure. So why couldn’t the analogous situation hold for probabilities?

      • bobthebayesian says:

        I think in terms of applying physics to solve problems or applying probability to solve problems, your point of view is definitely correct. But in terms of fundamental explanations and theoretical physics, I’m not so sure. For example, why don’t we just believe that objects have intrinsic masses? Why are we looking for the Higgs boson? Once we discover it (or change our models when we don’t), little will change at the big-picture level on which most mass-related properties of physics rest. But we still seem to care a great deal about giving a real account of where mass comes from, rather than being content to believe that it’s “just intrinsic.”

      • bobthebayesian: There seems to be a bait-and-switch going on here. Yes, whenever a more fundamental explanation is available—whether it’s Newtonian mechanics, the Higgs boson, whatever—of course we should go for it! But any theory in any area of science will have to stop somewhere, and posit some objects as primitive. So I think the real question is, what types of things are we at least willing to accept at a primitive level? Personally, I’m at least as willing to accept probabilities as I am to accept space, time, energy, etc. — but maybe that’s a function of dealing all the time with probabilistic algorithms (for which it’s completely irrelevant where the probabilities come from), and maybe you take a different view.

      • bobthebayesian says:

        I agree that this is the case in any specific theory. But how does one improve upon a given theory without challenging its assumptions, some of which are about which objects are actually primitive? I’m just saying that I don’t see any reason to be conclusively satisfied with a given set of primitives, whether they are probabilities or particles, or whatever. What would it mean to have a theory in which you literally knew (as much as it is physically possible for a human mind to know something) that you could not ever improve your explanation by entertaining previously un-entertained ideas about what the primitives should be? I agree this can happen in mathematics, where we work forward from definitions. I’m not convinced it can in physics, where we’re trying to solve the inverse problems that tell us what the definitions were in the first place.

      • I’ll put it this way: there are things that it would be nice to explain better if possible, and then there are things that keep you awake at night. For me, classical probabilities (supposing they arose in fundamental physics in some non-quantum way) are in the former category while amplitudes are in the latter. (Well, at least they kept me awake at night as a grad student. These days I sleep pretty well, whether justifiably or not. 🙂 )

      • sbanerjee says:

        As BobTheBayesian points out and as Scott has also mentioned in this thread, MWI has some “trippy” implications for our perception of personal identity and consciousness. Whether MWI’s trippiness is implied may depend on what philosophy defines your perception of personal identity and consciousness. If the Buddhist perception of personal identity is taken into account, then the issues of quantum measurement and possible many worlds have no effect on one’s personal identity. The Buddhist perception is that there is no personal identity; we are supposedly just combinations of different things working in harmony, like a chariot is but a combination of wheels, a seat, etc. What there is, in the Buddhist view of things, is action and reaction. With the MWI, BobTheBayesian points out that ‘at every “time instant” the thing that is “you” is just walking into a new silhouette, and there are infinitely many other versions of “you” that happened to step into all the different possible other silhouettes.’ In the Buddhist perception, this would be fine as long as all actions in all of the silhouettes carry through in an ‘expected’ manner. I don’t mean to argue with religious philosophy; I just want to point out that issues of personal identity/consciousness might not be something that stops the acceptance of the MWI, because some perspectives towards personal identity/consciousness work great with MWI.

  2. I’m really helpless with the technical details of QM, but for me the big question is whether “occurs in 30% of universes-weighted-by-mod-squared-amplitudes” can be interpreted as anything like a genuine frequency. I mean, there are two intuitive ways to understand what probabilities *are*: degrees of credence, and frequencies. Credence is off the table in this context, so can we interpret the mod-squared-amplitude as frequency in fully the same sense that “6 out of 10 marriages end in divorce” is a frequency?

    • bobthebayesian says:

      I don’t understand why credence is off the table here.

      • What is there to be wrong about? If you know that when you make a measurement x will occur in 30% of universes-weighted-by-mod-squared-amplitudes and z will occur in 60% of universes-weighted-by-mod-squared-amplitudes you know everything there is to know about the measurement.

        Then again I might be very confused on some basics. But Huw Price argues for this general point forcefully in a paper:

        philsci-archive.pitt.edu/3886/1/Everett-at-50.pdf

      • bobthebayesian says:

        The way I see it is that you know a probability distribution, which is just quantified uncertainty. If we knew *why* the Born probabilities are the way they are, or why they are a function of mod-squared amplitude and not something different, then we might be able to do physical calculations and determine *exactly* under what physical conditions you will emerge in the 30% x scenario and under what other circumstances you will emerge in the 60% z scenario, etc. Indeed, knowing what physically induces the Born probabilities is a tremendous open problem, but not one that is ruled out from being understood in terms of physical explanation rather than pure observational frequency.

        It’s similar to a coin flip. Naively, there is a 50/50 chance of either outcome. But in physical truth, if we could model the strength of the thumb that flips the coin, the air resistance, etc., precisely enough, then perhaps we could predict the outcome much better than with 50/50 odds, but still somewhat less than perfectly. This would be an explanation of the result, rather than purely frequency. We’re just much further from understanding how we could do this for the Born probabilities (i.e. what QM concepts are similar to “strength of the thumb that flips the coin” or “air resistance” when it comes to evolving states).

      • bobthebayesian says:

        Also, Robin Hanson offers an interesting speculative idea that avoids some of the decision theory problems that Price focuses on in the paper you linked: (http://hanson.gmu.edu/mangledworlds.html). I think this approach is interesting, but really speculative. The main thing, though, is that there are a lot of ways to meet this problem, most of which avoid all of the decision theory problems that Deutsch’s preferred approach suffers.

      • But what is it that you are uncertain *about*? It’s not like you don’t know if A will happen or B will happen — you know that both will happen. To quote Lev Vaidman:

        “There is a serious difficulty with the concept of probability in the context of the MWI. In a deterministic theory, such as the MWI, the only possible meaning for probability is an ignorance probability, but there is no relevant information that an observer who is going to perform a quantum experiment is ignorant about. The quantum state of the Universe at one time specifies the quantum state at all times. If I am going to perform a quantum experiment with two possible outcomes such that standard quantum mechanics predicts probability 1/3 for outcome A and 2/3 for outcome B, then, according to the MWI, both the world with outcome A and the world with outcome B will exist. It is senseless to ask: “What is the probability that I will get A instead of B?” because I will correspond to both “Lev”s: the one who observes A and the other one who observes B.[6]”

      • bobthebayesian says:

        But I think the matter of ignorance is exactly which “Lev” you will correspond to. I don’t understand why he says, “It is senseless to ask: “What is the probability that I will get A instead of B?” because I will correspond to both “Lev”s: the one who observes A and the other one who observes B.” You precisely *won’t* correspond to both Levs.

        If you knew why the Born rule was correct, then it would remove your ignorance about how you will evolve, and you would not see A with probability 1/3 or B with probability 2/3 … you would see A with probability 1 or 0 and you would know which without making the measurement. Then you cease to be ignorant about the outcome and probability would not apply. In this case, if you knew such physics, then you could define an objective version of yourself by tracing out all the Everett branches you would take ad infinitum. There would be objectively different versions of yourself and rather than seeing Peli_{t} as one individual that splits into a bunch of others, you would see, at “one instant in time” a bunch of different Peli’s that all just happen to overlap with each other prior to a particular choice of t.

        I agree that in MW, only ignorance probability makes sense. But then again, as a Bayesian, I already think that about all probabilities. Another very important / trippy part of all this is when you start to think of timeless physics. That might actually be the best way to address this probability issue, but I do not understand it very well at all yet. Maybe check this out: (http://lesswrong.com/lw/qp/timeless_physics/)

        Essentially (and again, this is just what I think is the right projection of the MW interpretation here; I’m not expert enough to assert that I’m correct about it), the thing you are ignorant about is which version of yourself you happen to already be (specifically *not* which version you “will become,” but just which, of the infinitely many overlapping versions up to this point, you happen to already be). Currently all you can do is specify an answer with some probability, just like you can specify some probability that a coin will land on one side or the other. If you knew more about the mechanics of the coin, you would remove ignorance and your probability estimate would shift closer to being a point mass. Probabilities are not about things that happen in the world. They are only about your state of knowledge.

        Somehow I still feel like I am missing something, because it seems like we’re just coming at this from two different angles. I don’t see anything unusual about assigning probabilities to Everett branches. I don’t know the physics that causes the branching, nor the physics that determines to which branch the experience of being “me” will go. Therefore, I assign probabilities to these two outcomes based on the data I have observed, i.e., that the Born rule works, inductively. I don’t understand how Vaidman claims there is nothing for the observer to be ignorant about.

        Think of the double slit experiment. When no detector is placed by a slit, I observe an interference pattern. When a detector is there, I see only two stripes. By physically placing the detector there, I so arrange matter that I can only possibly evolve into certain Everett branches, where the photon is either in state 1 or state 0 definitely (went through slit 1 or went through slit 0). I remove uncertainty. If there is no detector, I allow myself to evolve into Everett branches where the photon is in linear superposition of states, so there are more branches possible. The one that “I” will evolve into is determined by the amplitudes of the different paths the photon can travel, and I physically have different credence about different paths according to Born’s rule. There surely is something to be ignorant of there, namely which photon-branch the molecules in my brain will be entangled into. In principle, there could be some physics that lets me calculate this explicitly with perfect certainty. I don’t know that physics, so I have to assign credence based on observation.
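
        The two-path arithmetic behind this is short enough to spell out. In this toy calculation (my own sketch, with made-up path amplitudes), removing the detector lets the amplitudes add before squaring, while adding the detector tags each path with an orthogonal detector state so that only the probabilities add:

        ```python
        import numpy as np

        x = np.linspace(-5, 5, 11)             # positions on the screen
        a1 = np.exp(1j * 2 * x) / np.sqrt(2)   # toy amplitude for the slit-1 path
        a0 = np.exp(-1j * 2 * x) / np.sqrt(2)  # toy amplitude for the slit-0 path

        # No detector: path amplitudes add *before* squaring, so the
        # cross term survives and the screen shows fringes.
        fringes = np.abs(a1 + a0) ** 2         # = 2 cos^2(2x), oscillates

        # Detector at a slit: each path is entangled with an orthogonal
        # detector state, the cross term <d0|d1> = 0 drops out, and the
        # probabilities (not the amplitudes) add: no interference.
        no_fringes = np.abs(a1) ** 2 + np.abs(a0) ** 2   # = 1 everywhere

        print(fringes.round(2))
        print(no_fringes.round(2))
        ```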

      • Right, I think we’re getting somewhere here: the key question is whether there are facts about persons above and beyond facts about person-slices. Why do you think there are distinct overlapping persons, rather than just splitting chains of person-slices?

  3. I find it curious that when discussing “interpretations” of physical theories our brains seem to be particularly bothered by the presence of uncertainty in the laws of physics, and we keep trying to find a “physical” meaning to it.

    For example, in the case of “classical” probabilities, consider a fair coin that, when tossed, lands heads “with probability half.” We seem to be most comforted when we realize that we do not really have to worry about what probability *is*, because all that’s happening is that we happen to lack enough information to trace the trajectory of the coin and exactly predict its outcome. If we did have enough such information, then we would not have to worry about what probability *is* and could predict the outcome perfectly. (And then the apparent probability would simply be the ignorance of information crucial for prediction, as already discussed in the comments above.)

    In the case of quantum-mechanical probabilities, however, we seem to be “stuck” with the weigh-by-mod-squared-amplitudes rule, and have no better explanation for it other than MWI. And the probabilities just won’t go away. And thus we struggle to interpret what the uncertainty in these quantum-mechanical outcomes might mean.

    But now suppose that we did find an explanation that removes uncertainty from the theory, and let’s say that the resulting theory is even elegant.

    Why would we be happy with a theory with no uncertainty?
    Won’t we have to also explain what the laws themselves *are*? (Even if now we don’t have to worry about uncertainty.)
    Why do we struggle so much to give a “physical” meaning to uncertainty/probability, while we might be happy to leave, say, a deterministic theory “in the abstract”? E.g., what *is* a unitary transformation? Why not worry about that too?

    • bobthebayesian says:

      I think these are really good questions. One short answer that comes to my mind is that a lot more is riding on whatever quantum amplitude “is” than whatever a unitary transformation “is.” But even at that, I think we do tend to impose physical interpretations on things from functional analysis, including unitary operators. This was a big motivation for the development of Lie theory and its extensions. Another point of view is that unitary transformations “are” whatever we define them to be, mathematically. Whereas probabilities of quantum outcomes are thrust upon us whether we like it or not. I personally feel more motivated to uncover the “why” behind something if it seems to be imposed by nature rather than postulated as part of mathematics, although not everyone shares this view and not everyone needs to.

      But it definitely is a problem for Many Worlds that all of our interpretations don’t lead to experimentally distinguishable consequences. If you can explain every outcome equally well, then you have zero knowledge… and I agree with Scott that many of the extremely evangelical Many World advocates seem to believe, in some limited sense, that this theory “explains it all.” I see Many Worlds as a good bad theory. You need bad theories to make progress, and it’s the best one we have. Once we find ways to test certain parts of it, it will probably be at least partly wrong. But that’s a good thing. I wish there were more good bad theories, in a lot of domains.

    • I think this puts its finger on a very important point, which is that we have a very deterministic conception of how the universe works. One of the expectations before quantum mechanics came along was that any imprecision on the part of our theories was simply because we didn’t know enough about our starting states. Now, in principle, there is no way we can know about them. I think this bothers a lot of people.

  4. The crux of the issue of “interpreting quantum mechanics” is that, sometimes, successful scientific theories imply very strange things. And when faced with this sort of behavior, we have to decide how we’re going to treat these claims.

    For example, we could say that quantum mechanics is simply a convenient instrument for making certain types of physical predictions, and not perceive it to be making some ontological claims (such as the existence of the wavefunction). Certainly it has not been the case that we’ve been able to reduce other scientific explanations to it: quantum mechanics is woefully inadequate for making predictions even on the level of chemistry, including empirical facts such as the aufbau (n + l) principle, the Hund principle and the Pauli principle. Physicists have strenuously attempted to derive these conclusions from the postulates of quantum mechanics, and have failed. We might take this to be an indication that we shouldn’t worry too much about the philosophy of quantum mechanics, that it is all very tentative and something better will come along soon.

    Of course, the problem of reduction is nothing new, even for Newtonian mechanics (which becomes intractable already in the three-body case). So, another approach we may take is to try to use our existing intuitions to make sense of science, to “tame” the unbounded imagination of quantum mechanics in some sense. We interpret the theory in a way that makes sense to us, and if it all seems too strange, we reject it (like Einstein did). I think the many-worlds interpretation is an outgrowth of this perspective: human beings possess a very strong capacity for counterfactual reasoning, and so imagining the “splitting” of timelines is a very natural intuition. But, as discussed in class, it often leads to preposterous claims about what physics should do, without any basis in “the mathematics.”

    The alternative is to claim that we should retrain our philosophical intuitions along with the science. Arguably, this is what every theoretical physicist in training spends a large portion of their time doing: getting up close and personal with the equations and developing a private understanding of how they ought to work. But perhaps this is asking too much of someone who was not brought up on quantum mechanics, of someone who did not live in a world of quantum mechanical effects.

    • Scott says:

      “quantum mechanics is woefully inadequate for making predictions even on the level of chemistry, including empirical facts such as the aufbau (n + l) principle, the Hund principle and the Pauli principle”

      Hi Edward, I confess I don’t know what you mean by the above. Quantum mechanics is like an operating system for physics: if you want to make actual predictions, you need to install “application software” on top of it, like nuclear physics or quantum electrodynamics. But all of those things are built on QM and none of them contradict it.

      The Pauli exclusion principle, in particular, has a very simple and beautiful explanation in terms of interference of amplitudes (I can explain it if you’re interested).

      Incidentally, though, I do like your presentation of MWI as an attempt to make quantum mechanics seem less strange! That’s the exact opposite of how most MWI proponents and critics alike view MWI: the proponents revel in the supposed “strangeness,” while the critics object to it. But to me, MWI has always felt more like an attempt to fit familiar sci-fi imagery onto a theory whose actual mathematics is stranger than any fiction.

      • “Hi Edward, I confess I don’t know what you mean by the above. Quantum mechanics is like an operating system for physics: if you want to make actual predictions, you need to install “application software” on top of it, like nuclear physics or quantum electrodynamics. But all of those things are built on QM and none of them contradict it. ”

        I think the remark I made was intended to consider the question, “Has chemistry been reduced to quantum mechanics?” I’m not familiar enough with QED to really know how the “operating system” metaphor applies and how it doesn’t apply.

        The remark about Pauli exclusion comes from a 1995 paper by Scerri: http://www.jstor.org/pss/20117979. If in fact this situation has changed recently, I would love to know, and I know a philosopher (or two) who would be interested in this information.

        I think we are very much in agreement about the status of MWI imagery. 🙂

      • Scott says:

        From reading the first page of that paper by Scerri, it seems to be chock-full of the exact sort of confusion that I was trying to dispel with my operating-system metaphor. Let me try again: saying you failed to reduce chemistry to QM is exactly like saying you failed to reduce astronomy to Newtonian mechanics, because you didn’t manage to derive the masses of the planets from F=ma or Gmm/r^2. In both cases, we’re talking about questions that the fundamental theory simply wasn’t designed to answer, and that no serious person ever claimed it could answer.

        In the case at hand, though, the Pauli exclusion principle actually DOES follow straightforwardly from quantum mechanics, together with one additional fact: the behavior of fermions. Recall the principle says that two identical fermions can never be in the same place at the same time. This can be understood as follows: by definition, the quantum states of identical fermions are antisymmetric, in the sense that if you perform a physical operation that swaps two identical fermions, you get the same quantum state, except that the amplitude gets multiplied by -1. But if you have two identical fermions in the same place at the same time, then even the identity transformation swaps them! That means that the amplitude for such a configuration must equal minus itself, or in other words must be zero.
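
        Spelled out in a few lines of numpy (a small sketch of mine, with abstract single-particle “modes” standing in for “same place at the same time”):

        ```python
        import numpy as np

        d = 3                                  # single-particle modes 0, 1, 2
        rng = np.random.default_rng(0)
        psi = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

        # Identical fermions: swapping the two particles (transposing the
        # indices) must multiply the amplitude by -1, so antisymmetrize.
        anti = (psi - psi.T) / 2
        assert np.allclose(anti.T, -anti)

        # Diagonal entries are the amplitudes for both fermions occupying
        # the *same* mode; antisymmetry forces every one of them to zero.
        print(np.abs(np.diag(anti)))           # [0. 0. 0.] -- Pauli exclusion
        ```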

    • D.R.C. says:

      This reminds me a lot of this xkcd (http://xkcd.com/435/). If we really need to make predictions at a certain level, we use the appropriate level of abstraction to do it most efficiently, so we don’t wind up with extremely difficult problems, since we may not have complete knowledge of the transitions between levels. In theory, everything that can be explained by concept A should also be explainable by any more abstract concept (assuming a 1-d scale, which may or may not be true; otherwise, by one of its “ancestors”), since concept A is just a special case of it. Of course, pure mathematics is not going to be very efficient at solving anything related to neuroscience, and while biology might be more efficient than mathematics there, it will probably not be as efficient as just using neuroscience to begin with. People tend to develop theories around certain ideas before others and center learning around them. These are not always found in the “correct” order of abstraction. For instance, knowledge of biology has been around for a very long time (even if it was just at the level of “if I stab someone in the neck, they die really quickly”), but so has economics (“I have something that someone else wants; how can I get the best thing for myself?”).

  5. Katrina LaCurts says:

    Here is the specific problem I have with measurement in quantum mechanics (I’m going to avoid how I feel about the various interpretations, because I still haven’t decided):

    My initial understanding of the measurement problem was this: “Quantum mechanics has different rules for when you look and when you don’t, and that’s totally weird.” The major problem I had was how to define what it meant to look, i.e., to take a measurement. After reading Penrose, I had thought that in order to measure, we had to “blow things up” to a classical state, and that that’s how we defined a measurement.

    So to me, this just seemed like an engineering problem: our measurement-devices are “too big” to measure a quantum system without significantly interfering with the system, so certainly things change when we take a measurement (because we’ve interfered). Work should be done to figure out how to measure things with “smaller” devices.

    But, as I understand it now (after reading some more), this is not true; even these “smaller” measurements will cause problems. This left me again with the same problem: what is a measurement? So my next decision was that a measurement consisted of a photon bouncing off of something. This is about as small a measurement as I can imagine, but at a quantum level, I still see the photon as interfering with the system. So to some extent, nothing seemed weird to me; again, we interfered with the system, and things changed.

    But then, I thought, photons bounce off of things all of the time. So maybe the previous definition is incorrect, and we should define measurement abstractly, as getting information out of the system. Then, is the wavefunction “collapse” something artificial, akin to the probabilities of a coin flip “collapsing” into 0 or 1 once we observe the coin flip?

    • Scott says:

      Hi Katrina,

      In one sense, you’re absolutely right: a measurement of a photon (call it A) by a human being, or a large measuring device, can be modeled in exactly the same way as a physical interaction between photon A and a second photon B. In both cases, quantum mechanics tells us that the combined system evolves from unentangled to entangled, as the second system (the human, the measuring device, or photon B) gains information about the original state of photon A. And in both cases, someone looking at photon A only will just see a photon whose “wavefunction has collapsed” — there’s no way, by looking at photon A only, to tell whether photon A was measured by a macroscopic object, or whether it was simply entangled with a second photon B. (You can only detect that two systems are entangled by measuring both of them.)
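
      If it helps, here is a minimal numpy sketch of that last point (my own illustration, not something from class): after an entangling interaction, photon A alone is described by its reduced density matrix, whose off-diagonal (interference) terms are gone, and that is exactly what “collapse” looks like from A’s side.

      ```python
      import numpy as np

      # Photon A alone, in superposition: off-diagonal terms signal coherence.
      alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
      rho_before = np.outer([alpha, beta], np.conj([alpha, beta]))

      # An interaction records A's basis state in system B (photon B, a
      # device, or a brain -- the math is identical): alpha|00> + beta|11>.
      joint = np.zeros((2, 2), dtype=complex)
      joint[0, 0], joint[1, 1] = alpha, beta

      # What an observer with access to A alone can see: trace out B.
      rho_after = np.einsum('ib,jb->ij', joint, joint.conj())

      print(rho_before.round(2))  # [[0.5 0.5] [0.5 0.5]] -- coherent
      print(rho_after.round(2))   # [[0.5 0. ] [0.  0.5]] -- looks collapsed
      ```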

      The issue is this: It’s easy to understand what it means for photon A to become entangled with a second photon B, and in fact one can perform experiments that directly verify that behavior. But what does it mean for photon A to be entangled with a macroscopic measuring device, or a human brain? Well, it would mean that the measuring device or the human brain would have to enter a superposition of states, corresponding to the different possible outcomes of measuring photon A. That leads directly to the Many-Worlds Interpretation, with all of what bobthebayesian called its “trippy” consequences for personal identity.

      The obvious alternative would be to hold that, somewhere between the level of photons and human brains, “the buck stops,” and an actual measurement takes place with a single classical outcome. But of course, if you believe that, you then face the problem of explaining where exactly the buck stops, and why.

      Now, in principle, one could test the claim of the Many-Worlds Interpretation that macroscopic measuring devices and human brains evolve into superpositions of states, by performing experiments on the measuring devices or human brains that looked for quantum interference effects. In reality, though, we’re far, far away from being able to do such experiments, and might remain so for thousands of years or even forever. (Recall from class that, so far, the largest objects for which the double-slit experiment has been performed are various types of buckyballs! Though subtler quantum interference effects have also been seen in superconducting currents consisting of billions of electrons.)

      Anyway, hope that clarified things a bit.

  6. D says:

    Does quantum mechanics need an “interpretation”? If so, why? Exactly what questions does an interpretation need to answer? Do you find any of the currently-extant interpretations satisfactory? Which ones, and why?

    I think that it’s part of human nature to try to understand “what’s going on” at a physical level, and thus we constantly search for an “interpretation” that makes sense–even if it’s nothing more than an analogy to something we’re familiar with. The problem is that for quantum mechanics we don’t even seem to be sure what the right analogy is; any that have been proposed seem to have various failings.

    In 8.01 I can talk about particles as ideal unitary objects that interact in specified ways (even though I know they’re composed of subatomic particles). In 8.02 I can talk about electricity as if it were a continuous fluid (even the term “current” brings to mind a liquid), even if I know it’s the motion of electrons; I can envision magnetic or electric fields as invisible forces because I’m familiar with another invisible force (gravity). Electromagnetism operates by a different set of mathematical rules, but there’s a clear analogy to something in the ordinary experience of everyone on the planet.

    Moving up the course 8 curriculum to 8.033, we reach relativity. Here things certainly start to get weird, and what is talked about is outside the realm of one’s typical experience, but the statements are simple: “Time isn’t constant for all observers–as you move faster, your clock slows down relative to others.” “Energy either is mass or has mass.” “If you’re moving, you’ll observe distances to be skewed.” Strange and unintuitive, but at least there’s a clear central thesis (there is no “privileged” reference frame) and the implications that follow can be described in terms people understand (I might think it’s weird that a clock slows down, but I know what it means).

    Once we get to 8.04 and 8.05, though, we don’t necessarily even have good analogies for what’s happening in the physical world. We can describe it mathematically, sure, but what’s “really happening” that our equations are describing? Should I think of these electrons as particles or waves? Many people could handle either one, but “both, as it seems convenient” is asking a lot. What causes a “measurement” (or “collapse”) of a quantum state, anyway? What makes entanglement with a particle that’s part of my quantum computer not a “measurement”, but entanglement with an identical particle that flew in from outside a “measurement”? I can understand a particle being in one location or another (or one energy state or another), and I can do the math that tells me what superposition the particle is in, but what–physically–does that correspond to?

    Ale and Edward, above, suggest that nondeterminism is a part of why many find quantum theory difficult to grasp. I think that’s a part of it, but it’s not just that there’s probability or nondeterminism involved–it’s that we don’t even really know what the particles we’re talking about are, even by analogy. The nondeterminism is just one obvious way in which they don’t behave like many things we’re more familiar with.

    • wjarjoui says:

      D, I really like the point you make, and how you lay out the way our intuition about the physical world decreases as we get closer and closer to QM.

      I think answering your question “what-physically-does that correspond to?” is hard because as humans we are not used to reasoning about the world through QM. I believe we are classical-level entities, and hence over the course of history we developed notions and theories about other entities that we interact with at the classical level. Only when our technology was advanced enough to allow us to interact with and observe objects at the quantum level were we able to develop QM. I believe our understanding of QM will go only as far as we are able to experience our world at the quantum level. Hence the more we are able to observe interactions in the universe at the quantum level, the better able we will be to answer your question. This is my take on it, and I could be wrong, but it certainly makes sense in a lot of ways: people can usually understand better what they have to deal with constantly.
      This might be a stretch, but I think a parallel with Searle’s Chinese Room can be drawn from what you said as well: we have a seemingly working model of QM, but what will it take for us to understand it? Perhaps the same as it would take the worker in Searle’s room, who has a model of Chinese, to understand the Chinese language?

  7. kasittig says:

    I believe that the most difficult and confusing part of quantum mechanics is that it feels entirely unintuitive, which is not something that I have learned to expect from physical systems. Consider classical mechanics – even without any idea about what is going on behind the scenes, babies are still able to pick up on basic concepts like gravity and force. Describing the physical world mathematically feels almost like overkill, as I can just go out and experiment to figure out what’s going on. Classical mechanics are all around me all of the time, and so I have an excellent grasp on what is reasonable and what is not reasonable in this regard.

    I’m not sure whether it’s the nondeterminism, as Ale and Edward suggest, or whether it’s the fact that I don’t know what the particles in quantum mechanics are, as D states, but naively (and perhaps this is silly), it just seems strange that something so fundamental could require so much math before I’m able to understand it. And it seems even stranger that these mysterious particles that I don’t really have any intuition about are actually governing the behaviors of all of the everyday objects that I feel comfortable interacting with in predictable and consistent ways. I’m not sure if an analogy would be helpful in my understanding, as I feel as though most humans develop an intuition about the physical world through repeated experimentation, and it appears that quantum particles are simply too small for us to carry out any sort of meaningful, simple experiments.

  8. Cuellar says:

    Maybe someone can help me with a question: in Many-Worlds, is consciousness necessarily in a single world? When we do a measurement, we become entangled with the particle. Does that mean that our consciousness also splits into two? I’m having trouble understanding how there is continuity and identity in the mind. It seems that there is a multiplicity of ‘me’ across the different worlds, but at the same time our consciousness has an identity. How do we resolve this duality?

    • Mike says:

      Cuellar,

      I’m no expert, but my understanding is that (i) a measurement does not require the presence of a conscious observer, only of irreversible processes; (ii) branches of the universal wavefunction differentiate when different components of a quantum superposition “decohere” from each other; and (iii) when a conscious observer makes a measurement of, say, a particle, the observer and particle become entangled, and the wavefunctions of the particle and the observer decohere and differentiate with respect to the possible outcomes. The lack of effect of one branch on another implies that no observer will ever be aware of any “splitting” process.

      You wonder how to reconcile this with your subjective feeling that you have only a single identity. As Michael Clive Price has explained, “[a]rguments that the world picture presented by this theory is contradicted by experience, because we are unaware of any branching process, are like the criticism of the Copernican theory that the mobility of the earth as a real physical fact is incompatible with the common sense interpretation of nature because we feel no such motion. In both cases the argument fails when it is shown that the theory itself predicts that our experience will be what it in fact is. (In the Copernican case the addition of Newtonian physics was required to be able to show that the earth’s inhabitants would be unaware of any motion of the earth.)”

      • bobthebayesian says:

        To extend Mike’s very good reply, the Many-Worlds view would also say that the experience of consciousness is more like “finding out which world-branch ‘you’ happen to be in” than like “you actually split.” That’s why I think the splitting language is a bit misleading. What subset of physical reality would you draw a boundary around and declare to be ‘you’? Many-Worlds suggests that the subset you draw a boundary around spans not only spatial and temporal dimensions but also Everett branches, and that your “cohesive experience of consciousness” is ultimately an artifact of all the particles “going where they were supposed to go” (i.e. being entangled such that their outcomes at each time instant are correlated in a way that reproduces the thoughts, memories, etc. that “make you you”). You can replace ‘particles’ in the last sentence with any sort of quantum entity, but it usually helps to pick one basic thing and reason in terms of it.

        In short, consciousness means finding that your factorizable sub-chunk of the universal wave-function just so happens to update into an Everett branch in which all of the physical ingredients needed to produce the physical behavior of consciousness are in the right place to do so.

        Suppose there is a system of N particles in your brain that specifically corresponds to your ability to perceive pain. Further suppose that this functionality is independent of all the rest of the things your brain needs to do, other than that it sends and receives signals from other places. That is, if some of those N particles were perturbed significantly, your brain would still work the same way, just absent correct pain signals. Clearly, N is very large. Also, evolution employs redundancy, so the pain processing is robust to the correlated failure of up to M < N of the particles, where by correlated failure I mean the particles deviating significantly from what macroscopic physics would predict. Thus, in order to “split” into an Everett branch where your own perception of pain is altered (affecting continuity of consciousness), you’d need correlated splitting of at least M of the particles in question. When M is really large, this becomes outrageously unlikely.
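
        To see the scaling, here is a back-of-the-envelope sketch; both the per-particle deviation probability p and the threshold M are made-up illustrative numbers, not anything derived from neuroscience:

        import math

        p = 1e-6   # hypothetical squared amplitude for one particle deviating
        M = 1000   # hypothetical number of correlated deviations required
        log10_joint = M * math.log10(p)   # log10 of p**M, avoiding underflow
        print(f"joint squared amplitude ~ 10^{log10_joint:.0f}")   # 10^-6000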

        Another way to think of it is like this: why can’t we repeat Young’s double-slit experiment, but instead of shooting electrons or photons at the two slits, we shoot ink pens? Why don’t we see an interference pattern of ink pens? The reason is that to see a macroscopically distinct interference pattern, we need some HUGE number of particles in the ink pen to all simultaneously go down the same unlikely branch. Since the squared amplitude for this event is low for any one particle, the joint squared amplitude for it happening simultaneously for many particles is vanishingly smaller. But in principle, ink pens do create an interference pattern. The same reasoning, scaled up even more to human minds, suggests that if we shoot human brains through the two slits, then even they have an interference pattern, and hence “which slit the conscious mind went through” is not well defined. It could be a superposition.
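
        A quick order-of-magnitude sketch of why the pen’s fringes are hopeless: the de Broglie wavelength h/(mv) sets the fringe scale, and the pen’s mass and speed below are just representative guesses:

        h = 6.626e-34   # Planck's constant, J*s
        for name, mass_kg, speed_m_s in [("electron", 9.11e-31, 1e6),
                                         ("ink pen",  2e-2,     1.0)]:
            # de Broglie wavelength: lambda = h / (m * v)
            print(name, h / (mass_kg * speed_m_s), "m")
        # electron: ~7e-10 m (atomic scale, fringes observable)
        # ink pen:  ~3e-32 m (about 17 orders of magnitude below a proton)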

        I wouldn’t advise shooting human brains through slits… that’s partly why Deutsch has tried so hard to conjure a way to do exactly the double-slit experiment, but where you “shoot conscious entities through the slits” instead of particles, and why he tried so hard to strongarm quantum computing into yielding such a test.

      • Cuellar says:

        This is a very good reply. I guess I was wondering how there can be a ‘me’ in one world, while every particle is in every world. But now I see how Many-Worlds explains this.

  9. nemion says:

    I am not very technically savvy when it comes to quantum mechanics. Nevertheless, I am somewhat interested in the relationship between philosophy of mind and the topics of this class. In particular, I find the first paragraph of what kasittig says interesting. How is it that quantum mechanics requires such complicated machinery to be understood, and how does it fit into the questions and framework we discussed around induction, learnability, and Occam’s Razor?

    In what sense does this follow or violate the principles of Occam’s Razor? How can we contextualize this information within our experience if it is so counterintuitive that we need to learn a whole mathematical framework to be able to understand it? How does this refute Occam’s Razor or not (and if not, why not)?

    And in particular I find the following paragraph appealing:

    “…Describing the physical world mathematically feels almost like overkill, as I can just go out and experiment to figure out what’s going on. Classical mechanics are all around me all of the time, and so I have an excellent grasp on what is reasonable and what is not reasonable in this regard….”

    In what sense do we already know what we are aiming to describe using quantum mechanics? How much access do we have, through rational thinking alone, to the facts of the world? Related to the question of logical omniscience: if quantum mechanics (like any physical law) is a fact of the world (and in what sense can we claim that physical laws are facts of the world?), could we access it through thought and not experimentation? Would the existence of randomness and indeterminacy in quantum mechanics be a refutation of logical omniscience? What philosophical implications does the quantum-mechanical fact that measuring reality affects it carry? Do such measurements pose a limit to what our rules of inference can access? Do they pose a limit to the possibility of omniscience in some way?

    • bobthebayesian says:

      I once heard it put like this: if you did not have any concept of a differential equation, Newtonian mechanics would look pretty indecipherable. Just by computing rigorous tables, you could easily come up with calculational rules that gave you great predictive power, and this would be a very successful theory. If differential equations really had not occurred to anyone, or functioned like “grue” in your mathematical language, it would be pretty hard to generalize from your excellent empirically derived calculational rules and see that they were in fact differential equations. I think most physicists would agree that the empirical calculational tool offers you a lot less reach than knowing the differential-equation interpretation, and so knowing how to mathematically represent the physics is very valuable, much more valuable than just knowing the empirical calculational tool itself. This sort of thing might be why we tend to try to see structure in things. If it really is the case that there’s nothing more than an empirical calculational tool, that’s kind of like the worst-case scenario, and unless we can prove that that’s the case, we only stand to gain by trying to figure out what mathematics succeeds in properly describing why our calculational tool works. This is why I think it is not overkill to seek better mathematical explanations for the physical world.
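
      As a toy illustration of the contrast (free fall standing in for the physics; the table range and the step sizes are arbitrary):

      import numpy as np

      g = 9.81
      table_t = np.linspace(0.0, 2.0, 21)   # the "rigorously computed tables"
      table_x = 0.5 * g * table_t**2        # tabulated fall distances

      def predict_from_table(t):
          # Pure calculational rule: interpolate the table. Accurate inside
          # its range, silently wrong beyond it (np.interp clamps at the edge).
          return np.interp(t, table_t, table_x)

      def predict_from_ode(t, dt=1e-4):
          # Knowing the differential equation x'' = g lets us integrate out
          # to any t, from any initial condition.
          x, v = 0.0, 0.0
          for _ in range(int(round(t / dt))):
              x += v * dt
              v += g * dt
          return x

      print(predict_from_table(10.0))   # 19.62: clamped at the table's edge
      print(predict_from_ode(10.0))     # ~490.5: the right answer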

      • Mike says:

        btb,

        I suspect we have gone, and can go, a good deal further than some empirical calculational tool. In fact, a new paper addresses an aspect of this question:

        http://arxiv.org/PS_cache/arxiv/pdf/1111/1111.3328v1.pdf

        Abstract:

        Quantum states are the key mathematical objects in quantum theory. It is therefore surprising that physicists have been unable to agree on what a quantum state represents. There are at least two opposing schools of thought, each almost as old as quantum theory itself. One is that a pure state is a physical property of a system, much like position and momentum in classical mechanics. Another is that even a pure state has only a statistical significance, akin to a probability distribution in statistical mechanics. Here we show that, given only very mild assumptions, the statistical interpretation of the quantum state is inconsistent with the predictions of quantum theory. This result holds even in the presence of small amounts of experimental noise, and is therefore amenable to experimental test using present or near-future technology. If the predictions of quantum theory are confirmed, such a test would show that distinct quantum states must correspond to physically distinct states of reality.

        Perhaps it should have been called “Vindication of Quantum Physicality”.

      • bobthebayesian says:

        Thank you for the link. This is a fascinating paper. If I am understanding it, it is saying that if we engineer our measurements correctly, we could arrange for non-zero statistical probability to be assigned to outcomes for which there is zero amplitude, hence we cannot view the physical description of quantum states as purely statistical constructs (all of this is under 3 mild assumptions).
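
        To convince myself, I wrote a small numerical check (my own sketch, not code from the paper) of the zero-amplitude structure in what I take to be their two-qubit construction: each of the four measurement outcomes has zero amplitude on exactly one of the four preparations.

        import numpy as np

        s2 = np.sqrt(2.0)
        ket0 = np.array([1.0, 0.0])
        ket1 = np.array([0.0, 1.0])
        plus = (ket0 + ket1) / s2
        minus = (ket0 - ket1) / s2
        kron = np.kron

        # The four preparations: two systems, each independently |0> or |+>.
        preps = [("00", kron(ket0, ket0)), ("0+", kron(ket0, plus)),
                 ("+0", kron(plus, ket0)), ("++", kron(plus, plus))]

        # The entangled measurement basis; outcome i is orthogonal to the
        # i-th preparation.
        xi = [(kron(ket0, ket1) + kron(ket1, ket0)) / s2,
              (kron(ket0, minus) + kron(ket1, plus)) / s2,
              (kron(plus, ket1) + kron(minus, ket0)) / s2,
              (kron(plus, minus) + kron(minus, plus)) / s2]

        for i, (label, psi) in enumerate(preps):
            print(label, abs(np.dot(xi[i], psi))**2)   # all four print ~0.0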

        In particular, the 4th page lays out some consequences of their result:

        “If the quantum state is a physical property of a system (as it must be if one accepts the assumptions above) then the quantum collapse must correspond to a real physical process. This is especially mysterious when two entangled systems are at separate locations, and measurement of one leads to an instantaneous collapse of the quantum state of the other. In some versions of quantum theory, on the other hand, there is no collapse of the quantum state. In this case, after a measurement takes place, the joint quantum state of the system and measuring apparatus will contain a component corresponding to each possible macroscopic measurement outcome. This is unproblematic if the quantum state merely reflects a lack of information about which outcome occurred. But if the quantum state is a physical property of the system and apparatus, it is hard to avoid the conclusion that each macroscopically different component has a direct counterpart in reality.”

        This would seem to suggest that if these assumptions hold, interpretations that characterize measurement as a separate rule would be forced to give some physical account of what it is and how it can physically act instantaneously over long distances without violating known physical limits. On the other hand, it seems to suggest that Many-Worlds-like interpretations would be in some sense more justified in asserting the physical realism of different outcomes.

        My prior belief is that this paper is not groundbreaking. But if it is not, that must be because there are good reasons to dispute the three assumptions they make. Can anyone describe counterarguments to the assumptions of this paper?

        The paper concludes with an interesting remark about the memory overhead for physical systems:

        “On a related, but more abstract note, the quantum state has the striking property of being an exponentially complicated object. Specifically, the number of real parameters needed to specify a quantum state is exponential in the number of systems n. This has a consequence for classical simulation of quantum systems. If a simulation is constrained by our assumptions — that is, if it must store in memory a state for a quantum system, with independent preparations assigned uncorrelated states — then it will need an amount of memory which is exponential in the number of quantum systems. For these reasons and others, many will continue to hold that the quantum state is not a real object. We have shown that this is only possible if one or more of the assumptions above is dropped.”

        While I don’t think anyone from our class will see the statement on exponential memory as controversial, it is controversial whether nature itself has to keep track of all of that (as opposed to amplitudes just being a nice calculational tool that works out to describe what nature will do, but which does not actually correspond to the way in which nature achieves it).
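
        Just to put rough numbers on that exponential bookkeeping (a simple counting sketch, assuming 16 bytes per complex amplitude at double precision):

        for n in (10, 30, 50, 300):
            amplitudes = 2**n                 # complex amplitudes to track
            bytes_needed = 16 * amplitudes    # two float64s per amplitude
            print(f"{n} qubits: ~{bytes_needed:.2e} bytes")
        # 30 qubits: ~1.7e10 bytes (about 17 GB); 300 qubits: ~3.3e91 bytes,
        # far more bytes than there are atoms in the observable universe.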

        Might this not give some more credence to Deutsch’s point of view of “where was the number factored”? If we believe the quantum state really exists, then either there is an exponential number of “worlds” (neatly factorizing blobs of amplitude) that each contain a classical amount of storage, or else there is one world that somehow contains an exponential amount of storage (the quantum state) and can’t be doing it with hidden variables…

  10. Mike says:

    Btb,

    I don’t think the referenced paper is “groundbreaking”, but I do think it is considered by some to be significant. On Google+, Matthew Leifer, a respected researcher in theoretical physics currently at University College London, and one who has not been unsympathetic to a more epistemic interpretation of QM, replied as follows when I asked for his conclusions regarding the paper:

    “Well, I knew this paper was coming, so it is not a surprise. Basically, it means that if you believe that quantum states are epistemic then you have two options left:

    1. neo-Copenhagenism: Claim that a deeper realist model was never needed to support an epistemic interpretation of the quantum state. The probabilities are just about measurement results, period.

    2. The ontological states have to be more bizarre than imagined in current approaches. For example, you could have retrocausality or “relational” degrees of freedom (whatever that means). Note that, one could also evade the theorem of this paper by claiming that quantum i.i.d. product states do not correspond to i.i.d. probability distributions in the ontological model. However, doing this does not evade a related theorem by Alberto Montina, which is based on a single system.

    If neither of those options is to your taste, then you might as well become an Everettian or a Bohmian, since you are stuck with the state vector in your ontology in any case.

    Overall, I would say that this result is not too surprising. I think that most people in the “psi-epistemic” camp already had the intuition that a psi-epistemic ontological model formulated in the usual way would not be possible. That is why most of us were already promoting other possibilities, e.g. Fuchs is in the neo-Copenhagen camp and Spekkens often mumbles things about relationalism. Personally, I am quite interested in the idea of retrocausal psi-epistemic hidden variable theories. It is at least a fairly clearly formulated problem to try and come up with one, whereas relationalism seems vague to me, at least as it is applied to quantum theory. If that doesn’t work out then I would probably end up being an Everettian. Despite the attraction of the Fuchsian program, realism has to win out in the end for me.”

  11. Miguel says:

    Regardless of the validity of one interpretation of QM or another, the speedup apparently achieved by quantum computation strongly suggests that the self-interfering, non-local exponentiality of physical reality is not a mere artifact of ‘measurement’, but rather an empirical fact. How one comes to terms with the ‘trippy’ consequences of this fact is a problem of a different sort: why do we find the idea of ourselves being in an entangled state trippy in the first place?

    It seems to me that the ‘trippiness’ of an objectively quantum universe results from us not taking the consequences far enough: whenever one asks ‘where was the number factored’ in Shor’s algorithm (or ‘in which of the branches do I perceive a given outcome’), one is sneaking in a decidedly classical view of the locality and existence of the steps of a computation (or of consciousness). Upon decoherence, it does not make sense to talk about a given entity ‘being in’ any particular branch; instead, entities ‘exist’ in a self-interfering way. If one is willing to shed intuitions of locality and existence (!), then the language of parallel universes, doppelgaengers and measuring rules becomes unnecessary.

    Shedding such intuitions, however, feels very unnatural: since at our mesoscopic scale quantum effects do not predominate, we regularly perceive a classical universe, and thus our nervous systems evolved an intuition of what is ‘trippy’ and what is not that is clearly classical, one that treats statements about locality and existence as a priori ‘natural’. But I don’t see this as a fundamental reason to privilege our classical intuitions as intrinsically more ‘objective’ than a quantum view of reality. Consider for instance a wacky form of consciousness interacting in an environment where quantum effects predominate (say, some conscious vortex of plasma in a nebula): to such a consciousness, perhaps entangled states would be ‘natural’ while classical probabilities would seem very unnatural instead! Indeed there are concept classes that are quantum-learnable but not classically learnable (cf. Servedio 2001), so maybe the type of language and logic employed by a consciousness evolving in such an environment would differ from ours.

    • bobthebayesian says:

      I think you get right to the issue, and this is also what Peli said in his above comment, “Why do you think there are distinct overlapping persons, rather than just splitting chains of person-slices?”

      The question I have is this: suppose you want to draw a boundary around some part of physical reality such that everything inside the boundary “is Miguel” and everything outside is “not Miguel.” Then, if Many-Worlds is true, do you need to draw that boundary only in “classical dimensions” but along one single Everett branch, or do you need to enlarge the boundary to include the factorized blobs of amplitudes that “correspond to Miguel” along many different Everett branches?

      I think in the latter case, it is much easier to reconcile identity by believing the orthodox views about QM (although, really, Many-Worlds is by now sort of an orthodox view too; ‘orthodox’ here means NeoCopenhagen/QBayesian). But you are exactly right that this is the question. If there are overlapping quantum almost-twins, then personal identity suddenly becomes a more difficult thing. If there’s just one you, and amplitudes just describe calculations that help predict what that one version will experience, then things seem relatively more cut and dried (though still by no means trivially easy).

      • if Many-Worlds is true, do you need to draw that boundary only in “classical dimensions” but along one single Everett branch, or do you need to enlarge the boundary to include the factorized blobs of amplitudes that “correspond to Miguel” along many different Everett branches?

        Interesting! If you’re willing to say that the Miguel before making a quantum measurement is “the same person” as the Miguel after making the measurement, then by transitivity, it seems to me you also need to say that the Miguel after making the measurement and observing outcome 0, is “the same person” as the Miguel after making the measurement and observing outcome 1. In other words, I claim as a (trivial) theorem that trans-temporal identity implies trans-Everett-branch identity as well. 🙂 If you agree, then the answer to your question is the latter: we need to enlarge the boundary to include multiple factorized blobs of amplitudes.
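
        In fact the triviality is easy to make formal. Here is a minimal Lean sketch, where the loaded assumption is, of course, that personal identity behaves as a symmetric and transitive relation on person-slices:

        -- Toy formalization: `same` stands in for personal identity.
        example {Slice : Type} (same : Slice → Slice → Prop)
            (symm : ∀ {a b : Slice}, same a b → same b a)
            (trans : ∀ {a b c : Slice}, same a b → same b c → same a c)
            (pre post0 post1 : Slice)
            (h0 : same pre post0)   -- trans-temporal: before ~ seeing outcome 0
            (h1 : same pre post1)   -- trans-temporal: before ~ seeing outcome 1
            : same post0 post1 :=   -- trans-Everett-branch identity follows
          trans (symm h0) h1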

      • bobthebayesian says:

        I agree, but I don’t see any reason to give a privileged direction to time. Looking backward in time, the usual Many-Worlds boundary-drawing would suggest there are a myriad of physically different Miguels, like the confluence of different rivers. I don’t think time can help us resolve this. It still comes down to whether you think the amplitude that corresponds to “Miguel after making the measurement and observing outcome 0” is real and different from the amplitude that corresponds to “Miguel after making the measurement and observing outcome 1.”

      • Miguel says:

        I think that time necessarily has to be part of such a boundary, insofar as the reality we perceive is classical and irreversible, which prompts our consciousness to construct an intertemporal notion of self. But this notion of identity is definitely very trippy and unintuitive.

        I just noticed, though, another context in which the notion of a causally-leaky boundary appears frequently and is not seen as nearly so strange: Markov random fields. Whenever the joint distribution of such a process factorizes along the cliques of its graph, it makes sense to talk about the cliques as distinct entities in a causal and ‘stochastic’ sense, even though the individual nodes/random variables might depend on nodes outside the clique. Hard to say, though, whether this notion adds to or subtracts from the strangeness of this whole thing.
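
        Here is a minimal sketch of what I mean, using a made-up three-node chain with agreement-favoring pairwise potentials:

        import itertools

        def psi(a, b):
            # Made-up pairwise potential that favors neighbors agreeing.
            return 2.0 if a == b else 1.0

        # Chain MRF x1 - x2 - x3: the joint factorizes over the edge cliques,
        # p(x1,x2,x3) = psi(x1,x2) * psi(x2,x3) / Z.
        states = list(itertools.product([0, 1], repeat=3))
        Z = sum(psi(a, b) * psi(b, c) for a, b, c in states)
        joint = {s: psi(s[0], s[1]) * psi(s[1], s[2]) / Z for s in states}

        # x1 and x3 are marginally dependent (the boundary "leaks"), yet
        # conditioned on x2 the two cliques decouple into distinct entities:
        # p(x1, x3 | x2) = p(x1 | x2) * p(x3 | x2).
        p2 = sum(v for (a, b, c), v in joint.items() if b == 0)
        for x1, x3 in itertools.product([0, 1], repeat=2):
            p13 = sum(v for (a, b, c), v in joint.items()
                      if a == x1 and b == 0 and c == x3) / p2
            p1 = sum(v for (a, b, c), v in joint.items()
                     if a == x1 and b == 0) / p2
            p3 = sum(v for (a, b, c), v in joint.items()
                     if b == 0 and c == x3) / p2
            print(x1, x3, round(p13, 6), round(p1 * p3, 6))   # columns agree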

  12. Hagi says:

    Most people agree that QM is weird, and that in turn our universe seems weird. However, this should not be enough to conclude that it needs extra work, or an “interpretation”. I seem to disagree with a couple of posts on this thread, but I do not believe that classical physics is all that intuitive to us naturally. The fact that it took until the 16th-17th century for humans to formulate it is a good indication. I remember a study from a psychology class I took, which involved asking subjects for the trajectory of a package dropped from a plane. I cannot remember the study exactly, nor could I find it online; but the important result, as I remember it, is that the majority of the subjects were not aware of Newton’s first law, or could not apply it.

    However, in addition to humans not being very natural at formal thinking, QM further complicates things by not obeying the very basic laws of classical logic. There are different ways of seeing Bell’s inequality and its experimental realizations (my favorite is Itamar Pitowsky’s geometric framework), and the conclusion is that regardless of QM’s validity, classical logic is not correct, at least at some length scales.
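
    For concreteness, here is a minimal sketch (my own toy calculation, nothing to do with Pitowsky’s framework) of the CHSH form of Bell’s inequality: local hidden-variable models obey |S| <= 2, while the quantum singlet-state correlation E(a,b) = -cos(a-b) reaches 2*sqrt(2) at the standard angles.

    import numpy as np

    def E(a, b):
        # Quantum prediction for the spin-singlet correlation at
        # analyzer angles a and b.
        return -np.cos(a - b)

    a1, a2 = 0.0, np.pi / 2            # Alice's two measurement settings
    b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two measurement settings
    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S))   # ~2.828, beyond the local-realist bound of 2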

    At this point it is natural to think that QM needs an interpretation. Should we see the violation of Bell’s inequality as a deterministic but non-local phenomenon, or as one that is non-deterministic but local in the special-relativistic sense and non-local in the “there is some spooky action at a distance” sense?

    However, I wonder if the discussions of the interpretation of QM are trying to solve more than just the questions about QM. For example, the question of whether the wavefunction is just a statistical estimate, our belief based on the information we have, or a physical property of the thing itself. A related example is the question of how we should interpret probabilities. What kind and level of locality should be imposed? Although I do believe these questions should be asked, I do not think they are fundamentally just about QM. I can imagine asking them even if we lived in a universe that satisfied Bell’s inequality and/or were deterministic. QM is like the near-death experience that brings all the existentialist questions out of us, although we should have had them all along.

    • QM is like the near-death experience that brings all the existentialist questions out of us, although we should have had them all along.

      Thanks—that’s my favorite nugget so far from this thread!

  13. amosw says:

    I have spent some time thinking about how one might go about writing a program for a classical Turing machine (or in C) that simulates a Many-Worlds universe. The more I think about it, the more I realize that either I have no idea how to do it, or that I _do_ know how to do it but just can’t believe it.

    To be clear: I have some idea about how to write a quantum simulator, via numerical integration of the Schrödinger equation.

    Is it really the case that “that’s it”? All I have to do is numerically integrate the Schrödinger equation for the _entire system_ that I am simulating (say, a particle in a harmonic well) and I see before my eyes, when I dump the registers, Many Worlds?
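
    For concreteness, here is a minimal sketch of the kind of simulator I have in mind: split-step Fourier integration of the time-dependent Schrödinger equation for a particle in a harmonic well (hbar = m = omega = 1; the grid and step sizes are just illustrative).

    import numpy as np

    N, L, dt, steps = 512, 20.0, 0.01, 1000
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
    V = 0.5 * x**2                          # the harmonic well

    # Displaced Gaussian: a superposition of many energy eigenstates.
    psi = np.exp(-(x - 2.0)**2 / 2) / np.pi**0.25

    for _ in range(steps):                  # unitary evolution, nothing else
        psi = psi * np.exp(-0.5j * V * dt)                       # half V step
        psi = np.fft.ifft(np.exp(-0.5j * k**2 * dt) * np.fft.fft(psi))
        psi = psi * np.exp(-0.5j * V * dt)                       # half V step

    # "Dumping the registers": one array of amplitudes is the whole story;
    # any "worlds" would just be decohered components of this single array.
    print(np.sum(np.abs(psi)**2) * (L / N))   # norm stays ~1.0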

    Lastly, it’s worth noting that in 1959 Everett visited Bohr in Copenhagen to try to explain that it is possible to have a single wave function describing the whole Universe with no need for collapse. Evidently it didn’t go very well, as one of Bohr’s students at the meeting described Everett as being “undescribably stupid and could not understand the simplest things in quantum mechanics”. I like to think that Everett, who basically drank and smoked himself to an early grave, is presently having the last laugh out there somewhere.
