**The “Deutsch Argument” for the Many-Worlds Interpretation (“Where Was The Number Factored?”)**

Does the possibility of scalable quantum computers *prove* (or rather, “re-prove”) the Many-Worlds Interpretation, as Deutsch believes? If not, does it at least lend *support* to MWI? Is MWI a good heuristic way to *think* about what a quantum computer does?

On the contrary, does MWI encourage intuitions about how a quantum computer works that are flat-out wrong? (E.g., that it can “try all possible answers in parallel”?)

If scalable quantum computers are built, do you think that will change people’s outlook on the interpretation of quantum mechanics (and on MWI in particular)?

Can MWI be *tested*? What about, as Deutsch once suggested, by an experiment where an artificially-intelligent quantum computer was placed in a superposition over two different states of consciousness?

Does the fact that humans are open systems—that unlike quantum computers, we *can’t* practically be placed in coherent superposition states—mean that MWI and the Copenhagen interpretation are indistinguishable by human observers?

Would a scalable quantum computer test quantum mechanics itself in a new way? Is the possibility of scalable quantum computing (or more generally, of a violation of the Extended Church-Turing Thesis) so incredible that our default belief should be that quantum mechanics breaks down instead?

Could quantum computing be impossible in principle *without* quantum mechanics breaking down? If so, how?

Suppose it were proved that BPP=BQP. Would that influence the interpretation of QM? Would it undercut Deutsch’s case for MWI, by opening the door to “classical polynomial-time hidden-variable theories”?

**The Evolutionary Principle and Closed Timelike Curves**

Is Deutsch’s Evolutionary Principle—that “knowledge can only come into existence via causal processes”—valid? If so, how should we interpret that principle? What counts as knowledge, and what counts as a causal process?

Is Deutsch’s “Causal Consistency” requirement a good way to think about closed timelike curves, supposing they existed? Why or why not?

Does the apparent ability of closed-timelike-curve computers to solve NP-complete problems instantaneously mean that CTCs would *violate* the Evolutionary Principle? If so, how should we interpret this: that CTCs are physically impossible? That the Evolutionary Principle is a poor guide to physics? Or that, if CTCs exist, then some *new* physical principle has to come into effect to prevent them from solving NP-complete problems?

Here are a couple of recent papers that may be useful in evaluating the questions posed by Scott.

The first, “Relativistic quantum information and time machines,” deals with what happens when quantum systems interact with general relativistic closed timelike curves – effectively time machines. See: http://arxiv.org/PS_cache/arxiv/pdf/1111/1111.2648v1.pdf

The second, which deals with the issue of attributing physical reality to the Schrödinger wave function, is entitled “The quantum state cannot be interpreted statistically.” See: http://arxiv.org/PS_cache/arxiv/pdf/1111/1111.3328v1.pdf

The second paper is getting a good deal of attention. See:

http://www.nature.com/news/quantum-theorem-shakes-foundations-1.9392

In this paper, the authors conclude that given only very mild assumptions, the statistical interpretation of the quantum state is inconsistent with the predictions of quantum theory. This result holds even in the presence of small amounts of experimental noise, and is therefore amenable to experimental test using present or near-future technology. If the predictions of quantum theory are confirmed, such a test would show that distinct quantum states must correspond to physically distinct states of reality.

Needless to say, this conclusion, if it holds, could turn out to be very important in the debates regarding the true nature of QM.

I just blogged about this paper here.

How would you guys set out to convince me not to be a Superdeterminist? (http://en.wikipedia.org/wiki/Superdeterminism) I have no strong feelings about the topic, and it’s not my expertise, so I really can be convinced. But my sense of aesthetics (and reading some Huw Price) strongly leads me to this approach. Counterfactual definiteness just feels obviously false to me. Am I confused? Please let’s start from a compatibilist notion of free will. If you are not a compatibilist, then what I say is obviously stupid, and let’s just leave it at that. Apart from that, my brain is open.

The “superdeterminism” idea strikes me as fatally flawed and utterly without explanatory value (even *assuming* you’re a compatibilist). The real issue is this: a superdeterminist believes that the explanation of Bell inequality violations is that “the universe knew in advance how you were going to set the detectors, and happened to arrange the particle spins accordingly.” But if you accept that, then why couldn’t you explain *outright* superluminal signalling, or even paranormal effects like ESP, in exactly the same way? In other words, once you open the door to this sort of cosmic conspiracy, you then face the mystery of why we don’t see *other*, even worse cosmic conspiracies all over the place. And all this to solve a “problem” (accounting for Bell inequality violations) that I don’t even think needed solving in the first place, provided you accept quantum mechanics!

If the set of block universes where we are searching for our actual block universe were totally unconstrained, then indeed we would have no right to reject superluminal signalling and ESP and tooth fairies. But this is not the case at all. Personally, I am (only rhetorically, as I am not a physicist) searching among a very constrained set of low-Kolmogorov-complexity block universes. Optimally, among structures where a single, local consistency constraint is obeyed uniformly across the whole structure, uniquely determining the whole structure. (I allow, though, that the “local” here is different from what inside observers perceive as local.) And I want this constraint to explain the Born rule, not just be compatible with it. (This last part is important. What I am looking for is definitely not a because-I-told-you-so explanation of quantum phenomena.)

I agree that the version of superdeterminism that you attacked is utterly without explanatory value. But I don’t think anyone ever defended that version. I think a more serious attack on the above flavor of superdeterminism would be to prove some Bell-like theorem, one that does not rely on counterfactual definiteness, stating that such uniform local constraints cannot cause, or be compatible with, the Born rule.
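For concreteness, the Bell inequality violation that both sides are trying to account for can be computed directly from the quantum formalism. Below is a minimal sketch of the standard CHSH setup with a singlet state (names and angle choices are illustrative, not taken from the discussion above):

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def obs(theta):
    # Spin measurement at angle theta in the X-Z plane (outcomes +-1).
    return np.cos(theta) * Z + np.sin(theta) * X

# Singlet state (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(alpha, beta):
    # Quantum correlation <A(alpha) (x) B(beta)> in the singlet state;
    # for the singlet this equals -cos(alpha - beta).
    return (psi.conj() @ np.kron(obs(alpha), obs(beta)) @ psi).real

# Standard CHSH angle choices
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))  # ~ 2*sqrt(2) ~ 2.83, above the local-realist bound of 2
```

Any local hidden-variable account (with free detector settings) is bounded by |S| ≤ 2; the superdeterminist escape is precisely to deny the free choice of the angles `a, ap, b, bp`.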

If the world is truly Superdeterministic, your brain isn’t “open” and your beliefs are not the product of free choice on your part. What then would be the point of trying to “convince” you?

See, that is exactly why I wrote the disclaimer that I only look for answers from those who are willing to work in the framework of compatibilism. Why did you disregard the disclaimer? You had no choice? Let me note that compatibilism, unlike superdeterminism, is not fringe science. You are trying to ridicule a position that’s probably in the majority among philosophers of science.

Daniel,

My apologies. I wasn’t trying to ridicule. I must be confused about what type of response you are looking for. My understanding is that those who deny that determinism is relevant to the question of free will are classified as Compatibilists, while Superdeterminism is a term that has been used to describe a hypothetical class of theories which are completely deterministic. I don’t understand how one who denies that determinism is relevant to the question of free will (a Compatibilist) has anything useful to say about Superdeterminism that rules out free will. But perhaps I am misunderstanding your use of one or both of the concepts. Again, I apologize — I meant no offense.

Apologies accepted. 🙂 Yes, I used compatibilist to describe someone who would not go on a killing spree even if it suddenly turned out Leibniz was right. 🙂 Superdeterminism does not rule out free will any more than General Relativity does. Both are deterministic theories (okay, the first is a speculative meta-theory, but still). If someone understands that we can meaningfully talk about free will in a deterministic universe, then one shouldn’t have any obvious philosophical issues with accepting a superdeterministic theory. So let’s just talk about non-obvious philosophical issues.

Maybe we can start the real discussion with the following question: what does that “completely” mean in your characterization of it as “completely deterministic”? It is deterministic, just like Newton’s, Einstein’s, and Everett’s theories. But it is rightfully considered weirder than those, even weirder than MWI. I see this more as a weirdness trade-off. We have observations that rule out all non-weird theories. In MWI, the splitting worlds are the source of weirdness. In some (currently nonexistent) superdeterministic theory, the source of the weirdness is that we have to throw out everything we thought we knew about cause and effect. I think I am willing to throw out all that. I think cause and effect are useful approximate notions when we try to describe the universe above the level of statistical physics (that is, above the level where Time arises), but here we want to describe the levels below that.

Daniel, can I first ask you to explain the superdeterminist account for the Elitzur-Vaidman bomb tester thought experiment? How is it that we can obtain knowledge about usable bombs in this experiment if not for counterfactual definiteness, or at least does it challenge your initial perspective that counterfactual definiteness is “obviously false”?

I’m no expert on this thought experiment so I am really interested. Superdeterminism is appealing to me, but it’s hard for me to reconcile it with something like this. Maybe if we start discussing a particular thought experiment, it will lead us somewhere productive.

Bobthebayesian, thanks for the very good question. And you could have even asked the same thing regarding the simpler single-particle double-slit experiment. I think it has mostly the same issues in this respect.

I will try to give some account, but as superdeterminist physical theories currently don’t exist AFAIK, my account will necessarily be very disappointing, even laughable, when compared to the informative and concise explanation that the Everett interpretation gives. Here is what we see: The experimenter first takes a set of dud and good bombs, and uses her random number generator (or free will) to sort them. Then she places the bombs in an apparatus one by one. Her apparatus blows up some bombs, and tells her that some of the remaining bombs are probably good. She tries out these bombs, and they are, in fact, good.

So what happened? In the language of causality (painfully unsuitable here), her random number generator and her half-silvered mirrors worked together in perfect harmony to trick her into putting good bombs into the positions that she later tested. Now, this is a curiosity-stopper, stupid, fake explanation if there ever was one. But I think it is not completely pointless. It turns the scientific question to be solved into this form: What is the global consistency property of our block universe that causes it to obey the constraints imposed by the results of the thought experiment? More generally, which block universes obey the Born rule? Can these global consistency constraints arise from perfectly uniform, perfectly local consistency constraints? It really is possible that I’m missing something obvious, but these seem like relatively well-defined, attackable scientific questions to me. Does anyone work on such things? Are there impossibility results I am not aware of?
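For reference, the standard quantum calculation that any superdeterminist account would have to reproduce fits in a few lines. This is a simplified sketch: the beamsplitters are modeled as Hadamards, a live bomb as a projective which-path measurement in arm |1⟩, and a dud as no measurement at all.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # simplified beamsplitter

def bomb_test(live):
    """Return (p_explode, p_bright, p_dark) for one photon entering the
    interferometer in mode |0>, with the bomb sitting in arm |1>."""
    psi = H @ np.array([1.0, 0.0])           # first beamsplitter
    if not live:                              # dud: no which-path measurement,
        out = H @ psi                         # so the two arms interfere
        return 0.0, abs(out[0])**2, abs(out[1])**2
    p_explode = abs(psi[1])**2                # photon found in the bomb arm
    out = H @ np.array([1.0, 0.0])            # otherwise: collapsed to arm |0>
    p_not = 1 - p_explode
    return p_explode, p_not * abs(out[0])**2, p_not * abs(out[1])**2

print(bomb_test(False))  # ~ (0.0, 1.0, 0.0): dark port never fires for a dud
print(bomb_test(True))   # ~ (0.5, 0.25, 0.25): a dark-port click certifies a live bomb
```

The puzzle being debated above is exactly the 25% of runs where the dark detector clicks: we then know the bomb is live even though, counterfactually, the photon “never touched” it.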

I think that most people who take a more orthodox position on this thought experiment would say that counterfactual definiteness is *exactly* the ‘global consistency property’ at work here. I think the onus is on the superdeterminist view to give an account of this experiment that challenges this. I don’t think we can accept this idea that the half-silvered mirrors and random number generator “tricked” the experimenter, though. In what sense is ‘tricked’ the appropriate word here, instead of just believing in counterfactuals?

We briefly discussed in class a result [AW09] showing that closed timelike curves (CTCs) can be used to efficiently compute all of PSPACE (even without using a quantum computer).

I am wondering what conclusions we may draw about physical reality from this result.

(Indeed, isn’t the consensus in physics that CTCs are a theoretical possibility? I.e., they appear as possible solutions of the equations of general relativity, yet none has been found in nature.)

Shouldn’t [AW09] then imply the following trichotomy?

(1) CTCs do not in fact exist (despite being a theoretical possibility), OR

(2) CTCs do exist, but have not been mathematically modeled correctly in [AW09], OR

(3) CTCs do exist and have been modeled correctly, so that they can be used to compute all of PSPACE.

It seems that any one of the branches of the trichotomy is a “win” (i.e., we learned something).

What do other people think?

Might there be some way to gather further theoretical/physical evidence so as to believe one of the three branches more than the other two?

[AW09] = S. Aaronson and J. Watrous, “Closed Timelike Curves Make Quantum and Classical Computing Equivalent”

I make no judgments about the truth or falsehood of this proposition, but I’ll put forward another possibility. (You might either lump it in with (1) or (2).) This is what some people think about quantum computers (and again, I don’t make any claims about those people’s correctness either):

(4) CTCs exist, but there are physical laws at play that preclude their large-scale construction/operation (for some suitable definition of “large-scale”). Thus, they can exist and could in principle solve some simple problems, but to calculate something that a classical computer could not compute would require an infeasible amount of resources. (For example, imagine if the energy required to construct or maintain the CTC were exponential in the number of bits sent through it.)

Another theory that I have read or heard about somewhere (possibly in a sci-fi novel, I don’t actually remember) is that CTCs could exist, but that there is no way to transmit any non-trivial data through the CTC (similar to how you can’t achieve superluminal signaling even with entangled particles). I’m not sure if the current modeling says anything about exactly what kinds of data could be transmitted, so I could be completely incorrect.

If there were a “CTC that you couldn’t transmit any data through”, then I think a basic philosophical question would arise: in what sense is it even a CTC at all?

I am not a physicist, but by analogy with entangled particles, maybe the system would somehow come to a consistent state, but one that didn’t actually give you any information? It would still be weird, in the sense that spooky action at a distance is weird and seems to defy classical physics, but wouldn’t really violate any established laws.

I wanted to describe a possible framework for expressing/debating beliefs about the feasibility of quantum computing. This will use a non-formalizable complexity class called “PhysP” (discussed, I believe, by Scott in earlier writing; I can’t find a reference at present, but his paper “NP-Complete Problems and Physical Reality” has much relevant discussion).

PhysP consists of the languages we can feasibly decide in the physical universe. Barring closed time-like curves, or other surprises from string theory etc., most of us believe that PhysP lies somewhere between BPP and BQP. (Or more properly, some sort of finite “truncations” of these classes, to acknowledge the finiteness of the universe.)

So my proposed framework for describing beliefs is the following two-part quiz:

1a) Is PhysP = BPP?

1b) Is PhysP = BQP?

2a, 2b) In each case, is the equality/inequality you believe “contingent,” or “necessary?”

What do I mean by “contingent/necessary?” Well, let M be your model of physical laws. Presumably M can be “instantiated” with different initial conditions. For instance, maybe there’s a universe with the same laws as ours, but with twice the initial supply of matter/energy; or an alternate universe where things just turn out differently. If PhysP = BQP no matter what the initial conditions of M, I want to say it is “necessary” that PhysP = BQP; if PhysP = BQP in our conditions, but not in others, I want to say it is a contingent equality. (There are valid questions about whether the universe can be cleanly, objectively separated into a model on the one hand and initial/present conditions on the other; I am going to just ignore this, however.)

Now there’s always going to be “trivial” initial conditions in which life never arises, nothing interesting ever happens, and maybe no computation is ever performed. I want to factor out this possibility, and focus on the subset of initial conditions giving rise to an advanced civilization with “reasonable” control of their environment and access to resources. If any such civilization can do scalable quantum computing, then let us say PhysP = BQP necessarily. On the other hand, if you believe, say, that coherent n-qubit quantum states always require exp(n) energy to prepare (I’m not commenting on the likelihood of this and other suppositions here), then perhaps PhysP is a proper subset of BQP necessarily.

If you don’t believe either of these two things above, but if you believe, for example, that the highly coherent states needed for quantum computing turn out to be irreducibly, fatally vulnerable to a certain level of background cosmic radiation, then perhaps PhysP is a proper subset of BQP, as a contingent fact: i.e., in a universe with the same laws but with less background radiation, we could do BQP computations, but in our world this is forever ruled out. (On the other hand, you might believe cosmic rays preclude quantum computers, yet still believe P = BQP as a mathematical conjecture. In this case PhysP = BQP would be a necessary equality.)

To my mind, one of the more interesting questions to put to quantum computing skeptics (who believe that PhysP != BQP, whether necessarily or contingently) is whether PhysP = BPP; that is, whether the universe is classically simulable. It’s conceivable that general-purpose quantum computing is doomed to failure, yet that there are physical systems where “quantum weirdness” plays an irreducible role that is infeasible-to-simulate on a classical computer. If such systems are possible, how “natural” are they? Could any advanced civilization create one or have access to one? (For example, is the inside of a star such a system, at some level of detail?) If so, perhaps PhysP is *necessarily* a proper superset of BPP, while still falling short of BQP. Are there any people with this belief system? If so, oughtn’t they be very interested in obtaining a clearer description of PhysP?

(Quantum computing skeptics I’m aware of seem more inclined to believe that QM is not fundamental, that the universe actually *is* a classical computer, and so PhysP = P or BPP as a matter of necessity.)

Technical PS: the notion of “necessity” I’m using is loosely related to “natural necessity” as used by many philosophers (e.g. Chalmers, and distinguished from “logical necessity” or “metaphysical necessity”). But these terms are subject to controversy and multiple conflicting usages that I don’t want to get into. Take what I’ve written above as a first stab at a heuristic classification of QC beliefs.

Suppose the universe had started off in a high entropy situation where matter was uniformly spread throughout space. Among other consequences (e.g. life might not have evolved) it would be much harder to actually perform a computation despite the fact that (and presumably we all agree on this) Shor’s algorithm would still work to factor numbers efficiently. You’d have to pump much more entropy out of any local system to then exploit it for a computation, perhaps an intractable amount. Compute capability per amount of fuel consumed would be very low.

Shor’s algorithm and discrete logarithms, etc., suggest that in our universe PhysP = BQP, and it is widely believed that these problems are not in BPP. However, if the entropy idea mentioned above is plausible, then it’s easy to imagine initial conditions for the universe such that much much less than BQP can actually be efficiently computed. Since PhysP is ultimately a matter of actual efficiency in exploiting matter for computation, I don’t think that it is necessary that PhysP = BQP.

Even in our own universe, we could learn that there is some physical law that prevents us from engineering sufficiently high fault tolerance or sufficiently low decoherence for problem instances larger than some fixed size N_{universe}. This would be a massive discovery, but it’s not ruled out.

Another angle that interests me is: are there some ‘important’ algorithms that cannot be recast into a format such that they are amenable to quantum computing? For example, in image processing we know several useful algorithms for things like the Singular Value Decomposition or iterative eigensolvers. But thus far, it’s not well understood how to do basic linear algebra on a quantum computer because many of these algorithms require a non-unitary “throw something away and renormalize” step that forces you to measure something and collapse your nice system. We really need someone to invent QBLAS.
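To make the “throw something away and renormalize” point concrete, here is the classical version of one such algorithm. The normalization inside the loop is the non-unitary step being described: it discards the vector’s length, which has no direct coherent analogue on a quantum computer. (A classical numpy sketch, not a quantum algorithm.)

```python
import numpy as np

def power_iteration(A, iters=200, seed=0):
    """Classical power iteration for the dominant eigenvector of a
    symmetric matrix A.  The per-step renormalization is exactly the
    kind of non-unitary operation discussed above."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v /= np.linalg.norm(v)   # non-unitary "renormalize" step
    return v

A = np.array([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues 3 and 1
v = power_iteration(A)
print(v)  # ~ +-[1, 1] / sqrt(2), the eigenvector for eigenvalue 3
```

(For what it’s worth, quantum algorithms typically sidestep this step with different primitives, such as phase estimation, rather than renormalizing mid-computation.)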

If one could prove that these kinds of linear algebra algorithms *must* suffer from these non-unitary steps when implemented on a quantum computer, it could severely limit the effective use of general-purpose quantum computing outside of cryptographic and simulational applications. I don’t think the above is likely, and there may already be results that show something contrary to my idea above, but *if* we did find something like that, then we might suppose that PhysP is necessarily some hybridization of BPP and BQP, perhaps truncated beneath some fundamental engineering limits.

These all seem to me to be differences of scale, not kind. We already know we can’t solve arbitrarily large instances of any problem because they don’t fit in the universe. Because of that fact, PhysP is trivially not identical to any of the theoretical complexity classes we’re talking about. But it is in some sense generalizable by ignoring certain physical limitations; the question is what precisely we’re allowed to ignore.

The high-entropy universe scenario seems no different to me intrinsically from our universe. It’s just that energy gradients are smaller. I don’t think this in itself ought to yield a smaller PhysP. Even in the case of a discovery that we fundamentally can’t build reliable quantum computers past a certain size, we have to be careful not to conflate “can’t efficiently solve problems” with “can’t efficiently solve problems of interest to humans”. Would that size limitation be any different than the one imposed by the size of the universe? Just because 10^120 is so mindbogglingly huge? I think this is one of the factors we get to ignore, if we get to ignore anything at all.

Correct me if I am wrong, but I am pretty sure that you can just simulate a classical computer using a quantum computer (I can’t see a reason that you would want to; you could always have a separate computer giving any information needed). You would just use rotations for a NOT gate (it might be odd because of interference, but I suppose you could measure it first), and then you should be able to use the qubits just like classical bits, as long as you never entangle them. Basically, since the universe seems to be quantum, classical computation is simply a subset that we can carry out, so it would also be a subset of quantum computation.

D.R.C.: Yes, BPP is a subclass of BQP.
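D.R.C.’s point can be illustrated directly: a classical bit is just a qubit kept in a computational-basis state, and the (unitary) Pauli-X gate then acts exactly as a classical NOT. A minimal numpy sketch:

```python
import numpy as np

# Computational-basis states |0> and |1>; a "classical bit" is a qubit
# that is always in one of these states.
ZERO = np.array([1.0, 0.0])
ONE = np.array([0.0, 1.0])

# The Pauli-X gate: unitary, and acts as classical NOT on basis states.
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])

assert np.allclose(X @ ZERO, ONE)   # NOT 0 = 1
assert np.allclose(X @ ONE, ZERO)   # NOT 1 = 0
```

More generally, any classical circuit can be made reversible (e.g. using Toffoli gates) and hence run on a quantum computer, which is one standard way to see that BPP ⊆ BQP.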

On CTCs: consider the example in WPSCACC 10.2, the scenario where a time traveler goes back in time to dictate Shakespeare’s plays to him. The argument made is that somehow this procedure removes the computations that produced the plays – but it is not clear to me why this has to be true. Namely, if we were to suppose that time, without the use of CTCs, goes in one direction, then the first time our time traveler used CTCs to recite Shakespeare’s plays to him, it must be because Shakespeare actually wrote those plays before. Now after that, the traveler can use CTCs to shortcut the writing of the plays, but that action will not necessarily lead to a universe containing something that was not brought about by a causal process – i.e. I don’t see how using Deutsch’s account of CTCs will magically allow us to violate the Evolutionary Principle.

The Shakespeare story, while fun, misses one aspect of what’s going on formally: namely, that writing the plays happens purely because it’s the only way to satisfy causal consistency. Imagine that, if Shakespeare hands you something other than truly great plays, you go back in time and dictate the *lexicographically next plays* to him! So that there’s some sort of process of “cycling through all possible plays.” But even that isn’t quite right, because there need not be any actual “process”: all that matters is that writing the “true” plays gives a fixed point of the evolution, and nothing else does. Closely related to that, there’s no such thing as “the *first time* our time-traveler used CTCs to recite Shakespeare’s plays to him” – since as we discussed in class, that kind of talk presupposes a second meta-time!

So, because of the completely valid points you raise, instead of the Shakespeare story, it would probably be better to think about the algorithm for solving NP-complete problems efficiently using CTCs. In *that* case, do you think the Evolutionary Principle is violated?

I would argue that in this case [i.e. where you do have an algorithm for solving NP-complete problems efficiently using CTCs] the Evolutionary Principle is not violated. This is the case as long as the algorithms for solving NP-complete problems do not actually assert that the user of the algorithm will be able to access the results of the algorithm. As you state in WPSCACC 10.2, the Evolutionary Principle can be stated as “Knowledge requires a causal process to bring it into existence.” I would argue that while this does indeed have an analogue in the NP Hardness Assumption, one of the main aspects of problems such as the Shakespeare one that make them feel paradoxical is that they do not let knowledge, causal process, and existence [in the present from which the CTC originates, per se] lie in a linear progression of time and history: one cannot easily pinpoint points in a linear progression of time where one can say that here is where the causal process started, here is where the knowledge was produced, and here is where it started to exist. I would argue that an algorithm for solving NP-complete problems efficiently using CTCs would be able to execute causal processes, but it would do so in alternate, inaccessible [because of the structure of the CTC] time progressions.
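The “fixed point of the evolution” picture can be made concrete for NP-complete problems. In the illustrative sketch below (brute-force enumeration stands in for whatever physics would actually do), the CTC’s evolution maps an assignment to itself if and only if it satisfies the formula, and to the next assignment otherwise; causal consistency then forces the CTC to “contain” a satisfying assignment:

```python
def ctc_fixed_points(formula, n):
    """Deutschian-CTC-style sketch: causal consistency demands a fixed
    point of the evolution inside the CTC.  Here the evolution keeps an
    n-bit assignment if it satisfies the formula, and advances to the
    lexicographically next assignment (mod 2^n) otherwise, so any fixed
    point is a satisfying assignment."""
    def step(x):
        bits = tuple((x >> i) & 1 for i in range(n))
        return x if formula(bits) else (x + 1) % (2 ** n)
    return [x for x in range(2 ** n) if step(x) == x]

# Hypothetical example formula: (b0 OR b1) AND NOT b2
f = lambda b: (b[0] or b[1]) and not b[2]
print(ctc_fixed_points(f, 3))  # -> [1, 2, 3]
```

If the formula is unsatisfiable, this deterministic map has no fixed point at all, which is one reason Deutsch’s model works with probability distributions (or density matrices) rather than single states: the uniform distribution over the increment cycle is then the consistent fixed point.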

This can be thought of as similar to the issue of quantum computation and the MWI… if such parallel worlds did exist and one could use the matter in all worlds to do computation, I suppose one can conceive of such a thing happening… but if one actually wants to make use of the knowledge that is being produced by these parallel computations, then the parallel computations could not remain parallel, as they would have to interfere, and that just gets complicated 🙂

I apologize: I guess WPSCACC didn’t give a satisfactory description of the CTC algorithm. If you read my paper with Watrous, we make explicit why you can actually *extract* the solution to the NP- or even PSPACE-complete problem from the CTC, copying it into the ordinary, “causality-respecting” part of the universe. (Indeed, had that *not* been possible, we wouldn’t have been interested in CTCs’ supposed ability to solve hard problems, for exactly the reason you say!)

So again I ask: do CTCs violate the Evolutionary Principle or not? 🙂

It seems to me that CTCs do violate the Evolutionary Principle, which is a strong reason to believe they don’t exist and strong motivation to study semi-classical gravity to figure out exactly what the physical conditions would have to be at a location where a CTC becomes theoretically possible.

*Does the possibility of scalable quantum computers prove (or rather, “re-prove”) the Many-Worlds Interpretation, as Deutsch believes? If not, does it at least lend support to MWI?*

I’d say it certainly doesn’t prove MWI, and doesn’t necessarily lend support to it. (It’s likely that supporters of MWI might find that it lends support, but non-supporters don’t necessarily find it convincing.)

Deutsch asks “where” the computation happened. This phrasing seems to assume his conclusion: it is assumed that the qubits do not in themselves have the computational power to, e.g., factor integers, and it is only through the interaction with their counterparts in other worlds that they do. Yet this is not a given; there are other interpretations of QM in which the qubits are given more power within a single universe. Essentially, Deutsch seems to take an almost-classical view *within* each universe, and asserts that the “weirdness” of QM comes about from the interaction of many worlds. By contrast, one can simply claim that the classical theories that are in operation within the universe are incorrect/incomplete; thus, the model of computation we’ve been using classically is insufficient.

*Is MWI a good heuristic way to think about what a quantum computer does? On the contrary, does MWI encourage intuitions about how a quantum computer works that are flat-out wrong? (E.g., that it can “try all possible answers in parallel”?)*

“Yes.” MWI can be useful, and can be deceiving. It can be difficult to avoid taking an incorrect view, but it can also provide a useful framework for heuristics/intuition about what the outcome of a quantum computation will be. At this point we don’t have a concrete understanding of what, exactly, is going on at a quantum level (hence all the philosophy!), so MWI is as good a heuristic as any for intuition about quantum problems – but not more than that.

*Can MWI be tested? What about, as Deutsch once suggested, by an experiment where an artificially-intelligent quantum computer was placed in a superposition over two different states of consciousness?*

I’m having trouble interpreting Deutsch’s thought experiment. My (perhaps flawed) understanding is as follows:

– Deutsch’s thought experiment requires a quantum computer loaded with an artificial intelligence program for consciousness.

– Since humans are open systems, i.e., we naturally interact with the environment at all times, we’re observing things all the time, and wavefunctions are collapsing like crazy around us. Hence, decoherence.

If conscious humans are open systems, why isn’t a conscious artificial intelligence? What kind of “consciousness” does it have? Is it a misinterpretation of mine to suppose that consciousness, at least in part, includes awareness of one’s environment?
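The “wavefunctions collapsing like crazy” picture above is decoherence, and it can be made quantitative with a toy model: repeated weak, uncontrolled interactions with an environment shrink the off-diagonal (coherence) terms of a qubit’s density matrix until the superposition is indistinguishable from a classical mixture. A sketch with illustrative parameters (single qubit, phase-damping channel):

```python
import numpy as np

# Equal superposition (|0> + |1>)/sqrt(2) as a density matrix.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

def dephase(rho, p):
    """Phase-damping channel: with probability p the environment
    'observes' the qubit in the computational basis, which multiplies
    the off-diagonal terms by (1 - p) while leaving the diagonal alone."""
    Z = np.diag([1, -1]).astype(complex)
    return (1 - p / 2) * rho + (p / 2) * (Z @ rho @ Z)

# Many weak, uncontrolled interactions: coherences decay geometrically.
for _ in range(50):
    rho = dephase(rho, 0.2)
print(np.round(rho.real, 4))  # ~ diag(0.5, 0.5): an incoherent classical mixture
```

This is the sense in which an open system like a human brain “can’t practically be placed in coherent superposition”: the off-diagonal terms are driven to zero far faster than any experiment could track.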

Katrina: An important point here is that the AI quantum computer could still be interacted with, but in a *controlled* way – for example, via questions and answers in a Turing Test. To necessarily decohere the quantum computer, what’s needed are *uncontrollable* interactions with the external environment: for example, random stray particles passing through the computer and carrying away information.

Now, I find it an intriguing and tantalizing thought that consciousness might *require* uncontrollable interaction of the sort that causes decoherence – or in other words, that any computer that was controlled enough to maintain quantum coherence, for that very reason wouldn’t be conscious. However, if you want to go in that direction, then I think you should at least recognize that that *is* the implication of what you’re saying, and be prepared to defend it explicitly. Are you? 🙂

If that’s true, then if you somehow surrounded a human with an impenetrable sphere that prevented all interaction with the outside universe, would it cease to be conscious? Would all of humanity cease to be conscious if you did the same with the Earth? The Milky Way? I haven’t worked this out in detail, but it seems like you’ll run into a reductio ad absurdum at some point.

HJP: Keep in mind that, even if you surrounded a human with such an impenetrable sphere, there would still be a huge amount of decoherence, e.g., from the “relevant” degrees of freedom inside the brain to the “irrelevant” ones. So physically, I can’t see any way that one could maintain a brain in a superposition corresponding to two different states of consciousness, short of uploading the information in the brain onto a quantum computer (one specifically designed to implement quantum error-correction, etc.). But if you did that, then my intuition as to whether the resulting entity would be “conscious” (and if so, what its consciousness would be like) becomes much less clear.

I don’t think I see what you’re saying. Again, IANAP, but when you put a buckyball into superposition, the protons and electrons are still interacting with each other, yet that’s irrelevant when you’re talking about the whole thing. The system inside the sphere can do whatever it likes, but you could put the entire sphere (or its interior, or something) in superposition, and the human would be part of that, right?

My point was simply that, if we’re talking about a physical brain, then there clearly are a huge number of physical degrees of freedom that don’t play any direct role in cognition. (To take one example: blood vessels.) Yet in order to create an interference effect between two different brain states, you would need to have complete control over all of those degrees of freedom. And as a practical matter, I don’t see how you would do that, short of transferring the information in the brain to a different physical substrate (or making massive physical changes to the brain that would essentially amount to the same thing).

It may be a little hard to believe that not everything needs a cause to come into existence. But consider, for example, the Platonist view of mathematics: that mathematical entities do in fact exist, but live in some plane of reality inaccessible to us humans. After all, when we study mathematical objects, we only study particular instantiations of them, and it’s tempting to think that there really is this thing called “the complex numbers”, ageless and timeless. But then one has the question: how do we humans come to know things about complex numbers? Not an obviously causal process. Likewise, consider the question of where the CTC answer comes from. It doesn’t appear to have come from anywhere, but in fact it was implicit in the initial parameters of the machine, and just because there is no causal correspondence doesn’t mean we should reject it.

> how do we humans come to know things about complex numbers? Not an obviously causal process.

Really? I learned about them by reading books, solving homework problems, messing around with computer algebra programs, etc. — seems pretty causal to me! 🙂

> in fact it was implicit in the initial parameters of the machine

I find this a very interesting thought. Is it also relevant to the questions about classical or quantum computation that we know to be possible?

“Knowledge can only come into existence via causal processes”

This sentence strikes me as questionable. I think Edward could rephrase his discussion in the following way: once we define mathematical objects, how is it that we can know the results of logical derivations on them? In particular, in what sense is the knowledge that, given the axioms of Euclidean geometry, the sum of the angles of a triangle is 180 degrees, causal? Indeed, it can be argued that the proof caused that knowledge; nevertheless, I could not have obtained it without actively operating on the axioms of geometry.

On the other hand, in terms of perceptual experience, there are some philosophers (McDowell, Sellars) who claim experience bears no causal dependence on thought. A snippet of their argument: the content of experience is conceptual, and non-conceptual content cannot stand in rational relationships with thought. If this is so, then there is knowledge that is not causal in nature, namely knowledge about our perceptions. In this case, for example, the passive experience of blue is already a concept per se, and not a causally derived thought produced by the sky.

There is even a sense in which we could say that knowledge about the world can be traced down to elementary particles of knowledge that are simply given to us (the “Myth of the Given”; see Austin). In that case, how can knowledge about this given be causal?

How many times does God have to call malloc()?

In almost every discussion of the Many-Worlds Interpretation that I have read, there is a menacing reference to exponential branching. For example, “[MWI views] reality as a many-branched tree, wherein every possible quantum outcome is realized.” As a computer scientist I recoil from such statements, as they call to mind the image of God having to allocate memory for a whole new Universe every time a measurement happens in any one of his existing Universes. But after looking through some books written for physicists who actually do simulations of quantum systems, I have the impression that this exponential-branching view is deceptive, though not outright wrong.

In classical physics we have known for a long time that symplectic integrators are pretty much required for accurate numerical integration of time dynamics. It seems that a similar result holds for quantum simulations: namely, “unitary integrators”. So far so good: we need to preserve some essential structure of the phase space when we numerically integrate the equations of motion.

But then my question was: what exactly is the data structure that we are time-evolving? It seems to be a thing called a “density matrix”, which for an n-qubit system would be a 2^n x 2^n matrix of complex numbers with some constraints (Hermitian, trace = 1, etc.). Numerical integration is then accomplished by producing a new density matrix from the old by applying a unitary transformation, which preserves these constraints.

OK, so presumably Everett had in mind that God writes down a single density matrix for the entire Universe and then advances it in time via the unitary transformation corresponding to the Hamiltonian of his will. Now, I readily admit that this density matrix is insanely huge: if we think the number of qubits in the Universe is 10^77, then the dimensions of the density matrix are 2^(10^77) x 2^(10^77). But the point is, God doesn’t have to allocate any more memory after he has allocated it once. Time evolution can just overwrite the old density matrix with the new one. This business about “every possible outcome is realized” is just certain complex numbers in the matrix becoming larger over time.
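The “allocate once, overwrite in place” picture can be sketched in a few lines. This is my own toy illustration (NumPy; the 3-qubit size, the random density matrix, and the random unitary are all arbitrary stand-ins, not anything from the text), showing that a unitary step preserves the constraints without any new storage:

```python
import numpy as np

# Toy illustration (mine, not from the text): the storage for a closed
# n-qubit system is allocated once; unitary time evolution then
# overwrites it in place, with no extra memory per "branching".
n = 3
dim = 2 ** n
rng = np.random.default_rng(0)

# A random density matrix: Hermitian, positive semidefinite, trace 1.
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
rho = A @ A.conj().T
rho /= np.trace(rho)

# A random unitary, via QR decomposition of a Gaussian matrix.
U, _ = np.linalg.qr(rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)))

# One time step: rho <- U rho U†, overwriting the old matrix.
rho = U @ rho @ U.conj().T
assert np.isclose(np.trace(rho).real, 1.0)   # trace is still 1
assert np.allclose(rho, rho.conj().T)        # still Hermitian
```

No matter how many measurements are modeled, the array never grows; “branching” is just the redistribution of the complex entries inside it.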

I certainly do think that a scalable quantum computer would test QM in a new way, because we have never achieved so much quantum control over physical systems. One way to see this is to realize that most academic groups working on experimental realizations of quantum computation could easily rephrase their research goals if scalable quantum computation were proven to be impossible (though the funding might decrease : ) ), because most of these groups are improving humans’ control over systems with quantum coherence. The abilities we gain from this research are important even if they are never used in quantum computation. One example is NMR quantum computation: it is quite well accepted today that scalable NMR quantum computers can never be built, at least as they are conceived of right now. However, it is still worthwhile to improve our quantum control over spins by improving RF signals, etc. To me this shows that even if tomorrow a theorist proved that BQP = BPP, physicists would still be working on ion traps, NV centers, detecting Majorana fermions, etc. Regardless of computational complexity issues, humans’ inability to control decoherence is a fascinating topic. This is not to say that the computational complexity issues around quantum decoherence are not at least as fascinating. In addition, even if BQP = BPP, the polynomial speed-up due to quantum coherence for some problems might still make quantum computation worth pursuing.

Following on this, I do think it may be that quantum computation is unachievable even though QM is valid. This is probably the most boring of all outcomes, but maybe engineering issues will show that obtaining quantum computers is just not worth it for humans, due to some exponential disadvantage outweighing the advantages. This is somewhat like saying that although nuclear bombs theoretically have this amazing power, in terms of engineering difficulties they are just not worth it. I wonder how relevant that perspective was in the early 1940s.

I think this paper is of interest to anyone wanting to discuss quantum computing in the context of MWI and interpretation in general:

http://philsci-archive.pitt.edu/8837/1/many_worlds_qc.pdf

I have some strong issues with this paper, but they probably first need to be passed through a sieve to weed out my mistaken conceptions. It seems like the authors are saying that since “worlds” are a semantic concept, not given to us by physics but instead determined by our choices, Everettian worlds can’t be an explanation for quantum computation, and MWI is circular reasoning if you try to use it to explain QC.

But no one is disputing that what we mean by one “world” vs. another is a semantic issue: they are neatly factorizing blobs of amplitude. If we wanted to measure the world in some totally different basis such that the way things factorize makes it impossible to disentangle the fact that different worlds happen to roughly correspond to modified copies of what we commonly consider to be our own world, we could do so. But that’s not useful. The “preferred basis” in quantum computations arises specifically because we’re engineering the whole situation to do some computation for us. I’m not understanding how this is actually a deficiency in Many-Worlds.

For example, in mechanical systems type problems, there is often a very easy basis in which to represent the problem that makes the syntactic work of crunching through the symbols much easier. No matter what representation you choose though, conservation of linear momentum or mass, say, are just as effective at arriving at the solution. I don’t think that this casts doubt on the ‘representation’ of the physics involved. This is also analogous to the Poselet classifier idea: just lump different keypoints together and then see what kinds of lumpings produce the best classifiers. If a “part” of a body happens to be “left-shoulder-left-ear-and-mouth” then so be it, but we still choose to describe such a thing in terms using shoulder, ear, and mouth. Semantic body parts might be weird agglomerations of “human-readable” parts, but the representation is only semantic.

For instance, a discussion of this ‘practically factoring blobs’ idea from the linked paper:

“However, pace Hewitt-Horsman, I do not believe this is enough to justify treating these worlds as ontologically real, for unlike the criterion of decoherence with respect to macro experience, Hewitt-Horsman’s criterion for distinguishing worlds in the context of quantum computation seems quite ad hoc. Declaring that the preferred basis is the one in which the different function evaluations are made manifest is like declaring that the preferred basis with respect to macro experience is the one in which we can distinguish classical states from one another. But it is, in fact, a rejection of such reasoning that leads to decoherence as a criterion for world-identification in the first place.”

“Worlds” do not exist except in our descriptions of things. Really there is just one giant amplitude blob that describes every particle everywhere. Thus, decoherence is the ‘right’ way to distinguish worlds. But it is exactly the desire to semantically distinguish macro-states that leads us to use the “world” language at all. It happens to be the case that amplitude neatly factorizes in a way such that our semantic descriptions are applicable. That would be true even if we chose to measure the world in different bases, because of decoherence, though we would have to do more work to see that our semantic descriptions apply. Either way, the only sentence here that sticks out to me is the one that says, “I do not believe this is enough to justify treating these worlds as ontologically real.” If the author does not believe this, so be it. But the surrounding argument is not convincing to me.

What I agree with, though (as mentioned elsewhere in our blog posts), is the following idea (which I believe is Scott’s main point in the relevant section of WPSCACC):

“… the quantum parallelism thesis need not entail the existence of autonomous local parallel computational processes. Duwell (2007, p. 1008), for instance, illustrates this by showing how the phase relations between the terms in a system’s wave function are crucially important for an evaluation of its computational efficiency. Phase relations between terms in a system’s wave function, however, are global properties of the system. Thus we cannot view the computation as consisting exclusively of local parallel computations (within multiple worlds or not). But if we cannot do so, then there is no sense in which quantum parallelism uniquely supports the many worlds explanation over other explanations.”

To the extent that an algorithm is designed to shove amplitude onto a particular outcome and to produce the correct answer, to that extent it is a single-world algorithm. To the extent that other outcomes are actually realizable (i.e., bounded-error algorithms), nothing but a straightforward Many-Worlds decoherence argument is needed to assert that MWI explains it. Simulating the arguments from the other interpretations requires what I consider to be various unjustified non-realism assumptions about the wave function.
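Duwell’s point about phase relations can be illustrated with the simplest case, Deutsch’s problem. In this sketch of my own (NumPy), whether f is constant or balanced is encoded entirely in the relative phase between the two branches, a global property that no single branch “computes” locally and that only interference at the final Hadamard can read out:

```python
import numpy as np

# Toy illustration (mine): Deutsch's problem. The answer lives in the
# *relative phase* between the |0> and |1> branches -- a global property
# of the state, read out only by interference at the final H.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def deutsch(f):
    # Phase oracle: |x> -> (-1)^f(x) |x>
    oracle = np.diag([(-1.0) ** f(0), (-1.0) ** f(1)])
    psi = H @ oracle @ H @ np.array([1.0, 0.0])   # H, one query, H on |0>
    return int(np.argmax(np.abs(psi) ** 2))       # 0: constant, 1: balanced

assert deutsch(lambda x: 0) == 0      # constant
assert deutsch(lambda x: 1) == 0      # constant (global phase invisible)
assert deutsch(lambda x: x) == 1      # balanced
assert deutsch(lambda x: 1 - x) == 1  # balanced
```

Note that f(0) and f(1) are each evaluated “in a branch”, yet neither branch alone determines the output; only their phase relationship does, which is exactly why the computation can’t be viewed as purely local parallel processes.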

If I understand correctly, according to MWI, after a measurement is done the ‘other world’ is lost forever. In that sense, ‘after-measurement worlds’ are just epiphenomenal universes incapable of altering our universe; thus, the only reason to think about them is that they make our equations look consistent. I’m not saying this is a bad argument; I’m saying that it is irrelevant in practice.

On the other hand, ‘before-measurement’ worlds are relevant in practice, as we can see in Young’s experiment. This is the phenomenon that matters. And it’s less spooky or science-fictiony than it sounds, and I think a scalable quantum computer would be more evidence for it, because it explains it. The double nature of a qubit (not as particle and wave, but as two overlapping states) IS what MWI is all about. It just happens that, on viewing the phenomena this way, the theory is consistent with modeling things as different universes interacting. This being the case, it would be unnecessary to redefine the laws of physics just to avoid MWI.

On another note, I have a quick question about quantum computation under MWI. If I have a 0.8 probability of measuring 1 from a given qubit, does that mean that in the other world I have a 0.2 probability of measuring 1? And if this is true, then if when factoring 15 I have a 0.8 probability of getting it right, does that mean there is a world where they get it wrong with 0.8 probability?

I think this is not at all what MWI says; it is closer to standard collapse-type postulates. MWI says that it only looks like the other worlds are lost forever, due to (a) the fact that our equations probably are consistent, (b) decoherence, and (c) thermodynamics. But really, those other worlds have just as much ontological right to be called existent as the one we directly perceive. Essentially, MWI says that your comment that the other worlds matter in practice before the measurement is correct, and further, that the seemingly innocuous extra stipulation that the worlds cease to be real after the measurement is really not innocuous, and is instead a pretty large assumption that inserts non-unitary, non-linear, and non-time-reversible physics into a theory that perhaps doesn’t need them.

Regarding the last paragraph: this is a common misconception. MWI would say that if you used some math to figure out that “you” have a 0.8 probability of seeing the qubit as a 1, and a 0.2 probability of seeing it as a 0, then what this really means is that in 80% of all possible branches the qubit just is a 1, and the probability only means you have that much chance of “being” one of the people in those worlds who sees a 1. In the other 20% of all possible worlds, the qubit just is 0.

It is a very real and very important open question, however, to understand why the probabilities obey Born’s rule, that is, why the probability of outcomes is proportional to the modulus-squared of amplitude. But this is different from understanding what the probabilities are telling you once you believe Born’s rule.
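Once Born’s rule is taken as given, what the probabilities tell you is straightforward to compute. A small sketch of my own (NumPy; the seed and sample size are arbitrary choices), using the 0.8/0.2 qubit from the exchange above:

```python
import numpy as np

# Toy illustration (mine): what Born's rule says, once accepted.
# Branch weights are the squared moduli of the amplitudes, and
# repeated measurements reproduce those weights empirically.
amps = np.array([np.sqrt(0.2), np.sqrt(0.8)])  # amplitudes for |0>, |1>
probs = np.abs(amps) ** 2
assert np.isclose(probs.sum(), 1.0)            # weights sum to 1
assert np.isclose(probs[1], 0.8)               # chance of "being" in a 1-branch

# Sampling outcomes (hypothetical seed and sample size) matches the weights.
rng = np.random.default_rng(1)
outcomes = rng.choice([0, 1], size=100_000, p=probs)
assert abs(outcomes.mean() - 0.8) < 0.01
```

The open question flagged above is why the `probs` line (modulus-squared) is the right rule at all; the sketch only shows what follows once you grant it.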