Class #9: Quantum Mechanics and Quantum Computing

Since we didn’t have time in Wednesday’s class to get to the “meaty” debate about quantum computing’s possible relevance to the interpretation of quantum mechanics, this week’s discussion topics will be unusually broad.

Does quantum mechanics need an “interpretation”?  If so, why?  Exactly what questions does an interpretation need to answer?  Do you find any of the currently-extant interpretations satisfactory?  Which ones, and why?

More specifically: is the Bayesian/neo-Copenhagen account really an “interpretation” at all, or is it more like a principled refusal even to discuss the measurement problem? What is the measurement problem?

If one accepts the Many-Worlds account, what meaning should one attach to probabilistic claims?  (Does “this detector has a 30% chance of registering a photon” mean it will occur in 30% of universes-weighted-by-mod-squared-amplitudes?)
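(For reference, the arithmetic behind the question: if the "detector clicks" branch carries amplitude α, the Born rule assigns it probability |α|², so a 30% chance corresponds to amplitude |α| = √0.3 ≈ 0.548.  The interpretive question is what, under Many-Worlds, that number is a probability of.)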


Class #8: Occam’s Razor, the Universal Prior, and PAC-Learning

Can we even hope for a non-tautological justification of Occam’s Razor?  Can one explain, non-circularly, why the “Anti-Inductivists” are wrong?

What are the important assumptions made by Valiant’s PAC model, and how reasonable are those assumptions?

What are the relative merits of the PAC-learning and Bayesian accounts of learning?

Do the sample complexity bounds in PAC-learning theory (either the basic m = (1/ε)log(|H|/δ) bound, or the bound based on VC-dimension) go any way toward "justifying Occam's Razor"?  What about the Bayesian account, based on a universal prior where each self-delimiting computer program P occurs with probability ~2^(−|P|)?
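To get a feel for the numbers, here is the basic bound evaluated numerically (a minimal sketch; the function name and the example parameters are mine, not from the class):

```python
import math

def pac_sample_bound(epsilon, delta, hypothesis_count):
    """Basic PAC bound for a finite hypothesis class H: drawing
    m >= (1/epsilon) * ln(|H|/delta) samples guarantees that, with
    probability >= 1 - delta, every hypothesis consistent with the
    sample has true error at most epsilon."""
    return math.ceil((1.0 / epsilon) * math.log(hypothesis_count / delta))

# |H| = 2**50 hypotheses (say, all hypotheses describable in 50 bits),
# target error 5%, failure probability 5%:
print(pac_sample_bound(0.05, 0.05, 2**50))   # -> 754 samples

# Universal-prior weight of a self-delimiting program P is ~2^(-|P|),
# so a hypothesis with a 20-bit description gets prior weight ~2**-20:
print(2.0**-20)
```

The connection to Occam's Razor is that m grows only logarithmically in |H|: restricting attention to hypotheses with short descriptions shrinks |H| exponentially, and hence shrinks the sample bound linearly in the description length.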

Does the criticism of PAC theory raised by some people (namely, that the number of bits needed to pick out an individual hypothesis h∈H need not be connected to "simplicity" in the intuitive sense) have merit?  If so, what would need to be done to address that problem?


Class #7: Kolmogorov Complexity and Homomorphic Encryption

Does K(x), the Kolmogorov complexity of a string x, provide an objective, observer-independent notion of the “amount of patternlessness” in x?  Or is the notion circular, because of the “additive constant problem”?  Do some choices of universal programming language yield a better version of K(x) than others?  If so, what might the criteria be?
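(For reference, the "additive constant problem" comes from the invariance theorem: for any two universal languages U and V, there is a constant c_{U,V}, depending on U and V but not on x, such that K_U(x) ≤ K_V(x) + c_{U,V} for all strings x.  The constant is essentially the length of an interpreter for V written in U, so K(x) is well-defined only up to an additive constant, which can swamp K(x) itself for short strings.)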

Are scientific hypotheses with low Kolmogorov complexity more likely to be true, all else being equal?  If so, why?  Is that the only reason to prefer such hypotheses, or are there separate reasons as well?

Rather than preferring scientific hypotheses with low Kolmogorov complexity, should we prefer hypotheses with low resource-bounded Kolmogorov complexity (i.e., whose predictions can be calculated not only by a short computer program, but by a short program that runs in a reasonable amount of time)?  If we did that, then would we have rejected quantum mechanics right off the bat, because of the seemingly immense computations that it requires?  At an even simpler level, would we have refused to believe that the observable universe could be billions of light-years across (“all that computation going to waste”)?
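(One common way to formalize this, noted here just for reference: define K^t(x) as the length of the shortest program that outputs x within t(|x|) steps, for some fixed resource bound t such as a polynomial.  A related variant is Levin's Kt(x), the minimum over programs p of |p| + log(running time of p), which folds time into the description length itself.)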

Scientists—especially physicists—often talk about “simple” theories being preferable (all else being equal), and also about “beautiful” theories being preferable.  What, if anything, is the relation between these two criteria?  Can you give examples of simple theories that aren’t beautiful, or of beautiful theories that aren’t simple?  Within the context of scientific theories, is “beauty” basically just an imperfect, human proxy for “simplicity” (i.e., minimum description length or something of that kind)?  Or are there other reasons to prefer beautiful theories?

Does fully homomorphic encryption have any interesting philosophical implications?  Recall Andy’s thought experiment of “Homomorphic Man”, all of whose neural processing is homomorphically encrypted—with the encryption and decryption operations being carried out at the sensory nerves and motor nerves respectively.  Given that the contents of Homomorphic Man’s brain look identical to any polynomial-time algorithm, regardless of whether he’s looking at (say) a blue image or a red image, do you think Homomorphic Man would have qualitatively-different subjective experiences from a normal person?  How different can two computations look on the inside, while still giving rise to the same qualia?
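For a concrete (if very weak) taste of the homomorphic property, here is textbook RSA, which is multiplicatively homomorphic.  This is only a toy illustration with an insecure toy key; fully homomorphic encryption, which the thought experiment requires, extends the idea to arbitrary computations:

```python
# Toy textbook RSA: Enc(m) = m^e mod n. It is multiplicatively homomorphic:
# Enc(a) * Enc(b) = (ab)^e = Enc(a*b) (mod n). Fully homomorphic encryption
# extends this to arbitrary circuits; this shows only the flavor of the idea.
p, q = 61, 53                 # toy primes -- wildly insecure, for illustration
n, phi = p * q, (p - 1) * (q - 1)
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent (Python 3.8+ modular inverse)

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 7, 6
c = (enc(a) * enc(b)) % n     # multiply ciphertexts, never seeing a or b
assert dec(c) == (a * b) % n  # decrypting yields the product: 42
print(dec(c))
```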


Class #6: Proof Protocols, Semantics of Computations, and Waterfalls

In class, someone suggested a reason why traditional mathematical proofs seem to differ from zero-knowledge proofs, probabilistically checkable proofs, (multi-prover) interactive proofs, quantum proofs, and all the other exotic types of proof studied in modern cryptography and complexity theory.  Namely, with a traditional proof, "all the cards are on the table": there's nothing the verifier doesn't get to see, nothing for which he or she is forced to make some assumption about the laws of physics and/or the limitations of the prover(s).  But what exactly do we mean by "all the cards being on the table"?  Suppose a proof is written out in a conventional way, but it's a trillion lines long, so in practice it can only be checked by a computer—does that count as "all cards being on the table"?  Or does a conscious person need to be able to understand the whole proof, "at a glance" so to speak?  Where should we draw the line, and why?  (Or is there no need to draw a line?)

Suppose we say that all the cards are "on the table" with a conventional proof, but not with a proof that requires a quantum computer to verify.  Does that imply that (as Niels Bohr argued) there's still a fundamental role for classical concepts, even in a quantum-mechanical universe?

We discussed the arguments of Searle, Putnam, and other philosophers that the “meaning” of a computation is always observer-dependent—since merely by our choice of how to label inputs and outputs, we can interpret even a rock or a waterfall as computing anything at all (say, the next move in a chess game).  Most CS folks have a very strong intuition that this argument is wrong, but why is it wrong?  Does it have to do with some input/output labelings being “inherently more natural” than others?  Or with the computational complexity of reducing chess to rock- or waterfall-simulation—i.e., with our intuition that all the actual work of computing chess moves would have to be done by the reduction itself, rather than by the rock or waterfall being “reduced” to?  Or (as Chalmers says) with the various internal states of the rock or waterfall not having the right causal relations to one another?  Or something else?

Are the various responses to the rock/waterfall argument mentioned above all basically the same?  Can you suggest examples where some of the responses would work, but not others?

Are any of your answers different for the rock (which we could naïvely describe as “not computing anything”) than for the waterfall (which we could naïvely describe as “computing something complicated but uncontrollable”)?


Class #5: Knowledge Representation, Analytic vs. Synthetic, and Zero-Knowledge Proofs

Which is the more fundamental concept: “knowing how” or “knowing that”?  Is it possible to “know” that 37×43=1591, yet not “know” the prime factors of 1591?  If so, what is the thing that you know in the one case but not in the other?
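To make the asymmetry concrete (a minimal sketch; trial division is just the most naive factoring method):

```python
def trial_factor(n):
    """Return the smallest nontrivial factor of n, or None if n is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None

assert 37 * 43 == 1591   # verifying the product: a single multiplication
f = trial_factor(1591)   # recovering a factor: a search over candidates
print(f, 1591 // f)      # -> 37 43
```

The point of the example: checking the equation is immediate, while extracting the factors from it is not.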

Do you know an infinite number of propositions?

Do statements about computational processes blur the distinction between analytic and synthetic knowledge?  Is the statement that a quantum computer (say) will output a particular answer “more analytic” or “more synthetic”?  Does the answer change if we instead consider the distinction between a priori and a posteriori knowledge?

Does every mathematical proof depend for its validity on implicit physical assumptions (for example, that a demon isn’t changing what’s written on the page as you read it), as Deutsch argues?  Is the situation different for the modern “proof protocols” studied in theoretical computer science and cryptography (zero-knowledge proofs, quantum proofs, etc.), than for traditional proofs?  Or do these proof protocols merely make more vivid a dependence on physics that was already present in traditional proofs?


Class #4: The P vs. NP Problem, Human Mathematical Creativity, and Knowledge of Numbers

Would a proof of P=NP have philosophical implications?  If so, what?

Does it make sense to discuss “HumanP,” loosely defined as the class of decision problems solvable in polynomial time by human beings?  Is “HumanP” meaningless because we can’t give human beings arbitrarily-long inputs, in contrast to idealized computers?  Or does this problem simply underscore that the asymptotic limit isn’t what we “really” cared about in the first place, but is just a theoretically-convenient approximation?

What can be said about the relationship of “HumanP” to P and to NP (in either direction)?  Do we need to distinguish different versions of “HumanP”, depending on what resources the human has available (e.g., paper and pencil, a classical computer, a quantum computer), or whether we’re considering subconscious processes or conscious processes only?

How would a chess program need to be designed so that it chose the right move in the "wall of pawns" position?  How would a SAT-solver need to be designed so that it quickly determined the unsatisfiability of a propositional formula representing PHP_n (the Pigeonhole Principle with n pigeons)?
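For concreteness, here is one standard CNF encoding (a sketch; I use the usual unsatisfiable version with n+1 pigeons and n holes, for which Haken proved that resolution, the proof system underlying typical SAT-solvers, requires exponential-size refutations):

```python
def php_clauses(n):
    """CNF for the pigeonhole principle with n+1 pigeons and n holes
    (the standard unsatisfiable version). Variable x(p, h) means
    'pigeon p sits in hole h'; clauses are lists of DIMACS literals."""
    x = lambda p, h: p * n + h + 1          # 1-based DIMACS variable numbering
    clauses = []
    for p in range(n + 1):                  # every pigeon sits somewhere
        clauses.append([x(p, h) for h in range(n)])
    for h in range(n):                      # no hole holds two pigeons
        for p1 in range(n + 1):
            for p2 in range(p1 + 1, n + 1):
                clauses.append([-x(p1, h), -x(p2, h)])
    return clauses

# PHP_3: 4 pigeons, 3 holes -- unsatisfiable, but a resolution-based solver
# must in effect rediscover the counting argument to see why.
for clause in php_clauses(3):
    print(" ".join(map(str, clause)) + " 0")   # DIMACS clause lines
```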

Is the problem with existing SAT-solvers merely that they’re missing relevant “abstract knowledge” (e.g., that the Pigeonhole Principle is true)?  Or is the problem that, even if they had that knowledge, they wouldn’t know how to use it, or to recognize when it should be used?  Or is the problem that they don’t have the capacity to generate such abstract knowledge themselves?  Are these distinctions even meaningful?

Does it make sense to speak about the "largest known prime number"?  By what criterion, if any, can we say that p := 2^43,112,609 − 1 is a "known" prime, whereas q := the first prime larger than p is not?
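For reference, what makes p "known" in practice is the Lucas–Lehmer test, the specialized primality test for Mersenne numbers that certified 2^43,112,609 − 1.  A minimal sketch, feasible here only for tiny exponents:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for odd prime p, M = 2^p - 1 is prime iff
    s_{p-2} == 0, where s_0 = 4 and s_{i+1} = s_i^2 - 2 (mod M)."""
    M = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % M
    return s == 0

print([p for p in [3, 5, 7, 11, 13, 17, 19] if lucas_lehmer(p)])
# -> [3, 5, 7, 13, 17, 19]: 2^11 - 1 = 2047 = 23*89 is correctly rejected.
# The record prime 2^43,112,609 - 1 was certified by this same recurrence,
# run for ~43 million squarings of a ~13-million-digit number.
```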

Are the above questions all “pseudoproblems” that evaporate as soon as we fix the right formal definitions for notions such as knowledge?  If so, then what are the right formal definitions?  And what questions should we be asking instead?


Class #3: Lookup Tables, Gödel’s Theorem, and Penrose’s Views

Does many people’s reluctance to regard a giant lookup table as intelligent simply have to do with induction—with the fact that they know the lookup table can’t handle inputs beyond some fixed size, whereas a human being’s responses are, in some sense, “infinitely generalizable”?

If so, then what is the sense in which a human’s responses are “infinitely generalizable”?  And how can we reconcile this idea with the fact that humans die and their conversations end after bounded amounts of time—and that we don’t have an idealized mathematical definition of what it means to “pass a Turing test of length n” for arbitrary n, analogous to the definition of what it means to factor an n-digit integer?

Andy Drucker pointed out the following irony: we suggested that calculating an answer via a compact, efficient algorithm would demonstrate far more “intelligence” than simply reading the answer off of a giant lookup table.  But couldn’t one instead say that someone who had to calculate the answer by a step-by-step algorithm was plodding and not particularly intelligent, whereas someone who mysteriously spit out the correct answer in a single time step was a brilliant savant?  What do we mean by the phrase “lookup table,” anyway?

What is the precise relationship between Gödel’s Incompleteness Theorem and Turing’s proof of the unsolvability of the halting problem?
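(One half of that relationship, for reference: suppose some consistent, computably axiomatized system F proved all and only the true statements of the forms "machine M halts on input x" and "M doesn't halt on x".  Then we could decide the halting problem by enumerating F's proofs until we found a proof of one statement or the other; since the halting problem is undecidable, no such F can exist, so any such system must leave some true halting statement unproved.  Whether this route recovers everything in Gödel's own proof, such as the explicit self-referential sentence or the Second Incompleteness Theorem, is part of what the question asks.)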

Does the Lucas/Penrose argument succeed in establishing any interesting conclusion?  (For example, does it at least establish that algorithmic processes whose code is publicly known have interesting limitations, compared to processes that either aren’t algorithmic or whose code is unknowable if they are?)

Does the argument succeed in establishing the stronger conclusions about human vs. machine intelligence that Lucas and Penrose want?  If not, why doesn’t it?
