Feel free to post any thoughts you’ve had related to the topics discussed in class #13:

- The concept of a “technological singularity” and what exactly we should interpret it to mean
- What the fundamental physical limits are on intelligence
- Whether it’s useful to regard the universe as a computer, what that statement even means, and whether it has empirical content
- Whether any of the following considerations are obstructions to thinking of the universe as a computer: quantum mechanics; apparent continuity in the laws of physics; the possible infinitude of space; a lack of knowledge on our part about the inputs, outputs, or purposes of the computation

Do you consider yourself a one-boxer or a two-boxer? Why? Or do you reject the entire premise of Newcomb’s Paradox on logical grounds?

What do you make of the idea that, if Newcomb’s Predictor existed, then you’d need to think in terms of two different “instantiations” of you, which nevertheless acted in sync and were “controlled by the same will”: (1) the “real” you, and (2) the simulated version of you being run on the Predictor’s computer? Does this proposal do irreparable violence to causality, locality, or other principles of physics? If so, is the “benefit” (justifying one-boxing) worth the cost?

What’s the role of probabilistic or game-theoretic considerations in Newcomb’s Paradox? Does the probabilistic variant of the paradox (discussed in class) raise any issues *different* from those raised by the deterministic version?

*If* Newcomb’s Predictor existed, would that create new and serious problems for the everyday notion of “free will”? (Feel free to divide this into two sub-questions: whether it *would* change how people thought about their free will, and whether it *ought* to.)

More generally, what’s the relationship between Newcomb’s Paradox and the “traditional” problem of free will? Are there aspects of one problem that aren’t captured by the other?

Apart from what we discussed in class, can you suggest any computer-science examples or metaphors that clarify aspects of the free-will issue?

Consider a “brute-force search algorithm,” which cycles through all possible DNA sequences until it finds some that lead to interesting and complex organisms. Many people would say that this algorithm does *not* do the same sort of explanatory work as natural selection—but if so, *why* not? Is it because brute-force search takes exponential time? Or because the goal of “interesting and complex organisms” is too vague? Or both reasons, or something else entirely?

Likewise, what are the similarities and differences between Darwinian natural selection and the anthropic explanation for the apparent “fine-tuning” of physical constants? If the former explanation satisfies us while the latter doesn’t, then why?

Is there a puzzle about the *speed* of evolution? Is it reasonable to want an explanation for why evolution on Earth took roughly 4 billion years, rather than a much longer (or shorter) time? If so, what could such an explanation look like? Can Valiant’s “Evolvability” model shed any light on these questions?

What are the differences between genetic and memetic evolution?

What are the differences between natural selection and *genetic algorithms*, as applied (for example) to find approximate solutions to NP-hard optimization problems?
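To make the contrast concrete, here is a minimal sketch of the kind of genetic algorithm the question has in mind. Everything here is an illustrative assumption rather than anything from class: the toy "OneMax" objective (count the 1-bits) stands in for fitness, and the population size, mutation rate, and other parameters are arbitrary. The point is that brute-force search over length-20 bit-strings could require up to 2^20 evaluations, while the genetic algorithm reuses and recombines partial solutions via selection, crossover, and mutation.

```python
import random

random.seed(0)                    # deterministic, for illustration only
L, POP, GENS, MUT = 20, 30, 60, 0.05   # assumed parameters, not tuned

def fitness(bits):
    """Toy stand-in for 'interesting organisms': count of 1-bits (OneMax)."""
    return sum(bits)

def tournament(pop):
    """Selection: return the fitter of two randomly chosen individuals."""
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
best = max(pop, key=fitness)
for _ in range(GENS):
    new = []
    for _ in range(POP):
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, L)                  # one-point crossover
        child = [b ^ (random.random() < MUT)          # point mutations
                 for b in p1[:cut] + p2[cut:]]
        new.append(child)
    pop = new
    best = max(pop + [best], key=fitness)             # track best seen so far

print(fitness(best))   # brute force would need up to 2**20 evaluations
```

Whether this sort of heuristic search is "the same kind of explanation" as natural selection—or merely a loose metaphor for it—is exactly what the question above asks.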

How should we understand the observation that the Iterated Prisoner’s Dilemma, with any *fixed* number of rounds, has “always defect” as its only equilibrium? Is the solution to take into account the players’ bounded memories (as Neyman proposed), or some other cognitive limitation that blocks the “grim inductive reasoning” from the last round back to the first? Or is it simply that the assumed scenario—where you interact with someone for a *fixed, known* number of rounds—was artificial in the first place, so it doesn’t matter if game theory gives us an “artificial” answer?
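The backward-induction ("unraveling") argument can be checked with a short calculation. The payoff values below are the standard Prisoner's Dilemma ones (T=5, R=3, P=1, S=0), assumed here purely for illustration. The key step is that, once play in all later rounds is fixed at mutual defection, the continuation payoff is the same whatever you do now—so it cancels, and defecting is a strict best response in the current round as well:

```python
# Row player's payoff as a function of (my_move, their_move).
T, R, P, S = 5, 3, 1, 0          # temptation, reward, punishment, sucker
payoff = {('D', 'C'): T, ('C', 'C'): R, ('D', 'D'): P, ('C', 'D'): S}

N = 10                           # fixed, commonly known number of rounds

# Work backward from the last round. 'future' is the payoff from all later
# rounds, assuming equilibrium play (mutual defection) in every one of them.
future = 0
for t in reversed(range(N)):
    # Because 'future' does not depend on my current move, it cancels:
    # defecting now is strictly better against either opponent move.
    for their in ('C', 'D'):
        assert payoff[('D', their)] + future > payoff[('C', their)] + future
    future += payoff[('D', 'D')]  # equilibrium play in round t is (D, D)

print("defection is a strict best response in every round of the",
      N, "round game")
```

Nothing in this calculation depends on N, which is precisely what makes the "grim inductive reasoning" feel so robust—and the question is what assumption (bounded memory, uncertain horizon, or something else) one should give up to escape it.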

What’s *your* favored resolution of the Surprise Examination Paradox? Do you agree with Kritchman and Raz, that the problem is basically one of the students *not being able to prove their own rationality*—and therefore, of their not being able to declare in advance that, *if* the exam came on Friday, then it wouldn’t be a surprise to them (even though, in actual fact, it *wouldn’t* be)? In what ways is the Surprise Examination Paradox similar to the “Iterated Prisoner’s Dilemma Paradox,” and in what ways is it different?

Do you agree or disagree with the “Common Prior Assumption” (CPA), which says that all rational Bayesian agents should have the same prior probability distribution? If you agree, then what do you make of Aumann’s Agreement Theorem: does it really teach us that “agreeing to disagree” is inherently irrational, or is there some other loophole (for example, having to do with bounded rationality)? If you reject the CPA, what do you make of the argument that, if two people *really* want to be rational, they both need to take into account the possibility of “having been born the other person”? (And for the philosophers: are there any interesting connections here to the views of Spinoza, or Rawls?)

Does the possibility of scalable quantum computers *prove* (or rather, “re-prove”) the Many-Worlds Interpretation, as Deutsch believes? If not, does it at least lend *support* to MWI? Is MWI a good heuristic way to *think* about what a quantum computer does?

On the contrary, does MWI encourage intuitions about how a quantum computer works that are flat-out wrong? (E.g., that it can “try all possible answers in parallel”?)

If scalable quantum computers are built, do you think that will change people’s outlook on the interpretation of quantum mechanics (and on MWI in particular)?

Can MWI be *tested*? What about, as Deutsch once suggested, by an experiment where an artificially-intelligent quantum computer was placed in a superposition over two different states of consciousness?

Does the fact that humans are open systems—that unlike quantum computers, we *can’t* practically be placed in coherent superposition states—mean that MWI and the Copenhagen interpretation are indistinguishable by human observers?

Would a scalable quantum computer test quantum mechanics itself in a new way? Is the possibility of scalable quantum computing (or more generally, of a violation of the Extended Church-Turing Thesis) so incredible that our default belief should be that quantum mechanics breaks down instead?

Could quantum computing be impossible in principle *without* quantum mechanics breaking down? If so, how?

Suppose it were proved that BPP=BQP. Would that influence the interpretation of QM? Would it undercut Deutsch’s case for MWI, by opening the door to “classical polynomial-time hidden-variable theories”?

**The Evolutionary Principle and Closed Timelike Curves**

Is Deutsch’s Evolutionary Principle—that “knowledge can only come into existence via causal processes”—valid? If so, how should we interpret that principle? What counts as knowledge, and what counts as a causal process?

Is Deutsch’s “Causal Consistency” requirement a good way to think about closed timelike curves, supposing they existed? Why or why not?

Does the apparent ability of closed-timelike-curve computers to solve NP-complete problems instantaneously mean that CTCs would *violate* the Evolutionary Principle? If so, how should we interpret this: that CTCs are physically impossible? That the Evolutionary Principle is a poor guide to physics? Or that, if CTCs exist, then some *new* physical principle has to come into effect to prevent them from solving NP-complete problems?

Does quantum mechanics need an “interpretation”? If so, why? Exactly what questions does an interpretation need to answer? Do you find any of the currently-extant interpretations satisfactory? Which ones, and why?

*More specifically:* is the Bayesian/neo-Copenhagen account really an “interpretation” at all, or is it more like a principled refusal even to discuss the measurement problem? What *is* the measurement problem?

If one accepts the Many-Worlds account, what meaning should one attach to probabilistic claims? (Does “this detector has a 30% chance of registering a photon” mean it will occur in 30% of universes-weighted-by-mod-squared-amplitudes?)

What are the important assumptions made by Valiant’s PAC model, and how reasonable are those assumptions?

What are the relative merits of the PAC-learning and Bayesian accounts of learning?

Do the sample complexity bounds in PAC-learning theory (either the basic m=(1/ε)log(|H|/δ) bound, or the bound based on VC-dimension) go any way toward “justifying Occam’s Razor”? What about the Bayesian account, based on a universal prior where each self-delimiting computer program P occurs with probability ~2^{-|P|}?
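For concreteness, the basic bound can be evaluated numerically; the particular ε, δ, and hypothesis-class sizes below are illustrative assumptions. The point relevant to Occam’s Razor is that m scales with log|H|—that is, linearly in the number of bits needed to *describe* a hypothesis, since a class of hypotheses describable in b bits has |H| ≤ 2^b:

```python
from math import ceil, log

def pac_sample_bound(eps, delta, H_size):
    """Samples sufficient for a consistent learner over a finite class H
    to be probably (prob. 1 - delta) approximately (error <= eps) correct."""
    return ceil((1 / eps) * log(H_size / delta))

# Doubling the description length b (so squaring |H|) only roughly
# doubles the sample requirement -- the "Occam" flavor of the bound.
for bits in (10, 20, 40):
    print(bits, pac_sample_bound(eps=0.1, delta=0.05, H_size=2**bits))
```

Whether this counts as a *justification* of Occam’s Razor, or merely a formalization of one reading of it, is what the question above asks.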

Does some people’s criticism of PAC-theory—namely, that the number of bits needed to pick out an individual hypothesis h∈H need not be connected to “simplicity” in the intuitive sense—have merit? If so, what would need to be done to address that problem?

Are scientific hypotheses with low Kolmogorov complexity more likely to be *true*, all else being equal? If so, why? Is that the *only* reason to prefer such hypotheses, or are there separate reasons as well?

Rather than preferring scientific hypotheses with low Kolmogorov complexity, should we prefer hypotheses with low *resource-bounded Kolmogorov complexity* (i.e., whose predictions can be calculated not only by a short computer program, but by a short program that runs in a reasonable amount of time)? If we did that, then would we have rejected quantum mechanics right off the bat, because of the seemingly immense computations that it requires? At an even simpler level, would we have refused to believe that the observable universe could be billions of light-years across (“all that computation going to waste”)?

Scientists—especially physicists—often talk about “simple” theories being preferable (all else being equal), and *also* about “beautiful” theories being preferable. What, if anything, is the relation between these two criteria? Can you give examples of simple theories that aren’t beautiful, or of beautiful theories that aren’t simple? Within the context of scientific theories, is “beauty” basically just an imperfect, human proxy for “simplicity” (i.e., minimum description length or something of that kind)? Or are there other reasons to prefer beautiful theories?

Does fully homomorphic encryption have any interesting philosophical implications? Recall Andy’s thought experiment of “Homomorphic Man”, all of whose neural processing is homomorphically encrypted—with the encryption and decryption operations being carried out at the sensory nerves and motor nerves respectively. Given that the contents of Homomorphic Man’s brain look identical to any polynomial-time algorithm, regardless of whether he’s looking at (say) a blue image or a red image, do you think Homomorphic Man would have qualitatively-different subjective experiences from a normal person? How different can two computations look on the inside, while still giving rise to the same qualia?