This is a place for interested students to post links to their project reports and/or presentations.

(Participation is optional. For additional discussion about Newcomb’s problem and free will, please post in thread #12b.)
Feel free to post any thoughts you’ve had related to the topics discussed in class #13:
Do you believe the superintelligent “Predictor” of Newcomb’s Paradox could exist? Does your answer depend on specific properties of the laws of physics (such as the uncertainty principle)? Does it depend on the assumption that humans are “open systems”, in constant interaction with their external environment? Does it depend on subtleties in even defining the Predictor (for example, to what extent it’s allowed to alter the state of your brain while scanning it)?
Do you consider yourself a one-boxer or a two-boxer? Why? Or do you reject the entire premise of Newcomb’s Paradox on logical grounds?
What do you make of the idea that, if Newcomb’s Predictor existed, then you’d need to think in terms of two different “instantiations” of you, which nevertheless acted in sync and were “controlled by the same will”: (1) the “real” you, and (2) the simulated version of you being run on the Predictor’s computer? Does this proposal do irreparable violence to causality, locality, or other principles of physics? If so, is the “benefit” (justifying oneboxing) worth the cost?
What’s the role of probabilistic or game-theoretic considerations in Newcomb’s Paradox? Does the probabilistic variant of the paradox (discussed in class) raise any issues different from those raised by the deterministic version?
If Newcomb’s Predictor existed, would that create new and serious problems for the everyday notion of “free will”? (Feel free to divide this into two sub-questions: whether it would change how people thought about their free will, and whether it ought to.)
More generally, what’s the relationship between Newcomb’s Paradox and the “traditional” problem of free will? Are there aspects of one problem that aren’t captured by the other?
Apart from what we discussed in class, can you suggest any computer-science examples or metaphors that clarify aspects of the free-will issue?
If philosophers, mathematicians, etc. had been clever enough, could they have figured out that natural selection was the right explanation for life a priori, without input from naturalists like Darwin? If life exists on other planets, should we expect that it, too, arose by Darwinian natural selection—or rather, that if the life was “intelligently designed,” then natural selection was ultimately needed to bring the intelligent designer(s) themselves into existence? Is there any conceivable mechanism other than natural selection that’s capable, in principle, of doing the same sort of explanatory work (i.e., that would qualify as a “good explanation” in David Deutsch’s sense)?
Consider a “brute-force search algorithm,” which cycles through all possible DNA sequences until it finds some that lead to interesting and complex organisms. Many people would say that this algorithm does not do the same sort of explanatory work as natural selection—but if so, why not? Is it because brute-force search takes exponential time? Or because the goal of “interesting and complex organisms” is too vague? Or both reasons, or something else entirely?
Likewise, what are the similarities and differences between Darwinian natural selection and the anthropic explanation for the apparent “fine-tuning” of physical constants? If the former explanation satisfies us while the latter doesn’t, then why?
Is there a puzzle about the speed of evolution? Is it reasonable to want an explanation for why evolution on Earth took roughly 4 billion years, rather than a much longer (or shorter) time? If so, what could such an explanation look like? Can Valiant’s “Evolvability” model shed any light on these questions?
What are the differences between genetic and memetic evolution?
What are the differences between natural selection and genetic algorithms, as applied (for example) to find approximate solutions to NP-hard optimization problems?
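The contrast between brute-force search and selection-based search in the questions above can be made concrete. Below is a toy sketch in which both methods maximize the “OneMax” objective (count of 1-bits in a bit string); the population size, mutation rate, and other parameters are illustrative choices, not anything canonical:

```python
import itertools
import random

def brute_force(fitness, length):
    """Exhaustive search: examines all 2**length bit strings (exponential time)."""
    return max(itertools.product([0, 1], repeat=length), key=fitness)

def genetic_algorithm(fitness, length=40, pop_size=30, generations=200,
                      mutation_rate=0.02, seed=0):
    """Toy genetic algorithm over bit strings: selection, crossover, mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population unchanged.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        # Crossover plus mutation to refill the population.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Brute force is only feasible for tiny strings; the GA handles length 40
# with a few thousand fitness evaluations.
print(sum(brute_force(sum, 12)))      # 12 (the all-ones string)
best = genetic_algorithm(fitness=sum)
print(sum(best))                      # typically at or near the optimum of 40
```

The brute-force searcher pays the full 2^n cost; the genetic algorithm exploits the fact that this fitness landscape rewards partial solutions, which is one candidate for the “explanatory work” natural selection does and blind enumeration doesn’t.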
By student request, I’ve created this post as a place for students and listeners in 6.893 to discuss any questions related to philosophy and theoretical computer science that don’t fit into the other posts.
In general, do computational complexity limitations pose a serious challenge to classical economic theory? If so, is that challenge over and above the one posed by ordinary cognitive biases? Note that there are really two components to these questions: the first is whether complexity limitations challenge the “descriptive” part of economics (by showing that real agents might not be able to compute Nash equilibria, perform Bayesian updates, maximize their expected utility, etc.). The second component is whether they challenge the “prescriptive” part: i.e., if classical economic rationality requires computations that are so far out of mortal reach, then is it even worth setting up as a target to aim for? And even supposing you had unlimited computational power, what about taking into account other agents’ lack of such power?
How should we understand the observation that the Iterated Prisoner’s Dilemma, with any fixed number of rounds, has “always defect” as its only equilibrium? Is the solution to take into account the players’ bounded memories (as Neyman proposed), or some other cognitive limitation that blocks the “grim inductive reasoning” from the last round back to the first? Or is it simply that the assumed scenario—where you interact with someone for a fixed, known number of rounds—was artificial in the first place, so it doesn’t matter if game theory gives us an “artificial” answer?
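The “grim inductive reasoning” in this question can be sketched directly: with a fixed, commonly known horizon, continuation play never depends on the current move, so every round inherits the one-shot dominance of defection. A minimal sketch, with the particular payoff numbers chosen purely for illustration:

```python
# Backward induction in the finitely repeated Prisoner's Dilemma.
# Illustrative stage-game payoffs (row player's score):
#   both cooperate -> 3; both defect -> 1;
#   unilateral defection -> 5 for the defector, 0 for the cooperator.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def dominant_action(payoff):
    """Return the strictly dominant action in the stage game, if one exists."""
    for mine in "CD":
        other = "D" if mine == "C" else "C"
        if all(payoff[(mine, yours)] > payoff[(other, yours)] for yours in "CD"):
            return mine
    return None

def backward_induction(rounds):
    # Solve from the last round backward.  Continuation play is already
    # pinned down and never depends on the current action, so each stage
    # reduces to the one-shot game, and the dominant action "D" unravels
    # all the way back to round 1.
    return [dominant_action(PAYOFF) for _ in range(rounds)]

print(backward_induction(3))  # ['D', 'D', 'D']
```

Neyman-style bounded-memory players, or any assumption that blocks common knowledge of the exact horizon, breaks the premise that continuation play is independent of the current move, which is exactly where this unraveling starts.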
What’s your favored resolution of the Surprise Examination Paradox? Do you agree with Kritchman and Raz, that the problem is basically one of the students not being able to prove their own rationality—and therefore, of their not being able to declare in advance that, if the exam came on Friday, then it wouldn’t be a surprise to them (even though, in actual fact, it wouldn’t be)? In what ways is the Surprise Examination Paradox similar to the “Iterated Prisoner’s Dilemma Paradox,” and in what ways is it different?
Do you agree or disagree with the “Common Prior Assumption” (CPA), which says that all rational Bayesian agents should have the same prior probability distribution? If you agree, then what do you make of Aumann’s Agreement Theorem: does it really teach us that “agreeing to disagree” is inherently irrational, or is there some other loophole (for example, having to do with bounded rationality)? If you reject the CPA, what do you make of the argument that, if two people really want to be rational, they both need to take into account the possibility of “having been born the other person”? (And for the philosophers: are there any interesting connections here to the views of Spinoza, or Rawls?)
The “Deutsch Argument” for the Many-Worlds Interpretation (“Where Was The Number Factored?”)
Does the possibility of scalable quantum computers prove (or rather, “re-prove”) the Many-Worlds Interpretation, as Deutsch believes? If not, does it at least lend support to MWI? Is MWI a good heuristic way to think about what a quantum computer does?
On the contrary, does MWI encourage intuitions about how a quantum computer works that are flat-out wrong? (E.g., that it can “try all possible answers in parallel”?)
If scalable quantum computers are built, do you think that will change people’s outlook on the interpretation of quantum mechanics (and on MWI in particular)?
Can MWI be tested? What about, as Deutsch once suggested, an experiment in which an artificially intelligent quantum computer was placed in a superposition over two different states of consciousness?
Does the fact that humans are open systems—that unlike quantum computers, we can’t practically be placed in coherent superposition states—mean that MWI and the Copenhagen interpretation are indistinguishable by human observers?
Would a scalable quantum computer test quantum mechanics itself in a new way? Is the possibility of scalable quantum computing (or more generally, of a violation of the Extended Church-Turing Thesis) so incredible that our default belief should be that quantum mechanics breaks down instead?
Could quantum computing be impossible in principle without quantum mechanics breaking down? If so, how?
Suppose it were proved that BPP=BQP. Would that influence the interpretation of QM? Would it undercut Deutsch’s case for MWI, by opening the door to “classical polynomial-time hidden-variable theories”?
The Evolutionary Principle and Closed Timelike Curves
Is Deutsch’s Evolutionary Principle—that “knowledge can only come into existence via causal processes”—valid? If so, how should we interpret that principle? What counts as knowledge, and what counts as a causal process?
Is Deutsch’s “Causal Consistency” requirement a good way to think about closed timelike curves, supposing they existed? Why or why not?
Does the apparent ability of closed-timelike-curve computers to solve NP-complete problems instantaneously mean that CTCs would violate the Evolutionary Principle? If so, how should we interpret this: that CTCs are physically impossible? That the Evolutionary Principle is a poor guide to physics? Or that, if CTCs exist, then some new physical principle has to come into effect to prevent them from solving NP-complete problems?
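The claim about CTC computers rests on Deutsch’s causal-consistency requirement: Nature must find a fixed point of the evolution inside the loop, and one can rig the loop so that its only fixed points are solutions to an NP search problem. Here is a toy classical simulation of that setup (the 3-variable formula and the fixed-point-hunting loop are illustrative; a real CTC would not have to iterate):

```python
# Toy classical illustration of a Deutsch-style CTC computation.
# Inside the loop, the circuit maps a candidate assignment x to x itself
# if x satisfies the formula, and to x+1 (mod 2**n) otherwise.  The only
# causally consistent histories (fixed points of the map) are satisfying
# assignments, so a CTC would hand one over "for free"; a classical
# simulation must search for the fixed point the hard way.

def ctc_map(x, n, satisfies):
    return x if satisfies(x) else (x + 1) % 2**n

def consistent_history(n, satisfies):
    """Find a fixed point of the loop map (exists iff the formula is satisfiable)."""
    x = 0
    for _ in range(2**n):           # the classical simulation pays the full cost
        if ctc_map(x, n, satisfies) == x:
            return x
        x = ctc_map(x, n, satisfies)
    return None                      # unsatisfiable: no consistent history

# Illustrative 3-variable formula: (x0 OR x1) AND (NOT x0 OR x2),
# with bit i of the integer x encoding variable xi.
sat = lambda x: (x & 1 or x & 2) and (not (x & 1) or x & 4)
print(consistent_history(3, sat))  # 2, i.e. x1=1 (a satisfying assignment)
```

If the formula is unsatisfiable, the map has no fixed point at all, which is one way of stating the tension with the Evolutionary Principle: the “knowledge” in the consistent history appears without any causal search process producing it.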