In general, do computational complexity limitations pose a serious challenge to classical economic theory? If so, is that challenge over and above the one posed by ordinary cognitive biases? Note that there are really two components to these questions: the first is whether complexity limitations challenge the “descriptive” part of economics (by showing that real agents might not be able to compute Nash equilibria, perform Bayesian updates, maximize their expected utility, etc). The second component is whether they challenge the “prescriptive” part: i.e., if classical economic rationality requires computations that are so far out of mortal reach, then is it even worth setting up as a target to aim for? And even supposing you had unlimited computational power, what about taking into account other agents’ lack of such power?
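To make the "descriptive" worry concrete, here is a toy sketch (names like `pure_nash` are my own, for illustration): even *checking* all pure-strategy profiles of a two-player game by brute force scales with the product of the strategy-set sizes, and for mixed equilibria in general games the problem is PPAD-complete, so no polynomial-time algorithm is known.

```python
from itertools import product

def pure_nash(payoffs_a, payoffs_b):
    """Brute-force search for pure-strategy Nash equilibria in a bimatrix
    game.  A profile (i, j) is an equilibrium iff neither player can gain
    by deviating unilaterally.  The exhaustive search already hints at the
    complexity worry raised above."""
    rows = range(len(payoffs_a))
    cols = range(len(payoffs_a[0]))
    equilibria = []
    for i, j in product(rows, cols):
        best_row = all(payoffs_a[i][j] >= payoffs_a[k][j] for k in rows)
        best_col = all(payoffs_b[i][j] >= payoffs_b[i][k] for k in cols)
        if best_row and best_col:
            equilibria.append((i, j))
    return equilibria

# Prisoner's Dilemma: strategy 0 = cooperate, 1 = defect.
A = [[3, 0], [5, 1]]  # row player's payoffs
B = [[3, 5], [0, 1]]  # column player's payoffs
print(pure_nash(A, B))  # [(1, 1)]: mutual defection
```

For two players this is cheap; the point is that nothing like it survives scaling, which is one way of cashing out the challenge to the "descriptive" part of the theory.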
How should we understand the observation that the Iterated Prisoner’s Dilemma, with any fixed number of rounds, has “always defect” as its only equilibrium? Is the solution to take into account the players’ bounded memories (as Neyman proposed), or some other cognitive limitation that blocks the “grim inductive reasoning” from the last round back to the first? Or is it simply that the assumed scenario—where you interact with someone for a fixed, known number of rounds—was artificial in the first place, so it doesn’t matter if game theory gives us an “artificial” answer?
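The backward-induction argument behind "always defect" can be sketched in a few lines (an illustrative toy, with my own function names, not a general game solver):

```python
# Standard Prisoner's Dilemma payoffs for the row player:
PAYOFF = {("C", "C"): 3, ("C", "D"): 0,
          ("D", "C"): 5, ("D", "D"): 1}

def one_shot_dominant():
    """Return the strictly dominant move in the one-shot game, if any."""
    for m in "CD":
        other = "D" if m == "C" else "C"
        if all(PAYOFF[(m, o)] > PAYOFF[(other, o)] for o in "CD"):
            return m
    return None

def backward_induction(rounds):
    """In the final round there is no future to influence, so each player
    faces the one-shot game and plays its dominant move (defect).  Given
    that the continuation is thereby fixed regardless of today's choice,
    every earlier round is effectively one-shot too, so defection
    propagates all the way back to round 1."""
    return [one_shot_dominant()] * rounds

print(backward_induction(5))  # ['D', 'D', 'D', 'D', 'D']
```

Note what the argument quietly assumes: the horizon is exactly known, and both players can carry the induction through all `rounds` steps, which is precisely where proposals like Neyman's bounded-memory (finite-automaton) players intervene.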
What’s your favored resolution of the Surprise Examination Paradox? Do you agree with Kritchman and Raz that the problem is basically one of the students not being able to prove their own rationality—and therefore, of their not being able to declare in advance that, if the exam came on Friday, then it wouldn’t be a surprise to them (even though, in actual fact, it wouldn’t be)? In what ways is the Surprise Examination Paradox similar to the “Iterated Prisoner’s Dilemma Paradox,” and in what ways is it different?
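The students' elimination argument has the same back-to-front shape as the IPD induction. Here is a deliberately naive sketch (my own modeling choice: "no surprise" on the last remaining candidate day, since it would be fully predicted the evening before):

```python
DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]

def eliminate(candidates):
    """The students' 'grim induction': if the exam hasn't happened by the
    last remaining candidate day, it would be fully predicted and hence no
    surprise, so strike that day; repeat on what's left.  The process
    strikes every day from the back -- which is exactly the paradox, since
    the teacher can then hold a surprise exam on any day after all."""
    days = list(candidates)
    struck = []
    while days:
        struck.append(days.pop())  # the last remaining day can't surprise
    return struck

print(eliminate(DAYS))  # ['Fri', 'Thu', 'Wed', 'Tue', 'Mon']
```

The Kritchman–Raz point, on this picture, is that the step inside the loop is not actually available to the students: carrying it out requires them to prove something about their own future knowledge that provability logic forbids.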
Do you agree or disagree with the “Common Prior Assumption” (CPA), which says that all rational Bayesian agents should have the same prior probability distribution? If you agree, then what do you make of Aumann’s Agreement Theorem: does it really teach us that “agreeing to disagree” is inherently irrational, or is there some other loophole (for example, having to do with bounded rationality)? If you reject the CPA, what do you make of the argument that, if two people really want to be rational, they both need to take into account the possibility of “having been born the other person”? (And for the philosophers: are there any interesting connections here to the views of Spinoza, or Rawls?)
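The dynamics behind Aumann's theorem can be made concrete with a Geanakoplos–Polemarchakis-style dialogue: two agents share a prior, each observes her own partition cell, and they announce posteriors back and forth, each refining her information by what the other's announcement reveals, until the posteriors coincide. A toy sketch (the state space, partitions, and function names below are my own illustrative choices):

```python
from fractions import Fraction

STATES = [1, 2, 3, 4]
PRIOR = {s: Fraction(1, 4) for s in STATES}  # the common prior
EVENT = {1, 4}                               # the event they discuss

def posterior(info_set):
    """P(EVENT | info_set) under the common prior."""
    total = sum(PRIOR[s] for s in info_set)
    return sum(PRIOR[s] for s in info_set if s in EVENT) / total

def cell(partition, state):
    """The cell of `partition` containing `state` (the agent's evidence)."""
    return next(c for c in partition if state in c)

def refine(partition, consistent):
    """Split each cell by the states consistent with an announcement."""
    return [piece for c in partition
            for piece in (c & consistent, c - consistent) if piece]

def dialogue(part_a, part_b, true_state, max_rounds=10):
    """Agents alternate announcing posteriors; each learns on which states
    the other would have announced the same number, and refines her
    partition accordingly, until the announcements agree."""
    pa = pb = None
    for _ in range(max_rounds):
        pa = posterior(cell(part_a, true_state))
        pb = posterior(cell(part_b, true_state))
        if pa == pb:
            break
        same_a = {s for s in STATES if posterior(cell(part_a, s)) == pa}
        same_b = {s for s in STATES if posterior(cell(part_b, s)) == pb}
        part_a = refine(part_a, same_b)
        part_b = refine(part_b, same_a)
    return pa, pb

# Agent A can distinguish {1,2} from {3,4}; agent B distinguishes {1,2,3}
# from {4}.  They start at posteriors 1/2 vs 1/3 and converge to 1/2.
print(dialogue([{1, 2}, {3, 4}], [{1, 2, 3}, {4}], true_state=1))
```

Running the dialogue, the initial disagreement (1/2 vs 1/3) disappears after a few announcements, as the theorem predicts given the common prior; what the sketch makes vivid is how much the conclusion leans on that shared prior and on each agent's ability to compute what the other's announcement implies.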