Class #11: Complexity, Economics, and Bounded Rationality

In general, do computational complexity limitations pose a serious challenge to classical economic theory?  If so, is that challenge over and above the one posed by ordinary cognitive biases?  Note that there are really two components to these questions: the first is whether complexity limitations challenge the “descriptive” part of economics (by showing that real agents might not be able to compute Nash equilibria, perform Bayesian updates, maximize their expected utility, etc).  The second component is whether they challenge the “prescriptive” part: i.e., if classical economic rationality requires computations that are so far out of mortal reach, then is it even worth setting up as a target to aim for?  And even supposing you had unlimited computational power, what about taking into account other agents’ lack of such power?

How should we understand the observation that the Iterated Prisoner’s Dilemma, with any fixed number of rounds, has “always defect” as its only equilibrium?  Is the solution to take into account the players’ bounded memories (as Neyman proposed), or some other cognitive limitation that blocks the “grim inductive reasoning” from the last round back to the first?  Or is it simply that the assumed scenario—where you interact with someone for a fixed, known number of rounds—was artificial in the first place, so it doesn’t matter if game theory gives us an “artificial” answer?
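
For concreteness, here is a minimal sketch of how the backward induction goes through, with an assumed payoff matrix (any payoffs with the standard T > R > P > S ordering behave the same way): defection is dominant in the last round; since the last round's play is therefore fixed regardless of history, the same dominance argument applies one round earlier, and so on back to the start.

```python
# One-shot Prisoner's Dilemma payoffs for the row player; the specific
# numbers are assumptions, but any T > R > P > S ordering gives the same result.
PAYOFF = {
    ('C', 'C'): 3,  # reward for mutual cooperation (R)
    ('C', 'D'): 0,  # sucker's payoff (S)
    ('D', 'C'): 5,  # temptation to defect (T)
    ('D', 'D'): 1,  # punishment for mutual defection (P)
}

def best_reply(opponent_action):
    """Action maximizing my one-shot payoff against a fixed opponent action."""
    return max('CD', key=lambda a: PAYOFF[(a, opponent_action)])

def equilibrium_play(rounds):
    """Backward induction over a known, fixed horizon."""
    plan = []
    for _ in range(rounds):
        # Defection is a best reply to both 'C' and 'D', i.e. dominant; so
        # play in later rounds is independent of the current move, and the
        # one-shot dominant action is played in every round, from the last
        # back to the first.
        assert best_reply('C') == best_reply('D') == 'D'
        plan.append('D')
    return plan

print(equilibrium_play(5))  # ['D', 'D', 'D', 'D', 'D']
```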

What’s your favored resolution of the Surprise Examination Paradox?  Do you agree with Kritchman and Raz, that the problem is basically one of the students not being able to prove their own rationality—and therefore, of their not being able to declare in advance that, if the exam came on Friday, then it wouldn’t be a surprise to them (even though, in actual fact, it wouldn’t be)?  In what ways is the Surprise Examination Paradox similar to the “Iterated Prisoner’s Dilemma Paradox,” and in what ways is it different?

Do you agree or disagree with the “Common Prior Assumption” (CPA), which says that all rational Bayesian agents should have the same prior probability distribution?  If you agree, then what do you make of Aumann’s Agreement Theorem: does it really teach us that “agreeing to disagree” is inherently irrational, or is there some other loophole (for example, having to do with bounded rationality)?  If you reject the CPA, what do you make of the argument that, if two people really want to be rational, they both need to take into account the possibility of “having been born the other person”?  (And for the philosophers: are there any interesting connections here to the views of Spinoza, or Rawls?)
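
As a toy illustration of what the Agreement Theorem rules out, here is a sketch of the announcement dynamics studied by Geanakoplos and Polemarchakis: two agents share a uniform common prior over four states but have different information partitions, and repeatedly announcing their posterior probabilities for an event forces those posteriors together. The particular state space, partitions, and event below are invented for illustration.

```python
from fractions import Fraction

# Toy Geanakoplos-Polemarchakis dynamics behind Aumann's theorem.
# State space {1, 2, 3, 4} with a uniform common prior; the partitions
# and the event are made-up assumptions.
event  = {1, 4}                 # the event whose probability both agents estimate
part_a = [{1, 2}, {3, 4}]       # what agent A can distinguish
part_b = [{1, 2, 3}, {4}]       # what agent B can distinguish

def posterior(partition, state):
    """P(event | my partition cell containing the true state)."""
    cell = next(c for c in partition if state in c)
    return Fraction(len(cell & event), len(cell))

def announced_info(partition, value):
    """States consistent with the agent announcing this posterior value."""
    return set().union(*(c for c in partition
                         if Fraction(len(c & event), len(c)) == value))

def refine(partition, info):
    """Update a partition on hearing the other agent's announcement."""
    return [piece for c in partition for piece in (c & info, c - info) if piece]

omega = 1                       # the true state of the world
while True:
    pa, pb = posterior(part_a, omega), posterior(part_b, omega)
    print(pa, pb)               # 1/2 1/3, then 1/2 1/3, then 1/2 1/2
    if pa == pb:
        break                   # posteriors agree: no agreeing to disagree
    info_a = announced_info(part_a, pa)
    info_b = announced_info(part_b, pb)
    part_a, part_b = refine(part_a, info_b), refine(part_b, info_a)
```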

19 Responses to Class #11: Complexity, Economics, and Bounded Rationality

  1. bobthebayesian says:

    Regarding the Kritchman and Raz approach to the surprise examination paradox (SEP): can the same argument basically be applied to Freiling’s symmetry argument, which has been argued to make the Continuum Hypothesis intuitively false? In the section of the linked Wikipedia page called “Freiling’s Argument”, we have:

    “Since we can predict with virtual certainty that “y is not in f(x)” after the first dart is thrown, and since this prediction is valid no matter what the first dart does, we should be able to make this prediction before the first dart is thrown. This is not to say that we still have a measurable event, rather it is an intuition about the nature of being predictable.”

    It seems like there ought to be some analogy between the kind of ‘induction’ done above, i.e. making a prediction before the first dart is thrown because of the measure zero argument, and working backwards from Friday in the SEP. I’m really just thinking out loud, so corrections or improvements would be appreciated.

  2. It seems to me that another challenge arising from complexity concerns in classical economic theory is the following.

    One often assumes that a government (or, more generally, a “designer”) has plenty of probabilistic information about the world (i.e., some sort of Bayesian prior) that it can use to engineer games that ensure certain desirable social goals by leveraging the rationality of the people.

    Yet thought is rarely (if ever) given to what might happen when that prior is only known approximately and, more importantly, to whether the distribution at hand is even likely to be one that can be approximately learned in polynomial time / with a reasonable amount of resources.

    I would be interested in hearing about works attempting to address such concerns, as I don’t know of many.

    • D.R.C. says:

      Even in situations that use Bayes nets, you would rarely have exact probabilities (except in very contrived examples). Bayes nets are essentially simplified models that capture the real world with some reasonable amount of accuracy. To keep the model of dependencies simple, they tend to ignore contributions that alter the probabilities only minimally (e.g., a store trying to estimate the number of people who will buy an item on a given day might use price, brand, and demand, which map to probabilities that a random person buys the item, while ignoring the day’s predicted weather [weather tends to play a role in shopping habits, even with items that aren’t weather-dependent]); they also ignore events that have next to no chance of happening but that could still affect the outcome. Depending on how many things you are measuring, you can end up with a lot of extraneous data, which means many possible incidental correlations (based either on shared causes or simply on time).

      As new information arrives, you can adjust the Bayes net accordingly, which is a fairly simple procedure assuming that you have a dependency diagram established (a minimal sketch of this kind of updating follows below). I think the hardest part of the entire procedure is figuring out how to set up the dependencies so that they accurately model reality. I’m sure there are methods/programs that can at least find the strongly connected dependencies relatively quickly, but I think that many times humans come up with theories, use computers to test whether those variables are in fact related, build up a basic model from that, and then use the computer to test other variables that seem related and see where they fit in the existing model.
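
      To make the updating step concrete, here is a minimal sketch with invented variable names and data: a one-edge model Price -> Buy whose conditional probability table is re-estimated (with Laplace smoothing) as new observations arrive.

      ```python
      from collections import Counter

      # (price_level, bought) -> number of observations seen so far
      counts = Counter()

      def observe(price, bought):
          counts[(price, bought)] += 1

      def p_buy_given(price):
          """Smoothed estimate of P(buy = True | price)."""
          yes = counts[(price, True)] + 1                             # Laplace: +1
          total = counts[(price, True)] + counts[(price, False)] + 2  # and +2
          return yes / total

      # Invented observations arriving over time
      for price, bought in [('low', True), ('low', True), ('high', False),
                            ('low', False), ('high', False), ('high', True)]:
          observe(price, bought)

      print(p_buy_given('low'))   # 0.6 = (2 + 1) / (3 + 2)
      print(p_buy_given('high'))  # 0.4 = (1 + 1) / (3 + 2)
      ```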

  3. D says:

    Sorry if I seem to be a bit skeptical here….

    How should we understand the observation that the Iterated Prisoner’s Dilemma, with any fixed number of rounds, has “always defect” as its only equilibrium?

    By asserting that the model is inaccurate for real-world situations. Possibly this is due to the memory-limitation argument stated in the post, but more likely the payoff matrix does not represent the true payoffs for real-world actors, because of effects outside the model. For example, there are reputation effects: even if you only interact with one other person a fixed number of times, gaining a reputation as a defector may hinder your interactions with others, either in the IPD or in other games. There is also the individual moral compass: if one views defecting as taking advantage of the other person, there may be a substantial negative payoff associated with being the type of person who takes advantage of others.

    What’s your favored resolution of the Surprise Examination Paradox?

    “The teacher is a liar.”
    Honestly, that is my favored resolution. Just because a person in authority says something doesn’t mean it is true or even sensical; the assertion that “you won’t know what day the test will be” doesn’t necessarily have to be taken as anything more than a slightly-more-complicated version of “this statement is false.”

    Do you agree or disagree with the “Common Prior Assumption” (CPA), which says that all rational Bayesian agents should have the same prior probability distribution?

    Disagree; it is non-intuitive to me, and I have not heard any case for why it should be so.

    • bobthebayesian says:

      I’m sympathetic to your first two points. I also prefer to resolve the surprise exam paradox by saying that the teacher has uttered a falsehood (i.e. if I take the teacher’s statement as an axiom, then either there will be no exam or else I will not be surprised when it comes). But I do like the Kritchman and Raz idea: you can prove that the exam won’t happen on a given day but you can’t prove that you will know that the exam will not happen on any other day. If such certainty is what we want when we say “surprise” then this seems to be an adequate resolution to me.

      Regarding the last issue, though, I disagree with you. If you know that some agent is really rational and Bayesian, that’s already saying quite a lot. If you query them for data about their current beliefs (what we’re calling the prior here), then you know that those beliefs come from updating correctly on hard evidence. Thus, merely hearing the opinion of the other person would count as very strong evidence to you and cause you to update your beliefs as well. If either of you was previously making a mistake, then it would have to be resolved in such a manner. It wouldn’t be rational for you to agree to disagree, in an idealized sense. (Obviously, burning energy or spending time to disagree might not be the most rational thing to do, but we’re not talking about a socially practical kind of “agree to disagree” here, only an idealized sense in which you accept mutually exclusive points of view as both being valid.)

      I think this sort of thing is a strong reason to accept the CPA. If interested in additional reading, check out this paper by Robin Hanson about pre-priors and the origins of prior distributions; this was discussed quite a bit in the lecture. I also found this discussion to be useful as a companion to actually reading the paper.

  4. Katrina LaCurts says:

    What’s your favored resolution of the Surprise Examination Paradox?

    The teacher’s statement is false. To me, this problem is only a paradox because we know that surprise exams exist based on our own experience, and yet this paradox seems to show they’re impossible. It’s resolved if we just change the teacher’s phrasing to “You’ll have an exam next week and I’m not going to tell you when”. Sure, if the students haven’t had an exam by Friday morning, then they know it’s going to happen Friday. But any day before that will be a surprise.

    There are a few parallels between this and the Iterated Prisoner’s Dilemma, most obviously that the arguments for the two are the same (prove that there can’t be an exam Friday, or that we must defect on the last day, and then work backwards). Second, the IPD seems unnatural in the same way that the surprise examination paradox is, but can also be resolved in a simple manner: the prisoners are maximizing an unrealistic utility function. In the real world, our utility function would take into account our reputation; this isn’t captured by the utility function in the IPD.

    To me, IPD is less of a paradox than the surprise examination paradox. If the utility functions in the IPD were accurate, i.e., our reputation really was not affected, I see no reason to be surprised by the “all defect” result. Why worry about constantly defecting if no one will ever know about it (aside from pesky moral considerations, also not captured by the utility function)?

    • Scott says:

      Why worry about constantly defecting if no one will ever know about it (aside from pesky moral considerations, also not captured by the utility function)?

      Well, one reason to worry about it is that both players will do much worse even for themselves than if they simply ignored the inductive argument for defecting and played TIT-FOR-TAT—and furthermore, both players can clearly see that they’ll do much worse! Of course, this “paradox” (or “tragedy,” or whatever you want to call it) exists even in the single-shot Prisoner’s Dilemma, but repeating the game 200 times seems to make it much more vivid.
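
      To put numbers on that, here is a quick simulation with an assumed payoff matrix (any T > R > P > S with 2R > T + S gives the same picture):

      ```python
      # Per-round payoffs (row player, column player); the numbers are assumptions.
      PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
                ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

      def tit_for_tat(opp_history):
          return opp_history[-1] if opp_history else 'C'  # cooperate, then mirror

      def always_defect(opp_history):
          return 'D'

      def play(strat1, strat2, rounds=200):
          h1, h2, total1, total2 = [], [], 0, 0
          for _ in range(rounds):
              a1, a2 = strat1(h2), strat2(h1)  # each sees the other's history
              p1, p2 = PAYOFF[(a1, a2)]
              total1, total2 = total1 + p1, total2 + p2
              h1.append(a1); h2.append(a2)
          return total1, total2

      print(play(tit_for_tat, tit_for_tat))      # (600, 600)
      print(play(always_defect, always_defect))  # (200, 200)
      ```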

      Also, regarding the Surprise Examination Paradox: on your account, I still don’t understand why an exam on Thursday would come as a “surprise” to the students.

  5. kasittig says:

    I believe that it’s completely reasonable for the Iterated Prisoner’s Dilemma to give us the bleak equilibrium of “always defect”. Whenever I’ve been told of the Prisoner’s Dilemma, it was to highlight the “natural” tendencies of humans to be pessimistic and mistrusting (“natural” because it’s ostensibly the optimal choice according to game theory) and to argue that, in general, it is better to be cooperative than it is to defect, because mutual cooperation is the only way to break free of the terrible fate of the theoretically most stable outcome.

    I think that this brings to light an even more interesting question – what can we actually predict, if humans can (and sometimes do!) alter the true equilibrium by making completely “irrational” choices? Perfectly rational beings are a simplification that is often applied to economics, but is this simplification really reasonable? There appears to be significant societal pressure to cooperate and “be nice” in many cases – is it really all that useful to study models that don’t take this into account?

    The problems that we discussed last Wednesday lead us to cute, mathematically satisfying answers – but is this really why we keep studying the Prisoner’s Dilemma? It seems like a greater social factor may be at play, and that this factor is probably something that we can’t even begin to quantify (or model).

  6. nemion says:

    I am interested in drawing attention to the relationship between the topics discussed here and the efficient market hypothesis. Criticism of efficient markets has usually taken the form of behavioral economics, particularly drawing attention to the inaccuracy of expected utility theory. In particular, it seems that people often make decisions based on approximate rules of thumb, not strict logic.

    I do not recall the name of the following thought experiment but I think it is interesting to take into account:

    Suppose I come to you and tell you to choose between the following games:

    1) You will throw a coin:
    If it appears heads I give you 10 dollars.
    If it appears tails I give you nothing

    2) You throw a coin
    I give you one dollar regardless of the result

    Now consider the following two games:
    1) You will throw a coin:
    If it appears heads I give you 10 million dollars.
    If it appears tails I give you nothing

    2) You throw a coin
    I give you one million dollars regardless of the result

    Following normal expected-return theory, we should pick game 1) in both cases. Nevertheless, when the experiment is run with a variety of individuals, it shows that people consistently pick 1) and 2) respectively. I claim that although bounded rationality can account for many of the phenomena we see in economics that lie outside the efficient market hypothesis, the experiment I just mentioned might not be accounted for without a more sophisticated behaviorally based theory of economics. It can indeed be modeled mathematically, but it is not fully accounted for by bounded rationality.
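
    For what it’s worth, the pattern above is consistent with maximizing expected utility, as opposed to expected return, under a concave (risk-averse) utility of total wealth. A quick check, where the logarithmic utility and the $1,000 starting wealth are assumptions made purely for illustration:

    ```python
    import math

    WEALTH = 1_000  # assumed starting wealth

    def expected_log_utility(outcomes):
        """outcomes: list of (probability, payoff) pairs."""
        return sum(p * math.log(WEALTH + x) for p, x in outcomes)

    small_gamble = [(0.5, 10), (0.5, 0)]
    small_sure   = [(1.0, 1)]
    big_gamble   = [(0.5, 10_000_000), (0.5, 0)]
    big_sure     = [(1.0, 1_000_000)]

    # True: take the small gamble, matching the observed choice of 1)
    print(expected_log_utility(small_gamble) > expected_log_utility(small_sure))
    # False: take the sure million, matching the observed choice of 2)
    print(expected_log_utility(big_gamble) > expected_log_utility(big_sure))
    ```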

  7. Here is a suggested resolution for the Surprise Examination Paradox: the “surprise” comes when, on Thursday, the teacher announces that there is no exam. At *this* point you receive the bit of information that permits you to infer that the exam is on Friday; although at all points afterwards the presence of the exam is not a surprise, at that very moment you were surprised. So what we’ve done is claim that there is no difference between “five minutes before Friday’s exam” and “the moment when the teacher tells you that there is no exam on Thursday.” I’m not sure if this explanation is entirely satisfactory.

    • Scott says:

      I don’t understand how this solves the problem. If the exam comes on Friday then it won’t be a surprise, so by assumption, it CAN’T be on Friday. So if, on Thursday, the teacher pushes the exam to Friday, then hasn’t the teacher in so doing contradicted her earlier statement that the exam will be a surprise?

      Should I take this to mean you’re in the camp that says the teacher simply contradicted herself, and there can’t be a surprise exam?

      • The idea is to treat this as an information-theoretic problem: You’re receiving a stream of five bits from the teacher, and furthermore, you know that one of the bits is a one, and that when you receive *that* bit, it will be a surprise. There are two claims here:

        1. The teacher was talking nonsense when he claimed that you will be surprised by receiving a one bit,

        2. Until you receive the fourth bit, there is more than one possible state of the world, and thus you are fully capable of being “surprised”, in that you will receive information that will collapse the number of possible states to one (see the sketch below).
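
        Here is a small sketch of this way of counting surprise, assuming a uniform prior over the five days: the surprise of the exam landing today, given that it hasn’t happened yet, is -log2 P(today | not yet).

        ```python
        import math

        days = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri']
        for i, day in enumerate(days):
            remaining = len(days) - i                 # days still possible
            surprise_bits = -math.log2(1 / remaining)
            print(f'{day}: {surprise_bits:.2f} bits of surprise')
        # Mon: 2.32, Tue: 2.00, Wed: 1.58, Thu: 1.00, Fri: 0.00 -- by Friday
        # the exam carries no information, which is exactly the teacher's problem.
        ```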

  8. amosw says:

    Regarding the Common Prior Assumption: the easiest way to find a bad programmer is to walk into a lab and find the guy who is swearing at his console. Bonus points if he is sputtering “that’s impossible!” at his debugger.

    The easiest way to find a master programmer is to find the gal who has detachment from her prejudices and who immediately updates her beliefs about her program when she receives a report from a debugger.

    This is a critical part of the education of a systems computer scientist: learning how to let go of one’s ideas about how the computer should be acting and to carefully study how the computer is actually acting. Many people, and for some reason especially physical scientists, have a very hard time with this. I have seen people with many years of programming experience cussing at their console instead of calmly saying “what my debugger is telling me is a near-perfect Bayesian reconstruction of reality and how should I update my beliefs given this new information?” Note here that I am talking about computer systems that are not experiencing hardware or operating system flaws.

    I mention this because interacting with a debugger is the closest I think most of us ever come to interacting with a perfect Bayesian. That is, when we ask our debugger to give us a stack trace, it is doing inference. It is possible that memory has been corrupted, that the program was written in self-modifying assembly language and didn’t follow the Application Binary Interface conventions for how to call functions, that a demon has inhabited the translation lookaside buffer, et cetera. In the face of this, the debugger does the best job it can to give you a maximum a posteriori estimate of the state of the program. Watching a master programmer interact with a good debugger is like watching two near-perfect Bayesians interact, as information flows both ways when the master programmer suggests that the debugger should modify its prior by typing in a command that suggests that a given convention was used by the program. Let me tell you, they converge rapidly.

    This suggests an operational test of how much we respect someone. We have an argument (interaction) with that someone, and then each of us estimates a posterior. The KL divergence between our posteriors is a measure of the degree to which we believe that they are perfect Bayesians, under the assumption that we are perfect Bayesians. (Aren’t we all?)
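
    For concreteness, here is that operational test in code, with two invented example posteriors:

    ```python
    import math

    def kl_divergence(p, q):
        """D(P || Q) in bits; assumes q[i] > 0 wherever p[i] > 0."""
        return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    mine   = [0.7, 0.2, 0.1]   # my posterior after the argument
    theirs = [0.6, 0.3, 0.1]   # their posterior after the argument

    print(kl_divergence(mine, theirs))  # ~0.039 bits: we nearly converged
    ```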

  9. Hagi says:

    I agree that the IPD with a fixed number of rounds is similar to the SEP, as both use inductive reasoning backwards from the end. I also see the IPD with a fixed number of rounds as similar to the single PD, since both have a sharp deadline after which the decisions you have made cannot hurt you. In this sense the IPD with fixed rounds and the single PD are the same up to scaling by the number of rounds. I’m not sure I am convinced by the use of cognitive limitations to explain the difference between the IPD with a fixed versus non-fixed number of rounds, as I’m not sure how true it is that the parties would not be able to keep a count of the number of rounds.

    About the SEP: it seems to me that if the students do not use inductive reasoning to conclude that what the teacher has said cannot be right, then they will be surprised to have the exam on any day but Friday. If they make the inductive argument and decide the teacher cannot give them an examination that week, then again they will be surprised when they are given the test. However, if they start every day thinking the test must be that day, since through inductive reasoning it cannot be on the following days, then they will not be surprised no matter what day they are given the test. I’m not sure this has much significance, though, since their reasoning is falsified if the teacher does not give the examination on Monday. At the end of Monday, they assume it will be on Tuesday and that it will not be a surprise when they get it on Tuesday; however, their reasoning should not be accepted anymore, as it is not consistent. So they might not think they are surprised, but from my perspective I’d say it is a surprise.

  10. Miguel says:

    The hardness of convex optimization has profound consequences for neoclassical economic analysis beyond the bounded rationality critique. Optimization lies at the heart of every model in modern micro and macro, and so its intractability poses a major challenge for economics as it is known today.

    Yet economists are dismissive of the challenge: they claim that ‘real’ economic systems do equilibrate, and so to the extent that the outcomes of intractable optimization problems fit observed data, the ‘true’ dynamics do not matter. This, however, makes for a bad explanation in Deutsch’s sense: by abstracting out the problem of equilibration, one is in fact sweeping it under the carpet. Furthermore, the ‘true’ dynamics are subject to complexity restrictions too, and so if equilibration is ‘real’ it surely does not occur via mechanisms anywhere close to those posited by economic analysis, which casts doubt on their empirical validity. One is also hard-pressed to justify the deductive leap of validating a large hypothesis class containing physically unrealizable hypotheses (e.g., the utility-maximization view in the revealed-preference tradition) on the basis of finding one good tractable hypothesis from the class that fits the data in some limited way (e.g., Afriat’s theorem): why not invoke Occam’s Razor and search for a simpler explanation of the data? Besides, the empirical evidence for equilibration and utility maximization in ‘real’ economic systems is sketchy at best.

  11. sbanerjee says:

    Given the proposed intractability of acting according to classical economics, I find that using theory-of-mind calculations to determine outcomes is more reasonable. Instead of assuming rational agents with perfect information [more of a bottom-up approach], why not take the model where rational agents recursively calculate how their opponents are thinking [a top-down approach, but one that can be limited to a certain number of steps to make the model computationally tractable]? This model is discussed in the paper “Game Theory of Mind” by Yoshida et al. [http://www.fil.ion.ucl.ac.uk/~karl/Game%20Theory%20of%20Mind.pdf].

    In the paper, Yoshida et al. use inference on generative models to see how outcomes change based on the number of steps the game is played out to. I might be mistaken about Neyman’s stance, but this game-theory-of-mind approach seems slightly different from Neyman’s approach mentioned in WPSCACC, since it uses generative models and takes a probabilistic approach to inferring a player’s actions. I think this difference enables a more realistic model of the players, since humans can generate an extraordinarily large set of outcomes but usually end up sampling from that generative distribution instead of going through all outcomes of the model [which seems to be what Neyman’s finite-automata approach attempts to do], enabling a better approximation of human reasoning to be made computationally tractable using probabilistic programming. (A toy version of this kind of bounded recursion is sketched below.)
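
    As a toy illustration of that kind of bounded recursion (far simpler than Yoshida et al.’s generative models), here is level-k reasoning in the classic “guess 2/3 of the average” game; the level-0 anchor of 50 is an assumption:

    ```python
    # A level-0 player guesses the midpoint of [0, 100]; a level-k player
    # best-responds to a population of level-(k-1) players.
    def level_k_guess(k):
        if k == 0:
            return 50.0               # assumed naive anchor
        return (2 / 3) * level_k_guess(k - 1)

    for k in range(6):
        print(k, round(level_k_guess(k), 2))
    # 0 50.0, 1 33.33, 2 22.22, 3 14.81, 4 9.88, 5 6.58 -- unbounded recursion
    # drives the guess to the Nash equilibrium of 0, while capping k at a small
    # value is a much better match to what people actually play.
    ```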

    • bobthebayesian says:

      “How should we understand the observation that the Iterated Prisoner’s Dilemma, with any fixed number of rounds, has “always defect” as its only equilibrium? Is the solution to take into account the players’ bounded memories (as Neyman proposed), or some other cognitive limitation that blocks the “grim inductive reasoning” from the last round back to the first? Or is it simply that the assumed scenario—where you interact with someone for a fixed, known number of rounds—was artificial in the first place, so it doesn’t matter if game theory gives us an “artificial” answer?”

      In thinking about this question, I came across an interesting paper that tests how anonymity factors into cooperation strategy in iterated games. As expected, when decisions were made public at the end of each round, participants were more likely to cooperate. When only the total number of defectors was revealed, but not their identities, more defection happened.

      It is also mentioned in the paper that group size plays a role in cooperate/defect scenarios. As group size increases, defection becomes more prevalent. At first, this seems very obvious because we feel that smaller groups imply less “social distance” between the participants. In a larger group, you feel more anonymous or “deindividuated” and hence less socially/directly responsible for outcomes. Just think of the tragedy that happened earlier this year in China where a young child was hit by two cars while onlookers did nothing to help. Similar events have been reported in the US where onlookers use cell phones to photograph or videotape an accident without reporting it. Many studies have indeed confirmed this idea that individuals feel “shared responsibility” when among a crowd and thus less direct impetus to act. (This is important to keep in mind if you’re ever in a crisis. It’s probably better to point right at one bystander and appeal for help. By singling someone out, you’re more likely to induce a feeling of responsibility. By making global pleas, everyone will continue to feel that it’s not their responsibility.)

      What’s not obvious about the group size dependence is that there is a “pure” size effect, that is decoupled from social role, utility, and status effects [see Hamburger, Guyer, and Fox, “Group size and cooperation.” J. of Conflict Resolution. 1975]. Some ideas for why this happens are: (1) with a larger group size, there’s just an inherently higher probability of a “bad apple” and since the defect strategy quickly ruins an all-cooperate strategy even if only one agent is using it, all it takes is one bad apple… and (2) small groups are usually intrinsically less anonymous.

      What this says to me is that we should probably require explicit assumptions about anonymity, group size, and group structure in our game theory. In situations where we truly only face a competitor for a finite number of rounds, and our interaction is sufficiently anonymous (perhaps even to perceived judgement from scientists conducting the experiment), then we should probably expect that the all-defect strategy will be highly prevalent in human behavior.

      I’ve been trying to think of some good real-life examples where you truly do face a competitor only once or a small number of times and you are anonymous. How often do people cheat when playing online games? How often do anonymous posters leave rude comments that they would never make if their real identity were attached? I think this anonymity/reputation effect needs to be accounted for before we worry about whether classical all-defect results are problematically different from human experience.

      On OvercomingBias, Robin Hanson has been on a riff lately with his theory of what he calls homo hypocritus — that humans have evolved to want to be perceived as obeying cultural norms in public but then to privately violate cultural norms indiscriminately. My thought is that classical game theory seems to only focus on the private, hypocrite mode, while in real life it is very hard to ever get an experiment that tests this “pure” mode and we almost always have some mixing with the public-appearance mode going on.

      FWIW, Hanson has some very interesting evidence of this as well. I particularly liked his recent quote about this effect in academia: “Academics talk as if academia is all about the research progress, but in fact it is more about “authorizing” the academics. That is, about credentialing their impressiveness, so that others can affiliate with credentialed impressive folks.”

  12. Cuellar says:

    I think there is another very interesting problem with infinitely repeated games, proved by Weinstein and Yildiz. The summarized result says that for every game continuous at infinity, and every rationalizable strategy a, there is a perturbation of the players’ types as small as you want (and this is the important part) that makes a the ONLY rationalizable solution.

    I think the problem here is very troubling. ANY perturbation to the players’ types can completely collapse the set of rationalizable strategies. That means that, in analyzing a game, any error in understanding the types completely changes the possible outcomes. Notice that a type includes the player’s beliefs and understanding of the game, so people rarely even know their own type with precision!

    I guess one could give the same answer that we gave for the Prisoner’s Dilemma: the idealization of an infinite number of rounds is artificial in the first place, so it is normal to get artificial answers. The scary point is that this result wasn’t published until 2009 (revised in 2010), and it seemed like the theory was working in practice before that (or was it?).

  13. wjarjoui says:

    In this class and the readings, it was really interesting to find out about search problems that would be NP-complete except that their decision versions are trivial, and hence they cannot technically be in that class.

    Regarding Neyman’s proposition for the Iterated Prisoner’s Dilemma – it seems that Neyman’s proposition is simply one way of actually not telling the players the number of rounds. The only difference this would have from the original scenario (where the players do not know anything about the number of rounds, and they don’t have any extra computational limitations) is that the players would know that there will be a final round. What the players will also know, in each round i where i + k < N, is that the final round is not going to be in the next k steps; hence they will maximize their payoffs. This is essentially like telling the players in each round “you don’t know when the game is going to end,” until one round i where i + k = N, at which point you say “the game ends in k rounds.” After that round, the players will defect in each round.
    Hence, does the limitation that Neyman proposed really solve the problem? (And is it practical?) I believe the answer is yes only if we are trying to model agents whose memory is so small that they can’t count, and otherwise the answer is no. (A sketch of such a small-memory strategy follows below.)
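
    To illustrate what such a memory-bounded strategy looks like, here is TIT-FOR-TAT written as a two-state automaton; the point is that a strategy like “defect from round N on” needs on the order of N states just to count, which a small automaton simply doesn’t have.

    ```python
    # state -> (action to play, {opponent's last move: next state})
    TIT_FOR_TAT = {
        'nice':  ('C', {'C': 'nice', 'D': 'angry'}),
        'angry': ('D', {'C': 'nice', 'D': 'angry'}),
    }

    def run(automaton, opponent_moves, start='nice'):
        state, actions = start, []
        for opp in opponent_moves:
            action, transitions = automaton[state]
            actions.append(action)
            state = transitions[opp]  # remember only the opponent's last move
        return actions

    print(run(TIT_FOR_TAT, ['C', 'D', 'C', 'C']))  # ['C', 'C', 'D', 'C']
    ```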

    On another note, as opposed to some of the ideas in class that the Iterated Prisoner’s Dilemma does not model the real world well because the payoff matrix does not take reputation with other players into account, I believe it does. The reason is that the two players are not necessarily two humans: they can be a human and his society, or his colleagues. Perhaps each round he plays against a different colleague, with repetition, and he is able to remember which colleague played in which round.

    I found an interesting read that touches on both the paradox of the surprise exam and the Prisoner’s Dilemma. The author first shows why the original statement the teacher makes (in the paradox) is false, and hence why the students reach the false conclusion, and then relates it to the Prisoner’s Dilemma. The article seemed like an exact response to our class, so I thought I’d mention it; you can find it here: http://www.cs.ucdavis.edu/~defigued/index_files/new-surprise-exam-solution.pdf
