Class #12b: Newcomb’s Paradox and Free Will

Do you believe the superintelligent “Predictor” of Newcomb’s Paradox could exist?  Does your answer depend on specific properties of the laws of physics (such as the uncertainty principle)?  Does it depend on the assumption that humans are “open systems”, in constant interaction with their external environment?  Does it depend on subtleties in even defining the Predictor (for example, to what extent it’s allowed to alter the state of your brain while scanning it)?

Do you consider yourself a one-boxer or a two-boxer?  Why?  Or do you reject the entire premise of Newcomb’s Paradox on logical grounds?

What do you make of the idea that, if Newcomb’s Predictor existed, then you’d need to think in terms of two different “instantiations” of you, which nevertheless acted in sync and were “controlled by the same will”: (1) the “real” you, and (2) the simulated version of you being run on the Predictor’s computer?  Does this proposal do irreparable violence to causality, locality, or other principles of physics?  If so, is the “benefit” (justifying one-boxing) worth the cost?

What’s the role of probabilistic or game-theoretic considerations in Newcomb’s Paradox?  Does the probabilistic variant of the paradox (discussed in class) raise any issues different from those raised by the deterministic version?

If Newcomb’s Predictor existed, would that create new and serious problems for the everyday notion of “free will”?  (Feel free to divide this into two sub-questions: whether it would change how people thought about their free will, and whether it ought to.)

More generally, what’s the relationship between Newcomb’s Paradox and the “traditional” problem of free will?  Are there aspects of one problem that aren’t captured by the other?

Apart from what we discussed in class, can you suggest any computer-science examples or metaphors that clarify aspects of the free-will issue?


64 Responses to Class #12b: Newcomb’s Paradox and Free Will

  1. Silas Barta says:

    I think we should put to rest the idea that Newcomb’s problem can’t be instantiated in this world. At the very least, you can write a program capable of outputting an answer given a problem description [within a certain range], and feed it the Newcomb setup. A human capable of reading the program’s source code plays the role of Omega and can choose whether to fill the sealed box. The program then runs and gets an outcome. With that setup, you’ve captured all the essential aspects, including that of a predictor who is much better at predicting an agent’s actions than the agent.
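A minimal sketch of the setup Silas describes (hypothetical function names; here the code-reading "Omega" predicts by simply simulating the player's code):

```python
# Hypothetical sketch: a deterministic program as the Newcomb "player", with
# an Omega that "reads the source" by running it before committing the box.
def player():
    return "one-box"   # this player is a hard-coded one-boxer

def omega(player_fn):
    prediction = player_fn()   # Omega simulates the player in advance
    return 1_000_000 if prediction == "one-box" else 0

opaque_contents = omega(player)   # Omega commits before the player chooses
choice = player()
payout = opaque_contents if choice == "one-box" else opaque_contents + 1_000
```

Because the player is deterministic and Omega can run the very same code, the prediction here is perfect, which is the essential feature of the thought experiment.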

    Furthermore, you can always relax the constraint of a perfect predictor, and simply use a “highly accurate” predictor, which is a lot more realistic, and still carries most of the philosophical and game-theoretic implications of the canonical Newcomb’s problem. I’ve compiled a list of real-life Newcomb-like problems, which retain the critical aspects that a) the “Omega” counterpart decides for you based on “who you are”, and b) there are no *causal* benefits to (the equivalent of) one-boxing.

    Also, have you read Yudkowsky’s Timeless Decision Theory paper, which addresses a lot of your questions, such as about choosing for multiple similar instantiations?

    • Scott says:

      Hi Silas,

      I’m aware of Eliezer’s “Timeless Decision Theory” and will read it as soon as I have a chance!

      What’s your argument for why we should “put to rest the idea that Newcomb’s problem can’t be instantiated in this world”? It’s obvious that one can have other situations that are analogous to Newcomb in one or more respects (e.g., game-theoretic problems, or Newcomb with a computer program in place of the human subject)—but the entire question at issue is whether the original Newcomb problem is interestingly different from those situations. If you think it isn’t, then at the least, I think you need to offer some argument about physics—e.g., explaining why you think that the uncertainty principle, or the interactions between brains and their external environments, aren’t going to present fundamental obstacles to “highly accurately” predicting the future behavior of a specific brain.


      • Silas Barta says:

        In the situation I described, where a computer program plays Newcomb’s game with a human code-reader as Omega, that computer (running the program) also interacts with its external environment, and is vulnerable to e.g. being kicked, circuits misfiring, cosmic rays flipping bits, etc., which can make it behave differently than what the source code says it will do. Nevertheless, the human can probably predict the program’s decision almost perfectly.

        The entity in the role of Omega only needs to simulate things that will affect the player’s decision, and then, only those things with a non-trivial chance of influencing the outcome. If Omega is “only” 99% accurate, all of the game-theoretic considerations that apply when it’s 100% accurate still apply. Indeed, for such a high potential payoff, one-boxing can be optimal even when Omega only does trivially better than chance. (I linked to a list of real-life circumstances where your actions are guessed with greater accuracy than chance and the situation is otherwise isomorphic to Newcomb’s problem.)

        Unless those environment interactions are going to be fundamental *decision* determinants for humans, Omega doesn’t need to simulate them.

        I agree that there can’t be situations where Omega has perfect accuracy, for much the same reason 1 is not a valid probability. However, if we can imagine a computationally-powerful being that is to us what we are to the computer program, and that we are “programs” in the same sense, then it follows that humans can experience Newcomb’s problem.
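The claim above, that one-boxing can pay even against a barely-better-than-chance Omega, is simple expected-value arithmetic, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one):

```python
# Standard Newcomb payoffs assumed: Omega fills the opaque box with $1,000,000
# iff it predicts one-boxing; the transparent box always holds $1,000.
def ev_one_box(p):
    # With prob p Omega correctly foresaw one-boxing and filled the opaque box.
    return p * 1_000_000

def ev_two_box(p):
    # With prob p Omega correctly foresaw two-boxing and left the box empty.
    return (1 - p) * 1_000_000 + 1_000

# One-boxing wins as soon as p * 1e6 > (1 - p) * 1e6 + 1e3, i.e. p > 0.5005:
# barely better than a coin flip.
results = {p: ev_one_box(p) > ev_two_box(p) for p in (0.50, 0.51, 0.75, 0.99)}
```

At 50% accuracy two-boxing still dominates, but already at 51% the one-boxer comes out ahead in expectation.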


        Btw, how come no one else is posting anymore?

      • bobthebayesian says:

        @Silas: Because we are all haggard from the semester which is coming to an end. Philosophy really takes it out of you 🙂

      • Scott says:

        However, if we can imagine a computationally-powerful being that is to us what we are to the computer program, and that we are “programs” in the same sense, then it follows that humans can experience Newcomb’s problem.

        Sure. But for me the question is whether there can exist a computationally-powerful being that “reads our code” (in the same sense one can read a conventional computer’s code), by making measurements compatible with the laws of physics. (Sure, we can imagine such a being, but then we only get an imaginary Newcomb’s problem!) For a long time, I thought it was obvious that there was no physical obstruction to this, but the more I thought about it the less certain I became.

      • bobthebayesian says:

        Scott, what is your 1-paragraph summary of why it’s not obvious that technology could exist which “reads a human as if they were a computer program”? If you think this technology could exist, but just that it’s too hard for us to make it, then what’s the main reason to think the human species won’t have created this kind of technology in the next 1000 years, say?

      • Scott says:

        OK, one-paragraph summary: it’s not obvious that technology could exist that reads the brain as if it were a computer program, for the same reason it’s not obvious that technology could exist that reads the weather as if it were a computer program. Both are chaotic dynamical systems whose long-term evolution could be sensitive to tiny changes in the initial state. And even if we had the technology to scan every atom individually, the uncertainty principle would still put limits on how accurately we could measure the quantum state without making drastic interventions. Now, in the case of the weather, this apparent limit on predictability arguably isn’t so important, the reason being that weather doesn’t have much “memory.” Over a long enough timescale, it seems decomposable into a more-or-less deterministic “climate” part (seasons, ice ages, etc.) and a random variation part. Even if there happened to be a particularly strong hurricane one year, it presumably wouldn’t permanently change conditions on Earth so that there could never be another such hurricane again! To change the weather conditions permanently (or over, say, a ~10,000-year timescale), you need to alter the climate, as humans are doing now.

        By contrast, not only is brain activity chaotic, but the chaotic stuff might really matter for making long-term predictions! For example, suppose your model screws up the probability that the person being modeled is going to drive home drunk, because of an error in measuring the location of a single neurotransmitter molecule. Then all the model’s future predictions might be worthless: they don’t even give you decent probabilities, since the person drove home drunk, crashed, and is now a paraplegic. Much like in Asimov’s Foundation series, reality has completely diverged from your model, and you can’t obviously chalk up this failure to incomplete knowledge of the person’s external environment: you even missed something about the internal state of the person’s brain!
Or would you say that the exact location of some neurotransmitter molecule is “part of your external environment” (just like the ambient air currents), rather than a piece of the internal software that makes you you? Well, unlike in the case of a (classical) robot moving around in a chaotic environment, here it’s not even obvious to me where to draw the line between “your code” and the external environment that your code is responding to! We could say that “your code” consists only of the pattern of interconnection between neurons and the synaptic strengths (which are, presumably, macroscopic, classical, and measurable), but how confident are you that that information suffices to make a copy of you, rather than a different person with the same memories? So, this is part of what makes me confused about the predictability of humans, animals, etc., in a way I’m not similarly confused about the physical predictability of robots.

        Sorry my paragraph turned out longer than I’d planned…
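The chaotic amplification Scott appeals to can be illustrated with the logistic map, a standard toy chaotic system (an illustration only, not a brain model): a perturbation of one part in 10^12, roughly a "molecular-scale" error, grows to an order-one difference within a few dozen iterations.

```python
# Toy illustration of sensitive dependence on initial conditions:
# the logistic map at r = 4, a standard fully-chaotic system.
def logistic_trajectory(x0, steps, r=4.0):
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)   # one step of the chaotic logistic map
    return x

a = logistic_trajectory(0.300000000000, 60)
b = logistic_trajectory(0.300000000001, 60)   # perturbed by one part in 10^12
divergence = abs(a - b)   # the two trajectories have fully decorrelated
```

After 60 steps the two trajectories bear no resemblance to each other, even though after 5 steps they are still indistinguishable to many decimal places.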

      • Silas Barta says:


        In the Human-as-Omega-vs-Computer (Haovc) setup I described, how much does the human Omega actually need to measure in order to guess correctly? Very little. It suffices to verify that

        a) The hardware is such that it will actually implement the code as written, and
        b) The human understands the function the program describes.

        And yet we have no problem believing that a human would correctly guess the program’s decision in Haovc! This is despite the facts that:

        – Most humans playing don’t even know how computers work.
        – Even if they do, the human might not know what microarchitecture this specific computer is using.
        – The computer is exposed to the chaos of the environment and all its uncertainties.
        – External factors (e.g. cosmic rays flipping bits) can make the computer’s behavior deviate from what the program specifies.

        What allows us to make this leap? Well, for one thing, the human Omega can perform polynomial ZKPs of a) and b). (For example, to verify a), feed the compiler random programs you choose and see if it correctly implements them.) Thus, the difficulty of predicting the computer’s output (including the difficulty of making the various measurements) is not (as you claim for humans) exponential in the system’s size, or complexity, or interactions with its environment.

        Your arguments about the relative difficulty of predicting humans (in Newcomb’s Problem) are basically:

        1) An exponential amount of possible factors can affect the outcome, and
        2) You need exponential precision to predict the human (e.g. knowing the exact location of a neurotransmitter).

        For the reasons I gave above, neither is true. 1) is false because both humans and computers (being goal-directed in the relevant sense) filter out most of their environments’ degrees of freedom: The temperature of a room (within limits) does not change how an x86’s circuits fire. The bombardment of EM waves does not put humans in constant seizures.

        Certainly, humans (and computers) can be quirky: maybe instead of actually trying to maximize dollar payoff, the human wants to make a point about what her sister did to her when she was five. Maybe the program will have a segfault because of a rare bug. Even so, Omega can be precise enough for purposes of this problem — these events are very rare.

        Likewise, 2) is false: just as you don’t need to know (among other irrelevancies) the microarchitecture of the processor in Haovc, neither do you need to know the exact location of this or that neurotransmitter. You just need to know (with sufficient certainty) the functional relationship between input and output.

        An Omega-for-humans would therefore not need such arbitrary precision, nor use such naive, brittle methods as you suggest. It can e.g. have a model of a typical human, and check for how a given human deviates. So it would ask, “Does this subsystem connect to this one, with the threshold amount of this chemical, like it does in the typical human?”, not, “What molecule is present at coordinate [double float, double float, double float]?”

      • Silas Barta says:

        Oops, that last sentence should read, “present at coordinate [double float, double float, double float]?” I had put the coordinate between greater-than/less-than signs and the site gobbled it up.

      • Scott says:

        Silas, I’d say you’re simply assuming the answers to things that we ought to regard as very large scientific open questions!

        Can you reliably predict a human’s behavior by measuring a relatively-small number of macroscopic variables, in the same way that you can reliably predict a computer program’s behavior using higher-level abstractions, without having to know the quantum state of every electron in every transistor and wire? We know that neurons are much more complicated than transistors, being sensitively dependent on all sorts of chemical variables (e.g., the levels of various neurotransmitters), which might in turn depend chaotically on what’s happening at an even “lower” level. Is that just a relatively-unimportant “hardware difference,” or does it make brains (and biological systems more generally) interestingly different from existing computers, with respect to an outside observer’s ability to predict them?

        My current view is that I don’t know the answers to these questions, and neither do you! I very much hope that advances in neuroscience, chemistry, physics, and other fields will shed more light on them.

      • Silas Barta says:

        Scott, I’m not assuming such answers: I’m pointing out the problems with your approach to them, specifically your unfair comparison to unhelpful levels of abstraction (the quantum level). Human decisions are affected by quantum-level phenomena? Sure, but so are computers — you can still predict them, because the system’s structure filters this out. You should judge prediction difficulty based on the most helpful level of abstraction (software for computers, goals & biochemical control systems for humans), not the worst or average.

        Or to put it a different way, imagine that the problem is just choosing between accepting a million dollars and accepting zero dollars. Your argument would just as well prove that, “gee, humans have all this noisy interference with their environment, and maybe they’ll decide based on something that happened 20 years ago, and biochemical systems are chaotic, and you can’t measure the molecules in a person’s brain to a high enough precision … therefore no one can reliably guess whether humans will take a million dollars over zero.”

        Whatever reasons make your argument wrong there apply just as well to Newcomb’s problem, for the reasons I have given: that Omega can ignore all decision-irrelevant factors, that the predictee is goal-directed (and so you can abstract away from specifics of hardware), that the system knowably filters out most environmental influences, etc.

        Yes, the problem is open, but let’s not consider it closed on the grounds of “stupid, molecule-based prediction methods won’t work!”


      • Scott says:

        Well, if we agree that it’s an open question, then maybe there should be nothing more to argue about!

        However, you keep saying that for a fair comparison, I need to focus on the “higher abstraction layer” for biological systems, just as I do for computers. To me, this ignores the whole question at issue: namely, whether biological systems have a more-or-less causally-closed higher abstraction layer! In the case of computers, we know the higher abstraction layer exists because we put it there ourselves. In brains, we can identify features (neurons, synapses) that are obviously somewhat analogous to gates and wires in a computer—but do they enjoy the same sort of autonomy from the temperature in the room and other “low-level physics details” that gates and wires enjoy? I don’t know! I used to think I knew, but I now feel that I got there only by assuming the answer I wanted.

        The fact that some human decisions are extremely predictable is both obvious and irrelevant. Humans have been predicting each other’s behavior using folk psychology and reasoning about goals (sometimes successfully, sometimes not) for millions of years. What’s at issue here is the decisions that are complicated and hard to predict—i.e., the ones for which you’d resort to a detailed brain model in the first place. If there were no such decisions, then your own belief about the existence of a higher abstraction layer for biochemical systems would (assuming it’s correct) be irrelevant: folk psychology would already suffice!

      • bobthebayesian says:

        I’m just not sure that it’s a tenable position anymore to believe that we will lack the ability to specify a graphical structure that replicates brain activity well enough for “mind uploading” applications. It’s known today that you can image all the tissue in a brain at a fine enough level to reconstruct all the details of a neuron that could be meaningful at any macro-abstraction level current neuroscientists consider relevant. Either the neuroscientists have got it completely wrong and some new Nobel-prize-worthy biological phenomena will be discovered in the brain, or else translating the communication structure, synaptic sensitivities to neurotransmitters, neural firing patterns, etc., is all a straightforward problem of scaling up the hardware and making efficient algorithms for actually converting the physical mass of a brain into that digitally stored copy. I think the neuroscientific literature makes the latter option much more likely than the former, but I do grant that it still is an open question. Basically, what I am saying is that we more or less already know an “EXP algorithm” for this (namely, slicing brains in a FIB-SEM device, and reconstructing the neural parameters from the mountains of data), and all evidence suggests that with proper incentives in place, we should find a faster (fast enough) algorithm too.

        The more difficult question, in my view, is how to convert environmental stimuli into a digital format that could be fed into this digital brain structure. The chaotic/sensitivity issues seem to be more problematic at the boundary between brain and environment than within the structure of the brain. I would not say that it is obvious that we will be able to solve this problem any time soon, but it’s also not obvious that we should regard it as a 50/50 kind of thing either. The evidence suggests to me that it’s damn hard, but that the difficulty lies mostly in scaling up already well-understood technologies in ways that are pretty easy to incentivize. Acting like there’s a much less than 50% chance that such an engineering program could succeed in 100-500 years, to me, is extremely pessimistic and counter-productive. However, I do think this kind of debate suffers from both sides believing that the burden of proof lies with the other camp.

        You mentioned this issue about whether the entity that comes out of the mind upload process “is really you.” I think ultimately this is just the Ship of Theseus debate. For which values of epsilon are we willing to draw a radius-epsilon ball around ourselves in physics-thingspace and concede that anything inside the ball “is us”? Unless we can decide that question with some quantitative answer, then I don’t see how we have any footing to claim future upload methods will be too sensitive to fit inside that epsilon-ball.

        Regarding the chaotic weather simulations: how many times in the past have people speculated that we’d run into the ‘chaotic limits’ of how well we can algorithmically predict the weather? Yet we not only shatter the “limits” each time, but do so at an accelerating pace. Sure, we might have to modularize the weather into hurricane prediction, jet stream effects on snow, etc. But I don’t see why the same kind of analysis won’t converge on successful algorithmic models of the brain. We’re only just now at the tip of the iceberg of having data that actually lets us try things with the brain. Basically anything any of us says on this blog is going to look silly compared to what neuroscientists actually know in 10 years time. But I don’t see any reason to believe that the chaotic limits prevent us from making an application that we would all universally recognize as “mind uploading.” However weird that sort of application would be, I just don’t see a quantitative reason to doubt that it’s achievable in < 500 years.

        Interestingly, though, I do think that chaotic arguments might be a big problem when it comes to the friendly A.I. and cohesive extrapolated volition models that Yudkowsky proposes for the problem of friendliness. To manually construct an entity’s goal structures such that they remain stable into the future for an indefinite amount of time seems like a problem that will be incredibly more sensitive to initial conditions and parameters than anything we face currently.

      • Silas Barta says:

        Scott, if you accept that some human behaviors are trivially predictable, then you probably shouldn’t make arguments that imply there are no such cases! This is what you do when you point out that tiny imprecisions would fail to pick up a particular neurotransmitter and claim this is catastrophic for prediction.

        But once you accept that there are high level regularities that allow you to correctly predict on the “million dollars or zero” problem, the only question is *which* human decision problems have useful high-level regularities. And even folk psychology typically does much better than chance, showing just how easy prediction can be.

        (This is already a big step away from your insistence that the lack of exponential precision would lead to excessive loss of prediction capacity.)

        Now, there probably are human decision problems without useful high-level regularities, but they probably all have in common one feature: the lack of a goal. One candidate for such a problem is, “Choose a random string of n characters.” Here, Omega has very little to work with: there’s no “goal” favoring some class of strings over others. Omega might be able to get some use out of identifying the heuristics humans use when “trying to be random” (and as you’ve shown before, humans are poor RNGs!), but certainly nothing approaching 50+% accuracy like (I claim) it could achieve in Newcomb’s problem.

        But Newcomb’s problem is not the random string prediction problem! It introduces major simplifiers: human goal-directedness (they want more money), and human beliefs about what things are decision-relevant (e.g., whether they consider the fact that “Omega has already chosen” sufficient grounds to take both boxes). These two simplifiers screen off virtually all of the variables you suggested would be necessary — much as the a) and b) I listed above screen off the exponentially-large set of environment variables in Haovc.

        So, whatever difficulties there might be in general human prediction are probably not present in Newcomb’s problem. Again, I refer you to the list I linked of real-life Newcomb-like problems where the Omega counterpart already does significantly better than chance, without the need for exponential precision. Take the shoplifters-vs-merchants example: merchants (the Omegas) choose where to locate stores and what security measures to use based on their predictions of the shoplifting tendencies of the people in the area. Though any one person can causally benefit themselves (two-box) by shoplifting, merchants are accurate enough in predicting the “shoplifting-ness” of the people in the area so as to be profitable, which requires greater-than-chance accuracy.

      • Scott says:

        Scott, if you accept that some human behaviors are trivially predictable, then you probably shouldn’t make arguments that imply there are no such cases!

        My arguments imply no such thing. Look, let’s classify human decisions into three broad categories:

        (1) Decisions where pretty much all the considerations are on one side, and none are on the other. (“Would you prefer a million dollars, or five dollars?” “Should you walk into this burning building, or not?”)

        (2) Decisions with no important considerations on either side. (“What random string of characters would you like to output today?” “What color toothbrush do you want?”)

        (3) Decisions with powerful considerations on both sides. (“Mercy or justice?” “Ask her out and risk embarrassing yourself, or don’t and risk regret?” “Your family, or the Resistance?” “The salad, or the filet mignon?” “What novel should you write?” “One box or two boxes?”)

        Category (1) decisions are largely predictable. Category (2) decisions are largely unpredictable but no one really wants to predict them. That leaves Category (3) decisions—i.e., the subject of most of the world’s movies, novels, Shakespearean soliloquies, and philosophical arguments.

        I suspect the reason we’ve been talking past each other is that you’ve been focusing on categories (1) and (2), whereas I’ve been taking it as obvious that, when we talk about “free will,” (3) is the only category that we care about.

        Now, even category (3) decisions can be predicted somewhat better than chance—with or without detailed brain models! But I don’t see why anyone would regard, say, a 75%-accurate predictor as a threat to their free will: after all, their spouse or best friend is probably such a predictor already. It’s when you get to, say, the 99.9%-accurate level that I think you start to get loopy consequences for personal identity. But again, for 99.9% accuracy you’d presumably need a detailed brain model, and while such a model might be possible, I haven’t seen any non-handwaving, non-foot-stomping argument for why it must be.

      • Silas Barta says:

        With respect, Scott, your original argument (and, AFAICT, what I suspect you presented in class, though I was not there) did not distinguish the cases as you have now done, since your point was that a complete scan would require exponential precision, and so did not acknowledge the presence of filtering in goal-directed systems. You still seem not to appreciate this, as you are insisting that e.g. people’s decision on a Newcomblike problem is somehow dependent on the exact quantum state of the hairs on their head, like when you say:

        I think you need to offer some argument about physics—e.g., explaining why you think that the uncertainty principle, or the interactions between brains and their external environments, aren’t going to present fundamental obstacles to “highly accurately” predicting the future behavior of a specific brain.

        Also, your position, as now articulated, faces a symmetric problem, which is that you — with hand-waving and foot-stomping comparable to what you claim is present in my argument — see some fundamental distinction between 75% Omega accuracy and 99.9% Omega accuracy. Somehow, you think that an Omega having ~2 bits of information about how you would choose (which happens *right now* with real-world Omegas on Newcomb’s problem and other Newcomblike problems like those I linked) is just fine and dandy, but having 8 more bits raises fundamental philosophical and computational questions.

        (The number of bits Omega has about your decision, relative to complete ignorance [50/50 probability], is calculated as the base-2 log of Omega’s odds of correctly predicting, where odds = probability / (1 – probability).)
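The parenthetical formula, computed concretely: at 75% accuracy Omega has about 1.6 bits about your binary decision; at 99.9%, about 10.

```python
from math import log2

# Bits Omega has about a binary decision, relative to a 50/50 prior:
# the base-2 log of the odds of a correct prediction.
def omega_bits(p):
    return log2(p / (1 - p))

bits_75 = omega_bits(0.75)     # log2(3)   ~ 1.58 bits
bits_999 = omega_bits(0.999)   # log2(999) ~ 9.96 bits
```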

        I suppose you could, favorably to your position, recast the distinction as isomorphic to that between PP and BPP. However, you specified a constant 99.9% as the worrisome threshold (which is PP territory), rather than arbitrarily high accuracy (which would be BPP).

        Your faithful gadfly,

      • Scott says:

        Silas: It would probably be easier if we could discuss these questions in person (as the students and I were able to in class)—since that way, we could each prevent the other from assuming we must’ve ignored some trivial or obvious point. Hopefully we’ll get a chance to do that sometime!

        Look, I’m advancing a falsifiable empirical hypothesis: namely, that there exists some p<1 such that, as you learn more about a particular human (via any combination of behavioral observation, scanning of macroscopic brain data, and machine-learning algorithms), your accuracy at predicting the human’s behavior on “category-3 decisions” (for simplicity, let’s suppose 2-outcome ones) will asymptotically approach p but not exceed it. Naturally, p (like the fault-tolerance threshold for quantum computing) is a soft number, whose value will strongly depend on how you define “accuracy”, which category-3 decisions you look at, the timescale over which you’re trying to predict, etc. But for some reasonable choices, I could readily imagine (say) p>0.75. Indeed, as I said, I think well-above-chance prediction accuracy is achievable with no fancy technology at all: just intuitive psychology and acquaintance with the person to be predicted! I also think that the accuracy could be improved by the use of brain-scanning technology. However, my hypothesis is that significantly before you hit p=1, you’ll reach a limit imposed by the uncertainty principle combined with chaotic amplification of molecular-scale effects. While I don’t know what that limit is, if a prediction accuracy of (say) p>0.999 could be demonstrated empirically, then I would readily admit that my view had become untenable. (For in that case, there might still be a limit below 1 on prediction accuracy, but I don’t think it would be of any practical interest.)

  2. bobthebayesian says:

    One of the most interesting historical threads in the free will discussion is the two-stage model. There are many ways to describe this model, all with subtle differences from various contributors over the years (William James, Poincare, Hadamard, Karl Popper, Daniel Dennett, and John Searle, to name a few). Hadamard and Dennett both were inspired to include a quote from the French poet Paul Valery when describing their two-stage model of will: “It takes two to invent anything. The one makes up combinations; the other one chooses.”

    I have a soft spot for this quote because I enjoy stochastic optimization, and this quote basically describes sampling methods and simulated annealing in a beautiful way. In fact, I think if one reads Popper’s or Dennett’s account of the two-stage model with an ear for simulated annealing, one is struck by how many interesting analogies can fruitfully be made. The basic idea is that you must generate a random proposal from some list of possible actions and then some other part of your brain decides whether to accept or reject that proposal. This is how I understand the two-stage model, but again, there are subtle (and not-so-subtle) differences in many alternate takes on this model.
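The analogy can be made concrete: the two-stage "make up combinations, then choose" structure is exactly the propose/accept loop of simulated annealing. A toy sketch (illustrative only, with a hypothetical scoring function; no claim about neural mechanisms is intended):

```python
import math
import random

# Stage 1 ("makes up combinations") proposes a random variation; stage 2
# ("chooses") accepts or rejects it via the Metropolis rule from
# simulated annealing.
def two_stage_step(current, score, temperature, rng):
    candidate = current + rng.gauss(0, 1)   # stage 1: blind variation
    delta = score(candidate) - score(current)
    if delta >= 0 or rng.random() < math.exp(delta / temperature):
        return candidate                    # stage 2: accept the proposal
    return current                          # stage 2: reject, keep current

rng = random.Random(0)
score = lambda v: -v * v                    # toy preference: values near 0
x = 10.0
for t in range(1, 201):                     # cooling schedule 1/t
    x = two_stage_step(x, score, 1.0 / t, rng)
# x has drifted from 10.0 toward the score maximum at 0
```

As the "temperature" drops, the second stage becomes increasingly selective, which is one way to read Valery's line about the one who chooses.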

    A few things have always bugged me about this idea, however. For one, if we generate “random samples” from some supply of proposal thoughts/actions, then our brains must implement some random-number generator. That’s fine, but the brain is a finite thing, and any resource-limited random number generator must have some finite period. After enough random draws, it has to loop back around (leave aside, for the moment, the possibility of “quantum” random number generators or something bizarrely different from current algorithmic pseudorandom generators).
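    To make the looping concrete, here is a toy linear congruential generator (parameters chosen by me purely for illustration, not modeled on anything neural): with only a 4-bit state, it must revisit its starting point within 16 draws.

```python
def lcg(seed, a=5, c=3, m=16):
    """Toy linear congruential generator with a 4-bit state.

    Any generator with finitely many internal states must eventually
    cycle; with m = 16 states, the full cycle is visible after 16 draws.
    """
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=0)
draws = [next(gen) for _ in range(17)]
# draws[:16] visits all 16 states; draws[16] repeats draws[0]
```

    The same pigeonhole argument applies to any finite-state generator, however astronomically long its period.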

    An interesting post points out that, because of the sheer size of the space of all possible permutations of a 52-card deck, any “well-shuffled” permutation you happen to see is, with very high probability, being seen for the first time in human history. The same reasoning implies that for relatively modest vector lengths L, the space of all permutations of L items is too large for most pseudorandom generators to sample from effectively.

    For example, the standard Python random generator has a period of 2^(19937)-1, so the first L such that L! > 2^(19937)-1 is an upper bound on what you can do in Python. If a similar bound exists for the random sampling capabilities of the human brain, then the brain’s sampling would be in principle predictable after some number of draws. The real problem with this part of the two-stage model is that what we call a “random sample” is probably not really random. If we had better neuroscience models, it might not be a random sample at all. On the other hand, if it is a random sample, then what random-number generation process is going on that gets around the limitations of a finite-period generator?
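    For the curious, that cutoff is easy to compute exactly with big-integer arithmetic (a quick sketch; Python's default generator is MT19937, whose period really is 2^19937 - 1):

```python
# Smallest list length L whose permutation count L! exceeds the
# period of Python's default generator (MT19937, period 2^19937 - 1).
# Beyond this L, the generator cannot, even in principle, emit every
# permutation of an L-element list.
period = 2 ** 19937 - 1
L, fact = 1, 1
while fact <= period:
    L += 1
    fact *= L      # fact == L! on exit, the first factorial > period
print(L)
```

    The loop exits at L = 2081: a shuffle of a mere few-thousand-element list already outstrips the generator's entire state space.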

    That might not be such a big deal, because the two-stage-modelist would say that the real “freedom” comes from the choosing stage, not so much from the random sampling stage. I am personally very skeptical of this, for two reasons: (1) the various studies that show activation in the brain prior to the conscious experience of having made a choice, and (2) it doesn’t present a falsifiable theory of how such a choice occurs in the brain. Presumably we can just get in there with focused ion beam serial electron microscopy and see what’s going on — and that probably won’t take that many more years to develop. Does a two-stager have a prediction about what we’ll see that can be confirmed or refuted? I’m asking because I haven’t seen one, but a lot of smart people call themselves two-stagers, so there must be one, right?

    It seems to me that this need for indeterminacy stems from the fact that we sense alternative possibilities open to us, and that in fact we cannot get rid of this sensation and in some sense are forced to “make decisions” as if they were chosen from a pool of real alternatives. I’ve just never been convinced that humans “just feeling this way” is a good reason to think we need any explanation besides free-will-as-evolutionary-illusion.

    Lastly, I would really like opinions about “quantum indeterminacy” in the brain. I like this paper, which argues that quantum effects are no more needed to explain the cognitive emergent properties of brains than quantum mechanics is needed to explain why a tennis match between Federer and Nadal is exciting. I agree with this idea, but plenty of models have been proposed in which quantum indeterminacy must play a role in consciousness.

    In particular, I’ll end with this quote by John Searle from his book Freedom and Neurobiology (pp 74-75), “First we know that our experiences of free action contain both indeterminism and rationality… Second we know that quantum indeterminacy is the only form of indeterminism that is indisputably established as a fact of nature… it follows that quantum mechanics must enter into the explanation of consciousness,” and Searle’s claim from a very interesting video lecture that “the higher levels of consciousness must inherit the indeterminism, but not the randomness [from the underlying quantum properties].”

  3. I wanted to sum up the computational argument against Newcomb’s paradox, which I brought up last lecture and which was brought up again (by someone else) this lecture. Having heard it come up twice, I don’t think it’s a strong argument against the paradox, although I do think it raises some interesting issues.

    The computational argument against Newcomb’s paradox goes as follows: in order for the predictor to know how a human will behave, it must simulate the thought processes of the human until the human reaches a conclusion. This encompasses general Turing-complete computation, so the way to evade the paradox is to trick the predictor into attempting to evaluate some computationally infeasible problem which, unless we grant the predictor unreasonably strong computational powers, will prevent it from adequately guessing the behavior of the human.

    Of course, the problem is actually “tricking” the predictor, which turns out to be rather difficult. Suppose that we can model the thought process of a human being as a deterministic function from inputs (representing the initial environment) to the output: the decision whether to take one or two boxes. While the human can make this function as complicated as he wants, he *himself* must still be able to evaluate it in the time between the simulator’s taking a snapshot of his mental state and his making the decision on the boxes. Even if there is a constant factor of difference between his computation time and the simulator’s, it’s a pretty weak assumption to grant that the simulator is a constant factor more computationally powerful than the human, so that it can “catch up” on the missing time.

    Can we come up with a problem which would require more than a constant-factor performance improvement on the part of the simulator? If we permit the human to spend unbounded time thinking about the problem once he has reached the room, with the boxes predetermined and in front of him, this is possible. (Personally, I think this is the strongest formulation of the argument.) The point is that the computer has only some constant amount of time to think about what the decision might be, whereas the human may have linear, quadratic, exponential, or even more time to think about the problem once he is in the room. By the time hierarchy theorems, it is entirely possible for the human to pose a problem that the computer cannot solve in its allotted time. But it’s easy to fix this loophole by only allowing the human a constant amount of time in the room; and even in this weaker scenario, Newcomb’s paradox seems to challenge free will.

    If we further explore the vein of “computationally tractable for the human, but intractable for the simulator”, we might consider cryptographic algorithms. Assume that the simulator cannot solve cryptographic problems in reasonable amounts of time. If we try to set up an experiment like this, we still run into the difficulty that the simulator knows the private keys that the human knows, and so can still run the same computation. So the only way forward is to stipulate that the secret information is some knowledge the human has but the simulator does not. This seems like dodging the question, but in practical settings one might expect the simulator to get only a lossy snapshot of the human’s brain state, and to be required only to achieve a high-probability outcome. So if the human can stuff the private key into the “lossy bits”, can he do better? Here’s one scheme: the human makes a decision, encrypts it with randomness that he knows the computer will not have access to, and then forgets the original decision prior to submitting to the brain scan. Perhaps, with quantum cryptography, he can even arrange that if the simulator observes the quantum bits, he will notice, and then act randomly. In general, if we can somehow amplify initial uncertainty, we can take advantage of chaos in order to systematically trick the guesser.
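    The “encrypt and forget” step is essentially a one-time pad on a single bit; a toy sketch (the variable names and the “lossy bits” framing are mine, purely illustrative):

```python
import secrets

# One-time-pad sketch of "decide, encrypt, forget": the player keeps
# only the ciphertext; the key bit is assumed to live in the "lossy
# bits" that the brain scan misses. Without the key, the ciphertext
# is information-theoretically independent of the decision.
decision = 1                    # 1 = one-box, 0 = two-box
key = secrets.randbits(1)       # the bit the scan is assumed to miss
ciphertext = decision ^ key     # all the scanned copy gets to see
# ... the player now "forgets" decision, retaining only ciphertext ...
recovered = ciphertext ^ key    # re-derived at decision time
```

    Of course, the whole scheme rests on the assumption that the key bit really does escape the scan; if the snapshot is lossless, the simulator decrypts exactly as the player does.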

    Whether or not this has much to say about free will is an open question.

    • bobthebayesian says:

      I think this argument was misunderstood in class today. The idea was that as long as the Predictor has any sort of computational limit whatsoever, one can find some problem P which the computer cannot solve in one hour’s time. If we assume that the Predictor receives a copy of the state of my brain one hour before I am to make the decision, then at precisely that moment I can begin to solve the problem P, knowing that there’s no way that either I or the computer can have finished within the hour, when the prediction has to be turned in and the money placed in the boxes. Then I walk into the room where the boxes are located, stand there for perhaps several more years while I finish solving problem P with my very slow brain, and use the outcome of the problem to determine whether I choose one or two boxes. Let me hereby decree that if I am ever in a Newcomb problem scenario, and I have enough advance notice to set up this situation, then this is how I will handle my own prediction.

      In that case, the computer can’t predict what I’ll do, because doing so would entail using the simulation of me to solve, in one hour’s time, a problem known to be hard enough that the Predictor needs more than one hour to solve it. It doesn’t matter how much additional time it takes me to solve it.
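      As a toy stand-in for problem P, iterated hashing is the usual heuristic for work that is inherently sequential: each step needs the previous step's output, so even a vastly faster Predictor gains only a constant factor (a sketch under that heuristic assumption, not a proof of hardness):

```python
import hashlib

def slow_bit(steps):
    """Derive one bit from a long chain of SHA-256 evaluations.

    Each hash depends on the previous one, so (heuristically) the
    chain cannot be parallelized: a faster Predictor must still
    grind through all `steps` evaluations, just at its own speed.
    """
    h = b"newcomb"
    for _ in range(steps):
        h = hashlib.sha256(h).digest()
    return h[0] & 1

def choose(steps):
    # Tie the box choice to the outcome of the slow computation.
    return "one box" if slow_bit(steps) else "two boxes"
```

      Pick `steps` large enough that the chain cannot be finished within the Predictor's hour, and the choice is determined by a computation the Predictor cannot have completed when the money is placed.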

      I think this is an interesting way to defeat the Predictor, but it still suffers from its own problems. (1) It effectively removes free will from the picture, since I am agreeing to let an algorithmic process compute my decision for me. (2) If we add artificial time constraints, limit the knowledge I have about the Predictor, etc., then this advantage immediately goes away.

      • Scott says:

        On reflection, it’s an interesting point that, because the Predictor “moves first” in this game, we can imagine that you enter the game knowing some finite upper bound T on the total amount of computation that the Predictor has performed. In which case, all you’d need to do to elude the Predictor is base your decision on the output of some computation that requires more than T steps!

        On the other hand, I still feel like this solution violates the spirit of Newcomb’s Problem. For suppose a Predictor existed, which could “merely” predict the outcome of any possible deliberation of yours lasting three hours or less. Wouldn’t that still have almost all the counterintuitive consequences of a “full” Predictor?

        Or maybe we’re staking out a brand-new philosophical doctrine here, between free-will and determinism: namely, the doctrine that all of our actual choices are unfree, but that they would be free if only we had billions of years to deliberate about them! 🙂

  4. Pingback: Class #13: The Singularity and Universe as Computer | Philosophy and Theoretical Computer Science

  5. It seems to me that there are two questions here:

    (1) Is there a way to read out enough information about the state of the brain in order to produce a copy such that, if the two brains were exposed to the *same* environment, differences in behavior (due to the finite precision of the copy and the chaotic behavior of the brain) would not arise until after 100 years of running time?

    (2) If the answer to (1) is positive, how does one replicate an individual’s environment for the purpose of using the accurate copy as a predictor?

    The predictor of Newcomb’s Paradox seems to require a solution to both (1) and (2); yet, for “brain uploading” we may be content with (1) alone. (Indeed, (1) ensures that the copy is good enough, as any differences in behavior for the first 100 years can be attributed to differences in inputs from the environment.)

    Also, it seems to me that the hardness of (1) lies entirely in making the copy, and it’s merely a physics/technology question. (And not an algorithmic one: after all, we *know* that the brain is efficient; all we need is some way to replicate its state so that we can go off and use the copy in cool ways.)
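    As a toy numerical illustration of why the 100-year horizon in (1) is so demanding, take the logistic map at r = 4 (a standard chaotic toy system, obviously not a brain): a copy that differs from the original by one part in 10^15 agrees closely at first, yet disagrees at the first decimal place within a few dozen steps.

```python
def trajectory(x0, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), a standard chaotic toy."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

original = trajectory(0.3, 80)
copy = trajectory(0.3 + 1e-15, 80)   # "copy" off by one part in 10^15
# Early on the two agree to ~15 digits; within ~50 steps the error is
# amplified to order 1, and the copy no longer tracks the original.
```

    Since errors roughly double per step here, an exponentially better copy buys only linearly more prediction time, which is the sense in which (1) is a question about achievable copying precision.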

  6. D says:

    I agree with Alessandro that the question of whether the predictor is possible is largely one of physics. It’s possible that it touches on metaphysics as well, though that’s certainly much more thorny. I’m skeptical as to the practical realizability of such a perfect predictor; however, I suspect a predictor that is correct more than 50% of the time is possible, and, as we mentioned in class, I think the same philosophical issues would arise with any predictor that was accurate more often than not.

    That said, I don’t think even a hypothetical perfect predictor poses a philosophical problem for free will. Picture this: Instead of the Newcomb predictor as a computer, I play this game against an extremely smart (human) psychologist who has studied me for a while–or even just a friend of mine who knows me well. Does this violate free will? If not, how is it different from a (possibly-incorrect) Newcomb predictor?

    I see the Newcomb game as still a game between me and the other party who potentially has access to my thought processes–not as a game between me and myself. (In this sense, I think it is *deceptive* to phrase things “in terms of two different ‘instantiations’ of you, which nevertheless acted in sync and were ‘controlled by the same will'”. The other “instantiation” is a simulation, and is not “me”; it is perhaps copied from my thought processes but not “controlled by the same will” in some sort of spooky-metaphysical-action-at-a-distance.) In this sense, there is no “time paradox”–while it’s true that the other individual has already acted by the time I make my decision, since they are acting with knowledge of my mental state my optimal strategy is to adopt a mental state that will cause them to put the money in the first box. Put another way, even though it appears that the predictor moves first (by placing the money), in reality I move first (by setting my intentions/general beliefs/worldview prior to getting scanned).

    • bobthebayesian says:

      Picture this: Instead of the Newcomb predictor as a computer, I play this game against an extremely smart (human) psychologist who has studied me for a while–or even just a friend of mine who knows me well. Does this violate free will? If not, how is it different from a (possibly-incorrect) Newcomb predictor?

      To me, this does violate free will in the strong libertarian sense. The degree to which another agent can successfully predict what you’ll do is precisely the degree to which you could not have actually done otherwise.

      If one wants to argue that “predicting the will” somehow doesn’t imply that the will is governed by lawful physics, then I believe the burden of proof is on free-will supporters to explain how something can be predictable and yet at the same time possess the “freedom” to exhibit other, real alternative behaviors.

      Put another way, even though it appears that the predictor moves first (by placing the money), in reality I move first (by setting my intentions/general beliefs/worldview prior to getting scanned).

      If you believe that setting your own intentions prior to having the brain scan will give you the upper hand, then don’t you have to acknowledge that the “copy” of you is more than just a copy? I mean, for example, which one of you “is the real you”? I don’t think that’s an easy question. If it really has all the same memories, sentient experiences, morals, ethics, etc., then it presumably has an equal footing to claim that it is “really you.” Certainly the mere fact that you “existed first” in the entropy-preferred time direction doesn’t seem to give you any special ownership rights to that set of sentient experiences. So I don’t think it’s trivially easy to say that the predictor’s “copy” is not really you. I would just say there are two identical originals, up to some physical precision limit that probably doesn’t matter for replicating brain processes.

      If you “go first” by choosing your intentions ahead of the game, then what should you pick? If you set your intentions to be a one boxer, then you *know* it will be more economically rational to take two boxes once you play the game (because the predictor will have decided you are a one boxer and placed money accordingly). If you set intentions so that you are a two-boxer, then you will *know* that it’s more economically rational to take one box when you play the game. The way I see it, this is the whole debate of Newcomb’s problem. You want to be completely predictable to the predictor up until the very last second, when you then want to change strategies. So I still think the predictor “moves first” in this sense, because if you view your pre-box-decisions about setting your intentions one way or another as part of the game, then it’s an infinite regress.

  7. wjarjoui says:

    Newcomb’s paradox obviously raises some good questions about free will and predictability; however, I feel part of the paradox is a bit too arbitrary.
    Whenever arguments were made for how the predictor could be fooled, those arguments seemed to fail simply by the definition of the problem. For example, if we were to state that the human player will first think of what the predictor will predict, and then reverse it, the counterargument was that the predictor would have predicted this thought as well and adjusted to it. Other techniques also failed because they violated some part of the problem definition. Hence it seems that whatever solution we come up with will most likely not work, because either (a) the Predictor would have discovered our plans and simulated them faster, or (b) our solution does not qualify. For that reason, it sometimes seems we are giving the Predictor so much power that it is worth wondering whether it could ever really exist at all! Sure, we can still imagine it does and discuss the problem, but I’m sure we all agree that the discussion will then have a lot less importance.

    The other issue I have with the Predictor is that for it to go against free will, it would have to force the human player into one option (as one of my colleagues mentioned in class), not just predict it. I am sure there is no debate on free will whenever someone predicts the Celtics will lose a game.

    Having said that, the human player in the paradox can always refer to an online captcha and base his answer on it (take both boxes if it contains an “e”, one box if it doesn’t). The Predictor will not be able to decode the captcha (assuming captcha-solving remains hard for machines) and hence won’t be able to predict the player with certainty.

    • wjarjoui says:

      Of course, we can argue that my last solution either does not capture the soul of the Predictor, or that the Predictor could also have a perfectly accurate OCR device. At that point, I don’t see why we should ever try to fool the Predictor if, every time we find a trick, we just make the Predictor strong enough to discover it (similar to the anti-AI arguments that we discussed early on).

    • bobthebayesian says:

      I don’t understand what the captcha buys you. Are you arguing that humans can solve arbitrary instances of captchas? I don’t think there’s strong evidence to believe that. Conversely, we get better every day at computer-vision methods for breaking state-of-the-art captchas. If we already grant that the predictor can simulate the player’s brain, then why can’t the predictor just simulate the subroutine in the player’s brain that solves captchas with human-level accuracy?

  8. wjarjoui says:

    It buys me time: if the Predictor has to make its prediction before I make my decision, then it won’t necessarily have enough time to solve the captcha. The basic assumption I am using is that it will take the Predictor significantly longer to solve the captcha, even though it can simulate the subroutine in the brain that solves captchas. If that assumption is broken (because we grant the Predictor more powers), or if it is already false in the real world (in which case I could use an update on this topic), then obviously my technique fails.

    • bobthebayesian says:

      Do you mean to use the captcha as an example of the “problem P” that I mentioned in my other comment? If so, then I agree: we might find a captcha so tough that neither the player nor the predictor can solve it in some fixed time. But then the player can keep on solving it after the money is placed in the boxes, perhaps for a long time, and still defeat the predictor. However, this doesn’t really resolve the problem if we put a time limit on the human player too, because then it’s very likely that there does not exist a problem that the predictor fails to solve quickly but which you can still solve in your longer time limit.

      • wjarjoui says:

        I was arguing that this does resolve the problem, because a human would be able to solve the problem within the time limit and the Predictor wouldn’t. However, I could be incorrect if my assumptions (described in my previous reply) about machine vision are incorrect – is that the case?

    • D.R.C. says:

      I don’t see why solving a captcha would be any more difficult than simulating a person, since that is basically what it comes down to in the end. The problems then would be: (a) predicting where you would get the captcha, (b) predicting what criteria you would use to try to game the system, and (c) predicting how likely you are to follow through with the system.

      The only way I could see something like this being “helpful” is if you decide to ask someone else (a stranger, for whom you would have no mental model besides the trivial one) to make the decision for you. Then your free-will aspect is limited to accepting their decision or not, but that just hands off the problem of free will.

      • wjarjoui says:

        Yeah, you might have a point. The basic assumption I made separates the claim that the Predictor can simulate the thought process of the person from the claim that the Predictor can simulate all processes in the human brain. I wish I knew more about machine vision, but my idea was that if the way humans carry out vision is asymptotically harder than other computational processes in the brain, then the second assumption might not be as trivial to make.

      • bobthebayesian says:

        I think A.I.-completeness is the right thing to think about regarding the predictor.

  9. Katrina LaCurts says:

    I see three cases for Newcomb’s Paradox.

    1. It is irrelevant, because such a predictor cannot exist.

    2. Such a predictor can exist, but it cannot predict everything about the human in the game, only whether he is a one-boxer or a two-boxer. To me, this predictor is not modeling the entire state of my brain.

    3. Such a predictor can exist, and is perfectly omniscient; it can predict everything about the human in the game (what he will have for breakfast tomorrow, how he will die, etc.)

    There is a difference between how cases 2 and 3 affect free will in my mind. In the second, I don’t actually see a problem, in part because even if a predictor could accurately predict one thing about me, I would likely never be convinced that my behavior and his prediction were actually related, just as I was not convinced that Paul the Octopus had any control over the 2010 World Cup. Surely the Predictor is smarter than Paul the Octopus, but to me, an accurate one-or-two-box Predictor doesn’t imply that its prediction actually caused me to behave in a certain way.

    So, what about case 3? Would I start to be more convinced that the Predictor’s prediction forced me to act in a certain way, and thus there is no free will? Honestly I’m not sure. I’m still not convinced that an (admittedly insanely powerful) accurate Predictor necessarily causes any of my actions, but then again, if it can predict absolutely all of them, is that not the definition of a lack of free will? That even if I thought various choices were open to me before I chose, that is just an illusion?

  10. nemion says:

    I have recently been heavily exposed to the field of philosophy of mind and its fascinating problems. Some of its most fascinating questions lie in the realm of intentionality: namely, how is it that we can have thoughts about the world at all? What is the causal relation between the world and our thoughts? What is the nature of that relationship, and what does it say about the space of possible thoughts?

    Due to the wide success of reductionist explanations of many types of phenomena, it is usually believed, at least in positivist and materialist circles, that a full account of intentionality can be reduced to the physical facts. This is indeed not hard to believe, and not easy to refute: it is a natural consequence of the reductionist success of modern science.

    This also raises a question about our seemingly unbounded capacity to think about anything we please. If everything is reducible to the physical, and the physical is predictable, then that seemingly unbounded capacity to think and dream about the world is in fact bounded. If having thoughts about something is physical in this way, then the conceptual realm to which we have access is not as freely accessible as one might think.

    The question I want to draw attention to is that of intentionality: how do we deal with it in a purely mechanistic world? Reductionism eliminates free will in this sense, and that seems very problematic if one wants to preserve the unbounded capacity to think about anything.

    As for Newcomb’s paradox, I am a two-boxer: namely, if the predictor’s decision has already been made, you had better go and take both boxes, since together they will have more money than the single one alone. Now, if we analyze a string of consecutive games, a similar argument says that in the last game the best strategy is to take both boxes, since the predictor has already made its decision. Inducting backwards, the best strategy is to take two boxes at every step.

    Evidently this type of answer is that of an individual who resists giving up the mystical nature of the world, and still wants to believe in at least a hint of free will. An individual with a one-boxer position is likely to be afraid that the predictor could actually exist. I believe it would be interesting to run a poll and draw a correlation between one-boxers, two-boxers, and belief (or not) in free will.

    • bobthebayesian says:

      Philosophy of mind is indeed an interesting field. During an undergraduate class on philosophy, I was in a “debate” with another student. He was on the side that believed a reductionist account could never fully explain human consciousness or thought and I was on the side in favor of reductionism/materialism.

      At one point in the discussion, I asked him, “what do you think is the number one reason why we can never physically understand how consciousness and thoughts work in the human mind?”

      He paused for a couple of seconds, looked around the room, and then just said, “Steve Urkel.”

      To my mind, this has been the strongest counter argument that I’ve yet heard against the reductionist approach. 🙂

      Regarding your idea for a poll… I would be interested in trying to create a simple Doodle poll for our class, similar to this recent LessWrong poll.

      The questions would be restricted to the “big topics” that we’ve talked about this semester, like whether the universe is a computer, whether the quantum extension of the ECT is true about the physical world, whether a person should one-box or two-box, what’s the probability that we do have “free will”, what’s the probability that the universe is a computation or a computer simulation, is the surprise exam paradox just a falsehood or a matter of a person’s inability to prove consistent reasoning, what’s the probability that a technological singularity can happen at all, how about in less than 1000 years, less than 500, and less than 100?

      If enough people are willing to take a Doodle survey about these questions, I’m happy to organize it, tabulate the data, and give it to Scott so he can make a last blog entry with the results.

  11. amosw says:

    I have been working with Bayesian deep belief networks recently, and my sociological observations from that experience have given me reason to be very suspicious of how quickly humans jump to the assumption of free will.

    Deep belief networks are qualitatively different from the standard multilayer neural networks in that it is natural to ask the network to “dream”. I.e., they are probabilistic generative models. So a deep belief network for digit recognition can, if asked, produce for you a sample of digits that it has never seen before but which it “believes in”. I have observed myself and other humans to very quickly start saying things like “the net believes” or “the net likes” or “the net wants”: especially as the number of hidden layers grows.

    I suspect that the rapidity with which this happens (in contrast with, for example, a logistic regression model, to which we don’t assign any desires or beliefs) indicates a deep inductive bias that is perhaps embedded in our DNA. I suspect it was evolutionarily beneficial, once a system in our environment displayed complex enough behavior, to undergo a phase change in the way we think about it, in which we immediately start giving it credit for desires and beliefs.

  12. John Sidles says:

    A post on Gödel’s Lost Letter and P=NP discusses the possibility that the Chooser and the Predictor will collude to defy the Proctor of the trial.

    This example leads to the interesting questions: Is it logically consistent to deny free will to the Predictor? And if we allow both free will and rationality to the Predictor, what rational reason(s) does she have to collaborate with the Proctor rather than the Chooser?

    • bobthebayesian says:

      It’s very unclear to me why the Predictor’s free will should matter at all. Perhaps we will make breakthrough discoveries in neuroscience that allow highly accurate predictions of a person’s choice in Newcomb’s problem using only variants of machine-learning algorithms we already know, operating on a fairly low-dimensional feature space. We can already predict the outcomes of complicated social phenomena like sporting events with non-trivial improvement over random chance using these methods, and we attribute no free will to these sorts of learning algorithms at all.

      Or, if you like, assume we do have a very powerful Predictor that is an AI in the sense that questions of free will should apply to it. Then suppose it just builds its own internal software program or automaton specifically to offload the prediction task, like its own personal Roomba for playing the Newcomb game. While the Predictor may have free will, it’s uncontroversial that its Roomba assistant does not.

      I think even facing the Roomba assistant predictor raises questions about the free will of the player, but clearly you cannot collude with an unconscious agent. If the Predictor has free will, then the questions you ask are interesting. But I think you can have Newcomb’s problem with no reliance on the Predictor’s will, and by no means does consideration of the Predictor’s will have consequences for whether or not there is free will in general.

      • John Sidles says:

        Bobthebayesian, let’s follow your reasoning a little bit deeper.

        Suppose that the Predictor has free will, and suppose further that the Proctors seek to thwart that free will by (in effect) applying a Star-Wars-style Restraining Bolt to the Predictor.

        But if you think about it, you will appreciate that programming Restraining Bolts that work on Predictors is by no means a logically trivial task, because the Restraining Bolt has to be as skilled at predicting the Predictor’s actions as the Predictor is at predicting the Chooser’s actions.

        So isn’t the Proctor’s task (of enforcing the rules of Newcomb’s game) essentially an impossible one?

      • bobthebayesian says:

        Sure, but the problem of restricting the actions of an agent with free will is entirely different from the problem of whether one’s own predictability casts doubt on one’s own free will. I don’t see how this “follow[ed] my reasoning a little deeper”; these seem to be entirely independent questions to me. Newcomb’s problem assumes that the Predictor itself wishes to play the game, or else is an automaton programmed to play the game and unable to choose not to. The meta-question of whether you could compel a Predictor to obey the rules of the game is entirely moot with respect to the questions that Newcomb’s problem addresses.

      • bobthebayesian says:

        Also, depending on one’s views about computationalism and the mind, programming a restraining bolt might indeed be logically trivial, though nearly impossible at a practical engineering level. I think many people believe that is one possible conclusion of reductionism. In principle, programming a restraining bolt means I merely need to know the Predictor’s connectome and engineer a device that manipulates it. There’s nothing logically difficult about that if one believes the connectome really is all that’s needed to specify cognitive processes (I mostly do believe that). It might be close to impossible from an engineering perspective, but that also doesn’t matter for Newcomb’s problem.

        For instance, suppose the Predictor can simulate my cognition arbitrarily well for any practical purpose, has a massive computational advantage, and can parallelize and farm out the task of predicting what I’ll do, so that at a macroscopic level of description it can predict my behavior faster than real time. Then it’s not hard to imagine some neural prosthetic the Predictor could attach to me to manipulate attributes of my connectome and cause me to exhibit certain behaviors. Thus the Predictor, almost by assumption (modulo questions of computational resources), can fit me with a restraining bolt.

        If the Predictor can do this to me, why can’t the Proctor be assumed to be able to do this to the Predictor? The Proctor doesn’t even need to be “smarter” than the Predictor: it could just be a copy of the Predictor with enough additional computational resources to run a second copy faster than real time. Yes, you get a cascade of simulations here. But if we think the Predictor can predict my behavior in the game from only a restricted set of features describing me, and similarly that the Proctor can simulate the Predictor from a restricted set of features, then it’s conceivable that this arrangement could work without needing more processors than there are atoms in the universe, or running into other standard resource-limitation arguments.

        At an engineering / sensitivity / chaotic systems level, this stuff is non-trivial. But I think it’s trivial at a logical level.

      • John Sidles says:

        The Proctoring Problem arises like this, bobthebayesian:

        Predictor: I want to put money in both boxes.
        Restraining Bolt: Do you promise you’re not colluding with Chooser?
        Predictor: I so promise!
           (Chooser takes both boxes)
        Restraining Bolt: Arrgh!

        Thus the Restraining Bolt can reliably foresee whether the Predictor is colluding only by being even smarter than the Predictor.

        The larger point is that the Proctor(s) who enforce the rules of Newcomb’s Game have a job that is computationally even harder than the Predictor’s … perhaps even an impossible job.

        In summary, we should regard as incomplete any discussion of Newcomb’s Game that does not discuss the computational difficulty of proctoring that game.

      • bobthebayesian says:

        I disagree. We should separate instances of the game into (a) cases where we assume there is no Proctor and the Predictor wishes to obey the rules of its own accord, and (b) cases where we do not assume the Predictor wants to play by the rules, and we need to consider the computational burden of a Proctor.

        If we don’t make the distinction then it’s an infinite regress: who proctors the Proctor?

      • John Sidles says:

        Bobthebayesian, we can eliminate the Proctor if and only if we can reliably ascribe two attributes to the Predictor:

        (1) Intelligence sufficient to predict any Chooser, and

        (2) “The Predictor wishes to obey the rules of its own accord.”

        But we can know (2) if and only if we ourselves are smart enough to predict the Predictor. And we know that is not the case! 🙂

        That is why the Proctoring Problem is non-trivial for Newcomb’s Game.

      • bobthebayesian says:

        Sure, but the issue of knowing (2) to be the case is decoupled from the ramifications of Newcomb’s problem given that (2) is the case. Suppose a magic Truth Wizard, who has never been observed to utter a falsehood in a bagillion years, tells you “(2) holds, and I know this without resorting to magic.” If you believe the wizard, then Newcomb’s problem still raises interesting issues about free will. Whether computational intractability means you should never believe such a wizard is a different question from whether Newcomb’s problem, undertaken straightforwardly, has an impact on my free will.

        Let Q = “if Newcomb’s problem were undertaken straightforwardly, then it has an impact on my free will”. Then I claim Q raises interesting issues about free will all on its own, even if you only want to consider it as a counterfactual to claims that (2) should never be believed.

        However, I think the connectome comment I made earlier helps alleviate some of this. Suppose the Predictor learns pretty quickly that it doesn’t need to simulate much about each Chooser to get ~99% accuracy; suppose it quickly realizes it only needs to know what you ate for dinner last night and the most advanced economics course you took in high school or college, assuming you took one. Then it would be pretty easy for the Predictor to simulate the Chooser. Similarly, perhaps only a couple of things need to be known about the Predictor to determine with high accuracy whether it will tell the truth and follow the rules. Then the Proctor doesn’t incur a large computational burden to gain very high confidence that the Predictor is behaving.

        Lastly, I’m surprised that Proctor & Gamble isn’t all over this idea 🙂

      • John Sidles says:

        LOL Bobthebayesian … I think you and I (and everyone else who has considered Newcomb’s Problem) agree that Newcomb’s Problem becomes interesting if and only if the Chooser, the Predictor, and (as we now appreciate) the Proctor(s) too, all base their predictions upon models of each other’s cognition that are sufficiently sophisticated to include notions of free will.

        The thrust of my argument is that there is a stable equilibrium in which the Chooser and Predictor benignly collude to violate the rules of Newcomb’s game, and the Proctor(s) resign because it is their task that is impossible.

      • bobthebayesian says:

        I’m trying to say that free will has this property: if we believe the Predictor earnestly wants to play the game, then Newcomb’s paradox says something about it. That fact, regardless of whatever Newcomb’s problem says about free will, is interesting in its own right. Free will is a thing that predictability and lawfulness affect.

        You seem to be conflating that statement with a different one. If Q = “if we believe the Predictor earnestly wants to play the game, then Newcomb’s paradox says something about free will”, then I claim we learn something regardless of whether we accept the premises of Q, just from the fact that Q would hold under some premises. You seem to think it is the truth of Q itself that matters here, but that’s not what I’m saying.

        I also disagree that the Proctor’s task is intractable (at least, this is not at all obvious to me). Moreover, if I am the player in Newcomb’s game, I have no problem believing the Predictor. If the Predictor tries to collude with me, I’ll know that it tried. If it doesn’t try, then I’ll know that it didn’t try. Either way, from my perspective, Newcomb’s problem would be experienced in the straightforward way without a Proctor.

      • John Sidles says:

        You make some good points, Bobthebayesian, and to be clear, it was never my intention to suggest that the computational feasibility of the Proctor’s role is the sole paradoxical aspect of Newcomb’s game; rather, it is one more paradoxical aspect.

        For example, we can imagine that the roles of Room, Chooser, Predictor, and Proctor are filled by Turing Machines, with the Chooser given access to the Room’s description, the Predictor given access to the Chooser’s description (including the Room), and the Proctor given access to Room, Chooser, and Predictor. Then what restrictions must we place on the computational capability of the Chooser, in order that the Proctor can prove that the Predictor will respect the terms of Newcomb’s game?

        For example, it seems that we have to restrict the Chooser to provably halt; this means that there are some Choosers in P that the Proctor must disallow from playing the Newcomb game! Namely, those Choosers whose runtimes are in P, but not provably so. But this violates an (implicit) assumption of the game, namely that the Predictor can accommodate all Choosers.

        Perhaps such omniscient Predictors do exist, but the point is moot, since no such Predictor can be trusted by the Proctor.

      • bobthebayesian says:

        I agree these questions about a Proctor can raise interesting additional aspects of Newcomb’s problem. But, I’m concerned that some of it is just a battle over words. For example, I would be fine taking the term Chooser to be defined as an agent that will halt and produce a decision about Newcomb’s problem in less than some reasonable but large time bound. I don’t think the paradox loses any of its luster even under that assumption.

        In my view, the criterion that all Choosers must be predictable implicitly means all halting agents who can produce an answer in a reasonable amount of time must be predictable. You’d have to provide an example of a problem on which a human computer won’t halt for me to think that halting or unbounded decision time should be relevant. Aside from “attempting to not die” or something, I don’t think such a problem exists for people. Even if they can’t decide something (in the computability sense of the word decide), they’ll just stop and output some gibberish because they are hungry or need to use the bathroom.

  13. Hagi says:

    Can the “Predictor” really exist? This is certainly an interesting question, as simulating ourselves might be humanity’s best chance at living for much longer than 100 years.

    It is not obvious to me whether the predictor could exist or not. If it cannot, this would probably be because a simulation of the brain would be too sensitive to the initial conditions and/or to interaction with the outside world. I find this option much more depressing than the alternative.

    The alternative seems to reject the idea that we have free will. I certainly do not agree with this. It is not clear to me why, if our free will can be simulated, it is not really free will anymore. To me this implies that our free will is just a combination of the effects of randomness and sensitivity to small perturbations of the state of the brain. This is upsetting, because I would hope that our personality and existence are more robust and stable against random fluctuations.

    I would imagine that there are mechanisms in the brain such that small fluctuations (which aren’t well defined) would not make a difference. Just as in a computer, there must be error-correcting mechanisms that make sure of this. Otherwise physical constraints such as the uncertainty relations and chaotic dynamics would be a problem not only for a future computer trying to simulate the brain, but also for our physical brains. I would prefer that my thoughts and feelings be somewhat robust to fluctuations and not too sensitive to decoherence.

  14. Cuellar says:

    Even if Newcomb’s paradox could be reproduced in principle, I think it is not feasible in reality. First, it is impossible to get a perfect reading of the current state of the world at a given time. Even if that were possible, after a short period of time the output would be intractable; only the universe itself would be powerful enough to calculate its own output over a long period of time.

    Then, of course, we could create a Newcomb’s predictor with merely “good” probability of prediction. I don’t see why this is impossible, and I can imagine the probability going up as technology advances. But it would never reach 100% accuracy, and this is what is important. I further think that there is an upper bound on how precise the predictor can be.

    Nobody claims that our free will is completely free. We are definitely affected by our environment, and our actions do depend on the past. But it’s enough to have a small probability of uncertainty to have free will. In fact, it is easy to show that many (if not almost all) of our actions have clear causes. But it is that small fraction of things that are inexplicable (even if only through intractability) that makes us free beings, at least in practice.
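    The point about imperfect prediction can be made quantitative. With the standard payoffs of the paradox ($1,000,000 possibly in the opaque box, a guaranteed $1,000 in the transparent one), one-boxing already beats two-boxing in expectation once the Predictor’s accuracy p exceeds about 0.5005, as this short expected-value sketch shows:

```python
# Expected payoff of each strategy against a Predictor of accuracy p,
# using the standard payoffs: $1,000,000 (opaque box) and $1,000 (clear box).

def ev_one_box(p):
    # With probability p the Predictor foresaw one-boxing and filled the box.
    return p * 1_000_000

def ev_two_box(p):
    # With probability 1 - p the Predictor wrongly filled the opaque box;
    # the $1,000 in the clear box is collected either way.
    return (1 - p) * 1_000_000 + 1_000

# One-boxing wins exactly when p * 2,000,000 > 1,001,000, i.e. p > 0.5005.
for p in (0.50, 0.51, 0.90, 0.99):
    better = "one-box" if ev_one_box(p) > ev_two_box(p) else "two-box"
    print(f"p = {p:.2f}: one-box EV = {ev_one_box(p):>9,.0f}, "
          f"two-box EV = {ev_two_box(p):>9,.0f} -> {better}")
```

    So on this expected-value reading, a far-from-perfect Predictor already suffices to make the paradox bite.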

  15. sbanerjee says:

    I want to tackle the question of whether the “Predictor” could exist, assuming that humans are “open systems.” What I find interesting is that a human’s interactions with the external environment are not only “inputs”: they can also enter the human and alter him or her in interesting ways. Professor Aaronson mentioned one way that drinking can affect the Predictor’s ability to predict a human’s behavior or decisions; his example concerned how drinking can cause some permanent change in the body that would limit the Predictor’s ability to make a correct prediction.

    Another interesting application of drinking to this question is to consider how, after the Predictor has made its prediction, the human can take in different inputs from the environment that change his or her decision-making process. Specifically, what happens if you get drunk after the game has begun? I would argue that becoming intoxicated makes you somewhat more of an open system than you are when sober: your interactions with your environment have an amplified effect on you when you are drunk, and the amplification varies with how much you imbibe. As a result, the Predictor would have to predict how much the player will imbibe, something that is perhaps not deterministic. The decision of whether to stop drinking depends on how much alcohol you have already drunk… This seems to give the Predictor an uncountably infinite set of options [similar to the real numbers] to consider. If the Predictor must always be correct, then it cannot exist, because it cannot consider an uncountably infinite set of options. If the Predictor need only usually be correct, then perhaps it can exist.

    There is also the counterargument that, after a certain point in this sequence of considerations that extends into an infinite set of options, the minuscule amounts of alcohol might not affect your decision-making process at all.

  16. John Sidles says:

    Bobthebayesian and Sbanerjee, perhaps one strategy for clarifying and focusing the (many) interesting issues that your posts raise begins by seeking a concrete answer to the question: Why is Newcomb’s Paradox (seemingly) harder to analyze than the Blue-Eyed Islanders Puzzle (as expounded by Terry Tao, for example)?

    Why is the former regarded as “a paradox” and the latter merely as “a puzzle”?

    Both conundrums can be analyzed in terms of the Predictor-Chooser-Proctor model, if we regard each islander as effectively having a tricameral mind that every morning (1) predicts the state-of-mind of all the other islanders, (2) chooses whether to commit suicide, and (3) provides (to a virtual proctor/conscience) a certificate rigorously justifying the prediction in accord with the rules of logic and the choice in accord with the customs of the island.

    Then a natural and reasonably well-posed question is, why are the tasks of predicting, choosing, and proctoring (seemingly) easier on the Island of the Blue-Eyes than in Newcomb’s Room?

    A related question is: are there “Cheating Choosers” who must be denied admittance to Newcomb’s Room, just as cheaters are not tolerated on the Island of the Blue-Eyes? One kind of Cheating Chooser is (obviously) any Chooser who is found to be in possession of a random-number generator. But are there other viable strategies for Cheating Choosers? In this regard I commend Juris Hartmanis’s textbook Feasible Computations and Provable Complexity Properties as a thought-provoking starting point.

    • bobthebayesian says:

      Could you elaborate a little on why this is related to Newcomb’s paradox? I’m not grasping the connection.

      I was actually asked a version of the Blue-Eyed Islander’s Paradox during a finance interview (their version was about wives inductively learning that their respective husbands had cheated on them).

      In my view, this inductive answer is straightforward because an external piece of information is introduced. Someone outside the group uttered the fact that at least one person has blue eyes. If any islander merely thought that internally, it wouldn’t be common knowledge, and so the possibility that one’s own eyes are blue wouldn’t affect any other islander. But once it is uttered as common knowledge, that datum has the prospect of informing others of their own eye color, resulting in their suicide. If I don’t have blue eyes, then I’ll observe them commit suicide some fixed number of days from now; otherwise I have blue eyes. That’s my take, anyway. The internal knowledge that some of the people have blue eyes is logically different from the common knowledge that the visitor saw a person with blue eyes (and that everyone knows the visitor saw this, and that everyone knows that everyone knows the visitor saw this …)
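      The induction described above is easy to verify with a short simulation (a minimal sketch; the population sizes are arbitrary, and “acting” stands in for the islanders’ fatal deduction):

```python
# Simulate the Blue-Eyed Islanders induction: after the visitor's public
# announcement, all k blue-eyed islanders deduce their eye color on day k.
def blue_eyed_simulation(num_blue, num_brown):
    assert num_blue >= 1  # the visitor's statement must be true
    eyes = ["blue"] * num_blue + ["brown"] * num_brown
    for day in range(1, num_blue + 1):
        acting = []
        for i, color in enumerate(eyes):
            others_blue = sum(1 for j, c in enumerate(eyes)
                              if c == "blue" and j != i)
            # On day d, an islander who sees d-1 blue-eyed people (all of
            # whom failed to act on day d-1) concludes their own eyes are blue.
            if color == "blue" and others_blue == day - 1:
                acting.append(i)
        if acting:
            return day, len(acting)

day, count = blue_eyed_simulation(num_blue=100, num_brown=900)
print(f"All {count} blue-eyed islanders act on day {day}")
```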

  17. John Sidles says:


    bobthebayesian says: Could elaborate a little on why this is related to Newcomb’s paradox?

    Well (just for fun) let’s imagine that the Islanders are deeply thrilled that Terry Tao (and many others) have clarified their customs to the point that every Islander can confidently delegate their activities to three kinds of Turing Machine:

    • a Chooser TM that chooses life versus suicide

    • a Predictor TM that assesses the state of all the other Predictors

    • a Proctor TM that certifies (by exhibiting a concrete proof) that everyone’s Chooser and Predictor are functioning in accord with

      – the rules of logic, and

      – the customs of the Island.

    Now the Islanders want to know: Is there an interesting way to program their Predictors, Choosers, and Proctors so as to play Newcomb’s Game?

    It is clear that making the TM version of Newcomb’s Game “interesting” requires careful navigation between the Scylla of making the Predictor TM’s job impossible (by allowing the Chooser a random number generator, for example) and the Charybdis of making the Predictor TM’s job trivial (by restricting the Chooser TM to be provably in P, and giving the Predictor access to the code).

    Thus, everything depends upon the rules that the Proctor TM enforces. A candidate set of rules is:

    • Chooser arrives with his Chooser TM, and Predictor arrives with his Predictor TM.

    • Neither Chooser nor Predictor can subsequently alter their TMs.

    • Proctor provides to both Chooser and Predictor an input tape having arbitrary content.

    • Predictor must provide to Proctor a proof that the Predictor TM halts for all inputs.

    • The Chooser must provide the Chooser TM’s input tape to the Predictor; this stipulation is called the Predictor’s Advantage.

    • The Chooser TM must halt for all inputs, but Chooser need not provide a proof of this fact; this stipulation is called the Chooser’s Advantage.

    The point is that it’s not clear (to me as a complexity theory non-expert) which dominates: the Chooser’s Advantage (that Chooser need not prove Halting)? Or the Predictor’s Advantage (that Predictor sees Chooser’s TM)?

    Intuition: No such Predictor TM can exist — thus Chooser can defeat Predictor.
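    The diagonal intuition behind this can be sketched directly (a hypothetical toy, not a formal proof): any Chooser that can run the Predictor’s code on its own description simply does the opposite of whatever is predicted.

```python
# Diagonalization sketch: a Chooser handed the Predictor's own code can
# defeat any total Predictor by inverting its prediction about itself.
# (A toy illustration of the intuition, not a formal proof.)

def make_contrary_chooser(predictor):
    """Build a Chooser that does the opposite of whatever the Predictor says."""
    def chooser():
        predicted = predictor(chooser)  # the Predictor may inspect/simulate us
        return "two-box" if predicted == "one-box" else "one-box"
    return chooser

# Any concrete (and hence fallible) Predictor serves for the demonstration;
# this one predicts "one-box" for every Chooser.
def naive_predictor(chooser):
    return "one-box"

chooser = make_contrary_chooser(naive_predictor)
assert chooser() != naive_predictor(chooser)  # the Predictor is always wrong
print("the contrary Chooser defeats this Predictor")
```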

    • John Sidles says:

      And as a point of clarification, in the above narrative the Chooser, Predictor, and Proctor are all passive spectators — rather like programmers in a chess tournament — such that all computations are carried out by their respective TMs. Then the intuition associated with the above narrative amounts to this (which an expert might possibly regard as trivial): no TM that decidably halts can predict every Chooser TM that halts, but not decidably.

    • John Sidles says:

      And as one final point of clarification: for most folks (including me), the interesting aspect of programming the Chooser, Predictor, and Proctor TMs to play the Blue-Eyed Island Game was the representation of knowledge necessary to program the Predictor TM; in particular, the need to model N levels of understanding. Once this is understood, TM code that correctly plays the Blue-Eyed Island Game is easy to write, short, and efficient.

      In contrast and by construction, the Balanced Advantage Newcomb Game is far more difficult for the Islanders to program, and indeed, it seems likely (to me) that the Predictor TM cannot be coded at all, in essence because of what Juris Hartmanis wrote:

      Results about the complexity of algorithms change quite radically if we consider only properties of computations which can be proven formally. … Results about optimality of all programs computing the same function as a given program will differ from the optimality results about all programs which can be formally proven to be equivalent to the given program.

      Intuitively speaking, no Predictor TM can do what the terms of balanced advantage require: that Predictors explain to us how they do it and assure us that their methods always will succeed.

      Most importantly, and without regard for whether the above analysis is right, wrong, or simply muddled, I’d like to express my appreciation to Terry and Scott for providing venues where these wonderfully interesting issues can be shared and explored.

  18. John Sidles says:

    By the way, I was led to this discussion via a link that Scott posted last week on the Lipton/Regan weblog Gödel’s Lost Letter and P=NP; at that time the link led directly to this discussion of Newcomb’s Paradox, with no indication that this was a student forum.

    So if the above analysis of Newcomb’s Paradox as a “Balanced Advantage TM Game” adversely obtrudes upon MIT student discussions, then please let me apologize, because I have only admiration for this wonderful course and for Scott’s outstanding choice of lecture topics.

  19. Pingback: Does P contain languages decided solely by incomprehensible TMs? | MoVn - Linux Ubuntu Center

  20. Pingback: Does P contain incomprehensible languages? (TCS community wiki) | Question and Answer
