Class #13: The Singularity and Universe as Computer

(Participation is optional.  For additional discussion about Newcomb’s problem and free will, please post in thread #12b.)

Feel free to post any thoughts you’ve had related to the topics discussed in class #13:

  • The concept of a “technological singularity” and what exactly we should interpret it to mean
  • What the fundamental physical limits are on intelligence
  • Whether it’s useful to regard the universe as a computer, what that statement even means, and whether it has empirical content
  • Whether any of the following considerations are obstructions to thinking of the universe as a computer: quantum mechanics; apparent continuity in the laws of physics; the possible infinitude of space; a lack of knowledge on our part about the inputs, outputs, or purposes of the computation

18 Responses to Class #13: The Singularity and Universe as Computer

  1. A scientific theory can be considered a Turing machine that simulates the evolution of the observable universe. This is not at all the same thing as considering the universe itself to be a Turing machine.

    As far as I can see, though I am not an expert, deep presuppositions both in quantum theory and in the working philosophy of scientists are not consistent with the universe being a Turing machine.

    As I understand it, quantum mechanics assumes that observations are an irreducibly random — both in the sense of happening by chance, and in the sense of being computationally irreducible — selection from a probability wave. Such randomness is not computable, and if quantum mechanics is true, then the universe as a whole is not computable either, although any finite set of observations can of course be simulated by a Turing machine. In addition, as far as I understand it, the experimental falsification of local hidden-variable theories shows that if scientists are free to set up any experiment, then physical theory must be formulated as though the irreducible randomness I mentioned does exist. This means that the “working metaphysics” of scientists, who in fact assume that theory should be predictive for any experimental setup, is not consistent with the universe as a whole being a Turing machine.

    Hidden within these questions are two approaches or attitudes in the metaphysics of science. The first attitude is fully satisfied if the evolution of the observable universe can be simulated by some Turing machine in a way that is not falsified by observation, and the second attitude continues to wonder what is “really” going on beyond this. I would say that Quine exemplifies the first attitude, and Goedel exemplifies the second attitude. I would also say that I am puzzled by the apparent fact that most physicists reject hidden variable theories, but also seem to think that the universe is nevertheless some sort of Turing machine. Perhaps I just don’t understand.

    • Hi Michael,

      The key to resolving your puzzle is that randomness, per se, is not especially mysterious or hard to handle: we can easily simulate it by the simple device of equipping our Turing machine with a random number generator. (And even a deterministic TM can calculate the probability of any particular outcome we care about, and can also output the ensemble of all possible outcomes together with the probability of each.) For this reason, it seems to me that, if you want to find a quantum-mechanical obstruction to the universe being “computational”, then at the least, you’ll need to invoke some deeper feature of QM than just the fact that measurement outcomes are random. (One possible example of such a feature, which Bohr emphasized, is our lack of complete knowledge of the initial state, owing to the uncertainty principle. If you know the initial state, then the Born rule lets you calculate the probability of any possible outcome of any possible measurement on that state. But what if you don’t or can’t know the initial state?)
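
      To make this concrete, here is a minimal Python sketch (purely illustrative, with coin_process standing in for an arbitrary randomized computation): a randomized machine samples one outcome, while a deterministic machine enumerates every possible random tape and outputs the exact probability of each outcome.

        import itertools
        import random

        def coin_process(bits):
            # Stand-in for any randomized computation: here, just the
            # number of 1s on the machine's "random tape".
            return sum(bits)

        def sample_once(n):
            # The randomized TM: draw a random tape, return one outcome.
            tape = [random.randint(0, 1) for _ in range(n)]
            return coin_process(tape)

        def exact_distribution(n):
            # The deterministic TM: enumerate all 2^n tapes and tally the
            # exact probability of each outcome. No randomness required.
            dist = {}
            for tape in itertools.product([0, 1], repeat=n):
                outcome = coin_process(tape)
                dist[outcome] = dist.get(outcome, 0) + 2.0 ** -n
            return dist

        print(sample_once(4))         # one random outcome, e.g. 2
        print(exact_distribution(4))  # {0: 0.0625, 1: 0.25, 2: 0.375, 3: 0.25, 4: 0.0625}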

  2. I’m sorry, but you seem to be responding way too quickly…

    As far as I can see, your assumption that we can equip a Turing machine with a random number generator simply begs the question.

    The question, after all, is whether the universe, which by definition includes any and all random number generators including the perfect one assumed by quantum theory, is or is not Turing computable.

    It seems to me that a blurring of concepts, or equivocation, begins to occur here. I’m not quite certain whether this blurring is in my own understanding of the universe as computer concept, or in the minds of those propounding the concept.

    The physical Church-Turing thesis, on the face of it, seems to assume a Turing machine of finite Kolmogorov complexity whose output is decidable. Your proposal for a Turing machine with a perfect random number generator would be of denumerably infinite Kolmogorov complexity if events are discrete, non-denumerably infinite complexity if events are continuous.

    The output of the first machine is recursively computable and decidable; the output of the second machine is not recursively computable but is recursively enumerable and undecidable. The “program” of the second machine is infinitely long and infinitely complex.

    These two things are obviously quite different steps on the ladder and have very, very different philosophical implications…

    I have a feeling there are many further distinctions and elaborations to be made here.

    Hoping for a slower answer,
    Mike

    • bobthebayesian says:

      I do not understand when you say, “Your proposal for a Turing machine with a perfect random number generator would be of denumerably infinite Kolmogorov complexity if events are discrete, non-denumerably infinite complexity if events are continuous.”

      Denumerably infinite and non-denumerably infinite (why not just say countable and uncountable?) refer to the sizes of sets. But Kolmogorov complexity refers to a natural number: the length of a minimal Turing machine. I don’t understand how this number can be countable or uncountable; it’s just a number. Do you mean instead that we would be computing the K-complexity of a denumerable set (resp. non-denumerable set)? Otherwise your concepts seem mistaken here, or I am unaware of how the term denumerable is being used in this context.

      • bobthebayesian says:

        Perhaps this post is helpful? I’m just very confused by what comparison between K-complexity and uncountability you’re trying to make; it’s easy to misconstrue K-complexity as saying something about uncountable sets of things, when really it’s about strings.

      • Thanks for your comment. I am not an expert in this field so it is possible I have used the wrong term.

        What I mean to say is that if the universe contains a source of perfect randomness, then considered as a machine, it has to be of infinite complexity. If the universe is “discrete” in that the “computation” proceeds step by step as in a digital computer, then its complexity with perfect randomness is infinite and the number of steps is countable. If the universe is “continuous” in that between any two points of “computation” there is another, then its complexity with perfect randomness is infinite and the number of “computations” is uncountable.

        What I am trying to get at here is what I see as an ambiguity in the concept of “universe as machine.” Is it a finite machine or an infinite machine? What do you think?

      • bobthebayesian says:

        For one, we don’t really know if the universe is infinite or finite, so it’d be pretty speculative to say something about whether it is infinite “as a machine” or not. If it turned out that the universe was some complicated but finite manifold, would that make you more likely to think it was a finite Turing machine? I’m not sure it would make that more likely to me.

        Secondly, there is a big difference between the two ideas (1) uncountability and (2) ““continuous” in that between any two points of “computation” there is another”. Even a countably infinite set can have property 2 (just take the set of all rational numbers, including negative ones: between any two rationals lies another rational, yet the set is countably infinite). To get real continuity in the universe, you need something a lot stronger. Another thing is that there are approximation theorems that tell us that even if the universe were continuous, we could approximate it arbitrarily well with a discrete representation. For me, this more or less removes any concern for whether or not it is discrete. “For all practical purposes” it is discrete.
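
        (A tiny illustration of that density point, using Python’s exact rational arithmetic: the midpoint of two rationals is again a rational, so the halving below never runs out of points, even though the rationals are merely countable.)

          from fractions import Fraction

          a, b = Fraction(1, 3), Fraction(1, 2)
          while b - a > Fraction(1, 10**6):
              b = (a + b) / 2   # the midpoint of two rationals is a rational again
          print(b)              # a rational within 10^-6 of 1/3, reached in ~18 halvings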

        The real question you are wanting to ask here is whether the universe, when considered as a formal program, has to have infinite complexity due to the existence of “true randomness” inside of the program. Whatever submodule inside of the program Universe() that gets called, PerfectRandomGenerator(), can’t be compressible in any way or else it is not really random. This is exactly what K-complexity is used for. A string is considered random if no program shorter than the string itself can compute it.

        And to me, this is the problem with what you’re asking. Is the description of the program Universe() shorter than the description of any program which can compute Universe()? This just opens a can of worms. What meta-programming language are we talking about? Can we write other universe programs in this language, and if so, does this mean we have to talk about some Platonic world of universe programs, one of which happens to correspond to our own? (Be careful: this is not at all similar to the Many Worlds questions.)

        I can’t quite articulate exactly what I want to say right now, but it seems the problem is that “infinite complexity” starts to deal with subjectivity. Any actually existing random number generator in the universe will be a finite thing, and since we can write down a description of any finite random number generator, it can’t have infinite K-complexity. So whatever this “perfect random number generator” is, it can’t exist inside the universe, because its very existence would mean we could write down a succinct program that generates truly random strings — but truly random strings are precisely the ones that can’t be generated by any succinctly describable program.
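
        (A crude way to see the finite-description point in practice; this is only a sketch, since true K-complexity is uncomputable. The length of a compressed string is a computable upper bound on its K-complexity, and a string produced by a short program compresses vastly better than output drawn from a physical entropy source.)

          import os
          import zlib

          def k_upper_bound(s: bytes) -> int:
              # Compressed length is a computable UPPER BOUND on Kolmogorov
              # complexity (up to an additive constant); the true value is
              # uncomputable.
              return len(zlib.compress(s, 9))

          regular = b"01" * 5000         # output of a ~20-character program
          entropic = os.urandom(10000)   # OS entropy source: nearly incompressible

          print(k_upper_bound(regular))  # tiny: zlib finds the repetition
          print(k_upper_bound(entropic)) # ~10000: no computable regularity found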

    • Scott says:

      Mike: You asked an answerable question, I answered it, and the problem is that I did so too quickly? 😉 Recall, you expressed confusion about why “most scientists” aren’t troubled by what you thought of as the contradiction between computability and randomness:

        This means that the “working metaphysics” of scientists, who in fact assume that theory should be predictive for any experimental setup, is not consistent with the universe as a whole being a Turing machine … I would also say that I am puzzled by the apparent fact that most physicists reject hidden variable theories, but also seem to think that the universe is nevertheless some sort of Turing machine. Perhaps I just don’t understand.

      Given how you expressed your puzzlement, I thought it would suffice to explain why “most physicists” (and computer scientists, FWIW) don’t perceive any contradiction. Knowing the relevant technical points, you’re then free to ponder the philosophical implications and draw your own conclusions.

      So, briefly:

      – A Turing machine equipped with a random number generator is called a randomized Turing machine. However you choose to classify randomized TMs (as types of TMs, variants of TMs, etc.), they’re extremely well-studied and familiar objects in theoretical computer science. Shannon, together with de Leeuw, Moore, and Shapiro, proved that, if you care about language decidability, then a randomized TM decides exactly the same set of languages as a deterministic TM—i.e., the randomness gives you no advantage. Today, we conjecture the same thing in complexity theory (P=BPP). Of course, it’s trivially the case that randomized TMs can do one thing that deterministic TMs can’t: namely, they can output random strings! But if you instead ask: what exactly did you want the random string for? What, using the random string, did you want to know? then you need to ask whether a deterministic TM could have told you that thing as well, and the answer is usually yes. On the other hand, it’s sort of a moot point anyway, since most CS theorists these days are perfectly happy to talk about randomized TMs and forget about the deterministic ones.
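
      A standard example of the “what did you want the random string for?” point (my own illustration): Freivalds’ algorithm uses randomness to verify a claimed matrix product quickly, while a deterministic machine can answer the very same question outright, just more slowly.

        import random

        def freivalds(A, B, C, trials=30):
            # Randomized check of the claim A*B == C (Freivalds' algorithm).
            # Each trial catches a false claim with probability >= 1/2, so the
            # error probability after 'trials' rounds is at most 2**(-trials).
            n = len(A)
            for _ in range(trials):
                x = [random.randint(0, 1) for _ in range(n)]
                Bx = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
                ABx = [sum(A[i][j] * Bx[j] for j in range(n)) for i in range(n)]
                Cx = [sum(C[i][j] * x[j] for j in range(n)) for i in range(n)]
                if ABx != Cx:
                    return False   # certainly unequal
            return True            # equal, except with probability 2**(-trials)

        def deterministic_check(A, B, C):
            # A deterministic machine answers the same question outright by
            # multiplying the matrices: slower, but no random string needed.
            n = len(A)
            AB = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                  for i in range(n)]
            return AB == C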

      – According to quantum mechanics, you can’t predict exactly when a radioactive atom is going to decay. But you can calculate the exact probability distribution over decay times! So, given a collection of atoms, you can predict to enormous accuracy what fraction of them will have decayed within any given time interval. Given the almost-perfect predictability of a large ensemble of random systems (something we already knew, incidentally, from 19th-century thermodynamics), it’s hard for most physicists to take seriously the idea that the uncertainty in decay time reflects any sort of “free will” or “uncomputability” on the atoms’ part. Of course, you’re free to define words however you want—but whatever you call this sort of unpredictability, it doesn’t seem to be the sort that can provide any reassurance to anyone who worries about science being too mechanistic.
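
      Numerically (a toy sketch with a made-up half-life): quantum mechanics hands us the exact distribution P(decayed by time t) = 1 − exp(−λt), and a simulated ensemble of a million atoms tracks that prediction to about three decimal places, even though no single decay is predictable.

        import math
        import random

        HALF_LIFE = 1.0                  # arbitrary time units (illustrative value)
        LAM = math.log(2) / HALF_LIFE    # decay constant

        def fraction_decayed_exact(t):
            # The exact quantum-mechanical prediction for the ensemble.
            return 1 - math.exp(-LAM * t)

        def fraction_decayed_simulated(t, n=10**6):
            # n atoms, each with an exponentially distributed decay time.
            # Individually unpredictable; collectively almost perfectly so.
            return sum(random.expovariate(LAM) <= t for _ in range(n)) / n

        print(fraction_decayed_exact(2.0))      # 0.75 exactly (two half-lives)
        print(fraction_decayed_simulated(2.0))  # ~0.75, e.g. 0.750214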

      – (Added) Regarding your point that a randomized Turing machine has “infinite Kolmogorov complexity”: I’d say that’s a misleading way to describe what’s going on. The specification of a randomized TM consists only of its finite state transition diagram; the random string r is then thought of as simply another input (alongside the “real” input x). While it’s true that r “could be absolutely anything”, just like in my radioactive decay example, the overall properties of r are nevertheless known with confidence approaching certainty! For example, it’s astronomically unlikely that the number of 1s in r deviates significantly from the number of 0s, or indeed that r contains any other computable regularities whatsoever. Another way to say this is that, even though r contains infinitely many bits, it contains zero bits about anything other than its own rather-uninteresting self!
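
      To quantify “astronomically unlikely” (a back-of-the-envelope sketch using the standard Hoeffding bound): among the first million bits of r, the chance that the fraction of 1s deviates from 1/2 by even one percentage point is bounded as follows.

        import math

        def deviation_bound(n, eps):
            # Hoeffding's inequality for n fair coin flips:
            # Pr[ |fraction of 1s - 1/2| >= eps ] <= 2 * exp(-2 * n * eps**2)
            return 2 * math.exp(-2 * n * eps ** 2)

        print(deviation_bound(10**6, 0.01))  # ~2.8e-87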

      • Thanks to both bobthebayesian and Scott for your responses, which come much closer to what I am trying to say. I’ll try to make myself even clearer.

        bobthebayesian said:

        The real question you are wanting to ask here is whether the universe, when considered as a formal program, has to have infinite complexity due to the existence of “true randomness” inside of the program. Whatever submodule inside of the program Universe() that gets called, PerfectRandomGenerator(), can’t be compressible in any way or else it is not really random. This is exactly what K-complexity is used for. A string is considered random if no program shorter than the string itself can compute it.

        Yes, this is exactly what I am saying.

        bobthebayesian continued:

        I can’t quite articulate exactly what I want to say right now, but it seems the problem is that “infinite complexity” starts to deal with subjectivity. Any actually existing random number generator in the universe will be a finite thing, and since we can write down a description of any finite random number generator, it can’t have infinite K-complexity.

        I’m sorry, but I simply don’t see how this doesn’t just beg the question. Why exactly should any actually existing random number generator in the universe be a finite thing? Nobody has yet explained to me why quantum mechanics does not assume the existence of perfect randomness.

        Scott considers this:

        According to quantum mechanics, you can’t predict exactly when a radioactive atom is going to decay. But you can calculate the exact probability distribution over decay times! So, given a collection of atoms, you can predict to enormous accuracy what fraction of them will have decayed within any given time interval. Given the almost-perfect predictability of a large ensemble of random systems (something we already knew, incidentally, from 19th-century thermodynamics), it’s hard for most physicists to take seriously the idea that the uncertainty in decay time reflects any sort of “free will” or “uncomputability” on the atoms’ part. Of course, you’re free to define words however you want—but whatever you call this sort of unpredictability, it doesn’t seem to be the sort that can provide any reassurance to anyone who worries about science being too mechanistic.

        Because the question here is one of philosophy and not of science or engineering, I’d say that the distinction between “almost-perfect” and “perfect,” or between “approaching certainty” and “certainty”, is in fact critical! It’s the difference between “ACTS LIKE a machine,” and “IS a machine.”

        I would also say that the “overall properties” of a random string are by no means the only interesting things about it, and even just one bit of difference could be an utterly critical difference, especially if that one bit determines which of two paths the whole universe takes.

        To revert to my original post, if you are satisfied with “almost-perfect” and “approaching certainty” and that convinces you the universe is a machine, then to me you appear to share the naturalism of Quine, whereas if you are NOT satisfied and want to know what IS, you appear to share the Platonism of Goedel.

      • Scott says:

        Mike: In that case, I’d say that ironically, your “mystical” impulses are easier to satisfy than mine! 🙂 Once you concede that physical systems are probabilistically predictable, and large ensembles of such systems almost-perfectly so, you’ve given your Gödelian mystical idealism hardly any room with which to act. On this view, a superintelligence who knew the current state of the universe could presumably predict the probability of any significant event in your life over the next year: 90% chance of this, 35% chance of that. Then all that would be left for your “free will” to do would be to sample from that precisely-calculated probability distribution (which, in contrast to weather forecasts, is assumed to be perfectly accurate). I say: if you’re going to believe in a non-mechanistic free will at all, why not hold out for more? Why not conjecture that the superintelligence couldn’t even predict the probabilities very accurately, but could at most crudely bound them?

      • That’s interesting. What do you think about this? Personally, I have no idea what to make of free will, except I have a feeling it exists. In other words, I certainly don’t feel bound to sample from a distribution.

        Anyway, although I appreciate your response about probability, I would be more interested in your response to the philosophical issue of the truth or validity of the so-called “physical Church-Turing thesis.”

      • bobthebayesian says:

        Many people seem to care a great deal about the fact that it feels like we are not bound to sample our behaviors from a complex distribution. I think it’s one of the main reasons why Searle argues that quantum randomness must somehow feed forward its indeterminacy (but not its randomness) to the macroscopic processes working inside brains. Most ordinary folks also tend to cite the feeling that they are choosing their actions as the overwhelming evidence that we are in lucid control of our behaviors.

        I’m not convinced by this way of thinking. One might say that our aim in philosophy, mathematics, and physics is to give an account or a description that analytically reproduces aspects of our experience in a consistent way. But the problem is that time and time again, we learn that there are meta-aspects to our experience that override the more basal aspects of experience.

        For example, knowledge of internal medicine and beliefs that emotions are seated either in the heart or the bowels are at odds with one another. We just had a feeling that emotions were seated in {heart,bowels} and attempted to construct theories that reproduced this aspect of our experience. But then we learned more about internal medicine and the mechanistic functions of the heart and bowels, and the correlation between brain damage and personality changes, and gradually came to understand that emotions are “located” in the brain (as an approximation).

        For me, reliance on “basal perceptions of free will” to drive my beliefs about free will is similarly prone to anthropocentric inaccuracies. My feeling like X is not a good reason to require analytic descriptions of the world that reproduce X. And in the case of free will, I feel it’s similar to the creationism/evolution debate. On one hand, we feel like we’re a special species with innate superiority, and religious explanations offer a way to reproduce that feeling analytically. On the other hand, we have evidence and detailed understanding of a mechanistic process that would lead to our feeling like we’re a special species with innate superiority.

        For free will it’s the same. Can you imagine what it would take to program a computer to behave and act as if it had free will, or at least to report the feeling of having free will whenever queried about its actions? The internal qualia of feeling like one has free will are a totally different thing from free will itself.

        I’m not saying that the problem of free will is dissolvable. The “merely-crude-probability-bounds” argument is interesting and non-trivial. But I do not think that our drive to invent theories that reproduce the feeling of free will should be affecting people as ubiquitously as it seems to.

      • bobthebayesian says:

        Incidentally, I am in the middle of reading Thinking, Fast and Slow, published earlier this year by Daniel Kahneman. There are a lot of striking attributes of fast-mode thinking that would seem to cast certain aspects of free will into doubt. It will be interesting to see whether a comprehensive way of understanding heuristics and biases, and how to stitch them together, can account for nearly all of human thought. I’m sure that AI researchers really want to have a more formalized grasp of heuristics and biases in this sense. FWIW, a lot of researchers in sensory modalities also think along these lines. I’m reminded of the book The Ecological Approach to Visual Perception by James Gibson.

      • Scott says:
          Anyway, although I appreciate your response about probability, I would be more interested in your response to the philosophical issue of the truth or validity of the so-called “physical Church-Turing thesis.”

        Mike: The trouble is that people interpret the “physical Church-Turing Thesis” in different and often-conflicting ways. As I interpret the thesis, though, to refute it you would need to show that it’s possible in principle to build a physical computing device that solves a well-defined computational problem (decision, search, sampling, whatever) that’s unsolvable by a randomized Turing machine. And on that reading, I’d say there are very strong grounds from current physics to believe that the physical Church-Turing Thesis holds.

        Note, however, that one could accept the physical Church-Turing Thesis (in my interpretation), while denying the claim that “the universe is a giant computation.” To do so, one would say that, while the universe might have “non-computational” elements (e.g., free will), whatever those elements are, they can’t reliably be used to solve the halting problem, or any other Turing-unsolvable mathematical problem that we’re able to describe in advance.

  3. Scott:

    I’m not sure whether it is, or is not, possible in principle to build a physical computing device that solves a well-defined computational problem that’s unsolvable by a randomized Turing machine.

    If it were possible in principle, I would have no idea at all how to use this knowledge.

    In any case I agree that non-computational elements of existence can’t, couldn’t, reliably be used for computing. I do wonder with what kind of deliberation you chose the word “reliably.”

    • Scott says:
        I do wonder with what kind of deliberation you chose the word “reliably.”

      And I do wonder with what kind of deliberation you wrote that sentence! 😉

  4. rationalist says:

    Scott, how seriously do you take the singularity hypothesis – by which I mean the emergence of significantly smarter-than-human intelligence?

    For example, what probability do you assign to the development of a computer of some kind exhibiting significantly smarter-than-human intelligence in the next 10, 20, 30, 40, 50, 60, and 70 years?

    My own view is that David Chalmers’ comments are the most balanced on this question:

    “Nevertheless, my credence that there will be human-level AI before 2100 is somewhere over one half” (http://consc.net/papers/singularity.pdf)

  5. Scott:

    I’ve read the slides from your talk on free will at http://www.scottaaronson.com/talks/freewill.ppt, and they go a long way towards answering my questions about what you really think. I think.

    I’ll be reading this for a while (at least) before I have anything worth saying. I do think that your approach is original and well worth pursuing.
