Class #2: The Extended Church-Turing Thesis, the Turing Test, and the Chinese Room

What are the best arguments for polynomial-time as a criterion of efficiency, and exponential-time as a criterion of inefficiency? How strong are those arguments? How should we handle the problem of n^{10000} and 1.0000001^n algorithms?

What sort of statement is the Extended Church-Turing Thesis (which equates the problems that would “naturally be regarded as polynomial-time computable” with those solvable in polynomial time by a deterministic Turing machine)?  Does the ECT raise any philosophical issues that are different from those raised by the original Church-Turing Thesis?

Should a “giant lookup table” be regarded as conscious, assuming it passes the Turing Test?  Given that such a lookup table couldn’t even fit inside the observable universe, does the question even matter?

If we say that a giant lookup table that passed the Turing Test wouldn’t be conscious, then what additional attributes, besides passing the Turing Test, should be sufficient to regard a program as conscious?  Are time and space complexities that are polynomially-bounded in the length of the conversation enough?  Or can we also demand other attributes, related to the program’s internal structure?

If we try to understand Searle’s Chinese Room Argument as sympathetically as possible, what can we take Searle to be saying?  Even if Searle himself refuses to specify, is there any plausible scientific candidate for what the “causal powers” that he talks about could consist of?  Could those causal powers have anything to do with computational complexity?


43 Responses to Class #2: The Extended Church-Turing Thesis, the Turing Test, and the Chinese Room

  1. ezyang says:

    “I’ve come to see the oracle,” you tell the wizened old man.

    “Ah yes, well, come along, come along.” He takes a dusty lantern and leads you into the ancient stacks of books. “Won’t be long now…ah, yes, here she is.” His finger comes to rest on a thin, blue paper jacket. “The oracle will see you now.”

    “But, there is no other person here—” You are cut off as the man thrusts the book into your arms. You reluctantly open it and begin reading.

    “Welcome! I am the Oracle of Delphi. Do not be alarmed by my appearance. Though I do not appear before you in flesh and bone, I am alive and well in the books of this library. I have predetermined every possible question you may have for me, and written down every response I would have given you. All of these conversations are cataloged in this library. You only need find them.” The rest of the book is an index, explaining how to convert a question into a location in the stacks.

    You turn to the man. “What tricks are these! There is no oracle here, only a library claiming to be the work of a preternatural augur. I have little faith in those who claim to tell the future: I seek the oracle for consultation, not for tea-leaves and fortune cookies.” But the man only smiles, and replies, “While it is true that the Oracle was never clairvoyant, you underestimate this library. Unimaginably vast, it not only contains answers to the questions you seek, but to every other question that any person who ever lived or will live has asked. Questions in languages you never knew existed, on concepts you cannot comprehend. Questions that would make the Gods blush, questions that would make even foolish men seem wise. You see, it contains an answer to every question that can fit on a page of this book. The library is indiscriminate. You could spend the lifetime of a universe wandering these books, browsing the questions.”

    But then he sighs, slouching a little. “Unfortunately, the Oracle was not all-seeing. She could only write down answers for what she understood; many of the books tell you no more than you could ask a schoolboy. Mind you, the Oracle was wise and far-seeing, and many of her answers are truly insightful. But there is only so much she can say in reply to ‘hjak as wekajkv ksalkslk sk.’” He straightens up. “Well then, I hope you have your question in mind, so we can begin our descent.” He gestures at the distant blackness.

    “Will it be long to the answer?”

    “Oh,” the old man chuckles. “That depends on how long your question is. First letter please?”

    • Scott says:

      Best. Reaction. Essay. EVAR. (Well, at least so far! 🙂 )

      • Temporal-Spatial Caching for Fun and Prophet

        There is a famous temple in South India which houses hundreds of bundles of palm leaves, with random people’s life stories written in an ancient script. These bundles are, by rotation, circulated through several auxiliary branches located throughout the country.

        You go to one of these branches, and give them your thumb impression. The palm bundles are hashed into buckets, and the hashing algorithm is a function of thumb-print features.

        Each hash bucket contains several bundles. To locate the one corresponding to you, they will ask you for the first syllable of your name (I think the buckets are sorted by name) and read out a couple of details from the index leaf (like the number and sex of your siblings). If you are lucky, they will confirm that they have leaves for you…

        … and then, they will tell you your name, your date of birth, and details of your past in uncomfortable detail. All this is recorded on audio-cassette and presented to you when you leave.

        Legend has it that it was written by sages who were not only prescient but also very smart, so that they only ever needed to write leaves for people who would, with high probability, ask to see their leaves. Moreover, the geographic pattern of circulation is pre-determined, so as to maximize the possibility of “hits”. Clearly, if you can predict the future, preloading caches becomes much more effective.

        The whole system is called Naadi astrology. Unfortunately, online references to Naadi are garbled with gobbledygook. I got the lowdown from two friends, who had the fortune – or misfortune – to get immediate cache hits. It profoundly and permanently altered their theory-of-universe.

        I was going to milk it for a Library of Babel-like story, but what the hell, I neither have the time nor the writing skills.

    • tjdelgado says:

      Was your story inspired by Jorge Luis Borges’ “The Library of Babel,” by chance?
      Linky: http://jubal.westnet.com/hyperdiscordia/library_of_babel.html

      Compare:
      “But there is only so much she can say in reply to ‘hjak as wekajkv ksalkslk sk.’”
      and Borges’:
      “For a long time it was believed that these impenetrable books corresponded to past or remote languages. […] All this, I repeat, is true, but four hundred and ten pages of inalterable MCV’s cannot correspond to any language, no matter how dialectical or rudimentary it may be.”
      …among other similarities. It’s scary.

      If you want more Borges, I recommend his “Garden of Forking Paths”–I’ll probably have some stuff to say about it when we get around to quantum computing.

      • Scott says:

        I think the library ezyang describes is different and more interesting than the Library of Babel: after all, it consists of intelligent answers to arbitrary questions (wherever such answers are possible), rather than just arbitrary strings.

      • Miguel says:

        Wow amazing!! Borges’ stories def have a complexity angle; also “The Writing of the God”, or “Tlon, Uqbar, Orbis Tertius” I think..

      • tjdelgado says:

        @Scott:
        Absolutely. I just noticed a thematic similarity or two.

        Random thoughts: What happens if both:
        1) There is no computable/human-determinable ordering to the books in the library (that is, without the oracle-book, a Turing machine/human cannot, a priori, determine the function that maps books to their locations in the library), and
        2) One doesn’t have access to this oracle/”index-book” of (what I presume is) the space of all answers this oracle could produce?

        Wouldn’t ezyang’s library then degenerate into a Library of Babel, where one could spend an eternity searching for any given book, including the oracle-book (if it can be considered to be part of the library proper)?
        Also, does condition 1) need to hold for the library to be as it is presented in ezyang’s story?

      • The point of the Library of Babel is that the presence of all possible information is equivalent to the lack of any information whatsoever. This would not be the case with the Oracle — even if each answer-book didn’t include the question inside it, the collection of answers would almost certainly not cover all possible strings, nor would it be uniformly distributed. The function oracle(x) : questions -> answers encodes information in its image.

        The Library is also not infinite in content; it contains only books of a specified length. If the Oracle is also finite, then the mapping is trivially computable, at least for certain definitions of “a priori”.

      • D.R.C. says:

        So basically this Library is the equivalent of the Library of Babel after finding the Crimson Hexagon. I suppose that it would be much smaller, simply because this library specifically ignores non-intelligible questions and contains no books that are gibberish, unlike the Library of Babel, which contains all possible books of a finite length. I would still doubt that it would fit inside the observable universe, especially considering the estimated size of the Library of Babel (as shown on Wikipedia) and the fact that the length of an intelligible question seems to be unbounded. Even if the Library has multiple questions to which it responds with the same answer, I doubt that difference would be enough to reduce a number like $10^{1,834,097}$ to less than $10^{120}$.
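        As a quick back-of-the-envelope check of those numbers (a toy Python sketch; the book parameters are the ones from Borges’ story, and 10^120 is only a rough stand-in for what the observable universe could store):

            import math

            # One book in Borges' Library of Babel: 410 pages x 40 lines x 80 characters,
            # drawn from a 25-symbol alphabet.
            pages, lines_per_page, chars_per_line, alphabet = 410, 40, 80, 25
            chars_per_book = pages * lines_per_page * chars_per_line  # 1,312,000

            # Number of distinct books = alphabet ** chars_per_book; work in log10 to avoid overflow.
            log10_books = chars_per_book * math.log10(alphabet)
            print(f"log10(number of books) ~ {log10_books:,.0f}")  # ~ 1,834,097

            # Rough order-of-magnitude bound on the observable universe's storage.
            print(f"orders of magnitude short: {log10_books - 120:,.0f}")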

    • Note by the way that Mark Fabi’s novel “Wyrm” has at one point a version of an Oracle that is run by a series of monks working as a Chinese Room. Your idea is more clever though.

  2. bobthebayesian says:

    To get a good handle on the thrust of Deutsch’s book, I began by reading chapters one and two before chapter seven. I found on the whole that there is a lot that I disagree with in chapter one but that I agree with large chunks of Deutsch’s work in chapter seven. Since I disagree almost entirely with Penrose, I’ve lumped Deutsch Chapter 1 and Penrose Chapter 4 together and this reaction essay is meant to discuss some observations on stochastic analysis as a potentially better foundation for human thinking than logic. Most of my argument is based upon the essay “The Dawning of the Age of Stochasticity” by David Mumford, and heavily related chapters from E.T. Jaynes’ book, Probability Theory: The Logic of Science. As someone who has studied measure theory and formal probability for almost all of my higher education, I definitely side with Jaynes and Mumford in the view that measure theory is a complete waste of time (unless you study it for aesthetic reasons and appeal to a G.H. Hardy style apology for mathematics; then it’s just fine.) Following from this, I agree with Mumford that we ought to use random variables as the basic constructs of mathematics and do away with untenable claims that human thought or mathematical reasoning are well modeled with logic in the first place.

    Penrose paints himself into a corner by dismissing the point of view that provability and mathematical truth are equivalent. This is the point of view that I take: propositions such as the self-referential Godel proposition P_{k}(k) cannot meaningfully be said to be true or false except in some sense external to mathematics. Penrose bases much of his argument about mathematics on his view that P_{k}(k) “just is” true, or is true by appeal to human intuition. This is a fine position to take, but what’s not fine is to call this a mathematical truth, which is why Penrose says that formalists should be worried (and is a foundational point for his personal rejection of computationalism). But the problem is this: if we can’t prove something is true (like P_{k}(k)), then in what sense is it ‘mathematically’ true? Penrose seems to stamp his feet and say “it just is,” but if mathematics starts admitting truths that are outside of formalism, it defeats the whole purpose of doing mathematics. This is an example, I feel, illustrating the point we mentioned on day one: scientists often take a philosophical point of view without admitting it or considering it to be pre-scientific.

    Another qualm I have with Penrose’s passage is his obsession with recursive sets. He picks a few examples and basically says, ‘this is what we really *mean* by a recursive set,’ or ‘this set surely should be recursive’ (e.g. the examples of common curves, like exp[x]). Mumford makes the observation that “all mathematics arises by abstracting some aspect of our experience.” Mathematicians are often guilty of hiding this fact. Take for example the elementary concept from real analysis of a compact set. In analysis classes, professors have often tried to impress upon me the great consequences of compactness and the neat results in differential geometry, topology, and operator theory that rely on compactness. But the problem is, as was pointed out to me by the Brown University C.S. professor John Hughes, compactness is just the weakest set of conditions such that all those nice results hold true. Mathematicians first experience some piece of reality and come up with properties they think ought to be true, and then they go see what conditions have to be assumed to ensure that truth. I feel Penrose is doing just this with recursive sets (and undecidable problems, which is what he is dancing all around). He plays recursiveness up as an intrinsically interesting property, and then offers cases where surely our intuitive idea (that he primed) should hold true. This doesn’t make recursive sets or undecidable problems suddenly become great paradoxes that refute computationalism. It just means maybe there are some extra intuitions or experiences to use when defining the conditions that lead to recursive sets, or maybe recursive sets just aren’t that cool to begin with.

    Switching over to Deutsch’s chapter one, it seems Deutsch does not offer anything new that hasn’t already been said between Jaynes, Valiant, and Mumford. The problem, though, is that Deutsch does obfuscate everything with his insistence on making his own definition of “good explanation.” Essentially, modulo a few details involving PAC learning as a justification for induction, minimum description length, and Cox’s theorem, you can just replace Deutsch’s word “explanation” with Jaynes’ word “model” from chapters one and two of the book I mentioned at the beginning. The criteria that Deutsch wants out of an explanation, such as being “hard to vary,” can all be recast into Jaynes’ language of entropy, uninformative prior distributions, etc. What’s really bad is that since Deutsch makes disparaging remarks about classical inductionism, many readers are apt to feel like his “good explanations” are somehow different from inductively supported models that succeed under Occam’s razor.

    For example, Deutsch rather nonchalantly dismisses Occam’s razor as insufficient, claiming that “there are plenty of very simple explanations that are nevertheless easily variable (such as ‘Demeter did it’).” What’s wrong (and misleading) about this is that the concept of the goddess Demeter may have a short description length as an English sentence, but when it has to function as part of an algorithm searching for a maximum a posteriori model of, say, observed seasonal weather data, the algorithmic description length of a being with such magical powers grows unwieldy very quickly. It’s similar to the creationist ‘refutation’ of evolution mentioned in WPSCACC. You could see a 747 formed out of a junkyard by a tornado, but you would have to wait for a number of tornadoes exponential in the number of parts that make up the 747. Occam’s razor says this creationist counter-argument fails on complexity grounds, and similarly, Demeter is a bad explanation for things because of the complexity invoked through Demeter. Being statable in a short English phrase does not count as a mark against Occam’s razor: it’s the *minimum* description length that matters, not just some English description length.
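    To make the *minimum* description length point concrete, here is a toy two-part MDL comparison (the bit counts are made-up placeholders, not measurements; the only point is that both the model cost and the residual data cost are charged):

        # Toy two-part code (MDL) comparison; the numbers are hypothetical placeholders.
        def mdl_bits(model_bits, data_given_model_bits):
            """Total description length = bits to specify the model + bits for the data under it."""
            return model_bits + data_given_model_bits

        # "Demeter did it": short as an English phrase, but an unconstrained agent with magical
        # powers compresses nothing, so every seasonal observation must still be spelled out.
        demeter = mdl_bits(model_bits=200, data_given_model_bits=1_000_000)

        # Axial tilt plus orbital mechanics: costlier to state, but it compresses the data massively.
        axial_tilt = mdl_bits(model_bits=20_000, data_given_model_bits=5_000)

        print(demeter, axial_tilt)  # Occam's razor, read as MDL, prefers the smaller total.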

    In general, it seems that both Deutsch and Penrose don’t acknowledge that mystery is a property of minds that perceive, not a property of the external world. If I find a phenomenon mysterious, it is because I am ignorant of that phenomenon (literally, the KL divergence between Nature’s actual probability distribution and my current mental model is large or possibly infinite; the incoming observations surprise me in the formal sense of Bayesian surprise). They also do not justify their tendency to ascribe meta-mathematical success to human thinking. Lots of people point to Einstein as an example of human genius in ‘seeing’ the elegance of differential geometry as a connective analogy that led him to discover relativity and to assert his confidence about it even before it was experimentally verified. But for every Einstein who obsessed over the differential geometry connections, there were countless other physicists who obsessed over category theory connections, or algebraic geometry connections, believing them to be the next big explanation. History remembers the person who got it right, but this doesn’t mean that scientific discovery happens because of a regular human ability to think outside of existing formalism. It might even be the case that overall, as a species, we mostly accomplish every discovery via trial and error. Lots of physicists tried to derive formal descriptions of relativity; many of them failed, and the few successful theories are selected because of their ability to explain observations. This seems to be at odds with Penrose and Deutsch, who both (albeit for different reasons) seem to think there is some kind of extra creativity going on inside the human mind.

  3. bobthebayesian says:

    To make it easier to skip over my longer comments, I’ll keep them together and go ahead and add my two cents on some additional things to read that grapple with some of what Penrose argues. First, David Foster Wallace (known for his novels, but also quite gifted in formal logic) has an interesting book called Everything and More: A Compact History of Infinity. It is a very witty, irreverent discussion of the nature of infinity in mathematical history, and goes into a lot of technical detail about paradoxes (such as whether 0.9999… and 1.0000 are equal, as Penrose brings up). I found the discussion of ancient Greek interpretations of this in sections 2a and 2b of Wallace’s book to be especially interesting and helpful.

    The Mumford paper that I mentioned in my other post contains a section relating PAC learning to induction in a somewhat similar manner as is done in WPSCACC. Also, Penrose mentions the continuum hypothesis in footnote 5 and frames it as a significant and important dividing line between formalist and Platonist thinking. The Mumford paper cites a research paper from Chris Freiling that shows, via a very simple argument, that if random variables are permitted as basic constructs in mathematics then the C.H. is trivially false. I especially like Mumford’s reaction to this: “If we make random variables one of the basic elements of mathematics it follows that the continuum hypothesis is false and we will get rid of one of the meaningless conundrums of set theory… The continuum hypothesis is surely similar to the scholastic issue of how many angels can stand on the head of a pin: an issue which disappears if you change your point of view.”

    There are also a couple of additional comments that I’d like to make which are related to the stimulating and well-phrased questions that Scott included in the Lecture 2 summary that started this thread. In class, it was mentioned that Turing’s original way of phrasing the attribution of intelligence to humans was that we extend the “courtesy” of treating each other as conscious. This was possible in Turing’s time merely as a matter of cultural convenience. There weren’t chatbots interacting with humans in meaningful ways, so the hardware standard for performing human-type tasks was just the human mind. There was nothing else to consider. But imagine our current world. When you log in to a bank’s customer service website, you might very well be greeted by a natural language chatbot. It might be able to talk to you and answer questions or point you to the right human customer service department. The text you exchange over a brief web chat may honestly not be enough to distinguish the customer service chatbot from a human. Because of this, you might very plausibly *not* extend the courtesy of just assuming the customer service agent is conscious. A person transplanted from Turing’s time who heard an automated voice when dialing Delta Airlines might genuinely mistake the voice for a human’s for several minutes. In short, when something is actually on the line, you won’t automatically extend the courtesy Turing talked about. If there were animatronic human-like robots like those in Blade Runner walking around, you’d have to give them the Voight-Kampff test before extending them that courtesy. I conjecture that this cannot be decoupled from ethical and social considerations and the way culture functions in a given technological climate. We cannot take Turing’s original conception of the test too literally. The philosophical/ethical debate about what intelligence entails cannot be practically sidestepped the way he wanted; that was just a luxury of his time period. If A.I. is practical, choices of intelligence definitions can’t be decoupled from some sort of social and ethical considerations. Turing’s test is a candidate *definition* for intelligence, and it therefore implicitly has social consequences.

    I think this is the main positive point we can take from Searle’s work. I don’t find the Chinese room argument convincing, and I admire the concise and stylish way that especially Levesque and Shieber counter Searle’s points. However, with one very tiny and very simple philosophical argument, Searle set two decades’ worth of thinkers toiling on this issue. That alone deserves a lot of praise. If there’s anything to salvage from his point of view, it is that there is inherently more to the conception of intelligence than just measurable capacity to perform certain behaviors. This is the portion of Deutsch’s chapter 7 that I greatly agree with. You have to know something about how the knowledge is contained in a program before you can really judge. We can’t even get at this data for human thought currently, and we’ve just been following Turing’s advice and extending the label of intelligence to other people as a social courtesy. But a real understanding of intelligence, one that is a “good explanation” in Deutsch’s parlance, has to explain *how* the behavior happens, not just *that* it happens. Shieber’s arguments might have revealed that standard human Turing tests can rule out preposterously slow algorithms, but the distinction between intelligent and unintelligent doesn’t appear to be a distinction between preposterously slow algorithms and efficient algorithms. It appears to be a much more subtle distinction between different algorithms of roughly similar efficiency.

    • If humans are the only positive example of the property of intelligence, then every new example must be compared to some aspect of humans. From Turing’s perspective, the ability to carry on a natural-language conversation would have seemed one of the most convincing demonstrations, much more indicative of intelligence than solving algebra problems or playing a game.

      There’s an argument that human scientists will never understand the human brain, because understanding a system requires something strictly better than that system. It’s analogous to the incompleteness theorems, I suppose, where a theory of logic cannot prove itself consistent. It seems to me this concept has the potential to be speculated upon wildly, in conjunction with the possible relation of intelligence to efficiency, but I don’t know anything about its technical merits, and it’s 5 AM.

  4. Mr. Potter, Donald MacKay proposed a form of your argument in his Mind essay, “On the Logical Indeterminacy of a Free Choice” (http://mind.oxfordjournals.org/content/LXIX/273/31.extract).

    Professor Aaronson, I’d be interested in your response to MacKay.

    It may also be noted that the fact that we are aware that we are aware means that there is something involved in human consciousness and thinking that is not regressive or reflective in quite this sense.

    • Scott says:

      I found that a very illuminating and well-written essay—thanks for the link!

      I do, however, disagree with MacKay’s central contention that the fact that the onlookers can’t tell A what he’s going to do, without A thereby doing something different, implies that the onlookers aren’t in possession of a “universal truth” or a “view from Mount Olympus” any more than A himself is. For, as MacKay himself points out in a footnote, the situation is far from symmetrical: the onlookers know what A is going to do, but A doesn’t know what the onlookers are going to do. (Furthermore, this sort of asymmetry is an essential feature of the scenario, for the logical reasons that MacKay points out.) If, after making his decision, A can consult the onlookers’ records and see that they perfectly predicted what he would do, then, even if the onlookers couldn’t have told A their prediction in advance, A would still have the perception that either

      (1) the onlookers were omniscient gods (ones who actually existed and could be interacted with!) in relation to him, or

      (2) he, A, had the “paranormal, retrocausal” ability to alter the predictors’ records and memories through the act of making his decision.

      I imagine that either of these conclusions would profoundly change A’s view of the world.

  5. bobthebayesian says:

    Not relevant yet, but let’s not let this sneak past us! (http://blogs.scientificamerican.com/observations/2011/09/19/free-will-and-quantum-clones-how-your-choices-today-affect-the-universe-at-its-origin/) Those are some interesting slides, Scott. I hope we will be talking about this during the semester.

    • Scott says:

      You better believe it. 🙂

      (George made me sound much more confident in that post than I actually am—“I’m just asking questions, man!”—but I understand he’s going to post corrections.)

  6. bobthebayesian says:

    I just stumbled across this video interview with several A.I. researchers, all discussing the history of A.I. through the lens of chess playing. The discussion meanders around and talks quite a bit about the “knowledge vs. search” dichotomy and the relationship between combinatorics and creativity. It’s lengthy, but very interesting and related to some of this week’s material. Pretend it’s a podcast 🙂

    (http://video.google.com/videoplay?docid=-1583888480148765375)

  7. I was convinced that in order to declare a program P “meaningfully victorious” at the Turing test, the program P must, as a necessary condition, both pass the indistinguishability criterion *and* be efficient.

    Indeed, from an existential standpoint, there *is* a function F that passes the indistinguishability criterion, because passing the indistinguishability criterion is a finite problem; and thus the function F is computable by a program P_F that has hardcoded the input-output table of the function F. Yet, we refuse to call P_F intelligent, for even in everyday life we tend to consider intelligent those behaviors that not only “make sense” but also possess “succinct explanations”.
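    For concreteness, here is a caricature of P_F in a few lines of Python (the table entries are hypothetical; a real table would need one row for every possible conversation prefix up to the length bound, which is what makes it astronomically large):

        # A caricature of P_F: the entire "behavior" is a hardcoded map from conversation
        # histories to replies; no computation happens beyond indexing into the table.
        LOOKUP = {
            (): "Hello! Ask me anything.",
            ("What is 2+2?",): "4.",
            ("What is 2+2?", "Why?"): "Because that's how addition works.",
            # ... one entry per possible conversation prefix up to the length bound ...
        }

        def P_F(history):
            return LOOKUP.get(tuple(history), "I don't understand.")

        print(P_F(["What is 2+2?"]))          # "4."
        print(P_F(["What is 2+2?", "Why?"]))  # "Because that's how addition works."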

    This leaves me wondering: if there is a way to measure “consciousness”, where does the “consciousness threshold” lie, among all programs passing the indistinguishability criterion?

    Note that I am thinking of a consciousness measurement as some sort of a “non-black-box” test, while the Turing test is a “black-box” one. It seems plausible to believe that, if there is a way to measure consciousness, then P_F would not have any, while the smallest program passing the Turing test would (humans probably being somewhere towards the “small” end of the spectrum).

    Where and how does this transition, from unconscious Turing-test-passing programs to conscious Turing-test-passing programs, occur?

  8. bobthebayesian says:

    For what it’s worth, my view is that both intelligence and consciousness fall on a spectrum, at least in the life forms observed on Earth. I think they both have to do with the resource efficiency of both the hardware and software that they are implemented on. We don’t take the lookup table to be intelligent because, as Shieber and Levesque noted, such a table would consume astronomically many resources, and physical hardware could not implement it fast enough for real-time Turing tests. But what’s more interesting to me is that among the things we do classify as intelligent, it seems like there is a lot of hair-splitting over algorithms that are assumed to be low-order polynomial. I think the interesting debates about intelligence begin by assuming most relevant things are already in P (or some fixed complexity class) and then trying to decide what additional criteria result in the nuances of intelligence that we observe.

    • The connection between compression and abstraction / understanding / intelligence suggests an information theoretic perspective, in which we are all striving to approach the Kolmogorov complexity of the conversation tree (or whatever other task is being performed).
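      A crude way to see the compression angle (toy sketch: a general-purpose compressor like zlib only gives an upper bound on Kolmogorov complexity, not the real thing):

          import os, zlib

          # Compressed length as a loose stand-in for Kolmogorov complexity:
          # structured text compresses well, incompressible noise does not.
          structured = b"to be or not to be, that is the question. " * 200
          noise = os.urandom(len(structured))

          for label, data in [("structured", structured), ("random", noise)]:
              print(label, len(data), "->", len(zlib.compress(data, 9)))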

  9. In his paper “Is It Enough to Get the Behavior Right?”, Levesque argues that if a book did exist in the Summation Room (a simpler version of the Chinese Room, where the goal is only to sum two numbers rather than to respond to Chinese phrases), it would by design have to teach the person in the room how to add. In particular:

    “I claim that the person following the algorithm in Book B is not just looking up answers, but is literally adding the numbers. In other words, a person who memorizes the book and learns PROC1, PROC2, PROC3, and PROC4 actually learns how to add.”

    He goes on to say that the person in the room need not realize that he’s adding, or understand that the procedures even relate to numbers, but still has learned how to add.
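    For reference, something like the following is the kind of purely symbolic procedure Levesque has in mind (my own reconstruction in Python, not his PROC1-PROC4; the person executing it never needs to know that the symbols denote numbers):

        # Grade-school addition as pure symbol manipulation: a memorized one-digit table plus a
        # carry rule. (A sketch of the flavor of Levesque's Book B, not his actual text.)
        ONE_DIGIT = {(a, b): divmod(a + b, 10) for a in range(10) for b in range(10)}

        def add_by_rote(x: str, y: str) -> str:
            x, y = x.zfill(len(y)), y.zfill(len(x))
            carry, out = 0, []
            for a, b in zip(reversed(x), reversed(y)):
                c1, d = ONE_DIGIT[(int(a), int(b))]
                c2, d = divmod(d + carry, 10)
                out.append(str(d))
                carry = c1 + c2
            if carry:
                out.append(str(carry))
            return "".join(reversed(out))

        print(add_by_rote("347", "85"))  # "432"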

    I don’t know that I agree with this argument, that memorizing a procedure is equivalent to learning. Perhaps part of the problem is that we tend to expect more when we say a person has learned a particular procedure, e.g., an ability to extend the procedure and to know when to apply it. This might be going too far; maybe extending and applying the procedure should be categorized as “understanding” rather than “learning”.

    On the other hand, my computer has the procedure for addition memorized, and can apply it whenever I ask it to, but I still don’t think my computer has learned to add. I don’t know that Levesque intended for his argument to be extended in this way, but it seems like there is something missing from the discussion, something that a human does when learning a procedure that a computer doesn’t. Which might imply that it’s not enough to get the behavior right.

    • D.R.C. says:

      But what exactly do you consider yourself to be doing when you are adding two numbers (or $n$ numbers, since that is a fairly simple extension of basic two-number addition)? After memorizing basic addition on 1-digit integers (or any base, but I presume decimal is what most people learn), you learn a procedure that allows you to add two numbers, and then later how to add an arbitrary number of numbers using the previous procedure. I don’t necessarily believe that (most, if not all) people have any greater insight into addition beyond what the computer does. In fact, the reason that computers add the way that we do is because we program them to do the same thing that we do. How else would you “extend” addition such that it requires true understanding rather than simply the ability to manipulate procedures? I believe I remember Minsky mentioning in The Society of Mind (the class, not the book) that someone programmed a basic system with just addition (if I remember correctly), and some simple rules to extend that system, and it wound up coming up with subtraction, multiplication, and division by simply trying out different combinations of additions. Would this be “understanding” or “learning”?

      One could argue that there is algebraic reasoning behind how addition works (and/or other basic mathematical disciplines, since addition is defined for at least some of them), but eventually you get to the point where it comes down to some definition, which could be programmed directly in some form or another. After that, you could define rules which allow you to add in any system, and I do not believe that there would be any less understanding than a human has.

      • bobthebayesian says:

        This is why Penrose chooses to make his argument about the consistency of an axiomatic system rather than any specific computation one can carry out in that axiomatic system. His view is that humans can somehow “see” that their mathematical system is consistent whereas the results of Godel and Turing eliminate this possibility for formal systems and (for Penrose) thus also computers that “merely” implement formal systems.

        A useful fact for our class is that the philosopher John Lucas ( http://en.wikipedia.org/wiki/John_Lucas_(philosopher) ) actually put forward basically this same idea in the early 1960s without all the physics speculation over quantum gravity. Many people since have criticized it far more cogently than I can, but the main difficulty seems to be as follows. An axiomatic system can infer that if its axioms are self-consistent, then its Godel sentence (the P_{k}(k) example from the Penrose reading) is true. An axiomatic system just can’t determine its own self-consistency. But the problem is that neither can human mathematicians. They cannot know whether the axioms they explicitly favor (to say nothing about the axioms they are formally equivalent to) are self-consistent. Cantor and Frege, for example, famously proposed axioms of set theory that turned out to be inconsistent, and there is no reason to think human mathematics won’t have this problem again.

        This is why I expressed frustration over Penrose’s desire to more or less stamp his feet, appeal to our intuition, and say that the Godel sentence just must be “true.” He goes too far when claiming that this human-intuitive kind of “truth” is a mathematical truth that represents a philosophical departure from the reasoning that a formal system can do.

      • The (E)CTT would seem to imply that any human thinker is merely a particular flavor of Turing machine that we don’t totally understand yet, and thus to oppose any claims of human exceptionalism.

      • bobthebayesian says:

        Yes, that’s true. Penrose’s book (which, don’t get me wrong, is very important and an incredibly good summary of various mathematics and physics results and the way they relate to philosophy of mind) is essentially his specific claim that human thinking is exceptional, that the ECTT is wrong about human minds, and that quantum gravity is a speculative explanation for exactly why current physics cannot (even in principle) explain human minds inside of brains.

  10. tomeru says:

    Related to the recent comment by Katrina regarding Levesque: Levesque seems to suggest there are two books – A and B, which correspond to look-up table and more compact procedures, which also correspond to non-understanding and understanding. I agree with the preceding comments in that my own intuition is that memorizing book B also doesn’t necessarily constitute ‘understanding’. There seems to be a continuum between two possible books, one which is a giant look-up table and one which is ‘actual understanding’ (not mentioned by Levesque, let’s call it B*). The trouble for me is that while I think B* is a possible book (i.e. I think ‘understanding’ of addition, or language, or chess, or whatever, can be written as some program), I don’t think B==B* and am not sure what B* would look like exactly.

    This is certainly true in the case of chess, which ‘bob’ mentioned in passing. We got the behavior right there, using methods which are not exactly look-up tables, but also are not what we think humans are doing given psychological experiments. In this case, computers have ‘passed’ the test, but we still don’t think they understand chess, or think that they are playing it intelligently. To claim that ‘computers which pass the Turing test are intelligent’ or that they understand language is really to claim that there is no possible (implementable) procedure which could mimic conversation but not actually understand it. Given that for many other behaviors there are such mid-continuum procedures, and given that the simplistic far end of the continuum (a table lookup) is at least theoretically possible, I’m not sure I see a strong reason to think the Turing Test is special, in the sense that if we find some ‘book B’ for it, it will automatically also be B*.

    • This reminds me of when I learned the multiplication tables for 9, and was taught the trick of writing down the numbers 0-9 forwards and backwards. When I memorized that trick, did I really *learn* how to multiply? One could argue all multiplication taught in elementary school is memorization of simple tricks and procedures, and that you don’t gain a true understanding of numbers until you study deeper fields like number theory or abstract algebra. Even now when I multiply numbers in my head I employ shortcuts.

      • tomeru says:

        Actually, I think what most people would call ‘learning’ or ‘understanding’ would fall somewhere in between those two things you describe, depending on the mental models they attach to these procedures. High-level abstract algebra can seem like meaningless symbol pushing just as much as senseless 4th grade tables. To take the ‘simple’ case of addition, I think it’s true that one gains some deep understanding from high-level theory, but these high-level theories would be meaningless to us if we didn’t have earlier grounding in basic stuff like being able to mentally picture urns and balls. Whether a machine would be limited by such a need to connect to ‘mental models’ of the world in order to ‘understand’ something is unclear to me, although I think this is what some people get at when they talk about grounding symbols.

  11. kasittig says:

    The example of the lookup table sparked a lot of debate, which sounded like it could be boiled down to two camps: either a lookup table is intelligent because it can provide you with the correct answer, or it isn’t, because simply coming up with the correct answer just isn’t good enough. I believe the lookup table fundamentally brings into question whether we think intelligence is a product of a large corpus of declarative knowledge (which is essentially what a lookup table would be) or whether it is being able to carry out a reasoning process (which is what many machine learning algorithms strive for).

    I have been going through the job interview process over these past few weeks, and companies assure me that they are attempting to gauge my suitability for a position by asking me questions and listening to my answers. It is interesting that the majority of these questions are intended to show my reasoning abilities rather than my ability to arrive at the correct answer – and my interviewers have been frustrated when I have already heard the question and therefore already know what to say. While this is perhaps an overly simplistic example (and realistically, I haven’t had enough interviews to get a statistically significant sample), this would seem to imply that declarative knowledge is far less important than my ability to reason, and also that very few companies would actually hire a lookup table.

    But, perhaps intelligence is just a buzzword – after all, it has nice connotations, and people like to think that they’re smart. It is also possible that our affinity for reasoning is born out of the fact that humans, unlike infinite lookup tables, cannot store an infinite amount of information in our heads. There is no conceivable way for me to know, in advance, all of the answers to all of the questions that I will ever be asked – and therefore, it is important for me to be able to arrive at the correct conclusion even when I don’t know the answer to start. A lookup table, however, can conceivably store all of the information (and maybe it will even have some way to incorporate new information as new information becomes available), and therefore it won’t necessarily need the ability to reason like I do.

    I think the question is actually simpler than this, though – where does the knowledge in a lookup table come from, anyway? Unlike the table, I actually do have the potential to come up with infinite information (well, for some definitions of infinite) by drawing conclusions based on the information that I have via my reasoning skills. An infinite lookup table will only ever be able to incorporate information that is already known – it has no way of generating new information. If we operate under the assumption that there are always new things to learn, then a lookup table could never actually be infinite – and it would need humans (or some other reasoning entity) to provide more information to catalog in the first place.

    I believe that a true test of intelligence must therefore not only test a program’s corpus of knowledge but must also test that program’s ability to generate new knowledge. Just as no company will hire me if I can’t reason out the answers to new and interesting problems, I don’t believe that we should settle for calling anything intelligent that cannot reason out new conclusions. Therefore, I don’t believe that a lookup table could ever be considered intelligent – because just knowing the answer isn’t good enough.

  12. amosw says:

    The reader will recall the Ship of Theseus as related by Plutarch:

    "The ship wherein Theseus and the youth of Athens returned [from Crete] had thirty oars, and was preserved by the Athenians down even to the time of Demetrius Phalereus, for they took away the old planks as they decayed, putting in new and stronger timber in their place …"

    This ship was discussed by Heraclitus et al. at length, the question being: after how many planks are replaced is one entitled to say that it is no longer the same ship?

    Now let us imagine an extension of the "epsilon machines" work by Shalizi and Crutchfield, in which we estimate to arbitrary accuracy a Hidden Markov Model for the internal states of a neuron given a record of its spike train data. It does not seem out of the question that we will someday have the technology to acquire a large database of spike trains for every neuron in Searle's brain.

    In this manner we might acquire a complete description of the interactions of each neuron in his brain (without having to worry about how the internals of each neuron actually work). It seems likely that neurons are simple enough (Penrose's arguments notwithstanding) that we can build tiny machines that faithfully emulate the input/output behavior of each neuron in his brain. Let us now imagine replacing every neuron in his brain, one by one. At what point would Searle say that he is no longer intelligent and is instead a Chinese Room?

    A similar question has actually been posed to Searle, and I find his response nonsensical:

    "… you find, to your total amazement, that you are indeed losing control of your external behavior. You find, for example, that when doctors test your vision, you hear them say 'We are holding up a red object in front of you; please tell us what you see.' You want to cry out 'I can't see anything. I'm going totally blind.' But you hear your voice saying in a way that is completely out of your control, 'I see a red object in front of me.' … [Y]our conscious experience slowly shrinks to nothing, while your externally observable behavior remains the same."

    This seems to me the worst kind of magical thinking: if Searle will stipulate that our replacement neurons faithfully emulate the input/output of his biological neurons, then how could his intentionality evaporate bit by bit? Alternatively, if he will not thus stipulate, then his argument is now a new matter entirely: namely, that there is something about the way neurons interact with other neurons which cannot be measured or emulated.

  13. Hagi says:

    Regarding the ECT: Problems for which the space of possible solutions grows exponentially with the problem size will not be in P unless we have a clever way of solving them. By clever, one seems to mean finding structure in the problem that the computational scheme can exploit to take “shortcuts” to the solution. In this sense, it would not be surprising if the ECT failed to hold once we allow computational schemes that have a larger or different set of states available for computation. Proposed quantum computers satisfy this criterion, since we know from Bell’s inequality that quantum computers can take paths in computational phase space that Turing machines cannot. Thus it would be more appropriate to talk about the “structure” of a problem given a computational framework (or the allowed set of states in the computational phase space). The prime factorization problem may be an example of this: it has structure with respect to the allowed states of a quantum computer such that it can be solved in polynomial time, whereas it may not for a classical Turing machine. It would be much more fun if machines that have more states available to them (more than quantum computers, but still causal) could solve some problems more efficiently than quantum computers. Then we could talk about a problem having exploitable structure not on quantum mechanical states but on non-local box states. Unfortunately the results about these non-local boxes have focused on communication problems, as far as I know.
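    To make “structure” concrete, here is the structureless classical baseline for factoring (a minimal sketch; trial division takes about sqrt(N) steps, i.e. time exponential in the number of digits of N, whereas Shor’s algorithm reaches polynomial time by exploiting the periodicity of modular exponentiation, which a quantum computer can extract efficiently):

        # Classical factoring by trial division: ~sqrt(N) steps in the worst case,
        # i.e. exponential in the bit-length of N.
        def trial_division(n):
            d = 2
            while d * d <= n:
                if n % d == 0:
                    return d, n // d
                d += 1
            return n, 1  # n is prime

        print(trial_division(101 * 103))  # (101, 103)
        print(trial_division(3_000_009))  # (3, 1000003)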

    Regarding the Chinese Room Argument: Based on the discussions from two weeks ago, it seems like most people would not consider a large look-up table intelligent. This makes me agree that the best way to think about intelligence is through computational complexity arguments, where a thing is intelligent to the degree that it can compress the resources needed to complete a given task. Thus the exponential look-up table would be the least intelligent among the things that can pass the Turing test, humans would be the most intelligent (so far in history), and IBM’s Watson would be somewhere in between once it can pass the test. In this sense reasoning would be the algorithm you use to match the correct output to the given input, and learning would be constructing that algorithm.

  14. Cuellar says:

    I find the idea of considering complexity in the Turing Test very interesting. By asking a program to be of polynomial size (in the length of the test) we seem to require some ‘understanding’ of the information, which allows compression. I want to explore some related issues and argue for the necessity of randomization.

    What does it actually take to judge the Turing Test? Suppose we have a program capable of passing the Turing Test. We could, for example, ask the program to judge a Turing Test with a third party. So it is natural to require an ‘intelligent’ program to be able to judge a Turing test. But what does that even mean? If the judging program were deterministic, it would be trivial to write a (very non-intelligent) script to pass its Turing test. Sure, humans don’t always judge perfectly during a test, but to deceive this deterministic judge, another program needs only linearly more space. We don’t know how to do that to a human, although one could argue it is possible. In either case, the idea of using any (decent enough) human as a judge suggests some unpredictability. If our ‘intelligent’ program were able to randomize over some n questions every time it asks a question, then encoding the responses to all of them would take exponential space (namely n^t for a t-round conversation). It is not clear whether humans can actually randomize, but we can always use coins, so this idea seems realistic. Moreover, if one assumes humans apply the Turing test deterministically, then with only polynomially many humans a lookup table would only need to store responses for each of them. This scenario also seems undesirable, and thus I believe randomization is necessary to perform the Turing Test (by humans or computers).

    How precise is the Test? Even Turing in his original paper understood that the test was not completely accurate and would only succeed with ‘good’ probability. Now, in most cases we can just repeatedly apply the test to amplify the probability, but this is not always possible. Consider the case where the difference between the probability of guessing that a human is human and the probability of guessing that a pseudo-smart program is human is negligible. That is,
    |Pr(A(Human)=yes)-Pr(A(program)=yes)|<e(n)
    for some exponentially small e. I imagine such a robot would be indistinguishable from humans on a day-by-day basis, but incapable of building a civilization on its own (or in a community of similar robots). In such a case it is clear that a 10-second conversation will not be enough to determine whether the interlocutor is the program or the human. Indeed, in most reasonable-length conversations the program would appear to be human. It is unfair to just ask for a longer Turing test, because our resources are only polynomial and we can’t perform a sufficiently long one! So it seems we are stuck, as far as the Turing test is concerned, with accepting pseudo-smart robots in our communities, neighborhoods and schools. On the other hand, to be fair, we don’t even know to that degree of precision what ‘intelligence’ actually means, and we are not concerned to test our friends with that rigour. If we were asked to rank people’s intelligence from toaster to human, we would probably rank the pseudo-smart program much higher than some humans. So, who cares?
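    A rough calculation of why amplification fails here (a toy sketch using only the standard ~1/e^2 sample-size rule for distinguishing two coins whose biases differ by e):

        import math

        # To tell Pr[judge says "human" | human] apart from Pr[judge says "human" | program]
        # when the two differ by eps, you need on the order of 1/eps^2 independent
        # conversations (two-sided normal approximation).
        def conversations_needed(eps, z=1.96):
            return math.ceil((z / eps) ** 2)

        for k in (10, 20, 40):
            eps = 2.0 ** (-k)
            print(f"gap 2^-{k}: ~{conversations_needed(eps):.2g} independent Turing tests")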

    • Scott says:

      Very interesting points! To summarize: IF a Turing test was being judged by a deterministic program P, AND you knew the code of P, it would be very easy to write another program that passed the Turing test as judged by P.

      As always, though, when we make an argument of this sort we need to check how it changes when the programs are replaced by humans. Suppose someone knew the complete wiring diagram of your brain; then wouldn’t it be easy for them to write a program that passed the Turing test as judged by YOU? 🙂 Or is the resolution that your brain (necessarily?) behaves non-deterministically?

      As a side comment, the duration of civilization is just an eyeblink compared to 2^n for reasonably large n! So if we really had a program such that
      |Pr(A(Human)=yes)-Pr(A(program)=yes)|<1/2^n,
      it's entirely plausible that the program would be completely indistinguishable from a human from now until the end of the universe.

  15. Sinchan says:

    In this essay, I attempt to take a few stabs at why a giant lookup table would not be conscious.

    First of all, I would like to argue that it seems impossible to create a lookup table that can truly be large enough to replicate the interactions conscious beings have [where I assume the interactions of conscious beings to engage all possible questions and answers]. This is because if at time T the lookup table says that it is big enough, I can use a variant of Cantor’s diagonal argument to show that it is not big enough, by coming up with a question that is not stored in the lookup table. The variant is basically this: take [I’m not exactly sure if this is the right descriptor] semantic subsegments of all of the sentences in the lookup table at the time it claims to be big enough, and put them in a matrix; then you can make a new piece of knowledge from the elements on the diagonal of the matrix. There are some details of how you combine and pick the semantic subsegments that I am glossing over here.
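    A purely syntactic toy version of that diagonal construction (it sidesteps the “semantic subsegment” issue entirely, so take it only as an illustration of the mechanics):

        # Given any finite table of stored questions, build a new question that differs from the
        # i-th stored question in its i-th character, so it cannot already appear in the table.
        def diagonal_question(stored):
            out = []
            for i, q in enumerate(stored):
                ch = q[i] if i < len(q) else "a"
                out.append("b" if ch == "a" else "a")
            return "".join(out)

        table = ["abc", "bca", "cab"]
        q = diagonal_question(table)
        print(q, q in table)  # baa False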

    Secondly, if it is possible to create a giant lookup table that can truly answer all of my questions [this is a stronger requirement than the requirement in the Turing Test], then I would like to argue that it is not conscious. One can argue that such a lookup table can be ‘intelligent’ or ‘all-knowing,’ but consciousness [at least the way I see it] is a property that is differentiated by its experiential nature. Conscious beings are defined by what they experience. The difference between experience and knowledge can be seen in emotions. I can have knowledge of the strong emotional bond between a parent and a child by reading about it in a book. However, I would argue that such knowledge is insignificant when compared to assuming the role of a father or mother and experiencing the bond first hand.

    To this one can argue that emotions themselves can also be represented in the form of knowledge by mapping out flows of chemicals in a body etc. However, even if a giant lookup table can actually hold representations/knowledge of all emotions, I would argue that it is still not conscious. I would argue this because conscious beings react not only based on what they know, but also based on what they do not. The Turing Test and the Chinese room argument focus on how much humans can find out and know about. Yet, the complexity that arises between conscious beings often is a result of the difference in knowledge between the beings. Let’s assume that my set of knowledge and experiences is set A and then my friend’s set of knowledge and experiences is set B with the complements of these sets being non-empty. Now our interactions are based on not only our sets of knowledge but also the relative complements to them. The incompleteness of my knowledge set makes my interactions with my friend different from the interactions I would have with my friend if the complement of our knowledge sets was empty or if we were both all knowing.

    • Your second point is reminiscent of the Mary’s Room argument.

    • bobthebayesian says:

      These are interesting points. Let me ask a few questions to see if it stimulates any insights. What if someone solves the problem of artificial general intelligence, but the being that this programmatic A.I. is implemented in does not feel emotions? Human emotions are a result of evolutionary processes; when you are in a computational bind and cannot afford to spend time processing data, you will have a better chance for survival if your cognitive system overrides consciousness and implements pre-programmed responses that have worked on stereotypical inputs in the past. It seems natural selection happened to favor these in human evolution. The side effect that emotions can also play a role when we are conscious and not in a computational bind doesn’t seem to be any special part of what it means to be conscious. What if we created an artificially intelligent being that did not have cognitive processes similar to what we call ’emotions’? Are you saying that this is impossible (even if the intelligent being is far more resource efficient than a look-up table)? If so, what is your intuition or some evidence about why emotions are special to our thinking, and can it be related to Searle’s ‘causal powers’ argument? Does resource-efficient solubility of Turing tests imply not only the capacity to simulate emotions but also that they are a necessary part of the thinking process? If you think a machine artificial intelligence could be built independently of emotions, then how else is it possible that we might verify this “experiential” aspect of intelligence in your definition?

  16. D says:

    It seems like it may be a bit unpopular around here, but I’m actually fairly receptive to Searle’s thesis, and unconvinced by the arguments against him.

    A lot of the general confusion is about the question we’re asking. Turing refused to consider the question of machines “thinking”, and asked instead about machines passing an imitation game–but then talked about machines learning and competing in intellectual fields. Of course, the modern interpretation of the Turing test is one of “intelligence”, but it’s not really clear what “intelligence” means, or why passing an imitation test is the appropriate qualification.

    The Chinese Room seems to me a fairly convincing argument that the Turing test can’t really be an accurate test for “real” intelligence (whatever that may be). The strongest reply I’ve seen–the so-called “systems reply” that Prof. Sussman brought up in class–doesn’t seem very compelling here, for the simple reason that it posits some sort of macroscopic, emergent notion of “intelligence” that is simply equated with human intelligence. Even if one accepts that the system as a whole “speaks Chinese”, does that mean the system as a whole is “intelligent”, or simply that it is a mechanistic system able to accurately mimic a Chinese speaker?

    We aren’t going to get anywhere on the question /most/ people mean when they speak of the Turing test without a definition of “intelligence”. Unfortunately, this question goes quickly into metaphysics. (My personal leaning is that intelligence requires understanding, which requires a conscious agent, and that machines–even ones that can mimic humans very well–aren’t conscious.)

    Finally, some seem to suggest that more skeptical points of view (such as Searle’s) necessarily lead to an extreme form of solipsism: if we don’t have a test for intelligence, how can we know that anything (apart from ourselves) is intelligent? Scott gave an off-the-cuff response in lecture (I’m not sure how serious) of “courtesy”–we may not know whether others are intelligent, but if their conversational input-output behavior implies that they /might/ be, we might as well act as if they are out of politeness. (That is, we can conclude that we should /act/ like others are intelligent without taking an underlying philosophical position on the matter.) To which I reply: why is conversational input-output the only thing that matters? That’s not the only data we have on hand. It’s perfectly reasonable for me to ascribe more intelligence to humans based not only on conversation but also on the fact that they appear to have similar biology to me, etc.

  17. nemion says:

    I would like to examine intelligence from a subject’s perspective. From the viewpoint of a subject, it seems intelligence is a process, not a result. That is, we actually understand the concepts of reality, or at least we apprehend reality in a certain way. When a subject is asked about the word “apple,” the question fires a process in the brain that generates traces of the experience of an apple inside the subject. This experiential reminiscence of the concepts of reality is, I claim, the basis of what intelligence is.

    If we analyze the problem from a non-subject perspective, the Turing test is the right experiment for asking about the nature of intelligence. The fundamental question, which seems to be the point of disagreement between those who accept the Turing test and those who do not, is the following: what do we actually mean by the word “understanding”? The general consensus seems to be that we understand concepts in a way that involves an experiential reminiscence like the one described above, and that this experiential reminiscence (representation) cannot be replicated by a computer program.

    This last point seems to be the main issue in the Chinese Room problem. The man inside the room does not have the experiential reminiscence of Chinese (and this need not be a language in the form of a verbal grammar; it stands for a wider array of languages, world descriptors, and objects). Therefore, from a subject’s point of view, there is a barrier of indirection that prevents the man from actually knowing Chinese, since an added layer of indirection lies in his translation from the symbols to internal experiential reminiscences.

    As a side note, notice that we can always pose similar tests for the more general problem of perception. If we see an object that looks like an apple, feels like an apple, etc., is it an apple? Ultimately we reach the metaphysical question: what actually is an object? What defines an object? From a metaphysical viewpoint in which things are defined by their properties and are instantiations of properties (the one people usually assume), intelligence would then look like the instantiation of an ideal, which can then be said to be impossible to replicate in a machine. On the other hand, from a metaphysical viewpoint in which things are defined by their relations to each other, it is reasonable to say that, from a certain frame of view (for example, that of an outside observer of the Turing test, or of its deceived participant), anything passing the Turing test should be considered intelligent.

    In any case, the question of experiential reminiscence seems so controversial because of the widely assumed dualist view of being, and the “natural” metaphysical view that treats objects as instantiations of platonic properties. This does not mean that simulating intelligence is impossible, but I think it explains the widespread belief that it cannot be simulated, or that it is not the gigantic lookup table we talked about in class.

    On a separate note, we can also consider the following viewpoint: we could ask whether there exists a subject that, from within itself, would be unable to verify its own intelligence. This raises the question of whether consciousness is dissociated from intelligence, or even whether consciousness exists at all. If so, it is plausible that there are entities that are self-deceived into thinking they are intelligent. If we were such entities, would it even be possible to define intelligence?

    We can also ask whether there could exist a recognizer of the Turing test that is dissociated from any intelligent entity; in other words, whether the ability to recognize a passed Turing test is dissociated from the ability to pass the Turing test. And of course we can ask (if experiential reminiscence cannot be simulated by a Turing machine) whether there is an experiential-reminiscence test that can be passed by a Turing machine.

  18. bobthebayesian says:

    I just saw this on Hacker News: (http://www.cs.nyu.edu/pipermail/fom/2011-September/015816.html) with the outline for the upcoming paper here: (http://www.math.princeton.edu/~nelson/papers/outline.pdf). Can anyone from class who has a background in logic perhaps comment on this? Is this as significant as it sounds, or is it just dressed up a little? It seems like it could have some implications for our course if we discuss foundations of mathematics and finitism. I can’t find any obvious weaknesses in the outlined PDF, but this is probably just due to my lack of background in logic. It looks like several noted mathematicians are skeptical about it. If it wound up being correct, this would be a good illustration of why Penrose’s conjecture that human mathematicians can “just see” the consistency of their own axiomatic systems is wrong.

    • Scott says:

      bobthebayesian: Terry Tao has a comment here pointing out what seems to be a fatal error in Nelson’s alleged proof of the inconsistency of PA. (In the past, I’ve seen Edward Nelson make a large number of incorrect claims, and have never seen Terry Tao make a single incorrect claim, so without having studied the details my money would be on the latter here—even setting aside the issue of my near-zero prior probability that PA is inconsistent. 🙂 )

  19. wjarjoui says:

    Is a large look-up table conscious? Well, is our brain a large look-up table?

    If our brain is where our “consciousness” comes from, then I do not see why a large look-up table necessarily cannot be considered conscious. Our brains can store a ridiculous amount of information, and every time we answer questions we access this information. Whether I store this information on a few billion (?) neurons or a chip is a matter of “implementation”. Perhaps neurons are God’s version of flash memory (where God is whatever created us humans, be it an actual God or evolution or anything else).

    If we are able to accurately model an apple, then we would be able to represent an apple, for all that it is, in a computer simulation. The simulated apple will not be physically the same as the apple itself, but it will otherwise be exactly the same.

    There is no reason we cannot suppose that humans were created by a different species, one that uses organic material instead of semiconductors to build its robots. Hence I do not see a reason to disregard a large look-up table, at least physically, as conscious – if consciousness is a property of the brain. The large look-up table might lack some algorithms needed to exhibit true consciousness, but if we are able to model the brain’s behaviors, that should not be a problem (a toy version of the look-up-table picture is sketched below). Whether such a large table can actually be built is another issue – this is a philosophy class, so practicality goes out the window :).
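    A minimal sketch of the look-up-table picture described above, assuming a toy table of made-up question–answer pairs; a real table of this kind would of course be astronomically large.

```python
# Toy look-up-table conversant: every response is pure retrieval, keyed on the
# entire conversation history so far. The two entries below are made-up
# placeholders for what would have to be an astronomically large table.

TABLE = {
    ("Hello.",): "Hello! How are you?",
    ("Hello.", "Hello! How are you?", "Are you conscious?"):
        "I certainly feel as though I am.",
}

def respond(history):
    """Return the canned continuation for this exact history, if the table has one."""
    return TABLE.get(tuple(history), "[no entry; a complete table would never miss]")

print(respond(["Hello."]))
print(respond(["Hello.", "Hello! How are you?", "Are you conscious?"]))
```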

    • wjarjoui says:

      Another point to consider: are we conscious the moment we are born, or do we learn to be conscious?
      This learning does not have to be brought about by an external teacher – it can come simply from experience.
      If we encode for a program a set of rules representing human instinct – stay away from pain, for example – and allow the program to record its own experiences and apply those rules to them, can’t we refer to this program as conscious? (A toy sketch of such a program follows below.)
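      A minimal sketch of what such a program might look like, purely for illustration; the single rule and the experience log are hypothetical stand-ins, and nothing here is claimed to settle whether the result deserves the word “conscious.”

```python
# Toy agent with one built-in "instinct" rule (avoid pain) plus a record of its
# own experiences that the same rule is re-applied to later. Purely illustrative;
# nothing here is claimed to settle whether such a program is conscious.

class Agent:
    def __init__(self):
        self.experiences = []                      # log of (stimulus, felt_pain)

    def act(self, stimulus, felt_pain=False):
        # Instinct rule: withdraw from pain, and from anything remembered as painful.
        remembered_pain = any(s == stimulus and p for s, p in self.experiences)
        action = "withdraw" if (felt_pain or remembered_pain) else "explore"
        self.experiences.append((stimulus, felt_pain))   # record the experience
        return action

agent = Agent()
print(agent.act("hot stove", felt_pain=True))   # withdraw (instinct)
print(agent.act("soft blanket"))                # explore
print(agent.act("hot stove"))                   # withdraw (learned from its own record)
```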
