Does many people’s reluctance to regard a giant lookup table as intelligent simply have to do with induction—with the fact that they know the lookup table can’t handle inputs beyond some fixed size, whereas a human being’s responses are, in some sense, “infinitely generalizable”?
If so, then what is the sense in which a human’s responses are “infinitely generalizable”? And how can we reconcile this idea with the fact that humans die and their conversations end after bounded amounts of time—and that we don’t have an idealized mathematical definition of what it means to “pass a Turing test of length n” for arbitrary n, analogous to the definition of what it means to factor an n-digit integer?
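To make the induction worry concrete, here is a minimal Python sketch contrasting a finite lookup table with an algorithm that generalizes; the size bound `MAX` is a hypothetical parameter chosen for illustration, not anything fixed by the discussion above:

```python
# A "lookup table" for addition: correct on every input it was built for,
# but silent on anything beyond its fixed size bound.
MAX = 100  # hypothetical bound on table size
ADD_TABLE = {(a, b): a + b for a in range(MAX) for b in range(MAX)}

def add_by_table(a, b):
    # Raises KeyError the moment a or b exceeds the bound.
    return ADD_TABLE[(a, b)]

def add_by_algorithm(a, b):
    # The same compact rule works for integers of any size --
    # the sense in which an algorithm "infinitely generalizes."
    return a + b
```

The table never errs on the inputs it covers, but it has no opinion at all about `(MAX, 0)`; the algorithm runs unchanged on 50-digit inputs.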
Andy Drucker pointed out the following irony: we suggested that calculating an answer via a compact, efficient algorithm would demonstrate far more “intelligence” than simply reading the answer off of a giant lookup table. But couldn’t one instead say that someone who had to calculate the answer by a step-by-step algorithm was plodding and not particularly intelligent, whereas someone who mysteriously spit out the correct answer in a single time step was a brilliant savant? What do we mean by the phrase “lookup table,” anyway?
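One way to see why "lookup table" is a slippery phrase: memoization turns a step-by-step algorithm into a lookup table as it runs, so the same function is "plodding" on first call and a "savant" thereafter. A minimal sketch using Python's standard `lru_cache`:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # First call: computed step by step.
    # Repeated subproblems: read off the cache in a single step.
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

After one call to `fib(30)`, asking again is pure table lookup; nothing about the answer itself records which regime produced it.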
What is the precise relationship between Gödel’s Incompleteness Theorem and Turing’s proof of the unsolvability of the halting problem?
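For reference, the core of Turing's proof is a diagonal argument: assume a halting decider exists, then build a program that does the opposite of whatever the decider predicts about that program run on its own source. A sketch of the contradiction (the decider `halts` is hypothetical and, by the theorem, unimplementable):

```python
def halts(program_source: str, input_str: str) -> bool:
    """Hypothetical halting decider -- assumed only to derive a contradiction."""
    raise NotImplementedError("no such decider can exist")

def diagonal(program_source: str) -> None:
    # Do the opposite of what the decider predicts about
    # program_source run on itself.
    if halts(program_source, program_source):
        while True:
            pass  # predicted to halt, so loop forever
    # predicted to loop, so halt immediately

# Feeding diagonal its own source forces halts to be wrong either way.
# One standard bridge to Gödel: a sound formal system that proved, for
# every program P and input x, either "P halts on x" or its negation
# would yield a halting decider by exhaustive proof search -- so some
# true halting statement must go unproved.
```

This is only the skeleton of the argument; the precise relationship between the two theorems (and which directions of implication hold, under which hypotheses) is exactly what the question asks you to pin down.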
Does the Lucas/Penrose argument succeed in establishing any interesting conclusion? (For example, does it at least establish that algorithmic processes whose code is publicly known have interesting limitations, compared to processes that either aren’t algorithmic or whose code is unknowable if they are?)
Does the argument succeed in establishing the stronger conclusions about human vs. machine intelligence that Lucas and Penrose want? If not, why doesn’t it?