What are the best arguments for polynomial time as a criterion of efficiency, and exponential time as a criterion of inefficiency? How strong are those arguments? How should we handle the problem of $n^{10000}$ and $1.0000001^n$ algorithms?
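For concreteness, the crossover between those two running times can be located numerically. Below is a minimal sketch (my own illustration, not from the text): it compares logarithms to keep the quantities representable as floats, and bisects for the point where $1.0000001^n$ finally overtakes $n^{10000}$; the bracket and tolerance are arbitrary choices.

```python
import math

# Find where 1.0000001**n overtakes n**10000.
# Compare logarithms so the numbers stay representable as floats:
#   n * ln(1.0000001)  vs  10000 * ln(n)
def exp_wins(n: float) -> bool:
    return n * math.log(1.0000001) > 10000 * math.log(n)

lo, hi = 2.0, 1e15            # bracket: the exponential loses at 2, wins at 1e15
while hi / lo > 1.001:        # bisect on a multiplicative scale
    mid = math.sqrt(lo * hi)  # geometric midpoint
    if exp_wins(mid):
        hi = mid
    else:
        lo = mid

print(f"crossover near n ~ {hi:.2e}")  # roughly 3e12
```

The crossover sits near $n \approx 3 \times 10^{12}$, so on any input a human could plausibly write down, the “inefficient” exponential-time algorithm beats the “efficient” polynomial-time one.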
What sort of statement is the Extended Church-Turing Thesis (which equates the problems that would “naturally be regarded as polynomial-time computable” with those solvable in polynomial time by a deterministic Turing machine)? Does the ECT raise any philosophical issues that are different from those raised by the original Church-Turing Thesis?
Should a “giant lookup table” be regarded as conscious, assuming it passes the Turing Test? Given that such a lookup table couldn’t even fit inside the observable universe, does the question even matter?
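To put a number on that claim, here is a hedged back-of-envelope calculation (my own figures: the $10^{122}$-bit capacity is the commonly quoted holographic bound on the observable universe, not a number from this text). A table indexed by every possible conversation prefix of $n$ bits needs $2^n$ entries, and already a few hundred bits of dialogue overflow the bound.

```python
import math

# A lookup table indexed by every possible n-bit conversation prefix
# has 2**n entries. A commonly quoted holographic bound puts the
# information capacity of the observable universe at ~10**122 bits.
UNIVERSE_BITS = 10**122

# Smallest n whose entry count alone exceeds that capacity,
# i.e. the least n with 2**n > 10**122:
n = math.ceil(math.log2(UNIVERSE_BITS))
print(n)  # 406 -- about 51 bytes of conversation history
```

406 bits is about 51 bytes, while real conversations run to kilobytes; by contrast, a program whose space usage is polynomial in the conversation length (the condition raised in the next question) faces no such barrier.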
If we say that a giant lookup table that passed the Turing Test wouldn’t be conscious, then what additional attributes, besides passing the Turing Test, should be sufficient for us to regard a program as conscious? Are time and space complexities polynomially bounded in the length of the conversation enough? Or should we also demand other attributes, related to the program’s internal structure?
If we try to understand Searle’s Chinese Room Argument as sympathetically as possible, what can we take Searle to be saying? Even if Searle himself declines to specify, is there any plausible scientific candidate for what the “causal powers” he invokes could consist of? Could those causal powers have anything to do with computational complexity?