Class #6: Proof Protocols, Semantics of Computations, and Waterfalls

In class, someone suggested a reason why traditional mathematical proofs seem to differ from zero-knowledge proofs, probabilistically checkable proofs, (multi-prover) interactive proofs, quantum proofs, and all the other exotic types of proof studied in modern cryptography and complexity theory.  Namely, with a traditional proof, “all the cards are on the table”: there’s nothing the verifier doesn’t get to see, and nothing for which he or she is forced to make some assumption about the laws of physics and/or the limitations of the prover(s).  But what exactly do we mean by “all the cards being on the table”?  Suppose a proof is written out in a conventional way, but it’s a trillion lines long, so in practice it can only be checked by a computer—does that count as “all cards being on the table”?  Or does a conscious person need to be able to understand the whole proof, “at a glance” so to speak?  Where should we draw the line, and why?  (Or is there no need to draw a line?)

Suppose we say that all the cards are “on the table” with a conventional proof, but not with a proof that requires a quantum computer to verify.  Does that imply that (as Niels Bohr argued) there’s still a fundamental role for classical concepts, even in a quantum-mechanical universe?

We discussed the arguments of Searle, Putnam, and other philosophers that the “meaning” of a computation is always observer-dependent—since merely by our choice of how to label inputs and outputs, we can interpret even a rock or a waterfall as computing anything at all (say, the next move in a chess game).  Most CS folks have a very strong intuition that this argument is wrong, but why is it wrong?  Does it have to do with some input/output labelings being “inherently more natural” than others?  Or with the computational complexity of reducing chess to rock- or waterfall-simulation—i.e., with our intuition that all the actual work of computing chess moves would have to be done by the reduction itself, rather than by the rock or waterfall being “reduced” to?  Or (as Chalmers says) with the various internal states of the rock or waterfall not having the right causal relations to one another?  Or something else?

Are the various responses to the rock/waterfall argument mentioned above all basically the same?  Can you suggest examples where some of the responses would work, but not others?

Are any of your answers different for the rock (which we could naïvely describe as “not computing anything”) than for the waterfall (which we could naïvely describe as “computing something complicated but uncontrollable”)?

19 Responses to Class #6: Proof Protocols, Semantics of Computations, and Waterfalls

  1. We discussed Deutsch’s claim that “the notion of a proof depends on the laws of physics”.

    An argument supporting that claim that was mentioned in class is the following: when reading a proof we tacitly assume that a demon is not messing around with our ability to reason and/or accurately retrieve data from our short-term memory.

    Yet the above demon argument seems so implausible that it almost undermines Deutsch’s claim rather than supporting it.

    I *was* mostly convinced that the notion of a proof does depend on the laws of physics, but *not* through the demon argument. I think there are better arguments for Deutsch’s claim. (Even without venturing into more modern notions of proof introduced by computer scientists.)

    For example, we are happy to equate traditional “classical” proofs with computations of Turing machines, but we hesitate to call computations of quantum computers “proofs”. But this seems an arbitrary separation, especially when we remember that our innate reasoning abilities evolved in a world where our sensory experiences are “classical” and not “quantum” (i.e., are about average-sized objects moving at average speeds). That is, we natively reason classically, and only understand quantum mechanics through that lens.

    In principle, in other universes with different laws of physics, people could evolve with ideas of what reasoning means that are different than ours, as informed by what is mechanically possible in their universe. For example, in a world where Hilbert’s Infinity Hotel could exist, what constitutes a proof (or a computer) could be vastly different.

  2. Despite their nice, mathematical and rational flavor, one should not be tricked into thinking interactive proofs are “proofs” in the traditional sense of the term. Rather, they are observations of events (in the real world, no less) that result in *belief*. Certainly interactive proofs result in stronger (and more rational) beliefs than the belief I form, watching my friend place a red blob in their mouth, that they are eating an apple; but the observation makes no sense absent an external world, absent other entities I’m interacting with and an independent physical existence. I can play these games in my head but it won’t do me any good (even the quantum proofs!)

    One should grant, however, that traditional mathematical practice is, in fact, a tremendously interactive and social pursuit, and that the idealized, formal notion of proof is rarely carried out. But in principle, these social mechanisms can be abstracted away from a traditional proof; for an interactive proof, they cannot.

  3. Katrina LaCurts says:

    I have a few thoughts on what differentiates classical proofs from interactive ones.

    First, interactive proofs aren’t actually proofs; they don’t prove that a statement is true, but rather show that it’s true with extremely high probability. That doesn’t make them unimportant, or not worthy of pursuit, but it also doesn’t make them proofs. If the prover in an IP could prove the statement as definitely true (not just probabilistically true), my guess is that we could reduce that interactive proof to a non-interactive one. But I’m only speculating.

    Second, there is a difference in the questions being asked. Conventional proofs tend to prove broad statements (“all planar graphs are four-colorable”), whereas an interactive proof is about a specific instance of a problem (“this graph is three-colorable”).

    Third, regarding the “all the cards are on the table” idea, an interactive proof can’t be performed alone. Without Merlin, Arthur can’t convince himself that a statement is true. A conventional proof, however, only requires the prover. Relating this to our discussion about physics, personally I don’t believe that conventional mathematical proofs are dependent on physics, but interactive proofs very clearly are: they depend on a wizard with unlimited computational power. Or, at the very least, they depend on there being a person who actually holds a (presumed) solution to the problem (a three-coloring of the graph, for instance). This isn’t quite the same as proving that the statement is true; it’s showing, with high probability, that someone else can prove the statement is true.

    Fourth, at least in the case of zero-knowledge proofs, we (intentionally) don’t learn anything about the structure of the problem. That’s virtually never true in a conventional proof. Even with existence (vs. constructive) proofs, we learn something about the structure of the problem space, if not about the structure of the entity that exists. This is great for certain security problems (authentication, e.g.), but it’s not what we expect from a conventional proof.

    • Scott says:

      Hi Katrina, just a brief clarification: you can give a zero-knowledge proof for as “broad” a statement as you want — for example, that every planar graph is four-colorable — by first expressing the proof in ZF set theory, then converting it to a 3-coloring instance (using standard NP-completeness reductions), and finally running the GMW protocol.

      (To put it another way: there exists a fixed planar graph G such that, if you can 3-color G, you thereby prove that every planar graph is 4-colorable! 🙂 )
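      To make the mechanics concrete, here is a minimal sketch of one round of such a 3-coloring protocol. The hash-based commitment and the toy triangle graph are illustrative stand-ins rather than the actual GMW construction:

      ```python
      import hashlib, secrets, random

      def commit(value):
          # Illustrative hash-based commitment: hide a color, bind to it via a random nonce.
          nonce = secrets.token_bytes(16)
          return hashlib.sha256(nonce + bytes([value])).hexdigest(), nonce

      def one_round(edges, coloring):
          # Prover freshly permutes the three colors and commits to every vertex's color.
          perm = random.sample(range(3), 3)
          permuted = {v: perm[c] for v, c in coloring.items()}
          commitments = {v: commit(c) for v, c in permuted.items()}

          # Verifier challenges one random edge; prover opens just those two commitments.
          u, w = random.choice(edges)
          for v in (u, w):
              digest, nonce = commitments[v]
              assert hashlib.sha256(nonce + bytes([permuted[v]])).hexdigest() == digest
          return permuted[u] != permuted[w]   # accept iff the endpoints got different colors

      # Toy instance: a triangle with a valid 3-coloring. An honest prover passes every round,
      # while a prover with no valid coloring is caught in any round with probability >= 1/|E|.
      edges = [(0, 1), (1, 2), (0, 2)]
      coloring = {0: 0, 1: 1, 2: 2}
      print(all(one_round(edges, coloring) for _ in range(100)))
      ```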

  4. bobthebayesian says:

    Regarding the “all the cards on the table” characterization, I think one feature that is important to classical proofs (even if they are a trillion lines long) is that they are *in principle* compressible into a more compact natural-language description, one that captures the gist of *why* the statement the proof establishes (or refutes) is true, in a sense that can be accessed in day-to-day human experience.

    My family, for example, often asks me why I study advanced math, and to them it really seems like I must just solve really large systems of equations or reduce extremely large fractions to lowest terms or something (this is not a knock; why would they know what mathematicians really do?). I usually tell them something like: mathematics is all about memorizing *the least* amount of information possible such that you can recover any specific details you might want later on just from the small set of principles you carry in your pocket. Euler’s formula is a good example of this. When I was preparing for the GRE math exam, having to memorize all the common trig identities was a real pain… something that for me had essentially nothing to do with mathematics. Knowing Euler’s formula allowed me to re-derive any such identity whenever I might need it, without having to have it memorized for immediate use.

    I know the analogy with Euler’s formula doesn’t carry over exactly to the domain of proofs. But one of the valuable things about classical proofs is that they turn a mathematically formal result into a compact concept you can carry around. Zero-knowledge proofs don’t do this for you. The only knowledge you gain from a zero-knowledge proof is that there exists a process for determining the answer to your question, assuming the prover isn’t cheating or a very lucky guesser. This doesn’t give you any way to compress the truth of the result beyond just always recapitulating the entire result.

    The trillion-line proof, however, could in principle be compressed down to a smaller size, say a few pages, that give an account of the key modular steps involved. We might need good natural language processing software to do this synopsizing, but it is in principle doable. Now, if the proof grew so long that it was computationally inefficient to even parse it and put it into some natural language summary, I think we’d have to ask more questions.

    Interestingly, this might imply that the difference between classical proofs and zero-knowledge proofs is *anthropomorphic*. Maybe humans seek the comfort of a compact English synopsis of the steps of a proof, and we take a certain kind of prideful satisfaction in being able to give this slightly detailed, modular account of the process the proof takes you on, without needing to recapitulate the entire literal proof. A zero-knowledge proof doesn’t afford us that chance, so we have an inclination to turn up our noses at them. If machines ever did gain sentience and began doing mathematics for themselves, I wonder if they would even think to care about this distinction.

  5. nemion says:

    Semantics of computation

    I really enjoyed the question of whether computation can be about anything. I believe the intuition behind a rock being able to implement any finite automaton is wrong. It basically relies on the idea that any labeling of the initial states can produce any labeling of the final states; this idea of unrestricted labeling draws in turn on the widely held idea that bits (chunks) of information are unlabeled. If information really were unlabeled in this way, there would be reason to believe that computation over it could be implemented by a rock, or a waterfall.

    I claim that the mistake in all these arguments lies in using the traditional information-theory and theory-of-computation framework to analyze information that is semantic in nature. If we were instead to treat the processing of information as a semantic business, then the initial and final labelings would need to be related in some way.

    Evidently this argument is circular: it relies on the assumption that semantics is necessary. Notice, though, that if there were no claim that semantics is necessary, we would not even be discussing this argument.

    As a side note, this seems to imply the need for a theory of semantic information. I close this discussion by outlining some ideas towards such a theory:

    “Our ordinary notion of understanding a concept can be captured in the following way: we understand a concept c if we have a succinct (constant-length? polynomial-length? this is left undefined) semantic description of c in terms of a previous (prior) semantic information basis. Semantic information can be succinctly encoded if new information can be succinctly encoded (using a set of predefined encoding operators, with the complexity of the semantic construction measuring succinctness) based on a previous semantic knowledge basis (notice the vague resemblance to Lempel–Ziv).

    Consider the following example:

    A = hhh, B = hh, C = tht, D = hhhhh

    D = A + B, hence D has a succinct semantic description if we consider {A, B} to be our semantic information basis, while C (given the same semantic information basis {A, B}) has no succinct description. Notice that the previous discussion both explains the need for, and outlines the fundamental requirements of, a theory of semantic information. Probably flawed, this discussion serves to illustrate the general goal that such a theory would aim to achieve.”
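    Here is a brute-force toy rendering of that idea in code, with concatenation as the only encoding operator (the cap on expression length is an arbitrary choice):

    ```python
    from itertools import product

    def succinct_description(target, basis, max_terms=3):
        # Search for a short concatenation of basis strings that reproduces the target.
        # Returns the expression if one exists within max_terms terms, else None.
        for n in range(1, max_terms + 1):
            for names in product(sorted(basis), repeat=n):
                if "".join(basis[name] for name in names) == target:
                    return " + ".join(names)
        return None

    basis = {"A": "hhh", "B": "hh"}
    print(succinct_description("hhhhh", basis))  # 'A + B': D is succinct over {A, B}
    print(succinct_description("tht", basis))    # None: C has no description over {A, B}
    ```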

  6. D says:

    I agree with above commenters that a traditional proof is a fundamentally different object from an interactive proof. They achieve their goals in very different ways.

    While there are a number of differences (e.g. the negligible but nonzero inherent probability of error in interactive proofs, even independent of things like the demon argument), they seem to boil down to the inherent use of interactivity.

    In terms of generating proofs, in the traditional proof setting, if I don’t know a mathematical fact, in principle I could derive it on my own. Of course, I’m not Gauss, so in practice I generally fail at doing so–but in principle I could. In the interactive proof setting, there’s no way I could generate a convincing proof on my own (if I could, it would be a traditional proof).

    In terms of verifying proofs, I can look at someone else’s traditional proof and, again in principle, understand it. Even if the proof is very long and advanced, in an idealized situation (“if I had infinite time and were as smart as Gauss”) I could. If I look at a transcript of someone else’s zero-knowledge proof, it’s not going to convince me.

    This is true even assuming all my naive perceptions of the universe and my own mind are correct. Both traditional and interactive proofs–and essentially everything about life in general–are subject to the demon argument (how do you know you aren’t a brain in a jar hooked up to a bunch of electrodes feeding you stimuli?). But they have numerous differences aside from that. Essentially, in answer to your question, I would say that the line exists even in principle, and I am willing to abstract away size and complexity. I’d say that a trillion-line (computer-generated) traditional proof is still fundamentally the same sort of thing as a three-line traditional proof, even if a human couldn’t realistically generate or verify it.

  7. It seems to me that practically speaking, the purpose of a traditional proof is to transmit the understanding of some chain of logic from a prover to a reader, who may check each step but still has the possibility of making a logical mistake due to limited computational power. Though they all follow the same general protocol, each reader goes about it in their own way. From this angle traditional proofs are not so different from interactive proofs. But there’s still some sense in which the Platonic ideal of a traditional proof, which is understood efficiently and completely by readers who can then prove it to others, differs from that of an interactive proof.

  8. amosw says:

    It appears that Hilary Putnam is using Liouville’s theorem to argue that a rock, when modeled as a system in classical mechanics, has no trajectories that merge: i.e., it is forwards and backwards deterministic.

    So if we quotient time from 0 to n seconds into t1 = [0, 1), t2 = [1, 2), …, tn = [n-1, n), then we have n connected segments in the trajectory of the rock through its phase space from time 0 to n seconds.

    We can label these segments s1, s2, …, sn. Then if we, for example, make the identification A = si for odd i and B = si for even i, we have an alternating sequence ABAB…AB, which corresponds to the two-state finite state automaton {(A, B), (B, A)}. This example loses no generality and, Putnam claims, shows that an arbitrary physical system modeled by classical mechanics can implement any input-less finite state automaton simply by choosing a suitable identification.

    We might not worry about this too much were it not for the fact that input-less finite state automata whose state graphs are disconnected can compute arbitrary functions restricted to finite inputs. This works because we can choose to interpret different starting states as “input”. For example, if we have the FSA M = {(A, B), (B, C); (D, E), (E, F)}, then if we set the initial condition to A we will halt having “computed” an output of C, and with the initial condition D we will halt having “computed” an output of F. This is in fact precisely how any program running on your laptop works: even though we usually model computers as Turing machines, any physical computer is in fact a finite state automaton that we program by picking different initial conditions.
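    A toy sketch of the relabeling trick (the segment and state names below are purely illustrative) makes it vivid that all the computational content lives in the chosen mapping rather than in the rock:

    ```python
    def putnam_labeling(trajectory, fsa_run):
        # Pair each phase-space segment with whatever state the chosen FSA is
        # "supposed" to be in at that step. The dictionary is doing all the work.
        return dict(zip(trajectory, fsa_run))

    # Six arbitrary phase-space segments of the rock, s1..s6.
    trajectory = ["s1", "s2", "s3", "s4", "s5", "s6"]

    # Run of M = {(A, B), (B, C); (D, E), (E, F)} started from A: halts at C.
    print(putnam_labeling(trajectory, ["A", "B", "C"]))  # the rock "computed" C

    # A different labeling of the very same segments makes the rock "compute" F.
    print(putnam_labeling(trajectory, ["D", "E", "F"]))
    ```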

    If we think that consciousness can be simulated by a very large computer program, then there is an FSA that simulates consciousness. If the phase-space trajectory of a rock can be identified with an arbitrary FSA, then that rock is simulating consciousness, and we have to either accept panpsychism or reject our belief that consciousness can be simulated.

    But I think Putnam can be attacked on the following grounds.

    It seems to me that his argument carries through to a much simpler system: namely, a child writing out the natural numbers on a paper notepad. Certainly we can apply the same ABAB…AB even/odd automaton trick above to the child’s sequence 1, 2, 3, 4, … That is, a finite sequence of increasing natural numbers shares the essential property of classical mechanics that Putnam is exploiting, so if we accept his “proof” then we have to accept that a sequence of natural numbers is simulating consciousness, which I hope is an absurd conclusion.

  9. Miguel says:

    Does computation have meaning? Searle argues that computation does not have intrinsic semantics outside of that given by an interpreting observer. Chalmers (1996) counters that, insofar as computation involves a causal system embedded in physical reality, some interpretations are intrinsically feasible while others are not, regardless of the observer.

    One can make a more straightforward case, however, using computational complexity. Consider an observer O interested in computing a recursive set A \subseteq \mathbb{N}. We say O seeks a ‘computational interpretation’ of some observed recursive set B \subseteq \mathbb{N} (a phenomenon) if A \le_P B: that is, if the observer can reduce A to B in at most polynomial time (i.e., if the observer can feasibly carry out the implementation). Put this way, the absurdity of the claim that computation is ‘observer-relative’ becomes apparent: if that were the case, then any arbitrary decision problem A would be polynomial-time reducible to any other B!

    Furthermore, resource-bound reducibility establishes an observer-independent way to obtain semantics. Since the reduction relation \le_P is a pre-order and induces equivalence classes within the set of all decision problems, some computations are intrinsically more related than others, in terms not only of how feasibly one can be reduced to another, but also of how useful/non-trivial the resulting reduction is. We can then define the ‘semantic distance’ s(A, B) between two decision problems as a measure of how weak their resource-bound reducibility is.
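    (One possible way to make that precise, offered only as one candidate formalization, is to take the smallest polynomial degree admitting reductions in both directions:

    s(A, B) = \min \{ k \ge 1 : A \le_{O(n^k)} B \ \text{and} \ B \le_{O(n^k)} A \},

    so that a larger s(A, B) corresponds to a weaker, more expensive mutual reducibility.)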

    Note that the sole (albeit key) physical claim of this argument is the requirement of ‘feasible implementation’ by restricting reductions to polynomial-time, which is reasonable enough. Also note that this observer-independent notion of semantics emerges from mathematical properties of computation that would strike Searle as ‘merely syntactical’.

    One could also speculate on whether this notion of semantics has any relevance to the notion of knowledge as ‘knowing how to compute efficiently’. Consider a Turing machine f that reduces A to B in polynomial-time; A to C in exponential time; and given access to oracles for both B and C, determines s(A, B) < s(A, C) and so ignores the oracle for C. Can one say that f formally represents the knowledge that A and B are semantically related, while A and C are not?

  10. Hagi says:

    I do agree that the computational complexity ideas show why the “waterfall argument” does not work. If the process of reducing chess to a waterfall is as hard as computing chess moves itself, it makes no sense, practically speaking, to use that reduction. But even without appealing to such arguments, I think the waterfall argument actually supports computationalism.

    Even if it were possible to map the computation a waterfall does to the computation that a brain does, it would require one to understand each computation well individually. The fact that one can talk about the relation between them shows that computation is the fundamental reality of both.

    The structure of the TSP is quite fundamental, so much so that any NP problem can be reduced to it. We do not conclude that, since any NP problem can be reduced to that structure, the structure itself is context-dependent and has no semantic value. On the contrary, the structure of the TSP has so much significance precisely because it encapsulates the structure of many other problems.

    In my opinion this is why computationalism applies/will apply to so many different fields so well. The fact that it can be used to relate completely different processes does not imply that it is observer/context dependent, but that it is the fundamental description of those processes.

  11. sbanerjee says:

    I was definitely amazed by zero-knowledge proofs when I first heard about them, specifically because they seemed to have some magic to them. It was just like a magic trick: you get to see the cards to make sure that the magician’s not ‘cheating’, and he is still able to get to the end result without letting you know the main steps of the trick.

    But why are avant-garde proof techniques magical and traditional proof techniques not? I would argue that when we say that all the cards are on the table in a traditional proof technique, the trick is visibly written in the proof. To use the magician metaphor again, it is almost as if in traditional proof techniques the magician makes his tricks open-source, while in more avant-garde techniques like zero-knowledge proofs he is allowed to keep them secret. If it takes a trillion lines to write out the details of the trick – i.e., the proof written out in the conventional way – then I would argue that it still counts as having all the cards on the table, because the trick is still detailed in the proof.

    Would I still have the same feeling of understanding I’d get from a smaller conventional proof that a conscious person could actually comprehend? Probably not, but it still would not be the same thing as not having access to the trick at all. Going back to the metaphor of the magician: suppose the magician is an extraordinary genius and his trick is extraordinarily complex. The trick is finished in a minute, and the audience believes that he is consistent and that it was not just a fluke. It will take him 30 years to explain the trick to his apprentice, and even then the apprentice won’t be able to understand it. But the magician did try to teach the apprentice; he laid out all of the “cards” of the trick. There were just too many cards.

    When one attempts to draw the line on this understandability issue, he or she realizes that there is something fundamental about the way these non-traditional proof systems differ from traditional proof systems. In a way they are a form of lossy compression: sometimes what is lost is a set of ‘cards’ so complex that one cannot comprehend them, while at other times the cards are on the table and they are understandable.

    • D.R.C. says:

      I think that the magician analogy is quite apt, and is exactly why interactive proofs can be very useful compared to traditional proofs in some situations.

      Consider the question of whether every map is four-colorable, which was proven using a computer by checking over 1,900 base configurations and essentially reducing all maps to one of those. As was mentioned previously, even though this proof is theoretically understandable by somebody, it is not realistic for anybody to actually work through it (even Gauss). While it is possible that there is a more elegant proof than this, it is the only known proof that we have (and it is likely that a newer proof would be based on similar ideas, rather than a completely different approach). This is where an interactive proof becomes very useful. Instead of attempting to understand the proof completely, you can randomly generate very large maps and use an interactive proof to determine whether each seems to be four-colorable. If you repeat the process a polynomial number of times (some reasonable time period), you get an exponential decrease in the probability that you are being fooled. So if you do this with a fairly large number of random maps, you can conclude one of the following:
      a) The computer has a method of finding four-coloring for all graphs.
      b) There is a method for finding four-colorings for “almost all” graphs (but it is not necessarily true for all graphs)
      c) Either you were very unlucky in generating random maps or in choosing random edges to check (and we know the probability of this from the proof system; it is extremely small, basically insignificant)

      So this leads us to conclude, at the very least, that the average-case four-coloring can be efficiently found.
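      To put a number on that exponential decrease: assuming the standard per-round soundness of the edge-checking protocol (a prover with no valid coloring is caught with probability at least 1/|E| in each round), the number of rounds needed for a given confidence is easy to estimate:

      ```python
      import math

      def rounds_needed(num_edges, target_error):
          # Rounds needed so that a prover with no valid coloring fools the verifier
          # with probability at most target_error, given per-round escape chance (1 - 1/|E|).
          per_round = 1.0 - 1.0 / num_edges
          return math.ceil(math.log(target_error) / math.log(per_round))

      # E.g., a 10,000-edge map and a 2**-40 chance of being fooled:
      print(rounds_needed(10_000, 2**-40))   # roughly 277,000 rounds -- still polynomial in |E|
      ```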

      This is much like the magician who can consistently perform a certain trick without fail (so far), but who cannot teach the trick to another in a reasonable amount of time. While it is possible to spend the 30 years learning the trick, it would also be possible to go to every show for a month, and even if you can’t figure out how the trick is done (which the magician probably wants), you can be pretty convinced that he knows how to perform it.

      Even though for the most part classical proofs tend to be better, there are definitely cases for which interactive proofs are more useful.

  12. Cuellar says:

    I think Chalmers got it right. The reason a rock doesn’t simulate any FSA is that it lacks the appropriate causal relations. I’m interested in exploring whether this can be reduced to a question about complexity. Suppose there exists a reduction of a game of chess to the functioning of Niagara Falls. Would Chalmers argue that the causal relations are still missing even though the reduction exists? If a waterfall were constructed (I imagine something more like a complicated clepsydra with a system of siphons) specifically to compute the solution of chess, it would certainly have the causal powers Chalmers requires. So what is the difference between the two cases? Is it just a matter of how hard it is to find the reduction in the first case? Is it possible that we ‘give’ the causal powers to the waterfall when we find the reduction? I propose to consider two types of causal relations, one a priori and one a posteriori. The second kind is a derived relation which is given upon interpretation. Hence we can say that chess is not easily reducible to the first type of waterfall because it is hard to give it the necessary causal relations.

    On the other hand, the problem cannot be reduced to complexity alone. What if a machine that knows how to play perfect chess can only communicate in some alien language? We would not say that such a machine is not computing the right program; we would agree that it is just a failure of communication. In this sense, Chalmers’ observations about the right causal relations are necessary, but not sufficient without considerations of complexity.

  13. wjarjoui says:

    On proof protocols –

    Which proofs put “all the cards on the table”? I think before tackling that question it is important to think about the importance of having “all cards on the table” – why do we need such a property, and why do we even think about it?

    All proofs are the same in that they do what they are supposed to do. Whether the four-color theorem was proven by a computer or by a human, we would still be able to use it to reach other conclusions about the universe. The only difference between the two scenarios is the insight gained during the proof process, which I believe resonates with Deutsch’s arguments. The hope is that, by gaining insight from the proof process itself, perhaps we would be able to reach even more conclusions about the universe than we would have by only learning the truth of the theorem in question.

    Hence I define a proof that “puts all cards on the table” as a proof which helps humans reach conclusions about the universe that are not reachable from simply knowing the result of the proof (that the theorem is true). Obviously all proofs that provide a checkable signature of their process “put all cards on the table”; hence the interesting question here is not whether or not we can verify the proof, but rather whether or not we gain insight from the signature of the proof (we gain insight if we can reach more conclusions about the universe than we would have with only the result of the proof).

    Perhaps we can argue about whether all proofs are checkable to 100% accuracy, but as we saw with zero-knowledge proofs, protocols can be built to make the certainty high enough for practical purposes.

    Regarding our discussion of the waterfall argument –

    I tend to agree with Chalmers’ argument regarding the claim of a general mapping between open systems and FSAs. However, I do believe that there could exist open systems that precisely implement an FSA (or perhaps a more expressive automaton). The key, however, as Chalmers’ paper makes clear, is to be able to map in both directions the states and state transitions of the automaton to those of the open system implementing it, as well as to prove that those state transitions are lawful, particularly for the open system.

    There are probably multiple approaches to finding an open system that would implement a particular automaton, but I can think of two. One is to start by finding an open system that has lawful state transitions, and then try to map the states of the system to the states of the automaton. The second is to start with any open system, map all of its state transitions into a subset of state transitions that is lawful, and then map its states to those of the automaton. The complexity of mapping the open system to the automaton is a function of the number of states and state transitions of the automaton. Proving that this complexity cannot be sub-linear would prove that simulating the automaton using an open system is as hard as running the automaton itself. Would you agree? And is such a proof trivial?

  14. kasittig says:

    I believe that “all the cards on the table” in a traditional proof essentially means that, if you agree with the axioms of the proof, and you agree with the assertions written by the proof’s author, and you read the entire thing, you will be able to understand (and reconstruct) a proof of the same hypothesis by yourself. Traditional proofs generally rest on top of results that were proven in other traditional proofs – so, once you buy into the traditional schema of proving things, you can theoretically work backwards to understand where any of the assertions came from. When you read a line in a traditional proof, there is somewhat of a guarantee that, if you don’t immediately understand it, you can re-derive it and it will make sense, as long as you buy into logical reasoning.

    Zero-knowledge proofs, however, don’t “teach” you anything. Where a traditional proof does answer the question of “is this true”, it also answers the questions of “why and how”, which zero-knowledge proofs obviously don’t answer. It isn’t possible to reconstruct a proof of a statement after having seen a zero-knowledge proof of the same, which is convenient in some cases but intellectually frustrating in others.

    In terms of where the line should be drawn for traditional proofs, I don’t think that we inherently need to draw a line – an infinite proof would never get to the end, and therefore it wouldn’t actually prove anything, so anything finite seems relatively reasonable. In terms of understanding, there is already a method built into a traditional proof – just follow each assertion back to first principles. I think that this is the core difference between traditional and other sorts of proofs: an individual, if he or she is unconvinced of the truth of a statement, can just keep going backwards (keep asking “why”) until he or she is satisfied. Zero-knowledge proofs are essentially the antithesis of this, which is one reason why they’re so conceptually interesting.

  15. Doug says:

    “We discussed the arguments of Searle, Putnam, and other philosophers that the ‘meaning’ of a computation is always observer-dependent…”

    Would anyone be able and/or willing to provide this interested observer with some references to the works alluded to in the statement above? I’m unfamiliar with these philosophers but am interested to learn more about their arguments on this topic, so I’d be most grateful if someone could point me to relevant works. Also a couple of comments above refer to related argumentation by Chalmers; I’d be keen to have references to that as well. Thanks!
