Class #7: Kolmogorov Complexity and Homomorphic Encryption

Does K(x), the Kolmogorov complexity of a string x, provide an objective, observer-independent notion of the “amount of patternlessness” in x?  Or is the notion circular, because of the “additive constant problem”?  Do some choices of universal programming language yield a better version of K(x) than others?  If so, what might the criteria be?

Are scientific hypotheses with low Kolmogorov complexity more likely to be true, all else being equal?  If so, why?  Is that the only reason to prefer such hypotheses, or are there separate reasons as well?

Rather than preferring scientific hypotheses with low Kolmogorov complexity, should we prefer hypotheses with low resource-bounded Kolmogorov complexity (i.e., whose predictions can be calculated not only by a short computer program, but by a short program that runs in a reasonable amount of time)?  If we did that, then would we have rejected quantum mechanics right off the bat, because of the seemingly immense computations that it requires?  At an even simpler level, would we have refused to believe that the observable universe could be billions of light-years across (“all that computation going to waste”)?

Scientists—especially physicists—often talk about “simple” theories being preferable (all else being equal), and also about “beautiful” theories being preferable.  What, if anything, is the relation between these two criteria?  Can you give examples of simple theories that aren’t beautiful, or of beautiful theories that aren’t simple?  Within the context of scientific theories, is “beauty” basically just an imperfect, human proxy for “simplicity” (i.e., minimum description length or something of that kind)?  Or are there other reasons to prefer beautiful theories?

Does fully homomorphic encryption have any interesting philosophical implications?  Recall Andy’s thought experiment of “Homomorphic Man”, all of whose neural processing is homomorphically encrypted—with the encryption and decryption operations being carried out at the sensory nerves and motor nerves respectively.  Given that the contents of Homomorphic Man’s brain look identical to any polynomial-time algorithm, regardless of whether he’s looking at (say) a blue image or a red image, do you think Homomorphic Man would have qualitatively-different subjective experiences from a normal person?  How different can two computations look on the inside, while still giving rise to the same qualia?


40 Responses to Class #7: Kolmogorov Complexity and Homomorphic Encryption

  1. Zygohistomorphic Man says:

    I’m not convinced that I see the distinction between Homomorphic Man and Neuron Man, who encodes all of his sensory input into electrical impulses for processing inside his brain. Are there efficient algorithms for decoding these impulses? I get the impression current research tends to interpret neural activity by matching it to recorded responses to controlled stimuli — known plaintexts, in other words. But I suspect I’m both wrong and missing the point.

    • bobthebayesian says:

      I also missed the point in lecture but understood it much better talking with others about it later on. Homomorphic encryption doesn’t really have much to say about how to interpret / understand a human mind today. After all, unless nature evolved these encryption mechanisms, there is so much about the structure of one brain that correlates with the structure of another that we’d definitely expect to be able to do at least some pattern matching of one against the other and to understand principles about how they work. In lecture, I was distracted by thinking that homomorphic encryption might mean we could not even in principle simulate human cognition, but that’s not the point.

      The real point was that if you start out with a brain that you already have some digital control over, either because you’ve done the hard work of figuring out how to map between the neuron/digital worlds, or else because you’ve been given an algorithm that successfully solves the A.I. problem and you’re running it on some machine, then you could go in and choose to place the encryption barrier around all the parts of the brain that would need to be observed to physically disentangle the qualia. That is, by choosing to put the homomorphic encryption in there, you can make it computationally intractable to even observe the portion of cognition corresponding to qualia, and you are literally left with only the “access consciousness” part of the organism.

      It really is a remarkable result and Andy did a great job explaining it… I was just distracted by the idea that it said something about human brains as they currently exist. Based on this result, you could potentially meet an alien, say, and the access consciousness portion of your interaction might leave you thinking the alien is a little dull. But then you do some alien brain surgery and you start seeing that what’s going on in most of the brain looks more or less like white noise or gibberish to you — that’s *not* proof that the organism is unintelligent. It could be having amazingly insightful thoughts and experiencing rich, varied qualia, but it’s just not tractable for you to ever decode its cognitive signals to discover that.

      This would (will?) have profound effects if (when?) it is discovered how to digitally simulate consciousness. There might be a market for encrypting one’s internal brain states, for example. Identity theft would be a whole new sort of thing.

      (My apologies if you had already understood this much and I am just rehashing the part you already knew well. If so, perhaps someone else can address the other parts you wanted to probe more deeply.)

      • D says:

        Even more so than that, the question is not just about other people looking in on a brain’s operation using brain scans or (in the case of AI) software or circuit simulations, but how the entity itself could experience qualia. The point to me seemed to be that since the decryption key is only on the “periphery” and not in the brain itself, the brain could have input/output behavior indistinguishable from normal–including reporting to experience qualia, and reacting as if it experiences qualia and processes information–yet the brain has no information about what is actually being seen or experienced!

        Thus, the point of Homomorphic Man would seem to be a counterargument to a “very optimistic AI” point of view:
        1) Humans experience qualia;
        2) Hypothetically, an artificial entity could be built that precisely simulates the actions of a specific human’s brain (say, Andy’s);
        3) That entity would experience qualia as well;
        One could also construct a Homomorphic Man (HM) which is the same as the entity from #2 except the entire neuronal “circuit” operates under fully homomorphic encryption. This entity’s input-output behavior is indistinguishable from that of #2 (and thus that of Andy), and therefore it would report to experience qualia if asked, and otherwise act in a manner consistent with experiencing qualia.
        4) HM actually experiences qualia;
        5) HM’s brain state is indistinguishable even to HM when receiving different inputs (i.e. looking at different things), therefore HM cannot experience qualia, contradicting #4.

        Essentially, one would have to argue against one of the points 1-5 (including 5, if one wanted to state that there wasn’t actually a contradiction). One obvious target might be #4, though one would have to make the case for why the homomorphic encryption makes a difference–if hypothetically an alien with very different thought processes from humans, or an AI algorithm, could experience qualia, then why not HM? (My personal take on it is that the argument fails at #3, but I’m a dualist, so that’s not necessarily surprising.)

      • Zygohistomorphic Man says:

        > yet the brain has no information about what is actually being seen or experienced!

        But isn’t this true for Neuron Man as well? The brain itself doesn’t somehow check its internal neural impulses against something external; it receives inputs, processes them, and produces outputs. It is, functionally, identical to Homomorphic Man’s brain.

        I’m not getting what you mean by indistinguishable. As I understand it, encryption only means that given polynomially bounded resources, one cannot determine the plaintext using only the ciphertext. But okay; so what? The sight of red produces this arbitrary string of neural impulses, and the sight of green produces this other one. I still see no difference between Homomorphic Man and Neuron Man here. What if the real world were actually completely different and alien, and our bodies have encryption barriers that translate what our sense organs actually perceive into what we seem to experience, and what we think we command our bodies to do into what our actual bodies end up doing? In fact, what does the strength of the encryption matter at all?

        What I’m trying to get at, I think, is that the entire topic is unproductive skepticism. A mind receives inputs, processes them, and produces outputs. You can conjure some arbitrary definition of qualia if you want, and argue about whether Homomorphic Man has them or Neuron Man has them or what they experience and what they know about what they experience, but I don’t see how this is at all relevant to anything else. But then I’m biased against the whole idea that qualia and p-zombies and such are anything more than an abstract intellectual curiosity.

      • bobthebayesian says:

        I agree that qualia are ultimately just physical. My guess would be that we evolved aspects of consciousness in order to benefit from social interaction and that internal qualia are more or less a side effect of the way evolution architected access consciousness. Moreover, there’s tons of literature about the way particular cortices work for doing very specific tasks like recognizing objects. I was just talking with a researcher in the Brain and Cognitive Science lab at Harvard last week about how there is actually a well-documented cluster of neurons that fire in a particular way upon the sight of a Coke bottle, and are robust against orientation changes, lighting changes, and whether it is on the periphery of your vision or not.

        The thing I find hard to believe is that a brain could process arbitrary encrypted streams of input. I think brains inspect the input for pattern matching. If the incoming stream of symbols did not correlate with the outside world, you would die. The only way this wouldn’t be the case is if you already knew how a brain was working and you could engineer the encryption such that the brain operations performed still faithfully related their correspondence with the external world *and* it was intractable for anyone *else* to decrypt them.

        Basically, I think that Homomorphic Man would “see” white noise when “looking” at a red object, and would “feel” white noise when touching a rough object. The operations applied to this white noise input could not then correspond to the same operations in a non-encrypted brain that produce access consciousness actions like saying the sentence “I see a red object.” I’m not convinced that input streams that are uncorrelated with the outside world can be processed by a brain and produce outputs that are correlated with the outside world. I don’t think brains perform “general purpose” operations. The operations they perform are contingent on the input.

        Maybe I really am missing something then. Let’s say that the function F() represents how a brain maps incoming signals to symbols for vision processing. Then F(signals generated from photons from a red object) = (symbols used to distinguish red object) but F(encrypted signals from photons from a red object) is not equal to (encrypted symbols used to distinguish red object). In human brains we can actually go measure this right now for things like the visual cortex (https://sites.google.com/site/gallantlabucb/publications/nishimoto-et-al-2011).

        The only interesting point that I saw was that the internal state of the encrypted brain could not actually be accessing the qualia, even if it was somehow engineered such that on input X, HM will output an appropriate response that confirms access consciousness of input X. Thus, if you have digital access to modify a brain (or have an AI program) then its inputs/outputs don’t tell you what it is thinking. Its inputs/outputs could be gibberish but it could be perceiving tons of interesting qualia (think about human beings who suffer “locked-in syndrome”) or it could be carrying on a conversation with you perfectly with no access to qualia.

        Regarding the idea of unconscious zombies, a very good post that really brings in a lot of David Chalmers’ writing on it is here: (http://lesswrong.com/lw/p7/zombies_zombies/)

      • D says:

        bobthebayesian:
        The thing I find hard to believe is that a brain could process arbitrary encrypted streams of input.
        There’s an intermediate step of “Artificial Man” in the thought experiment. (And, if you like, a series of hybrid steps between Neuron Man and Artificial Man, replacing neurons by relays one by one.) So we first talk about an artificial brain, and then replace the artificial neurons by their homomorphically-encrypted equivalents.

        Maybe I really am missing something then. Let’s say that the function F() represents how a brain maps incoming signals to symbols for vision processing. Then F(signals generated from photons from a red object) = (symbols used to distinguish red object) but F(encrypted signals from photons from a red object) is not equal to (encrypted symbols used to distinguish red object).
        No, but if Eval is the evaluation function of a fully-homomorphic encryption scheme, then Eval(F, (encrypted signals from photons from a red object)) = (encrypted symbols used to distinguish a red object). That’s the idea of FHE: one can evaluate an arbitrary function on the underlying plaintext without knowing how to decrypt the ciphertext.
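
        To make that contract concrete, here is a toy Python mock of the interface. This is only a sketch: there is no actual cryptography in it, the names enc/dec/eval_hom and the “signal”/“symbol” strings are made up for illustration, and a real scheme evaluates F gate-by-gate on ciphertexts rather than peeking at the plaintext the way this mock does.

        ```python
        # Toy mock of the FHE interface only; not real encryption.
        # The point is the algebra: dec(eval_hom(F, enc(x))) == F(x),
        # and enc is randomized, so two encryptions of the same value differ.
        import secrets

        class Ciphertext:
            """Opaque wrapper standing in for an encrypted value."""
            def __init__(self, hidden):
                self._hidden = hidden              # plaintext, hidden from callers
                self.nonce = secrets.token_hex(8)  # fresh randomness per encryption

        def enc(x):
            return Ciphertext(x)

        def dec(ct):
            return ct._hidden  # only the key-holder (the "periphery") can do this

        def eval_hom(f, ct):
            # A real scheme applies f to ciphertexts without ever seeing the
            # plaintext; this mock only reproduces the input/output contract.
            return Ciphertext(f(ct._hidden))

        def F(signal):
            # stand-in for the brain's mapping from raw signals to internal symbols
            return "symbol:red" if signal == "photons-from-red-object" else "symbol:other"

        ct = enc("photons-from-red-object")
        assert dec(eval_hom(F, ct)) == "symbol:red"  # Eval(F, Enc(x)) decrypts to F(x)
        assert enc("x").nonce != enc("x").nonce      # same plaintext, different ciphertexts
        ```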

      • I find the Less Wrong discussion super frustrating, because Eliezer is operating on the weird implicit assumption that the mere *logical* possibility of our statements about consciousness coming apart from the facts about consciousness would make our statements about consciousness frivolous. I can use the same weird reasoning to prove that Eliezer should give up Ockham’s Razor, since Eliezer’s twin in a more complicated universe in which Ockham’s Razor isn’t true would be arguing for Ockham’s Razor in the exact same way that Eliezer does, and for the exact same reasons, and be wrong.
        I just don’t understand why when it comes to P-Zombies people start acting as if the fact that a theory implies that something that would undermine that theory is *logically* possible undermines the theory. For every belief that’s not about math or logic, I can construct an appropriate world where your twin has this belief but is wrong. Why do we suddenly get so freaked out by this trivial fact when it’s implied by the claim to the logical possibility of P-Zombies?

      • Btw, I think Chalmers’ critique of what he terms “Type C Materialism” is devastating for the kind of “we can’t imagine what kind of explanation can do the job, but we will find an explanation that does the job” approach that Eliezer is advocating. Because the ultimate issue isn’t anything about our current physics or current neuroscience, but the in-principle conceptual gulf between qualia and any analysis in terms of structure and dynamics:

        “According to type-C materialism, there is a deep epistemic gap between the physical and phenomenal domains, but it is closable in principle. On this view, zombies and the like are conceivable for us now, but they will not be conceivable in the limit. On this view, it currently seems that Mary lacks information about the phenomenal, but in the limit there would be no information that she lacks. And on this view, while we cannot see now how to solve the hard problem in physical terms, the problem is solvable in principle.

        This view is initially very attractive. It seems to acknowledge the deep explanatory gap with which we seem to be faced, while at the same time allowing that the apparent gap may be due to our own limitations. There are different versions of the view. Nagel (1974) has suggested that just as the pre-Socratics could not have understood how matter could be energy, we cannot understand how consciousness could be physical, but a conceptual revolution might allow the relevant understanding. Churchland (1997) suggests that even if we cannot now imagine how consciousness could be a physical process, that is simply a psychological limitation on our part that further progress in science will overcome. Van Gulick (1993) suggests that conceivability arguments are question-begging, since once we have a good explanation of consciousness, zombies and the like will no longer be conceivable. McGinn (1989) has suggested that the problem may be unsolvable by humans due to deep limitations in our cognitive abilities, but that it nevertheless has a solution in principle.

        One way to put the view is as follows. Zombies and the like are prima facie conceivable (for us now, with our current cognitive processes), but they are not ideally conceivable (under idealized rational reflection). Or we could say: phenomenal truths are deducible in principle from physical truths, but the deducibility is akin to that of a complex truth of mathematics: it is accessible in principle (perhaps accessible a priori), but is not accessible to us now, perhaps because the reasoning required is currently beyond us, or perhaps because we do not currently grasp all the required physical truths. If this is so, then there will appear to us that there is a gap between physical processes and consciousness, but there will be no gap in nature.

        Despite its appeal, I think that the type-C view is inherently unstable. Upon examination, it turns out either to be untenable, or to collapse into one of the other views on the table. In particular, it seems that the view must collapse into a version of type-A materialism, type-B materialism, type-D dualism, or type-F monism, and so is not ultimately a distinct option.

        One way to hold that the epistemic gap might be closed in the limit is to hold that in the limit, we will see that explaining the functions explains everything, and that there is no further explanandum. It is at least coherent to hold that we currently suffer from some sort of conceptual confusion or unclarity that leads us to believe that there is a further explanandum, and that this situation could be cleared up by better reasoning. I will count this position as a version of type-A materialism, not type-C materialism: it is obviously closely related to standard type-A materialism (the main difference is whether we have yet had the relevant insight), and the same issues arise. Like standard type-A materialism, this view ultimately stands or fall with the strength of (actual and potential) first-order arguments that dissolve any apparent further explanandum.

        Once type-A materialism is set aside, the potential options for closing the epistemic gap are highly constrained. These constraints are grounded in the nature of physical concepts, and in the nature of the concept of consciousness. The basic problem has already been mentioned. First: Physical descriptions of the world characterize the world in terms of structure and dynamics. Second: From truths about structure and dynamics, one can deduce only further truths about structure and dynamics. And third: truths about consciousness are not truths about structure and dynamics. But we can take these steps one at a time.

        First: A microphysical description of the world specifies a distribution of particles, fields, and waves in space and time. These basic systems are characterized by their spatiotemporal properties, and properties such as mass, charge, and quantum wavefunction state. These latter properties are ultimately defined in terms of spaces of states that have a certain abstract structure (e.g., the space of continuously varying real quantities, or of Hilbert space states), such that the states play a certain causal role with respect to other states. We can subsume spatiotemporal descriptions and descriptions in terms of properties in these formal spaces under the rubric of structural descriptions. The state of these systems can change over time in accord with dynamic principles defined over the relevant properties. The result is a description of the world in terms of its underlying spatiotemporal and formal structure, and dynamic evolution over this structure.

        Some type-C materialists hold we do not yet have a complete physics, so we cannot know what such a physics might explain. But here we do not need to have a complete physics: we simply need the claim that physical descriptions are in terms of structure and dynamics. This point is general across physical theories. Such novel theories as relativity, quantum mechanics, and the like may introduce new structures, and new dynamics over those structures, but the general point (and the gap with consciousness) remains.

        A type-C materialist might hold that there could be new physical theories that go beyond structure and dynamics. But given the character of physical explanation, it is unclear what sort of theory this could be. Novel physical properties are postulated for their potential in explaining existing physical phenomena, themselves characterized in terms of structure and dynamics, and it seems that structure and dynamics always suffices here. One possibility is that instead of postulating novel properties, physics might end up appealing to consciousness itself, in the way that some theorists hold that quantum mechanics does. This possibility cannot be excluded, but it leads to a view on which consciousness is itself irreducible, and is therefore to be classed in a nonreductive category (type D or type F).

        There is one appeal to a “complete physics” that should be taken seriously. This is the idea that current physics characterizes its underlying properties (such as mass and charge) in terms of abstract structures and relations, but it leaves open their intrinsic natures. On this view, a complete physical description of the world must also characterize the intrinsic properties that ground these structures and relations; and once such intrinsic properties are invoked, physics will go beyond structure and dynamics, in such a way that truths about consciousness may be entailed. The relevant intrinsic properties are unknown to us, but they are knowable in principle. This is an important position, but it is precisely the position discussed under type F, so I defer discussion of it until then.

        Second: What can be inferred from this sort of description in terms of structure and dynamics? A low-level microphysical description can entail all sorts of surprising and interesting macroscopic properties, as with the emergence of chemistry from physics, of biology from chemistry, or more generally of complex emergent behaviors in complex systems theory. But in all these cases, the complex properties that are entailed are nevertheless structural and dynamic: they describe complex spatiotemporal structures and complex dynamic patterns of behavior over those structures. So these cases support the general principle that from structure and dynamics, one can infer only structure and dynamics.

        A type-C materialist might suggest there are some truths that are not themselves structural-dynamical that are nevertheless implied by a structural-dynamical description. It might be argued, perhaps, that truths about representation or belief have this character. But as we saw earlier, it seems clear that any sense in which these truths are implied by a structural-dynamic description involves a tacitly functional sense of representation or of belief. This is what we would expect: if claims involving these can be seen (on conceptual grounds) to be true in virtue of a structural-dynamic descriptions holding, the notions involved must themselves be structural-dynamic, at some level.

        One might hold that there is some intermediate notion X, such that truths about X hold in virtue of structural-dynamic descriptions, and truths about consciousness hold in virtue of X. But as in the case of type-A materialism, either X is functionally analyzable (in the broad sense), in which case the second step fails, or X is not functionally analyzable, in which case the first step fails. This is brought out clearly in the case of representation: for the notion of functional representation, the first step fails, and for the notion of phenomenal representation, the second step fails. So this sort of strategy can only work by equivocation.

        Third: does explaining or deducing complex structure and dynamics suffice to explain or deduce consciousness? It seems clearly not, for the usual reasons. Mary could know from her black-and-white room all about the spatiotemporal structure and dynamics of the world at all levels, but this will not tell her what it is like to see red. For any complex macroscopic structural or dynamic description of a system, one can conceive of that description being instantiated without consciousness. And explaining structure and dynamics of a human system is only to solve the easy problems, while leaving the hard problems untouched. To resist this last step, an opponent would have to hold that explaining structure and dynamics thereby suffices to explain consciousness. The only remotely tenable way to do this would be to embrace type-A materialism, which we have set aside.

        A type-C materialist might suggest that instead of leaning on dynamics (as a type-A materialist does), one could lean on structure. Here, spatiotemporal structure seems very unpromising: to explain a system’s size, shape, position, motion, and so on is clearly not to explain consciousness. A final possibility is leaning on the structure present in conscious states themselves. Conscious states have structure: there is both internal structure within a single complex conscious state, and there are patterns of similarities and differences between conscious states. But this structure is a distinctively phenomenal structure, quite different in kind from the spatiotemporal and formal structure present in physics. The structure of a complex phenomenal state is not spatiotemporal structure (although it may involve the representation of spatiotemporal structure), and the similarities and differences between phenomenal states are not formal similarities and differences, but differences between specific phenomenal characters. This is reflected in the fact that one can conceive of any spatiotemporal structure and formal structure without any associated phenomenal structure; one can know about the first without knowing about the second; and so on. So the epistemic gap is as wide as ever.

        The basic problem with any type-C materialist strategy is that epistemic implication from A to B requires some sort of conceptual hook by virtue of which the condition described in A can satisfy the conceptual requirements for the truth of B. When a physical account implies truths about life, for example, it does so in virtue of implying information about the macroscopic functioning of physical systems, of the sort required for life: here, broadly functional notions provide the conceptual hook. But in the case of consciousness, no such conceptual hook is available, given the structural-dynamic character of physical concepts, and the quite different character of the concept of consciousness.

        Ultimately, it seems that any type-C strategy is doomed for familiar reasons. Once we accept that the concept of consciousness is not itself a functional concept, and that physical descriptions of the world are structural-dynamic descriptions, there is simply no conceptual room for it to be implied by a physical description. So the only room left is to hold that consciousness is a broadly functional concept after all (accepting type-A materialism), hold that there is more in physics than structure and dynamics (accepting type-D dualism or type-F monism), or holding that the truth of materialism does not require an implication from physics to consciousness (accepting type-B materialism).[*] So in the end, there is no separate space for the type-C materialist.”

      • bobthebayesian says:

        @Peli: I’m not sure I agree with you about Ockham’s razor in a more complicated universe. In such a universe, you’d still have to select theories that are consistent with the data. If you did know you were in such a complicated universe, then that’s some data that Ockham’s razor applies to. If you don’t know, then how is that any different from the world we’re in now?

        I think Eliezer’s points about epiphenomenalism are very good. Why postulate physically ineffectual properties of things if there are causal explanations available to you? I am very interested in reading the passage you quoted from Chalmers once I get some extra free time (haha), because it will probably force me to think very hard about my position.

      • Bob: Consider a world where everything is grue rather than green. Your grue-world twin has the exact same data you do, only his use of Ockham’s Razor leads him into false beliefs. So you have to admit that your grue-world twin will believe in Ockham’s Razor for the exact same reasons that you do, and give the very same arguments for Ockham’s razor that you do, but be wrong — just as bad as Chalmers’ p-zombie twin. Though I have to confess that I do find the epistemology of epiphenomenal qualia very troubling despite this argument.

        Chalmers has an interesting thing to say to the effect that the epiphenomenalism problem will come up even given a causal explanation. I think it’s a pretty mind-blowing point, though I don’t know how to deal with its implications:

        “The real “epiphenomenalism” problem, I think, does not arise from the causal closure of the physical world. Rather, it arises from the causal closure of the world! Even on an interactionist picture, there will be some broader causally closed story that explains behavior, and such a story can always be told in a way that neither includes nor implies experience. Even on the interactionist picture, we can view minds as just further nodes in the causal network, like the physical nodes, and the fact that these nodes are experiential is inessential to the causal dynamics. The basic worry arises not because experience is logically independent of physics, but because it is logically independent of causal dynamics more generally.”

  2. D says:

    > yet the brain has no information about what is actually being seen or experienced!

    But isn’t this true for Neuron Man as well? The brain itself doesn’t somehow check its internal neural impulses against something external; it receives inputs, processes them, and produces outputs. It is, functionally, identical to Homomorphic Man’s brain.

    Consciously, of course, I have no idea what my neurons are doing. But in a practical sense, the physical inputs to my brain result in something (I know not what) that I deem “experiential”, or “qualia”, or “conscious awareness”, or such. If you like, the brain has an output to my conscious mind (in addition to my muscles).

    Neuron Man might not understand the process by which this qualia-output is created, but it’s not inconceivable (especially given the hypothetical in #2) that this is actually some efficient natural process. But with Homomorphic Man, we’re essentially removing this possibility, and stating that no efficient process can recover the “experiences” the individual would undergo, yet that individual would, by all accounts–including their own–still undergo them. The two strings of neural impulses in your example would be indistinguishable to any mechanism we could use to try to distinguish them–including whatever mechanism the brain would normally use to “be conscious of seeing red” and “be conscious of seeing green.”

    Excuse me if I’ve misread you, but some of your statements seem like you reject the idea of qualia wholesale, and only consider the mind as effectively a dumb input-output machine. If this is the case, then you’re essentially rejecting my statement #1–which is a logically consistent position; as I see it, the HM argument simply shows that there’s some inconsistency in the chain of 1-5, but makes no claim about where that inconsistency is.

    • Zygohistomorphic Man says:

      Even if we accept the existence of qualia, my point is that in both Neuron Man and Homomorphic Man, sensory inputs get translated into some mental representation that the brain works with. It’s true that this representation is easier to convert back into a description of the sensory inputs for Neuron Man than for Homomorphic Man, but I don’t think it’s relevant. I don’t see how this makes one of HM’s experiences any less a quale than one of NM’s. And functionally, they are identical because they correspond to the same inputs and produce the same outputs. That is, HM and NM can both call a brain function that tells them whether two images are the same color, and this will say yes for two red things and no for a red thing and green thing — it doesn’t matter whether someone else looking at their brains can tell the difference. The same goes for anything their brains can do. HM’s mind has no less access to qualia than NM’s, because both are working with their respective mental representations. I find the claim that HM sees white noise when he looks at a red object nonsensical, and analogous to the claim that you see your own head when you look at a red object. Basically, HM’s mental representations are in no sense uncorrelated with the outside world, and in no sense less real — or less red — than NM’s.

      But yes, I personally take a functionalist view, and see little practical value in the concept of qualia. What happens if tomorrow scientists discover that we all have internal homomorphic sense filters like HM’s, or that aliens have placed external filters on all of us that correspond to our brains’ natural functions, so that they continue to operate as if they were receiving “real” sensory input? Would our conception of our own consciousness have to change overnight? You can keep coming up with hypotheticals like this, and working out the logical implications, but I don’t see the relevance; I don’t see how the answers affect anything. (Which is not to say it can’t be an interesting thought experiment in its own right.) As far as I can tell, the concept of qualia is born out of a need to believe that our thoughts are “real” and correspond to some external reality. And okay, sure, but why not just come out and address that question directly?

      • D says:

        That is, HM and NM can both call a brain function that tells them whether two images are the same color, and this will say yes for two red things and no for a red thing and green thing — it doesn’t matter whether someone else looking at their brains can tell the difference. The same goes for anything their brains can do. HM’s mind has no less access to qualia than NM’s, because both are working with their respective mental representations.
        If HM can call such a function (however he might do so), then a hypothetical observer of their brain state could perform the same operation, and thus break the encryption. Put another way, ignore the third-party observer entirely–the claim is that there’s no way HM himself could access this information, except at the “very periphery” (after it gets decrypted), which we assume is on the nerves outside the brain and thus “outside” the normal level of introspection/consciousness.

        As far as I can tell, the concept of qualia is born out of a need to believe that our thoughts are “real” and correspond to some external reality. And okay, sure, but why not just come out and address that question directly?
        In one sense, qualia is an attempt to do exactly this. Many people have an intuitive sense that there is an “experience” involved in perception that is above and beyond the raw processing of fact. Many might say that the ability to experience qualia is essential to consciousness (hence the reference to hypothetical entities that don’t experience qualia as “zombies”). Thus, it’s an attempt to put one’s finger on aspects of consciousness/intelligence/etc.

      • bobthebayesian says:

        D: I guess my problem is that I believe every part of the brain is at the “very periphery” and that experiencing the qualia of red is not operationally different from returning neural information that causes the vocal cords to utter the phrase “I see a red object.” My views are informed a little by the book “The Ecological Approach to Visual Perception” by James Gibson, and I think his idea extends to all forms of perception.

      • Zygohistomorphic Man says:

        D: But distinguishing colors is something that a brain can do, and therefore HM has a homomorphic version of it, no? I’m saying the whole “access the information” argument is meaningless, because both NM and HM convert sense impressions into encoded forms, and they can perform functionally the same operations on them. HM has exactly as much access to the information as he needs, which is also exactly as much as NM has.

        Yes, and it seems totally analogous to thinking about brains in vats. Just as we can never actually know that we have bodies that interact with the external reality we seem to perceive, we can never know that our experiences or qualia are authentic or real — if a zombie realized that they were a zombie, they would no longer behave identically to a non-zombie. So, while it may be an interesting exercise, it’s irrelevant to any discussion that doesn’t specifically deal with the possibility.

  3. I would argue that not only should we not prefer hypotheses with low resource-bounded Kolmogorov complexity, we should in fact reject any language that allows us to deliver physical reality via a program with low resource-bounded Kolmogorov complexity. What we’re looking for in physics are programs that deliver physical reality via massive iteration, not programs that have primitive expressions that magically produce our entire physical reality ready-made.

    • Cuellar says:

      Are you assuming that the only way to compute physical reality is to carry out all the iterations? Is it possible that physical reality has some simple laws behind it that we can’t find because of our current ‘language’? Imagine living in a universe generated by the rule 110 cellular automaton. Without the appropriate language, one could define very complicated rules to describe the behavior of the world without ever finding the eight simple rules that govern it.
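
      For concreteness, here is a minimal simulation of that update rule; this is the standard construction (the “eight simple rules” are just the eight bits of 110 written in binary, one output per three-cell neighborhood), included only as an illustration:

      ```python
      # Rule 110: each cell's next state is a fixed function of its
      # (left, self, right) neighborhood; the lookup table is the number
      # 110 = 0b01101110, one output bit per neighborhood pattern.
      RULE = 110

      def step(cells):
          n = len(cells)
          return [
              (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
              for i in range(n)
          ]

      row = [0] * 40 + [1]  # start from a single live cell
      for _ in range(20):
          print("".join("#" if c else "." for c in row))
          row = step(row)
      ```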

      • Ignorant question: are there interesting-seeming universes generated by rule 110 in a *low number* of time-steps?

      • Cuellar says:

        Rule 110 is universal, so it can simulate any Turing machine, obviously with enormous overhead…
        I think Wolfram has done some research on how these rules seem chaotic while having very simple generating rules. He claims that the universe is very similar and has some simple rules. These rules cannot be described by physics, our current language…. He suggests computer science is the solution.
        I don’t necessarily agree with much of what he says, but I don’t think it’s obviously false.

  4. Katrina LaCurts says:

    A few thoughts on homomorphic man:

    First, I don’t think that changing the encoding of the input to one’s brain necessarily means that that person will have a different phenomenal experience. I’m actually not even convinced that if two different brains react to the same experience in different ways (as measured by some sort of brain scan, say) it necessarily means the two people are having different phenomenal experiences.

    What I think is the most interesting part about homomorphic man is this: homomorphic encryption is non-deterministic, so the same input isn’t consistently encrypted in the same way. We also can’t tell whether two inputs a and b have the same underlying plaintext by looking at how the evaluation function proceeds on input a and then input b; these functions are designed not to leak any information.

    Given this, I wonder if homomorphic man is still able to group similar inputs together. E.g., every time he sees red he gets a totally different encrypted input; does he know, then, that “red” is a thing? (Even though my brain receives a different input when I see a stop sign vs. a fire truck, I imagine there is a consistent “red” bit in there somewhere.) Is homomorphic man still able to do introspection, or to remember things? Are those abilities somehow embedded in the functions we’ve passed to him to operate on the inputs? If so, then what’s to say he doesn’t have the same phenomenal experience as us?

    • D.R.C. says:

      I do not see why HM would not be able to group similar inputs together, since it seems that our visual system can be modeled similarly to computer vision. You could have an encrypted version of some function that allows us to differentiate objects in our field of vision, so EncSeenObjects = f(vision, findObjects()). You could do this twice, and then test the pairs of EncSeenObjects to see if they have some property (which is fairly simple to do for regular vision, so it must have an encrypted version). For instance, red is some wavelength that our eyes register, so what we consider “red” would be that wavelength with some threshold above and below (so at the very edges you might say something is “reddish”, or the thresholds might overlap, etc.). Even though this is a simplistic view of what is happening (it’s probably closer to an overlap of normal distributions, depending on how well versed in color naming the person is), it shows that there should be some algorithmic way of performing the similarity operation.
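
      As a toy version of that similarity test, just to show it is straightforwardly algorithmic (the numbers and function names below are illustrative, not a model of actual color vision), consider:

      ```python
      # Crude wavelength-threshold classifier for "red", as sketched above.
      RED_CENTER_NM = 660.0      # roughly the middle of the red band (illustrative)
      RED_HALF_WIDTH_NM = 40.0   # the threshold above and below

      def looks_red(wavelength_nm):
          return abs(wavelength_nm - RED_CENTER_NM) <= RED_HALF_WIDTH_NM

      def same_color_class(w1, w2):
          # the pairwise test: classify each input, then compare the classes
          return looks_red(w1) == looks_red(w2)

      print(same_color_class(650, 670))  # True: both fall inside the "red" band
      print(same_color_class(650, 530))  # False: red vs. green
      ```

      Under FHE, it would be an encrypted version of exactly this kind of comparison circuit that Eval runs.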

      I would assume that the memory would still have to exist somewhere in HM, possibly an encrypted version of the same memory structure as NM uses. It might not be specifically called or described, but might just be inherent in the functions that we are creating encrypted versions of, and other functions might use that memory to optimize their runtime/result.

  5. A small observation to be made about homomorphic man: the “outputs” of the brain don’t apply only to things we associate with purely conscious movement, i.e. motor control, but also things which we don’t notice at all: hormones, heart rate, blood pressure, etc. We need to decrypt all of these outgoing signals. Homomorphic man sweats when he feels nervous—his heart rate increases.

    At the limit, every neuron encrypts its incoming electrical pulses before routing them, decrypting them, and passing them on to the next neuron. One might wonder what the difference with metallic neurons would be.

    Here’s another experiment. Bob learns that 90% of the capacity of his brain goes unused at any given time, so he donates the unused capacity to the Institvte of Mind Simulations. They teach his brain how to perform homomorphic operations, and then start sending him homomorphically encrypted messages for his brain to compute and send back (after all, the privacy of the brains being simulated is important!)

    The point of the homomorphic encryption is that Bob’s wetware isn’t privy to any of the qualia that the mind he is simulating may be experiencing. Now suppose Bob’s curiosity gets the better of him and he purloins the decryption key. He now has the capacity to peek into these extra thoughts that he has (though he doesn’t have to.) Is the simulated mind’s consciousness in Bob, or not?

  6. bobthebayesian says:

    This may be a really naive question, but is it possible to perform case logic on homomorphically encrypted data? What if I have a function F(input) and F first says something like if(input == case_1) then … if(input == case_2) then … and so on? If input gets encrypted, then is it still possible to apply F directly to the encrypted version and have the logically correct encrypted output? If not, this would suggest that any “memory location” of the brain that is partially responsible for the brain state would always have to decrypt everything anyway, giving access to qualia. I’m guessing it must be “straightforward” to perform the case logic under encryption, but I do not see how.

    Alternatively, if instead the brain was required to perform case logic by checking if(encrypted_input == encryption(case_1)) then …, then each part of the brain has to have a copy of what the encrypted version of things looks like, which would be just the same as having direct access to it and would also give access to qualia.

    This makes me feel that I am very confused, so any help would be appreciated!

    • D says:

      Under fully homomorphic encryption, you can perform any computation on encrypted data you could perform on the unencrypted data, except without decrypting (or even knowing how to decrypt or what the underlying data is). This is what the “fully” in “fully homomorphic encryption” means.

      It is, however, a little more complicated than simply applying F to the encrypted input. Essentially, as part of the FHE scheme, there is an evaluation procedure Eval, that takes in an operation and encrypted data, and outputs the encryption of the operation applied to the data (without decrypting in the middle). So essentially, instead of calculating F(encrypted input), you would calculate Eval(F, encrypted input).

      The mathematical details of how this works are not necessarily straightforward, which is why FHE remained an open problem for 30 years. 🙂 But the central point is that the brain can perform the case logic while “still under encryption”–while never decrypting, and never learning the unencrypted value–though it’s more complicated than simply applying the bare function to the encrypted data.
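
      One concrete way to see how case logic survives encryption: compile the if/else into a branch-free circuit (a multiplexer) that evaluates every case and arithmetically selects one output; that whole circuit is what Eval would run on ciphertexts. Here is a plaintext sketch of the idea (my own illustration, not any particular FHE library’s API):

      ```python
      # Branch-free ("circuit") version of the case logic from the question above.
      # On plaintext values this is ordinary arithmetic; under FHE, Eval runs the
      # same circuit on ciphertexts and never learns which case actually fired.

      def eq(x, case):
          # equality as a 0/1 value; in a real circuit this is a small XNOR/AND tree
          return 1 if x == case else 0

      def mux(cond, if_true, if_false):
          # select without branching: both inputs are touched regardless of cond
          return cond * if_true + (1 - cond) * if_false

      def F(x, case_1, case_2, out_1, out_2, out_default):
          return mux(eq(x, case_1), out_1,
                     mux(eq(x, case_2), out_2, out_default))

      assert F(7, 7, 9, 100, 200, 0) == 100  # matches case_1
      assert F(9, 7, 9, 100, 200, 0) == 200  # matches case_2
      assert F(3, 7, 9, 100, 200, 0) == 0    # matches neither
      ```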

      Does that make more sense?

      • bobthebayesian says:

        It’s still not clear to me. What if I evolved a set of nerve cells that fire in a special way when they see a very specific spectral pattern related to smiling faces? I do not understand how HM’s brain can “sneak a peek” at what the unencrypted version of the incoming visual stimuli looks like, perform the neural operation, and then output the appropriate encrypted result. Literally as soon as the encrypted data starts hitting neurons, the precise characteristics of what the data superficially looks like (e.g. random bits if it is encrypted) will determine how it gets routed around in the brain. I think I would need to understand a lot more about the specific implementation details of H.E. before I can remove my ignorance about how pattern-sensitive computation could still be performed on it on the fly.

      • D says:

        Think about it this way: Instead of having a bunch of special-purpose devices, we now generally use universal Turing machines (in the form of modern computers), and program them to take on any specific task (essentially emulating a special-purpose machine).

        The neurons you describe are essentially special-purpose–they aren’t running a general program; as you mention, they perform a very specific computation based on their input.

        But we could (hypothetically) simulate them just as well with a program on a general-purpose machine. At a low level, the universal Turing machine would be doing very different things from the neuron it’s simulating, but it will eventually generate the same output.

        Now, consider FHE. Like a universal TM, the Eval() procedure defined for an FHE scheme takes both a program and a set of input data (except in this case, the input data is encrypted). It’s true that at a low level the basic operations performed by the FHE’s Eval procedure will look different from the function it’s homomorphically computing, but it will eventually generate the encrypted version of the proper output.

        You are correct that feeding encrypted data to the original neuron/program would result in essentially gibberish, but HM’s neurons do not rely on the superficial appearance of their input data in order to determine how to route (if he even has individual neurons). Rather, the entire brain essentially simulates the interaction of NM’s neurons, including routing, homomorphically as a system.

        Bottom line: At a very low level, HM’s brain operates in a very different manner than NM’s. But if one encrypts the inputs and decrypts the outputs appropriately, the input/output behavior of both is identical.

  7. nemion says:

    This will seem a little bit unrelated, but here we go. I was reading about a way to define consciousness and found the following Quora page; some of you might find it interesting:

    http://www.quora.com/Neuroscience-1/What-consequences-would-Giulio-Tononis-formula-for-consciousness-have-if-accepted#

    Some of you might find Giulio Tononi’s theory a troubling version of panpsychism, but otherwise the survey is somewhat interesting. Basically, Tononi tries to come up with a sort of semantic information theory that he theorizes to be the basis of consciousness.

    And for the bald naturalists of the class, I pose the following questions. I agree that the functioning of consciousness, etc., can be reduced to the study of brain activity and neuronal firings. I also agree that, from a general standpoint, the information contents/processing of the brain are indistinguishable in both models (neuron man, homomorphic man). Nevertheless, I am worried about the following tensions:

    - there is a causal relation between our experience and our concepts
    - there are rational relations between thoughts

    How then can we analyze these issues without the idea of qualia? How do we take into account the (semantic?) structure inherent to conscious experience? It seems that the information-reductionist approach that many seem to take in this forum will not solve these questions. What should we do?

    If we come up with an answer, would Kolmogorov complexity in any of its presentations be a good information measure? Or do we find ourselves with the question of the information-presentation language haunting us? If so, does this mean that an unstructured zeros-and-ones representation is as good as we can do?

    • bobthebayesian says:

      I think Peter Norvig’s “The Unreasonable Effectiveness of Data” answers you well but requires some analogies to be drawn with natural language processing. Essentially, “semantics” is a bad way to process natural language. A better way is to just put a bunch of data in a machine learning blender. Even if what you care about are human-semantic concepts that we superimpose into language constructs, the data-driven way is still better. To me, semantics are a bit like emergent behavior. They are just nice intermediary modularizations that encapsulate useful patterns. But the real description of things is still down at the reductionist level. It’s not that planes fly because they have engines that generate lift via airflow over the wings; that’s just a nice way for us to summarize what every quark is doing in the airplane-fuel-engine-wings-air system. There’s actually a lot of very interesting psychology research on “privileged description,” which is the idea that human language semantics are determined by finding a good trade-off between specifying concepts and functioning in social situations where not everyone can specify every concept.

  8. D says:

    Philosophically, I don’t see a reason why a hypothesis with low Kolmogorov complexity is more likely to be true–yet it certainly seems to happen frequently! It does seem to bound the number of hypotheses, but this is true of any restricted class of hypotheses. It’s not clear why we should prefer the subset of hypotheses with low Kolmogorov complexity other than “they’re easier for humans to work with” and the somewhat-circular “it seems to have worked reasonably well so far.”
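
    For what it’s worth, the sense in which low complexity “bounds the number of hypotheses” can be made precise with a standard counting argument (a sketch, added here for concreteness): there are at most \sum_{k=0}^{n} 2^k = 2^{n+1} - 1 binary programs of length at most n bits, hence fewer than 2^{n+1} hypotheses h with K(h) \le n. A prior that weights each h by roughly 2^{-K(h)} therefore has bounded total mass, which is the usual Solomonoff-style reading of the preference for simple hypotheses.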

    In this context–if the choice of low Kolmogorov complexity is made for essentially human reasons, rather than reasons of Fundamental Truth–then it seems that low resource-bounded Kolmogorov complexity would be as good or even better a notion. After all, that would seem to better capture what humans can work with easily.

    However, all of the hypotheses we look for have to fit the observed data. I don’t think that this necessarily rules out a huge observable universe; we’d look for a low-rbKc explanation to fit the astronomical observations, and the Big Bang theory–while complex in its details–doesn’t seem fundamentally “more complex” as an explanation just because it implies more matter or a greater size; indeed, the size of the observable universe might be the result of very simple physical laws.

    Quantum theory is similar yet different. Here you again need to meet observations, and it doesn’t seem like any simple theory will do. But this time we’re changing not only our model of the universe, but also potentially our model of computation as well. Is quantum mechanics easily expressible using a quantum TM? (I don’t actually know the answer to this question.)

    • This seems to reduce to the problem of induction, or something similar. Simpler explanations are those that can be derived from a smaller number of examples; extrapolation becomes less accurate as the underlying theory becomes more complex. I wonder whether there’s an anthropic argument to be made here: intelligence, when viewed as the ability to abstract from limited experience to more fundamental principles, can only develop in a universe where there are such fundamental principles to be discovered. If the universe were complex enough not to be describable in significantly less space than it took up, there would be no one around to observe it.

      • How about universes that have a simple description that captures the data up to the present time, but have only a complex description of their total space-time? (Basically universes where induction works for a few billion years then stops working.)

      • bobthebayesian says:

        This is a thing that nags me about induction though. If induction doesn’t work, there would be no way to figure that out. If you tried to use induction several times and it failed, that would *be* induction. So for all we know, we already do live in a universe where induction is basically wrong and we just can’t figure it out.

        I’m not sure how much I believe any argument based on a thought experiment involving a universe where induction doesn’t work. If it didn’t work because after going out and getting more evidence you discover that your treasured, simple, old theories are no longer consistent with the data, then all it means is that the distribution from which Nature draws samples has changed (or was different than your expectations all along but only in some very rare way). You see new evidence, update your estimate of the distribution, and then induction is successful once again.

        For induction to literally not work, there could be no exploitable structure in our observations of any kind at all. It can’t be just that we need to zoom out really far and get lots of data before induction is successful — we already live in that universe — it would have to be some kind of bizarre, acausal world. I’m not sure the concept of life can make sense in such a world. You could kind of view life itself as evolution exploiting induction (I know, it’s dangerous to ever make any argument where you give “volition” to evolution, but I think there is a kernel of truth to it).

  9. In class we discussed, among others, the following two questions:

    (1) “Should theories about the physical world have low Kolmogorov complexity?”

    (2) “Should theories about the physical world have low *resource-bounded* Kolmogorov complexity?”

    While a positive answer to (1) seems intuitively appealing, I do not know how to argue for it. On the other hand, I believe that it is simple to argue for a negative answer to (2).

    Specifically, note that (2) is highly sensitive to whatever the “most up-to-date” Extended Church-Turing Thesis happens to be. What counts as efficiently computable in our universe depends directly on the present state of knowledge about how the universe works; indeed, the more we know about the universe, the more “tricks” we can use to speed up computation.

    Therefore, unless we believe that our current knowledge of the physical world has already exhausted all the useful “tricks” for speeding up computation, we cannot argue for a positive answer to (2). And since we have no reason to believe those tricks have been exhausted, we instead have reason to accept a negative answer to (2) (relative, of course, to today’s Extended Church-Turing Thesis).

  10. Miguel says:

    Does resource-bounded Kolmogorov complexity K_P provide a good criterion for judging scientific theories? Before we go ahead and automate the peer-review process by just weighing papers by their GZIP size, we must face the fact that, historically, in some domains K_P has in fact increased after scientific revolutions, QM being a notable example. If science progresses by choosing theories with low K_P, how come K_P often increases as scientific progress unfolds?

    The answer, in my opinion, is that the resource-bounded Kolmogorov complexity of the empirical facts under scrutiny increases during scientific revolutions, and so does the complexity of the theories needed to explain them. To fix ideas, consider the following PAC-like setup: we have some data S = \{x_1, \ldots, x_m\} and an unknown function f : S \rightarrow \{0, 1\}, and science consists in finding a hypothesis h that “explains” the observed data (i.e., a “good” hypothesis), by which we mean one that agrees with f on S: (\forall x \in S)\ h(x) = f(x). I claim that the resource-bounded complexity of the labelled data S puts a lower bound on the (resource-bounded) complexity of any good hypothesis h: K_P(S) \le K_P(h). In other words, the best hypothesis explaining S cannot be shorter than the shortest efficient enumeration of S itself. Then clearly, if the resource-bounded complexity of S increases for some reason, so does the lower bound K_P(S) on the complexity of any good hypothesis.

    Now why would K_P(S) change at all? Isn’t science supposed to be striving to explain a fixed set of empirical facts S out there in the universe? Borrowing Kuhn’s terminology, I claim that whenever “scientific anomalies” accumulate to the point that “normal science” cannot ignore them anymore, the new data become accepted as genuine empirical facts up for scrutiny (as opposed to errors or otherwise inconsequential puzzles), resulting in a change in S and an increase in K_P(S) (since the anomalies are, by construction, not explained by the existing hypotheses). So one can have K_P-minimizing science and yet observe increasing K_P across time: newer theories are still trying to minimize complexity, but they are often dealing with more complex data, and so are necessarily more complex.
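
    To make the “weigh papers by their GZIP size” quip slightly more concrete, here is a throwaway sketch of the selection rule I have in mind: among candidate hypotheses that agree with the observed data, pick the one whose description compresses best. The parity data, the candidate lambdas, and the use of zlib-compressed length as a (very crude) stand-in for K_P are all invented purely for illustration.

        import zlib

        def k_proxy(program_text: str) -> int:
            # Crude stand-in for resource-bounded Kolmogorov complexity:
            # the zlib-compressed length of the program's source text.
            return len(zlib.compress(program_text.encode(), 9))

        def best_hypothesis(data, candidate_sources):
            # Among candidates (each given as Python source for a one-argument
            # function), keep those that agree with the labelled data and return
            # the one with the smallest complexity proxy.
            consistent = [src for src in candidate_sources
                          if all(eval(src)(x) == y for x, y in data)]
            return min(consistent, key=k_proxy, default=None)

        # Toy "empirical facts": the parity of the first twenty integers.
        data = [(n, n % 2) for n in range(20)]
        candidates = [
            "lambda n: n % 2",                                               # short and consistent
            "lambda n: ((n*(n+1))//2) % 2 ^ (n % 2) ^ ((n*(n+1)//2) % 2)",   # consistent but bloated
            "lambda n: 0",                                                   # short but wrong on odd n
        ]
        print(best_hypothesis(data, candidates))   # -> lambda n: n % 2

    Nothing deep hangs on zlib here; it is just a convenient, computable upper bound on description length.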

  11. amosw says:

    The implications of Drucker’s lecture for SETI are fascinating.

    Many astro-biologists take it as a given that we will be able to detect extra-terrestrial life either by observation of non-equilibrium chemistry or, especially, by regions of low entropy. Given that the tape contents of a Turing machine carrying out a homomorphically-encrypted “simulation” of a conscious and intelligent being are statistically indistinguishable from uniform Bernoulli noise, it is conceivable that we could have intelligent life right under our noses and still not be able to detect it! Now, the tape-head motion of a (non-random-access) Turing machine is, at least locally, easily fit by linear regression, so I don’t think an alien intelligence would be completely undetectable via entropic means, but Drucker has pointed out that the question is not nearly so clear-cut any more.

    Regarding Kolmogorov complexity: a possibility that cannot be dismissed out of hand, however far-fetched it may seem, is that we are able to find scientific theories with low Kolmogorov complexity because our Universe is indeed a simulation. That is, our creators chose the simplest rules they could find that are capable of supporting the emergence of life.

  12. Andy Drucker says:

    Hi Amos,

    Actually, the memory contents of a Turing machine running a homomorphically-encrypted computation *will* typically be distinguishable from random bits. This computation has a distinctive structure. It’s just the underlying “encrypted” information about which we can derive no information in polynomial-time. (More correctly, in poly-time we can obtain only a “negligible” statistical advantage in guessing these encrypted bits. All of this assumes that hom. encryption is indeed secure.)

    There has been work done on hiding the very *existence* of a computation, while carrying it out via interaction over a public channel; see for instance this paper of von Ahn and colleagues:
    http://www.cs.cmu.edu/~biglou/c2pc.pdf
    My opinion is that highly advanced “covert aliens” of the type you imagine probably could exist and be successfully hidden by cryptographic techniques. It would take a feat of social cohesion to get everyone to abide by the law of self-hiding all the time, however; just one defector could blow their cover.
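
    (For readers who want to see “computing on ciphertexts” in its most minimal form: a one-time pad is homomorphic with respect to XOR. This is nowhere near fully homomorphic encryption, but it shows the flavor of evaluating a function on data you cannot read. The sketch below is purely illustrative and is not anything from the lecture.)

        import secrets

        def keygen(nbits: int) -> int:
            # Fresh random one-time-pad key of nbits bits.
            return secrets.randbits(nbits)

        def enc(key: int, m: int) -> int:
            # One-time-pad encryption: ciphertext = message XOR key.
            return m ^ key

        def dec(key: int, c: int) -> int:
            return c ^ key

        # XOR-homomorphism: XORing two ciphertexts yields an encryption of the
        # XOR of the plaintexts, under the XOR of the two keys.
        nbits = 8
        k1, k2 = keygen(nbits), keygen(nbits)
        m1, m2 = 0b10110010, 0b01010101

        c = enc(k1, m1) ^ enc(k2, m2)       # computed without ever seeing m1 or m2
        assert dec(k1 ^ k2, c) == m1 ^ m2   # yet it decrypts to the XOR of the plaintexts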

  13. Hagi says:

    On the possible relation between beautiful and simple theories: it is certainly interesting to ask when beauty refers to the simplicity of an idea or a creation, and why it does when that is the case. There have been many cases in science where the simplicity of an idea motivated its discovery; in literature and art, however, things seem very different, at least on the surface. Although genres have different aesthetics, in general writers and composers are praised for the complexity of their work. On the other hand, Jürgen Schmidhuber’s idea that beauty is the unexpected amount of compression a piece allows seems to explain the attraction to certain works of art. Nicholas Hudson applied this idea to music and concluded that music generally regarded as “better” and more complicated, including orchestral work, seems to be more compressible than pop and techno music, even though the latter is often perceived as aesthetically inferior and simpler. On this hypothesis, Beethoven’s work is monumental not because of its high Kolmogorov complexity, but because of the illusion it gives of being much more complicated than it actually is. This resonates with me personally as well: I seem to experience the most excitement when I think I have understood a theorem that seems really complicated, even though in some abstract “reality” it may not be.

    Certainly it would require much more investigation of human creativity to see how widely these complexity ideas can be applied to topics outside of science. Still, it is quite interesting, and might offer insight into why complexity plays an important role in science.
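
    For what it’s worth, the “apparent complexity versus compressibility” contrast is easy to poke at with a few lines of code. The toy below (my own byte sequences, with zlib standing in for whatever compressor one prefers; it says nothing about Hudson’s actual methodology) just compares how well a trivially regular pattern, a rule-generated sequence, and random noise compress.

        import os
        import zlib

        def compression_ratio(data: bytes) -> float:
            # Compressed size over original size: a crude proxy for how much
            # hidden regularity the data has (lower means more compressible).
            return len(zlib.compress(data, 9)) / len(data)

        n = 100_000
        patterned  = b"abcd" * (n // 4)                        # looks long, but trivially regular
        structured = bytes((i * i) % 251 for i in range(n))    # rich-looking, yet rule-generated
        noise      = os.urandom(n)                             # essentially patternless

        for name, data in [("patterned", patterned),
                           ("structured", structured),
                           ("noise", noise)]:
            print(f"{name:10s} ratio = {compression_ratio(data):.3f}")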

  14. humantruthseeker says:

    I believe the question of whether scientific hypotheses with low Kolmogorov complexity have a higher likelihood of being true is fundamentally related to the reliance of science on observation. In a universe where entropy and chaos are always increasing, it is intriguing that humans try so hard to find order within the chaos. Given this attempt to find order in chaos, I suppose it is understandable why scientists use observation to identify and decipher that order; observation through our senses is one of the main ways we experience the world around us.

    In a way, the observation of low Kolmogorov complexity is often taken to be a sign of greater order. As such, when we consider the question of whether or not a scientific hypothesis is true, low Kolmogorov complexity hypotheses are more likely to be true to humans. I say “true to humans” because I don’t think a lower Kolmogorov complexity is correlated with the truth of a scientific hypothesis so much as with whether or not a human takes it to be true. A lot of the hypotheses we have “proved” may just be true based on what we have observed so far. There might be other truths that cannot currently be observed, with different characteristics that might make them more likely to be true if they are less ordered; those truths might invalidate many of our hypotheses. So essentially, I am arguing that scientists look for patterns or order, low Kolmogorov complexity hypotheses give them that, and that is why they are likely to believe such hypotheses to be true even if there is no reason for them to be true at a fundamental level.

    • wjarjoui says:

      Humantruthseeker, you raise an interesting point. I’m not sure if we explicitly mentioned this in class, but given that the entropy of the universe is ever increasing, does that mean that the lower bound on the Kolmogorov complexity of any description of the universe increases over time? Are the patterns found in the world ever decreasing (until the entropy of the universe reaches a maximum)?
      I think the answer to this question would depend on whether or not the description can also capture how the universe will change due to entropy. If it can (in which case the answer would be yes), that means we could describe all the states the universe will ever be in. However, if we cannot describe precisely how entropy changes the universe over time, then it may very well be the case that, as time goes on, our models of the universe start failing, because the patterns they describe simply do not exist anymore.

      *I understand that descriptions are relative; however, I believe we can make this argument from any (or all) perspectives.

  15. kasittig says:

    I think that a beautiful and simple solution to a problem is generally one that is short and relatively easy to understand (the simple aspects), but also not necessarily obvious and intellectually satisfying (the beautiful aspects). “Simple” has a fairly straightforward definition, short and intuitive, but beauty is the more interesting component.

    People tend not to find solutions beautiful if they’re too obvious: it is hard to see the elegance in a solution when you feel as though you should have been able to come up with it yourself with relatively little difficulty. Beauty in general is extremely difficult to quantify, but I think this may be all it actually takes to make a solution satisfying: it should be easy to understand, short, and surprising or novel to the reader.

    Maybe the core distinction between simplicity and beauty is that beauty is simplicity that wasn’t immediately apparent to the reader.
