Does K(x), the Kolmogorov complexity of a string x, provide an objective, observer-independent notion of the “amount of patternlessness” in x? Or is the notion circular, because of the “additive constant problem”? Do some choices of universal programming language yield a better version of K(x) than others? If so, what might the criteria be?
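For reference, the "additive constant problem" comes from the invariance theorem of Kolmogorov complexity. A standard statement (notation mine, not from the question above) is:

```latex
% Invariance theorem: for any two universal machines U and V, there is a
% constant c_{U,V} -- depending on U and V but not on the string x -- with
\forall x:\quad K_U(x) \;\le\; K_V(x) + c_{U,V}.
```

So K(x) is well-defined only up to an additive constant; for short strings, that constant (intuitively, the length of an interpreter for V written for U) can swamp the complexity itself, which is what gives the circularity worry its force.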
Are scientific hypotheses with low Kolmogorov complexity more likely to be true, all else being equal? If so, why? Is that the only reason to prefer such hypotheses, or are there separate reasons as well?
Rather than preferring scientific hypotheses with low Kolmogorov complexity, should we prefer hypotheses with low resource-bounded Kolmogorov complexity (i.e., whose predictions can be calculated not only by a short computer program, but by a short program that runs in a reasonable amount of time)? If we did that, then would we have rejected quantum mechanics right off the bat, because of the seemingly immense computations that it requires? At an even simpler level, would we have refused to believe that the observable universe could be billions of light-years across (“all that computation going to waste”)?
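One standard way to make "a short program that also runs in reasonable time" precise is Levin's Kt complexity, which charges a program for its running time logarithmically (this is the textbook definition, offered here as background rather than as part of the question):

```latex
% Levin complexity: minimize program length plus the log of running time,
% over all programs p that print x within t steps on a fixed universal
% machine U.
Kt(x) \;=\; \min_{p,\,t}\,\bigl\{\, |p| + \log_2 t \;:\; U(p)\ \text{outputs}\ x\ \text{within}\ t\ \text{steps} \,\bigr\}.
```

Under this measure, a hypothesis whose predictions require astronomically long computations is penalized even if the program itself is short, which is exactly the tension the question raises for quantum mechanics.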
Scientists—especially physicists—often talk about “simple” theories being preferable (all else being equal), and also about “beautiful” theories being preferable. What, if anything, is the relation between these two criteria? Can you give examples of simple theories that aren’t beautiful, or of beautiful theories that aren’t simple? Within the context of scientific theories, is “beauty” basically just an imperfect, human proxy for “simplicity” (i.e., minimum description length or something of that kind)? Or are there other reasons to prefer beautiful theories?
Does fully homomorphic encryption have any interesting philosophical implications? Recall Andy’s thought experiment of “Homomorphic Man”, all of whose neural processing is homomorphically encrypted—with the encryption and decryption operations being carried out at the sensory nerves and motor nerves respectively. Given that the contents of Homomorphic Man’s brain look identical to any polynomial-time algorithm, regardless of whether he’s looking at (say) a blue image or a red image, do you think Homomorphic Man would have qualitatively different subjective experiences from a normal person? How different can two computations look on the inside, while still giving rise to the same qualia?
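To make the thought experiment concrete, here is a toy illustration of computing on encrypted data. This is not FHE—it is the Paillier scheme, which is only *additively* homomorphic, and the parameters below are deliberately tiny and insecure—but it shows the key phenomenon: a party holding only ciphertexts can perform meaningful computation (here, addition) without ever seeing the plaintexts.

```python
# Toy Paillier encryption with tiny, insecure parameters (illustration only).
# Multiplying two ciphertexts mod n^2 yields an encryption of the SUM of the
# plaintexts -- computation on data the computer cannot read.
import math
import random

# Key generation (real implementations use large random primes).
p, q = 47, 59
n = p * q
n2 = n * n
g = n + 1                       # standard simple choice of generator
lam = math.lcm(p - 1, q - 1)    # Carmichael's lambda(n)
mu = pow(lam, -1, n)            # valid because g = n + 1

def encrypt(m):
    """E(m) = g^m * r^n mod n^2, for random r coprime to n."""
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """m = L(c^lambda mod n^2) * mu mod n, where L(u) = (u - 1) // n."""
    u = pow(c, lam, n2)
    return ((u - 1) // n) * mu % n

# Homomorphic property: multiplying ciphertexts adds plaintexts.
c1, c2 = encrypt(12), encrypt(30)
c_sum = (c1 * c2) % n2
assert decrypt(c_sum) == 42
```

Note how the randomness `r` makes encryptions of the same plaintext look different each time: an observer without the key, watching only the ciphertexts, sees statistically meaningless numbers—the analogue of looking inside Homomorphic Man's brain.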