Class #12a: Evolution

If philosophers, mathematicians, etc. had been clever enough, could they have figured out that natural selection was the right explanation for life a priori, without input from naturalists like Darwin?  If life exists on other planets, should we expect that it, too, arose by Darwinian natural selection—or rather, that if the life was “intelligently designed,” then natural selection was ultimately needed to bring the intelligent designer(s) themselves into existence?  Is there any conceivable mechanism other than natural selection that’s capable, in principle, of doing the same sort of explanatory work (i.e., that would qualify as a “good explanation” in David Deutsch’s sense)?

Consider a “brute-force search algorithm,” which cycles through all possible DNA sequences until it finds some that lead to interesting and complex organisms.  Many people would say that this algorithm does not do the same sort of explanatory work as natural selection—but if so, why not?  Is it because brute-force search takes exponential time?  Or because the goal of “interesting and complex organisms” is too vague?  Or both reasons, or something else entirely?
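A quick way to see the scale of the time-cost objection is just to count the search space. The following is a purely illustrative sketch (the function names are mine, not anyone's model of biology):

```python
# Enumerating every DNA sequence of length n over the alphabet {A, C, G, T}
# means visiting 4^n candidates, which is the "exponential time" objection
# to brute-force search in its simplest form.
from itertools import product

def all_dna_sequences(n):
    """Yield every DNA sequence of length n; there are 4^n of them."""
    for seq in product("ACGT", repeat=n):
        yield "".join(seq)

# Even modest lengths are hopeless: a small bacterial genome has n on the
# order of 10^6, and already 4^100 is about 1.6 * 10^60 candidates.
counts = {n: 4 ** n for n in (10, 50, 100)}
```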

Likewise, what are the similarities and differences between Darwinian natural selection and the anthropic explanation for the apparent “fine-tuning” of physical constants?  If the former explanation satisfies us while the latter doesn’t, then why?

Is there a puzzle about the speed of evolution?  Is it reasonable to want an explanation for why evolution on Earth took roughly 4 billion years, rather than a much longer (or shorter) time?  If so, what could such an explanation look like?  Can Valiant’s “Evolvability” model shed any light on these questions?

What are the differences between genetic and memetic evolution?

What are the differences between natural selection and genetic algorithms, as applied (for example) to find approximate solutions to NP-hard optimization problems?


18 Responses to Class #12a: Evolution

  1. D says:

    Consider a “brute-force search algorithm,” which cycles through all possible DNA sequences until it finds some that lead to interesting and complex organisms. Many people would say that this algorithm does not do the same sort of explanatory work as natural selection—but if so, why not? Is it because brute-force search takes exponential time? Or because the goal of “interesting and complex organisms” is too vague? Or both reasons, or something else entirely?

    I’d say part of it is due to the missing piece of how this search algorithm operates in the real world. You can say there’s a brute-force search algorithm that, say, tries random DNA sequences, but you need to explain how these DNA sequences get formed and how they might be viable (since the right environmental/cellular/etc. conditions have to come together in order for life to happen, not just DNA). If the explanation is “amino acids and base pairs randomly combine in the primordial soup”, then we’re left with the tornado-creating-a-fully-formed-house argument. The probability of this happening for anything with complex DNA is negligible, even over billions of years. By contrast, if this process creates a simple self-replicating structure, variations of that structure can introduce more complexity.

    Likewise, what are the similarities and differences between Darwinian natural selection and the anthropic explanation for the apparent “fine-tuning” of physical constants? If the former explanation satisfies us while the latter doesn’t, then why?

    Because we can, through observation, confirm that there are billions of stars, and hypothesize that some of them would have planets that support life, and that life would actually arise in some of them, and that intelligent life would arise in some of those. This is basically the Drake Equation, and while there are certainly quibbles about the numbers (and criticism of the equation itself), the basic idea is that the stars, at least, are observable and therefore give evidence that this might be the case.

    By contrast, we don’t have evidence that there are a multiplicity of universes with different physical constants, nor that the constants “evolve” to fit some sort of fitness criterion (and if there were some sort of “fitness function” for physical constants, why would it correspond with what is amenable to life?). Thus, while we have good evidence that there are a large number of planets with different environments (a few of which might be suitable for life), we don’t have any evidence that there is a similar multiplicity of universes with different physical laws.

  2. bobthebayesian says:

    Consider a “brute-force search algorithm,” which cycles through all possible DNA sequences until it finds some that lead to interesting and complex organisms. Many people would say that this algorithm does not do the same sort of explanatory work as natural selection—but if so, why not? Is it because brute-force search takes exponential time? Or because the goal of “interesting and complex organisms” is too vague? Or both reasons, or something else entirely?

    I don’t understand the phrasing of this question. How can a brute-force algorithm cycle through all possible DNA sequences until [stopping condition of your choice]? Doesn’t everything hinge on your chosen stopping condition? One could take the view that evolution/the universe actually is cycling through them all, just that the environment induces the stopping condition, and part of the stopping condition is that it can simply become disinterested in whole swaths of the solution space (for example, in our local region of the universe, it seems that for the most part evolution/the universe is disinterested in non-carbon-based life, though perhaps not altogether).

    There is no globally “correct” lifeform that a search algorithm is looking for — unless it has a fitness function to optimize. It could be that there is “one-step evolution” as we mentioned in class, where the global optimum is chosen to survive out of all combinatorially many possibilities. But (a) this is not different than evolution by natural selection; it’s just that the ‘selection function’ here is a Dirac delta over the one global optimum that was somehow hard-coded into the environment; and (b) we still have to use evidence to test the “one-step-combinatorial-evolution” hypothesis against the natural-selection-gradual-mutation hypothesis. I think complexity theory gives us some reason to prefer prior beliefs that brute force search should be more unlikely than local stochastic optimization, but all the evidence we have gives us even more confidence that brute force search is not at work at any scale level that we’ve yet observed.

    I’m interested in where the “knowledge” that such a brute-force method requires could have come from. The global optimum for a complicated biological environmental fitness landscape must be tremendously improbable, and the Evolutionary Principle would say that if some entity, be it evolution or God or some guy down the street, can just jump straight to the optimum in one hop, we ought to demand to know how. In some sense, this would be as if the local universe had an analytic formula and could perform derivative tests to check for global optima.

    Alternatively, one might suppose that all possible offspring are produced and the fitness landscape of the environment is outrageously sensitive to any tiny deviation from the global optimum. Again, we should be really suspicious that such an ambient landscape could be so arranged, but at least in this case we don’t also have to believe that evolution would stop the combinatorial search early. But what kind of local environment would this be? It would be like simulated annealing with literally a zero temperature, so that any difference in the fitness function would blow up to infinite cost. I’m not sure I buy that as plausible.

    Even if the temperature were just super low, you could still tolerate very tiny changes out in the Nth decimal place of the fitness function, and there would be room for traditional evolution to work. Are you suggesting that there would be physical circumstances where that N grows intractably fast? Can you give some physical intuition behind that? Should we really think such a fitness landscape is possible?

    • bobthebayesian says:

      In fact, if such evolution did happen on a small enough, controllable scale, then we could solve hard optimization problems by genetically engineering lifeforms and fitness landscapes such that optimally fit life forms must have DNA corresponding to the solution of our hard problem. To me this sounds like the soap bubbles thing you mentioned “debunking” in your recent Buhl lecture, but I might be mixing things up (and, of course, ‘harnessing’ this evolution would be the hard part, maybe even like waterfalls-and-chess).

  3. This may be a long shot, but… I wonder:

    What role does quantum mechanics play in evolution?
    Does quantum mechanics play a role that is crucial to evolution being an efficient algorithm?
    (So that “classical” evolutionary mechanisms are bound to be too slow to be feasible?)

    It seems that the objective functions that evolution maximizes are for the most part classical; for example, “the fraction of individuals within a given population with a given gene” feels like a classical quantity to me.

    On the other hand, it seems that the algorithm of evolution that we know runs on quantum mechanics: even if the information stored in DNA is digital, our understanding of the way this information is used / mutated / shared does rely on quantum mechanics. (E.g., biochemistry I guess.)
    (And one may even argue that the information stored in DNA itself is not even digital, as for all I know there could be subtle effects due to how DNA is packed/arranged, so that more information than can be seen from mere GATTACA… sequences is actually stored, and to really “explain” all the information one must take quantum mechanics into account.)

    At high level, we seem to understand some very general properties of evolution, even without bringing quantum mechanics into the picture. (E.g., “[…] while at any one generation natural selection operates on genotypic fitnesses, over the generations and in the presence of sex it is particularly efficient not in increasing population mean fitness but in increasing the ability of alleles to perform well across different combinations” — Livnat, Papadimitriou, and others.)

    Yet, the question of efficiency of evolution for the specific task of evolving life, and eventually intelligent life, may be grounded in quantum mechanical phenomena.

    (How to even get started? It seems that we lack credible tractable models for both evolution and learning, with or without quantum mechanics…)

    • Scott says:

      Personally, I’d find it hard to understand how QM could play any important role in evolution—DNA seems about as close to a classical digital code as it could possibly be (and arguably it has to be classical, since otherwise it couldn’t be replicated!). To the extent quantum effects are important in biology, it seems to be for processes that take place on extremely short timescales (such as protein folding, photosynthesis…), rather than long-term processes like evolution. At least, if it were relevant to evolution in any way, then someone would need to suggest an extremely novel mechanism!

      (On the other hand, if we restrict ourselves to the level of analogies, I really, really like the analogy between speciation and the “branching of parallel universes” in MWI. In both cases, you actually have a dag that approximates a tree when seen from sufficiently far away. Very close branches are able to recombine, as in crossbreeding of greyhounds and golden retrievers or the double-slit experiment. But once two branches become far enough apart, they’ve decohered and are now out of contact forever.)

  4. Katrina LaCurts says:

    Before I write this, I’d like to give the disclaimer that I did an excellent job in college of not taking any biology courses.

    I’m imagining that a brute-force search algorithm would work as follows: Cycle through all possible DNA sequences of length 1, see which ones create “interesting” organisms, then do length 2, length 3, etc.

    The first problem that comes to mind is that even though organism complexity is generally related to genome size, it isn’t always (see here). Given the above brute-force algorithm, we shouldn’t expect to see the single-cell organisms with more DNA than humans appear (in the evolutionary timescale) prior to humans. So maybe our brute-force algorithm occasionally chooses random values to try. This seems surprising, but fine.

    The second problem I see is that this algorithm is operating fairly locally: it makes an organism, sees if it’s interesting, and moves on. This doesn’t take into account a population of organisms, where variation among organisms is helpful (for avoiding disease, natural disaster, etc.). So, okay. Possibly the brute-force algorithm would create that variation naturally (DNA sequences that are close together form organisms that are essentially the same, but vary slightly, etc.). Or maybe our algorithm should create various populations composed of organisms of sequences of some length, and if that population survives, move on. The latter seems like it would take a very long time.

    Third, it seems like it would take an incredible amount of time not just to run through all DNA sequences, but to actually wait and see if the organisms are interesting. Does the algorithm just keep making new organisms (with new sequences), and the ones that don’t die are by definition the interesting ones (i.e., the algorithm doesn’t wait around to see if they’re interesting before moving on)? How did the algorithm know to stop once it created humans?

    Mostly, though, I don’t see how a brute-force algorithm would be able to create the organisms that we’ve seen in the correct sequence (i.e., the sequence that we’ve observed in the past 4 billion years). It’s easy for us to observe small, gradual changes as organisms have evolved, and I’d be surprised to see a brute-force algorithm that generates the right organisms at the right times. If one did, it seems like we should be able to observe DNA sequences and reverse engineer how a brute-force algorithm would’ve worked (i.e., in what order it generated sequences).

    Or, possibly, the brute-force algorithm brute-forced its way through DNA sequences entirely differently than I imagined.

    • D.R.C. says:

      If the brute-force algorithm does work like what you describe (and I don’t know how else it could work), then you also may not get certain types of species, due to the ordering. Consider something like a symbiote, which must have a corresponding/symbiotic relationship with a different organism. Since it requires another organism to be considered “interesting”, it would not develop unless the other organism was already created (I’m assuming that the algorithm can’t tell if it will create an appropriate organism in the future). If the creatures are completely symbiotic, they need each other to survive, which means they would need to be developed at the same time; but that would mean they were the same organism, which was specifically forbidden by the definition, so none can exist.

      I suppose there might be some “evolution” of the system using brute force that would be something like: Creature A; Creature B (which is reliant on A, and similar in DNA length); Creature A’ (Creature A with several more nucleotides, which makes changes such that Creature A’ is now reliant on Creature B, but similar enough to A that it is isomorphic as far as Creature B is concerned). I’m not sure how symbiotes actually develop in real life, so I don’t know if this is an accurate model or not.

  5. kasittig says:

    Genetic algorithms are interesting, although the number one thing that springs to mind when I hear the phrase “genetic algorithm” is that genetic algorithms often do not work very well (probably a result of this being drilled into me by 6.034). It is somewhat surprising that such a seemingly-fundamental biological mechanism makes such a poor starting point for a learning algorithm, when we so often successfully take our inspiration from the natural world for other algorithms. I think the key aspect of natural selection that we fail to take into account when designing genetic algorithms is the goal state of each: natural selection aims for small, incremental improvements in fitness over time by applying random mutations to the genetic code, which genetic algorithms capture fairly well, but natural selection does not have an explicitly defined “goal state”, whereas a genetic algorithm’s fitness function necessarily implies one.

    How is fitness measured in natural selection? If an individual manages to spread his or her genetic material to the next generation, he or she is considered to be fit. The more that genetic material manages to continue and spread through further generations, the fitter the individual is. An additional difference is that, unlike in genetic algorithms, the selection mechanism for who gets to go on to the next generation in natural selection is fairly simple: either you die before you can spread your genetic material or you don’t. It seems to be difficult to translate this idea into an algorithmic equivalent; in fact, some instances of genetic algorithms throw out this concept altogether, choosing instead to randomly select candidate solutions to proceed to the next round, in order to reduce the computational power required for the algorithm to run quickly.

    Fundamentally, I believe that genetic algorithms, while perhaps interesting, are a very difficult tool to use well. I am unconvinced that our failures with genetic algorithms are entirely due to their inadequacies; instead, I believe that we just haven’t found a really good candidate problem to solve with one.

  6. > Is there a puzzle about the speed of evolution? Is it reasonable to want an explanation for why evolution on Earth took roughly 4 billion years, rather than a much longer (or shorter) time? If so, what could such an explanation look like?

    Another reason that evolution was much faster on Earth than you might expect, given an exponential search, is that evolution on the Earth is *massively parallel.* There are many, many organisms simultaneously growing, reproducing, and testing out mutations; if we consider the number of permutations that happen at the molecular level simply for antibody generation, we already easily reach millions of possible permutations of DNA.

    It’s worth remarking that what made Darwin’s theory of evolution novel (and note, some of his contemporaries accused him of not saying anything new) was not the idea that organisms evolved over time, but that the *mechanism* by which this evolution took place was natural selection. This is worth emphasizing, since writers like Robert Chambers in *Vestiges of the Natural History of Creation*, or of course Lamarck, had proposed evolution of a sort before. The description of this mechanism was backed up by voluminous evidence in Darwin’s Origin of Species (evidence that probably would have grown even larger had Darwin not been forced to publish earlier than he wanted to, due to the famous letter from Wallace), and this is one part of the reason why we, as scientists, believe that natural selection has more explanatory power than brute-force organismal development: natural selection makes predictions, and those predictions are borne out by the data.

  7. Nemion says:

    Let’s discuss the following:

    “Consider a “brute-force search algorithm,” which cycles through all possible DNA sequences until it finds some that lead to interesting and complex organisms. Many people would say that this algorithm does not do the same sort of explanatory work as natural selection—but if so, why not? Is it because brute-force search takes exponential time? Or because the goal of “interesting and complex organisms” is too vague? Or both reasons, or something else entirely?”

    I believe the differences between evolution and brute force search lies in the following observations:

    As people have remarked above and during class, the fitness function of evolution is not a fixed one, or at least there is no clear guess as to what exactly it could be. Of course it factors in several things such as intelligence, success in finding food, etc., but there is no path of evolution towards an optimal organism.

    On the other hand, I claim that the biggest difference lies in the local optimality of genetic algorithms. They optimize for local solutions, but that does not mean they will find a global maximum of their objective function. The question comes to mind whether every search problem has a genetic algorithm that solves it. The answer is clearly yes: take the fitness function that is zero for every instance that is not a solution and one for good instances. The problem this fitness function raises is that, from a given “random” starting point, it might be really improbable to evolve a solution.

    It is worth asking whether there are always fitness functions that increase the probability of finding a good instance, or that make evolution towards a good instance feasible.

    Evidently one such function is one that orders the points of the search space, giving them a value inversely proportional to the minimum distance (for some distance function on the space of 2^n inputs, say Hamming distance) to a good instance. Nevertheless, computing such a fitness function will be as hard (up to polynomial factors) as computing the solution itself. Particularly for NP-complete problems, the existence of an easy-to-compute fitness function “might” (at least off the top of my head) mean that P = NP.

    In this case, it could then be that the best we can do is opt for an easy-to-compute fitness, which means only local optimality. Local optimality can also lead to local maxima that are different from global ones. In any case, it seems that evolution, when looked at through the lens of complexity theory and computability, differs substantially from brute-force search.
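    To make the contrast concrete, here is a toy sketch (all details hypothetical) of the two kinds of fitness function discussed above: under a zero-one “delta” fitness, local search is a blind random walk, while under a graded closeness fitness, simple hill-climbing succeeds; the catch, as noted, is that computing the graded fitness presumes we already know the target.

```python
import random

def hill_climb(fitness, n, steps=10000, seed=0):
    """Flip one random bit per step; keep the flip if fitness doesn't drop."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        y = x[:]
        y[i] ^= 1
        if fitness(y) >= fitness(x):
            x = y
    return x

n = 32
target = [1] * n  # the hidden "good instance"

# Delta fitness: flat landscape, gives local search no guidance at all.
delta = lambda x: 1 if x == target else 0
# Graded fitness: negative Hamming distance to the target. Easy to climb,
# but computing it requires knowing the target in the first place.
closeness = lambda x: -sum(a != b for a, b in zip(x, target))

# hill_climb(closeness, n) reliably reaches the target;
# hill_climb(delta, n) almost never does for n = 32.
```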

    • nemion says:

      The question here is whether, for any search problem, there exists a fitness function that is easy to compute and that renders the problem easy with high probability. A further question is whether such functions are easy to find.

  8. bobthebayesian says:

    The problem this fitness function raises is that from a given “random” start point it might be really improbable to evolve a solution.

    Actually, I think this raises the problem of where the fitness function came from in the first place, if it can check for global optimality. What is the actual physical mechanism that checks an organism and determines whether it is globally optimal, and how did the knowledge of what the global optimum looks like become encoded into that physical mechanism?

    Another issue here is that evolution is probably not solving easy problems, it’s just solving small instances of hard problems. For example, genetic algorithms are quite good at solving things like the Traveling Salesman problem, even when the fitness function is something “trivially easy” like the length of whatever random paths you generate. This won’t allow you to solve general instances of TSP very fast, but then again, evolution doesn’t care about solving arbitrary instances. For problems with fixed instance size, it is indeed “easy to find” useful fitness/Gibbs energy functions and these methods often outperform deterministic optimization methods for a host of practical reasons (i.e. you don’t know enough to decide which parameter values are most important, so just do a Monte Carlo sweep over a range of parameters… in high dimensions, this is almost always more effective than doing gradient descent under some model of the parameter space).
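    As a rough sketch of the TSP point (an illustration with made-up coordinates, not a competitive solver): a mutation-only loop whose only fitness signal is tour length already does fine on a small instance.

```python
import math, random

def tour_length(tour, pts):
    """Total length of the closed tour visiting pts in the given order."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def evolve_tour(pts, generations=4000, seed=1):
    """Keep one tour; mutate by reversing a random segment (2-opt style)."""
    rng = random.Random(seed)
    n = len(pts)
    best = list(range(n))
    rng.shuffle(best)
    for _ in range(generations):
        child = best[:]
        i, j = sorted(rng.sample(range(n), 2))
        child[i:j + 1] = reversed(child[i:j + 1])
        if tour_length(child, pts) <= tour_length(best, pts):
            best = child
    return best

# Eight points on a circle: the optimal tour visits them in angular order.
pts = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
       for k in range(8)]
best = evolve_tour(pts)
```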

    In the end, I still fail to see how this is categorically different than regular evolution. The only difference I see is that our environmental fitness function would be a Dirac delta probability landscape instead of some “more reasonably distributed” probability landscape. The same selection mechanism is at work, just with a highly suspicious zero-entropy probability landscape.

  9. sbanerjee says:

    I would argue that the “brute-force search algorithm” is not the same as the algorithm of natural selection. First of all, it seems that natural selection does not attempt to maximize the interestingness or complexity of organisms. It rather focuses on a specific heuristic in the algorithm of evolution that allows it not to have to consider all possible outcomes: that of survival probability. The “brute-force search algorithm” tries out all possible outcomes, while natural selection just considers a subset of these outcomes, the subset that it reaches as it tries to maximize for survival.

    A related issue I want to discuss is how good the algorithm of evolution or natural selection is; i.e., was it the best way to get to humans? I have two problems with this question: first, it’s a very human-centered view of the world; second, it may actually be the most efficient way to get to humans, if we analyze it not from a third-party perspective but as the problem of how, if you were an army of microscopic organisms, you would go about making sure you ‘improve’ life on Earth. On the first problem: even if this were not the most efficient way to get to humans, it might be the best way to reach whatever species life will evolve into in the future. On the second problem, what I am trying to argue is that this question comes up when we look at the evolution of life and its performance from the original spark to humans, and then try to determine what the best performance would have been. However, suppose you ask instead: if you were the first organisms and could decide how you wanted to evolve, without knowing what was possible (you wouldn’t have the two reference points of the original spark and the human endpoint), how would you evolve? I think this perspective might lead us to find that evolution by natural selection was one of the best ways to go forth and evolve life.

  10. amosw says:

    As someone who has spent way too many hours “tuning” genetic algorithms, the No Free Lunch theorem of Wolpert and Macready seems relevant here. In colloquial terms, it says that any two optimization algorithms are equivalent when averaged over all possible problems. So what special structure must your problem have in order to expect that a genetic algorithm will help?

    It is pretty obvious to most people that genetic algorithms work very poorly when the energy landscape has lots of flat surfaces, since any strategy that employs (smart) hill-climbing is likely to have trouble with such landscapes. For example, we don’t use GAs to factor numbers or break RSA, since knowing that composite X is not divisible by prime Y in general tells an algorithm very little about whether it is divisible by nearby prime Z.

    But there is another case that seems to give GAs trouble: when there is no good way to do crossover. I.e. problems for which it is difficult to take half of one solution and combine with half of another solution.
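    A toy illustration of the “no good way to do crossover” point (hypothetical example): one-point crossover of two bitstrings always yields a valid bitstring, but the same operator applied naively to permutations, such as TSP tours, usually produces an invalid child with repeated cities.

```python
def one_point_crossover(a, b, point):
    """Child takes the prefix of parent a and the suffix of parent b."""
    return a[:point] + b[point:]

# Bitstrings: any crossover of valid solutions is another valid solution.
child_bits = one_point_crossover([1, 0, 1, 1], [0, 0, 1, 0], 2)

# Permutations (e.g. TSP tours): naive crossover breaks the
# "visit each city exactly once" invariant.
parent1 = [0, 1, 2, 3, 4]
parent2 = [4, 3, 2, 1, 0]
child_tour = one_point_crossover(parent1, parent2, 2)
is_valid_tour = sorted(child_tour) == list(range(5))  # False: cities repeat
```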

    I have definitely seen cases in which crossover works exceedingly well, so I was a bit surprised when, in class, sexual reproduction was criticized for not being that useful. I wasn’t able to turn up references to this result via Google. I suspect that the real reason for the result is that the researchers had simply failed to find a good crossover operator, or that one doesn’t exist for their particular problem.

    Given that a genetic algorithm seems to have worked very well for the energy landscape of … dominating Earth, we expect that this problem is not predominantly made up of nearly flat regions and that it has a good crossover operator: both of which seem empirically true.

  11. Hagi says:

    The discussion in class on how the cell cannot compute the XOR function reminded me of this paper, which I had found very exciting as a freshman in college ( ). However, now that I’ve had a chance to read it again, it is not as obvious to me that the cell must be computing the XOR function for error-correcting schemes.

    It gets confusing to compare evolution with other optimization methods in terms of their potential for leading to interesting organisms. I think this is mainly because evolution is not really an optimization algorithm. Sure, it does have several optimization methods built into it, conceptually, by humans. Given the tautological nature of the concept of evolution, it is easy to put any mechanism that leads to the survival of organisms under the umbrella of evolution.

    For example, imagine that biologists found a simulated-annealing-like mechanism built into some species on Earth. Right before an individual is going to reproduce, it compares its rate of mating to that of its parents. According to that comparison, it decides to pass on its parents’ genes or its own to its offspring, with different probabilities. Now, this is basically simulated annealing; however, we would call it part of evolution.
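    For reference, the standard acceptance rule being alluded to can be sketched as follows (a generic Metropolis/simulated-annealing criterion, not a model of the hypothetical biological mechanism above): a nonzero temperature tolerates small losses in fitness, while zero temperature accepts only improvements.

```python
import math, random

def accept(delta, temperature, rng=random):
    """Metropolis criterion: always accept improvements (delta <= 0);
    accept a worsening move with probability exp(-delta / temperature)."""
    if delta <= 0:
        return True
    if temperature <= 0:
        return False  # zero temperature: pure hill-climbing
    return rng.random() < math.exp(-delta / temperature)
```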

    I gave this confusing and weird example to answer this question: if we ran into interesting life on another planet, should we assume it is the result of evolution? The answer is yes, not because of the generality of GAs as used in optimization, but because of the generality of evolution as defined in biology.

    Following on this, a brute force search can only lead to stable solutions of interesting organisms if the result of the search has the built-in programming for striving to keep existing and possibly improving. But then, to me, this describes exactly what happened on Earth. Brute-force search ran into some structure that strives to keep existing by reproducing, and then came us.

  12. Cuellar says:

    Let’s imagine for a moment a new life form (call it the Stymphalian) created by accident in a catastrophic astronomical disaster. Stymphalians are not the product of evolution, because they don’t come from other species and have not been selected for. Suppose that Stymphalians happen to be very adaptable creatures. They can survive in any weather and be comfortable at any temperature; they can eat almost anything; they can fly, swim, and run faster than any animal on Earth. In almost all respects, Stymphalians have robust survival skills. Nevertheless, Stymphalians have a rigid genetic code, which does not mutate. It’s not hard to imagine that such a creature would quickly become very successful on Earth, and that for thousands of years plains, forests, and lakes would be filled with Stymphalians.

    Imagine further that, thousands of years after the appearance of the Stymphalians, another catastrophic event drastically reduces life on the planet and food sources become very scarce. The Stymphalians, having a significant size, can’t fulfill their energy requirements and soon go extinct.

    I don’t see a fundamental reason why Stymphalians could not exist. An entire planet could be filled with life forms created by chance, like the Stymphalian, and they could last for centuries. But that life would quickly cease to exist when its environment does. This is all to say that there is no reason to conclude that life has to be explained by evolution. Successful life forms could, in principle, be produced by random chance or by a ‘brute-force’ process.

    On the other hand, it is possible to understand evolution a priori. One can understand the process and have it as an explanation for some life. One can further bet that evolution is much more likely to happen and, in that sense, assume that evolution would explain life (by a mere probabilistic argument). But there is no way to prove it without evidence.

    • bobthebayesian says:

      But the initial life-creation event would be completely analogous to the 747-assembled-by-a-tornado-in-a-junkyard discussed in WPSCACC. Yes, it’s possible, but it’s so exponentially unlikely that in almost any plausible situation, one’s prior belief would be stacked heavily against instantaneous life from cosmic accident. It’s hard to imagine a physical situation in which any evidence would favor believing it. But I do agree that if a cosmic accident automatically produced a fully-formed being capable of reproduction, this would be different from evolution. It would also be different from a brute-force search, for the same reason that it’s different from evolution: nothing was selected, only randomly organized. Whether the randomly organized thing would be suited to living in the post-accident environment is a total coincidence, rather than a direct effect of the selection process in regular or brute-force evolution.

  13. wjarjoui says:

    “Consider a “brute-force search algorithm,” which cycles through all possible DNA sequences until it finds some that lead to interesting and complex organisms. Many people would say that this algorithm does not do the same sort of explanatory work as natural selection—but if so, why not? Is it because brute-force search takes exponential time? Or because the goal of “interesting and complex organisms” is too vague? Or both reasons, or something else entirely?”
    I don’t really see evolution as different from a brute-force algorithm in essence. I think evolution could easily be modeled as a brute-force algorithm that simulates pieces of DNA as a way of figuring out whether or not they will create an “interesting and complex organism.” The survival-of-the-fittest “heuristic” can be seen as nothing but a way for the brute-force algorithm to find out whether or not the holy grail of organisms has been found.
    To make another case for my line of thinking, I can try to attack another difference that people may cite as separating brute force from evolution: complexity. Usually the argument goes along the lines of “a brute-force algorithm would take exponential time to go through all possible combinations of DNA and find ‘the right ones,’ which would be far longer than the roughly 4 billion years evolution has taken so far.” I disagree with this argument for two reasons:
    a) There is no proof that a brute-force algorithm would always take longer than evolution to find a particular set of DNA. That is, I haven’t seen a proof that evolution finds that set of DNA in asymptotically less time than a brute-force algorithm (in a worst-case analysis). Please point me to it if it exists.
    b) Has evolution actually found the DNA for an “interesting and complex organism” yet? When will we know? It seems the algorithm is still running.
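    [One standard illustration bearing on point (a), in the spirit of Dawkins’s “weasel” program: on a simple fitness landscape where partial matches can be scored, cumulative selection reaches a 28-character target after examining only tens of thousands of candidates, while blind enumeration faces 27^28 ≈ 10^40 possibilities. This is a hedged sketch of that classic argument, not a worst-case proof; the fitness function and parameters below are invented for illustration.]

```python
import random

random.seed(0)  # reproducible illustration

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Score = number of positions matching the target ("partial credit",
    # which is exactly what blind enumeration cannot exploit).
    return sum(a == b for a, b in zip(s, TARGET))

def evolve(pop_size=100, mut_rate=0.05):
    # Cumulative selection: mutate the current best, keep the fittest offspring.
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        offspring = [
            "".join(c if random.random() > mut_rate else random.choice(ALPHABET)
                    for c in parent)
            for _ in range(pop_size)
        ]
        parent = max(offspring, key=fitness)
        generations += 1
    return generations

gens = evolve()
print(gens)                             # generations until the target was hit
print(len(ALPHABET) ** len(TARGET))     # brute-force space: 27**28, about 10**40
```

    With selectable partial fitness the search hits the target in a few hundred generations of 100 candidates each; without it (an all-or-nothing fitness function, arguably closer to the commenter’s framing), selection degenerates to blind enumeration of the ~10^40 space. The speedup is a property of the fitness landscape, not of selection per se.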

    In any case, I think it is worth mentioning that one of the biggest differences between memetic and genetic evolution is one that we mentioned in class: memetic evolution somehow considers the future, while genetic evolution does not. However, one could still argue that genetic evolution also considers the future, and point to the example Deutsch gives in his book, namely that a gene in a plant that is about to go extinct will affect humans’ decision to save that plant. This argument could certainly make sense if we were to claim that the gene somehow contributed to giving rise to humans many years ago, but can we make that claim?
