
Entry on Intelligent Design

Intelligent design begins with a seemingly innocuous question: Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause? To see what’s at stake, consider Mount Rushmore. The evidence for Mount Rushmore’s design is direct — eyewitnesses saw the sculptor Gutzon Borglum spend the better part of his life designing and building this structure. But what if there were no direct evidence for Mount Rushmore’s design? What if humans went extinct and aliens, visiting the earth, discovered Mount Rushmore in substantially the same condition as it is now? In that case, what about this rock formation would provide convincing circumstantial evidence that it was due to a designing intelligence and not merely to wind and erosion? Designed objects like Mount Rushmore exhibit characteristic features or patterns that point to an intelligence. Such features or patterns constitute signs of intelligence. Proponents of intelligent design, known as design theorists, purport to study such signs formally, rigorously, and scientifically. Intelligent design may therefore be defined as the science that studies signs of intelligence. Because a sign is not the thing signified, intelligent design does not presume to identify the purposes of a designer. Intelligent design focuses not on the designer’s purposes (the thing signified) but on the artifacts resulting from a designer’s purposes (the sign). What a designer intends or purposes is, to be sure, an interesting question, and one may be able to infer something about a designer’s purposes from the designed objects that a designer produces. Nevertheless, the purposes of a designer lie outside the scope of intelligent design. As a scientific research program, intelligent design investigates the effects of intelligence and not intelligence as such. Intelligent design is controversial because it purports to find signs of intelligence in nature, and specifically in biological systems. According to the evolutionary biologist Francisco Ayala, Darwin’s greatest achievement was to show how the organized complexity of organisms could be attained apart from a designing intelligence. Intelligent design therefore directly challenges Darwinism and other naturalistic approaches to the origin and evolution of life. The idea that an intrinsic intelligence or teleology inheres in and is expressed through nature has a long history and is embraced by many religious traditions. The main difficulty with this idea since Darwin’s day, however, has been to discover a conceptually powerful formulation of design that can fruitfully advance science. What has kept design outside the scientific mainstream since the rise of Darwinism has been the lack of precise methods for distinguishing intelligently caused objects from unintelligently caused ones. For design to be a fruitful scientific concept, scientists have to be sure that they can reliably determine whether something is designed. Johannes Kepler, for instance, thought the craters on the moon were intelligently designed by moon dwellers. We now know that the craters were formed by purely material factors (like meteor impacts). This fear of falsely attributing something to design, only to have it overturned later, has hindered design from entering the scientific mainstream. But design theorists argue that they now have formulated precise methods for discriminating designed from undesigned objects. 
These methods, they contend, enable them to avoid Kepler’s mistake and reliably locate design in biological systems. As a theory of biological origins and development, intelligent design’s central claim is that only intelligent causes adequately explain the complex, information-rich structures of biology and that these causes are empirically detectable. To say intelligent causes are empirically detectable is to say there exist well-defined methods that, based on observable features of the world, can reliably distinguish intelligent causes from undirected natural causes. Many special sciences have already developed such methods for drawing this distinction — notably forensic science, cryptography, archeology, and the search for extraterrestrial intelligence (SETI). Essential to all these methods is the ability to eliminate chance and necessity. Astronomer Carl Sagan wrote a novel about SETI called Contact, which was later made into a movie. The plot and the extraterrestrials were fictional, but Sagan based the SETI astronomers’ methods of design detection squarely on scientific practice. Real-life SETI researchers have thus far failed to conclusively detect designed signals from distant space, but if they encountered such a signal, as the film’s astronomers did, they too would infer design. Why did the radio astronomers in Contact draw such a design inference from the signals they monitored from space? SETI researchers run signals collected from distant space through computers programmed to recognize preset patterns. These patterns serve as a sieve. Signals that do not match any of the patterns pass through the sieve and are classified as random. After years of receiving apparently meaningless, random signals, the Contact researchers discovered a pattern of beats and pauses that corresponded to the sequence of all the prime numbers between two and one hundred and one. (Prime numbers are divisible only by themselves and by one.) That startled the astronomers, and they immediately inferred an intelligent cause. When a sequence begins with two beats and then a pause, three beats and then a pause, and continues through each prime number all the way to one hundred and one beats, researchers must infer the presence of an extraterrestrial intelligence. Here’s the rationale for this inference: Nothing in the laws of physics requires radio signals to take one form or another. The prime sequence is therefore contingent rather than necessary. Also, the prime sequence is long and hence complex. Note that if the sequence were extremely short and therefore lacked complexity, it could easily have happened by chance. Finally, the sequence was not merely complex but also exhibited an independently given pattern or specification (it was not just any old sequence of numbers but a mathematically significant one — the prime numbers). Intelligence leaves behind a characteristic trademark or signature — what within the intelligent design community is now called specified complexity. An event exhibits specified complexity if it is contingent and therefore not necessary; if it is complex and therefore not readily repeatable by chance; and if it is specified in the sense of exhibiting an independently given pattern. Note that a merely improbable event is not sufficient to eliminate chance — by flipping a coin long enough, one will witness a highly complex or improbable event. Even so, one will have no reason to attribute it to anything other than chance.
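To make the logic of that inference concrete, here is a minimal sketch in Python (my own illustration, not taken from the source, with the signal encoded simply as a list of beat counts). The specification, the primes from 2 through 101, is generated independently of any particular signal, and a received signal either matches that independently given pattern or it does not.

```python
# Illustrative sketch: the specification (the primes from 2 through 101) is
# produced independently of any particular signal received.

def primes_up_to(n):
    """Return all primes <= n by trial division against earlier primes."""
    found = []
    for candidate in range(2, n + 1):
        if all(candidate % p != 0 for p in found):
            found.append(candidate)
    return found

SPECIFICATION = primes_up_to(101)   # [2, 3, 5, 7, ..., 101], fixed in advance

def matches_specification(beat_counts):
    """True if the received beat counts reproduce the independently given pattern."""
    return list(beat_counts) == SPECIFICATION

random_signal = [4, 1, 7, 7, 2]          # short and patternless: chance suffices
contact_signal = primes_up_to(101)       # stands in for the received beats and pauses

print(len(SPECIFICATION))                    # 26 groups of beats: long enough to be complex
print(matches_specification(random_signal))  # False
print(matches_specification(contact_signal)) # True
```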
The important thing about specifications is that they be objectively given and not arbitrarily imposed on events after the fact. For instance, if an archer fires arrows at a wall and then paints bull’s-eyes around them, the archer imposes a pattern after the fact. On the other hand, if the targets are set up in advance (“specified”), and then the archer hits them accurately, one legitimately concludes that it was by design. The combination of complexity and specification convincingly pointed the radio astronomers in the movie Contact to an extraterrestrial intelligence. Note that the evidence was purely circumstantial — the radio astronomers knew nothing about the aliens responsible for the signal or how they transmitted it. Design theorists contend that specified complexity provides compelling circumstantial evidence for intelligence. Accordingly, specified complexity is a reliable empirical marker of intelligence in the same way that fingerprints are a reliable empirical marker of an individual’s presence. Moreover, design theorists argue that purely material factors cannot adequately account for specified complexity. In determining whether biological organisms exhibit specified complexity, design theorists focus on identifiable systems (e.g., individual enzymes, metabolic pathways, and molecular machines). These systems are not only specified by their independent functional requirements but also exhibit a high degree of complexity. In Darwin’s Black Box, biochemist Michael Behe connects specified complexity to biological design through his concept of irreducible complexity. Behe defines a system as irreducibly complex if it consists of several interrelated parts for which removing even one part renders the system’s basic function unrecoverable. For Behe, irreducible complexity is a sure indicator of design. One irreducibly complex biochemical system that Behe considers is the bacterial flagellum. The flagellum is an acid-powered rotary motor with a whip-like tail that spins at twenty-thousand revolutions per minute and whose rotating motion enables a bacterium to navigate through its watery environment. Behe shows that the intricate machinery in this molecular motor — including a rotor, a stator, O-rings, bushings, and a drive shaft — requires the coordinated interaction of approximately forty complex proteins and that the absence of any one of these proteins would result in the complete loss of motor function. Behe argues that the Darwinian mechanism faces grave obstacles in trying to account for such irreducibly complex systems. In No Free Lunch, William Dembski shows how Behe’s notion of irreducible complexity constitutes a particular instance of specified complexity. Once an essential constituent of an organism exhibits specified complexity, any design attributable to that constituent carries over to the organism as a whole. To attribute design to an organism one need not demonstrate that every aspect of the organism was designed. Organisms, like all material objects, are products of history and thus subject to the buffeting of purely material factors. Automobiles, for instance, get old and exhibit the effects of corrosion, hail, and frictional forces. But that doesn’t make them any less designed. Likewise design theorists argue that organisms, though exhibiting the effects of history (and that includes Darwinian factors such as genetic mutations and natural selection), also include an ineliminable core that is designed. 
Intelligent design’s main tie to religion is through the design argument. Perhaps the best-known design argument is William Paley’s. Paley published his argument in 1802 in a book titled Natural Theology. The subtitle of that book is revealing: Evidences of the Existence and Attributes of the Deity, Collected from the Appearances of Nature. Paley’s project was to examine features of the natural world (what he called “appearances of nature”) and from there draw conclusions about the existence and attributes of a designing intelligence responsible for those features (whom Paley identified with the God of Christianity). According to Paley, if one finds a watch in a field (and thus lacks all knowledge of how the watch arose), the adaptation of the watch’s parts to telling time ensures that it is the product of an intelligence. So too, according to Paley, the marvelous adaptations of means to ends in organisms (like the intricacy of the human eye with its capacity for vision) ensure that organisms are the product of an intelligence. The theory of intelligent design updates Paley’s watchmaker argument in light of contemporary information theory and molecular biology, purporting to bring this argument squarely within science. In arguing for the design of natural systems, intelligent design is more modest than the design arguments of natural theology. For natural theologians like Paley, the validity of the design argument did not depend on the fruitfulness of design-theoretic ideas for science but on the metaphysical and theological mileage one could get out of design. A natural theologian might point to nature and say, “Clearly, the designer of this ecosystem prized variety over neatness.” A design theorist attempting to do actual design-theoretic research on that ecosystem might reply, “Although that’s an intriguing theological possibility, as a design theorist I need to keep focused on the informational pathways capable of producing that variety.” In his Critique of Pure Reason, Immanuel Kant claimed that the most the design argument can establish is “an architect of the world who is constrained by the adaptability of the material in which he works, not a creator of the world to whose idea everything is subject.” Far from rejecting the design argument, Kant objected to overextending it. For Kant, the design argument legitimately establishes an architect (that is, an intelligent cause whose contrivances are constrained by the materials that make up the world), but it can never establish a creator who originates the very materials that the architect then fashions. Intelligent design is entirely consonant with this observation by Kant. Creation is always about the source of being of the world. Intelligent design, as the science that studies signs of intelligence, is about arrangements of preexisting materials that point to a designing intelligence. Creation and intelligent design are therefore quite different. One can have creation without intelligent design and intelligent design without creation. For instance, one can have a doctrine of creation in which God creates the world in such a way that nothing about the world points to design. The evolutionary biologist Richard Dawkins wrote a book titled The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design. Even if Dawkins is right about the universe revealing no evidence of design, it would not logically follow that it was not created. 
It is logically possible that God created a world that provides no evidence of design. On the other hand, it is logically possible that the world is full of signs of intelligence but was not created. This was the ancient Stoic view, in which the world was eternal and uncreated, and yet a rational principle pervaded the world and produced marks of intelligence in it. The implications of intelligent design for religious belief are profound. The rise of modern science led to a vigorous attack on all religions that treat purpose, intelligence, and wisdom as fundamental and irreducible features of reality. The high point of this attack came with Darwin’s theory of evolution. The central claim of Darwin’s theory is that an unguided material process (random variation and natural selection) could account for the emergence of all biological complexity and order. In other words, Darwin appeared to show that the design in biology (and, by implication, in nature generally) was dispensable. By showing that design is indispensable to the scientific understanding of the natural world, intelligent design is reinvigorating the design argument and at the same time overturning the widespread misconception that the only tenable form of religious belief is one that treats purpose, intelligence, and wisdom as byproducts of unintelligent material processes.

Bibliography

  • Beckwith, Francis J. Law, Darwinism, and Public Education: The Establishment Clause and the Challenge of Intelligent Design. Lanham, Md., 2003.
  • Behe, Michael J. Darwin’s Black Box: The Biochemical Challenge to Evolution. New York, 1996.
  • Dawkins, Richard. The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design. New York, 1986.
  • Dembski, William A. No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence. Lanham, Md., 2002.
  • Forrest, Barbara. “The Wedge at Work: How Intelligent Design Creationism Is Wedging Its Way into the Cultural and Academic Mainstream.” In Intelligent Design Creationism and Its Critics: Philosophical, Theological, and Scientific Perspectives, edited by Robert T. Pennock, pp. 5–53. Cambridge, Mass., 2001.
  • Giberson, Karl W., and Donald A. Yerxa. Species of Origins: America’s Search for a Creation Story. Lanham, Md., 2002.
  • Hunter, Cornelius G. Darwin’s God: Evolution and the Problem of Evil. Grand Rapids, Mich., 2002.
  • Manson, Neil A., ed. God and Design: The Teleological Argument and Modern Science. London, 2003.
  • Miller, Kenneth R. Finding Darwin’s God: A Scientist’s Search for Common Ground between God and Evolution. San Francisco, 1999.
  • Rea, Michael C. World without Design: The Ontological Consequences of Naturalism. Oxford, 2002.
  • Witham, Larry. By Design: Science and the Search for God. San Francisco, 2003.
  • Woodward, Thomas. Doubts about Darwin: A History of Intelligent Design. Grand Rapids, Mich., 2003.

Bibliographic Essay

Larry Witham provides the best overview of intelligent design, even-handedly treating its scientific, cultural, and religious dimensions. As a journalist, Witham has personally interviewed all the main players in the debate over intelligent design and allows them to tell their story. For intelligent design’s place in the science and religion dialogue, see Giberson and Yerxa. For histories of the intelligent design movement, see Woodward (a supporter) and Forrest (a critic). See Behe and Dembski for an overview of intelligent design’s scientific research program. For a critique of that program, see Miller. For an impassioned defense of Darwinism against any form of teleology or design, see Dawkins. Manson’s anthology situates intelligent design within broader discussions about teleology. Rea probes intelligent design’s metaphysical underpinnings. Hunter provides an interesting analysis of how intelligent design and Darwinism play off the problem of evil. Beckwith examines whether intelligent design is inherently religious and thus, on account of church-state separation, must be barred from public school science curricula.
Abstract: Conservation of information theorems indicate that any search algorithm performs, on average, as well as random search without replacement unless it takes advantage of problem-specific information about the search target or the search-space structure. Combinatorics shows that even a moderately sized search requires problem-specific information to be successful. Computers, despite their speed in performing queries, are completely inadequate for resolving even moderately sized search problems without accurate information to guide them. We propose three measures to characterize the information required for successful search: 1) endogenous information, which measures the difficulty of finding a target using random search; 2) exogenous information, which measures the difficulty that remains in finding a target once a search takes advantage of problem-specific information; and 3) active information, which, as the difference between endogenous and exogenous information, measures the contribution of problem-specific information for successfully finding a target. This paper develops a methodology based on these information measures to gauge the effectiveness with which problem-specific information facilitates successful search. It then applies this methodology to various search tools widely used in evolutionary search. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 39, no. 5, September 2009.
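The three information measures in the abstract translate directly into formulas. The sketch below is a minimal illustration under stated assumptions (a finite space of size N, a target of k elements, and an assisted search whose per-query success probability q is known); the function names and example numbers are mine, not the paper's.

```python
import math

def endogenous_information(space_size, target_size):
    """Difficulty of blind/random search: -log2(p), where p = |target| / |space|."""
    p = target_size / space_size
    return -math.log2(p)

def exogenous_information(q):
    """Residual difficulty once problem-specific information is used: -log2(q)."""
    return -math.log2(q)

def active_information(space_size, target_size, q):
    """Contribution of the problem-specific information: the difference of the two."""
    return endogenous_information(space_size, target_size) - exogenous_information(q)

# Example with made-up numbers: a single target in a space of 2**20 items,
# and an assisted search that succeeds with probability 1/64 per attempt.
N, k, q = 2**20, 1, 1 / 64
print(endogenous_information(N, k))   # 20.0 bits
print(exogenous_information(q))       # 6.0 bits
print(active_information(N, k, q))    # 14.0 bits supplied by the assisted search
```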
Searching for small targets in large spaces is a common problem in the sciences. Because blind search is inadequate for such searches, it needs to be supplemented with additional information, thereby transforming a blind search into an assisted search. This additional information can be quantified and indicates that assisted searches themselves result from searching higher-level search spaces, by conducting, as it were, a search for a search. Thus, the original search gets displaced to a higher-level search. The key result in this paper is a displacement theorem, which shows that successfully resolving such a higher-level search is exponentially more difficult than successfully resolving the original search. Leading up to this result, a measure-theoretic version of the No Free Lunch theorems is formulated and proven. The paper shows that stochastic mechanisms, though able to explain the success of assisted searches in locating targets, cannot, in turn, explain the source of assisted searches.
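The intuition behind the measure-theoretic No Free Lunch result can be illustrated with a small simulation (my own construction, not the paper's proof): model an "assisted search" as a probability distribution over the space, draw many such distributions at random, and check that on average they place no more probability on the target than uniform blind search does, while distributions that do markedly better are rare.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100          # size of the search space
target = [3]     # a one-element target; blind search succeeds with probability 1/N
trials = 100_000

# Each "assisted search" is modeled as a probability distribution over the space,
# drawn uniformly from the probability simplex (Dirichlet with all-ones parameter).
searches = rng.dirichlet(np.ones(N), size=trials)

# Probability that each sampled search assigns to the target.
mass_on_target = searches[:, target].sum(axis=1)

print("blind search:", 1 / N)                                   # 0.01
print("average over random searches:", mass_on_target.mean())   # ~0.01
print("fraction doing 10x better than blind:",
      (mass_on_target > 10 / N).mean())                         # very small
```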
Talk delivered at the American Museum of Natural History, 23 April 2002, at a discussion titled “Evolution or Intelligent Design?” The participants included ID proponents William A. Dembski and Michael J. Behe as well as evolutionists Kenneth R. Miller and Robert T. Pennock. Eugenie C. Scott moderated the discussion. An introduction was given by Natural History editor Richard Milner. For coverage of this debate, see Scott Stevens' article in The Cleveland Plain Dealer.
Evolutionary biology teaches that all biological complexity is the result of material mechanisms. These include principally the Darwinian mechanism of natural selection and random variation, but also include other mechanisms (symbiosis, gene transfer, genetic drift, the action of regulatory genes in development, self-organizational processes, etc.). These mechanisms are just that: mindless material mechanisms that do what they do irrespective of intelligence. To be sure, mechanisms can be programmed by an intelligence. But any such intelligent programming of evolutionary mechanisms is not properly part of evolutionary biology. Intelligent design, by contrast, teaches that biological complexity is not exclusively the result of material mechanisms but also requires intelligence, where the intelligence in question is not reducible to such mechanisms. The central issue, therefore, is not the relatedness of all organisms, or what typically is called common descent. Indeed, intelligent design is perfectly compatible with common descent. Rather, the central issue is how biological complexity emerged and whether intelligence played a pivotal role in its emergence. Suppose, therefore, for the sake of argument that intelligence--one irreducible to material mechanisms--actually did play a decisive role in the emergence of life’s complexity and diversity. How could we know it? To answer this question, let’s run a thought experiment. Imagine that Alice is sending Bob encrypted messages over a communication channel and that Eve is eavesdropping. For simplicity let’s assume all the signals are bit strings. How could Eve know that Alice is not merely sending Bob random coin flips but meaningful messages? To answer this question, Eve will require two things: First, the bit strings sent across the communication channel need to be reasonably long--in other words, they need to be complex. If not, chance can readily account for them. Just as there’s no way to reconstruct a piece of music given just one note, so there is no way to preclude chance for a bit string that consists of only a few bits. For instance, there are only eight strings consisting of three bits, and chance readily accounts for any of them. There’s a second requirement for Eve to know that Alice is not sending Bob random gibberish: Eve needs to observe a suitable pattern in the signal Alice sends Bob. Even if the signal is complex, it may exhibit no pattern characteristic of intelligence. Flip a coin enough times, and you’ll observe a complex sequence of coin tosses. But that sequence will exhibit no pattern characteristic of intelligence. For cryptanalysts like Eve, observing a pattern suitable for identifying intelligence amounts to finding a cryptographic key that deciphers the message. Patterns suitable for identifying intelligence I call specifications. In sum, Eve requires both complexity and specification to infer intelligence in the signals Alice is sending to Bob. This combination of complexity and specification, or specified complexity as I call it, is the basis for design inferences across numerous special sciences, including archaeology, cryptography, forensics, and the Search for Extraterrestrial Intelligence (SETI). I detail this in my book The Design Inference, a peer-reviewed statistical monograph that appeared with Cambridge University Press in 1998. So, what’s all the fuss about specified complexity? The actual term specified complexity is not original with me. 
It first occurs in the origin-of-life literature, where Leslie Orgel used it to describe what he regards as the essence of life. That was thirty years ago. More recently, in 1999, surveying the state of origin-of-life research, Paul Davies remarked: “Living organisms are mysterious not for their complexity per se, but for their tightly specified complexity” (The Fifth Miracle, p. 112). Orgel and Davies used specified complexity loosely. In my own research I’ve formalized it as a statistical criterion for identifying the effects of intelligence. For identifying the effects of animal, human, and extraterrestrial intelligence the criterion works just fine. Yet when anyone attempts to apply the criterion to biological systems, all hell breaks loose. Let’s consider why. Evolutionary biologists claim to have demonstrated that design is superfluous for understanding biological complexity. The only way to actually demonstrate this, however, is to exhibit material mechanisms that account for the various forms of biological complexity out there. Now, if for every instance of biological complexity some mechanism could readily be produced that accounts for it, intelligent design would drop out of scientific discussion. Occam’s razor, by proscribing superfluous causes, would in this instance finish off intelligent design quite nicely. But that hasn’t happened. Why not? The reason is that there are plenty of complex biological systems for which no biologist has a clue how they emerged. I’m not talking about handwaving just-so stories. Biologists have plenty of those. I’m talking about detailed testable accounts of how such systems could have emerged. To see what’s at stake, consider how biologists propose to explain the emergence of the bacterial flagellum, a molecular machine that has become the mascot of the intelligent design movement. Howard Berg at Harvard calls the bacterial flagellum the most efficient machine in the universe. The flagellum is a nano-engineered outboard rotary motor on the backs of certain bacteria. It spins at tens of thousands of rpm, can change direction in a quarter turn, and propels a bacterium through its watery environment. According to evolutionary biology it had to emerge via some material mechanism. Fine, but how? The usual story is that the flagellum is composed of parts that previously were targeted for different uses and that natural selection then co-opted to form a flagellum. This seems reasonable until we try to fill in the details. The only well-documented examples that we have of successful co-optation come from human engineering. For instance, an electrical engineer might co-opt components from a microwave oven, a radio, and a computer screen to form a working television. But in that case, we have an intelligent agent who knows all about electrical gadgets and about televisions in particular. But natural selection doesn’t know a thing about bacterial flagella. So how is natural selection going to take extant protein parts and co-opt them to form a flagellum? The problem is that natural selection can only select for pre-existing function. It can, for instance, select for larger finch beaks when the available nuts are harder to open. Here the finch beak is already in place and natural selection merely enhances its present functionality. Natural selection might even adapt a pre-existing structure to a new function; for example, it might start with finch beaks adapted to opening nuts and end with beaks adapted to eating insects. 
But for co-optation to result in a structure like the bacterial flagellum, we are not talking about enhancing the function of an existing structure or reassigning an existing structure to a different function, but reassigning multiple structures previously targeted for different functions to a novel structure exhibiting a novel function. The bacterial flagellum requires around fifty proteins for its assembly and structure. All these proteins are necessary in the sense that lacking any of them, a working flagellum does not result. The only way for natural selection to form such a structure by co-optation, then, is for natural selection gradually to enfold existing protein parts into evolving structures whose functions co-evolve with the structures. We might, for instance, imagine a five-part mousetrap consisting of a platform, spring, hammer, holding bar, and catch evolving as follows: It starts as a doorstop (thus consisting merely of the platform), then evolves into a tie-clip (by attaching the spring and hammer to the platform), and finally becomes a full mousetrap (by also including the holding bar and catch). Ken Miller finds such scenarios not only completely plausible but also deeply relevant to biology (in fact, he regularly sports a modified mousetrap cum tie-clip). Intelligent design proponents, by contrast, regard such scenarios as rubbish. Here’s why. First, in such scenarios the hand of human design and intention meddles everywhere. Evolutionary biologists assure us that eventually they will discover just how the evolutionary process can take the right and needed steps without the meddling hand of design. But all such assurances presuppose that intelligence is dispensable in explaining biological complexity. The only evidence we have of successful co-optation, however, comes from engineering and confirms that intelligence is indispensable in explaining complex structures like the mousetrap and by implication the flagellum. Intelligence is known to have the causal power to produce such structures. We’re still waiting for the promised material mechanisms. The other reason design theorists are less than impressed with co-optation concerns an inherent limitation of the Darwinian mechanism. The whole point of the Darwinian selection mechanism is that you can get from anywhere in configuration space to anywhere else provided you can take small steps. How small? Small enough that they are reasonably probable. But what guarantee do you have that a sequence of baby-steps connects any two points in configuration space? Richard Dawkins compares the emergence of biological complexity to climbing a mountain--Mount Improbable, as he calls it. According to him, Mount Improbable always has a gradual serpentine path leading to the top that can be traversed in baby-steps. But that’s hardly an empirical claim. Indeed, the claim is entirely gratuitous. It might be a fact about nature that Mount Improbable is sheer on all sides and getting to the top from the bottom via baby-steps is effectively impossible. A gap like that would reside in nature herself and not in our knowledge of nature (it would not, in other words, constitute a god-of-the-gaps). The problem is worse yet. For the Darwinian selection mechanism to connect point A to point B in configuration space, it is not enough that there merely exist a sequence of baby-steps connecting the two. 
In addition, each baby-step needs in some sense to be “successful.” In biological terms, each step requires an increase in fitness as measured in terms of survival and reproduction. Natural selection, after all, is the motive force behind each baby-step, and selection only selects what is advantageous to the organism. Thus, for the Darwinian mechanism to connect two organisms, there must be a sequence of successful baby-steps connecting the two. Again, it is not enough merely to presuppose this--it must be demonstrated. For instance, it is not enough to point out that some genes for the bacterial flagellum are the same as those for a type III secretory system (a type of pump) and then handwave that one was co-opted from the other. Anybody can arrange complex systems in a series. But such series do nothing to establish whether the end evolved in a Darwinian fashion from the beginning unless the probability of each step in the series can be quantified, the probability at each step turns out to be reasonably large, and each step constitutes an advantage to the organism (in particular, viability of the whole organism must at all times be preserved). Convinced that the Darwinian mechanism must be capable of doing such evolutionary design work, evolutionary biologists rarely ask whether such a sequence of successful baby-steps even exists; much less do they attempt to quantify the probabilities involved. I attempt that in chapter 5 of my most recent book No Free Lunch. There I lay out techniques for assessing the probabilistic hurdles that the Darwinian mechanism faces in trying to account for complex biological structures like the bacterial flagellum. The probabilities I calculate--and I try to be conservative--are horrendous and render natural selection entirely implausible as a mechanism for generating the flagellum and structures like it. If I’m right and the probabilities really are horrendous, then the bacterial flagellum exhibits specified complexity. Furthermore, if specified complexity is a reliable marker of intelligent agency, then systems like the bacterial flagellum bespeak intelligent design and are not solely the effect of material mechanisms. It’s here that critics of intelligent design raise the argument-from-ignorance objection. For something to exhibit specified complexity entails that no known material mechanism operating in known ways is able to account for it. But that leaves unknown material mechanisms. It also leaves known material mechanisms operating in unknown ways. Isn’t arguing for design on the basis of specified complexity therefore merely an argument from ignorance? Two comments to this objection: First, the great promise of Darwinian and other naturalistic accounts of evolution was precisely to show how known material mechanisms operating in known ways could produce all of biological complexity. So at the very least, specified complexity is showing that problems claimed to be solved by naturalistic means have not been solved. Second, the argument from ignorance objection could in principle be raised for any design inference that employs specified complexity, including those where humans are implicated in constructing artifacts. An unknown material mechanism might explain the origin of the Mona Lisa in the Louvre, or the Louvre itself, or Stonehenge, or how two students wrote exactly the same essay. But no one is looking for such mechanisms. It would be madness even to try. 
Intelligent design caused these objects to exist, and we know that because of their specified complexity. Specified complexity, by being defined relative to known material mechanisms operating in known ways, might always be defeated by showing that some relevant mechanism was omitted. That’s always a possibility (though as with the plagiarism example and with many other cases, we don’t take it seriously). As William James put it, there are live possibilities and then again there are bare possibilities. Many design inferences can be questioned or doubted only by invoking a bare possibility. Such bare possibilities, if realized, would defeat specified complexity. But defeat specified complexity in what way? Not by rendering the concept incoherent but by dissolving it. In fact, that is how Darwinists, complexity theorists, and anyone intent on defeating specified complexity as a marker of intelligence usually attempt it, namely, by showing that it dissolves once we have a better understanding of the underlying material mechanisms that render the object in question reasonably probable. By contrast, design theorists argue that specified complexity in biology is real: that any attempt to palliate the complexities or improbabilities by invoking as yet unknown mechanisms or known mechanisms operating in unknown ways is destined to fail. This can in some cases be argued convincingly, as with Michael Behe’s irreducibly complex biochemical machines and with biological structures whose geometry allows complete freedom in possible arrangements of parts. Consider, for instance, a configuration space comprising all possible character sequences from a fixed alphabet (such spaces model not only written texts but also polymers like DNA, RNA, and proteins). Configuration spaces like this are perfectly homogeneous, with one character string geometrically interchangeable with the next. The geometry therefore precludes any underlying mechanisms from distinguishing or preferring some character strings over others. Not material mechanisms but external semantic information (in the case of written texts) or functional information (in the case of polymers) is needed to generate specified complexity in these instances. To argue that this semantic or functional information reduces to material mechanisms is like arguing that Scrabble pieces have inherent in them preferential ways they like to be sequenced. They don’t. Michael Polanyi offered such arguments for biological design in the 1960s. In summary, evolutionary biology contends that material mechanisms are capable of accounting for all of biological complexity. Yet for biological systems that exhibit specified complexity, these mechanisms provide no explanation of how they were produced. Moreover, in contexts where the causal history is independently verifiable, specified complexity is reliably correlated with intelligence. At a minimum, biology should therefore allow the possibility of design in cases of biological specified complexity. But that’s not the case. Evolutionary biology allows only one line of criticism, namely, to show that a complex specified biological structure could not have evolved via any material mechanism. In other words, so long as some unknown material mechanism might have evolved the structure in question, intelligent design is proscribed. This renders evolutionary theory immune to disconfirmation in principle, because the universe of unknown material mechanisms can never be exhausted.
Furthermore, the evolutionist has no burden of evidence. Instead, the burden of evidence is shifted entirely to the evolution skeptic. And what is required of the skeptic? The skeptic must prove nothing less than a universal negative. That is not how science is supposed to work. Science is supposed to pursue the full range of possible explanations. Evolutionary biology, by limiting itself to material mechanisms, has settled in advance which biological explanations are true apart from any consideration of empirical evidence. This is arm-chair philosophy. Intelligent design may not be correct. But the only way we could discover that is by admitting design as a real possibility, not ruling it out a priori. Darwin himself agreed. In the Origin of Species he wrote: “A fair result can be obtained only by fully stating and balancing the facts and arguments on both sides of each question.”

Abstract: For the scientific community intelligent design represents creationism's latest grasp at scientific legitimacy. Accordingly, intelligent design is viewed as yet another ill-conceived attempt by creationists to straitjacket science within a religious ideology. But in fact intelligent design can be formulated as a scientific theory having empirical consequences and devoid of religious commitments. Intelligent design can be unpacked as a theory of information. Within such a theory, information becomes a reliable indicator of design as well as a proper object for scientific investigation. In my paper I shall (1) show how information can be reliably detected and measured, and (2) formulate a conservation law that governs the origin and flow of information. My broad conclusion is that information is not reducible to natural causes, and that the origin of information is best sought in intelligent causes. Intelligent design thereby becomes a theory for detecting and measuring information, explaining its origin, and tracing its flow.


1. Information

In Steps Towards Life Manfred Eigen (1992, p. 12) identifies what he regards as the central problem facing origins-of-life research: "Our task is to find an algorithm, a natural law that leads to the origin of information." Eigen is only half right. To determine how life began, it is indeed necessary to understand the origin of information. Even so, neither algorithms nor natural laws are capable of producing information. The great myth of modern evolutionary biology is that information can be gotten on the cheap without recourse to intelligence. It is this myth I seek to dispel, but to do so I shall need to give an account of information. No one disputes that there is such a thing as information. As Keith Devlin (1991, p. 1) remarks, "Our very lives depend upon it, upon its gathering, storage, manipulation, transmission, security, and so on. Huge amounts of money change hands in exchange for information. People talk about it all the time. Lives are lost in its pursuit. Vast commercial empires are created in order to manufacture equipment to handle it." But what exactly is information? The burden of this paper is to answer this question, presenting an account of information that is relevant to biology.

What then is information? The fundamental intuition underlying information is not, as is sometimes thought, the transmission of signals across a communication channel, but rather, the actualization of one possibility to the exclusion of others. As Fred Dretske (1981, p. 4) puts it, "Information theory identifies the amount of information associated with, or generated by, the occurrence of an event (or the realization of a state of affairs) with the reduction in uncertainty, the elimination of possibilities, represented by that event or state of affairs." To be sure, whenever signals are transmitted across a communication channel, one possibility is actualized to the exclusion of others, namely, the signal that was transmitted to the exclusion of those that weren't. But this is only a special case. Information in the first instance presupposes not some medium of communication, but contingency. Robert Stalnaker (1984, p. 85) makes this point clearly: "Content requires contingency. To learn something, to acquire information, is to rule out possibilities. To understand the information conveyed in a communication is to know what possibilities would be excluded by its truth." For there to be information, there must be a multiplicity of distinct possibilities any one of which might happen. When one of these possibilities does happen and the others are ruled out, information becomes actualized. Indeed, information in its most general sense can be defined as the actualization of one possibility to the exclusion of others (observe that this definition encompasses both syntactic and semantic information).

This way of defining information may seem counterintuitive since we often speak of the information inherent in possibilities that are never actualized. Thus we may speak of the information inherent in flipping one hundred heads in a row with a fair coin even if this event never happens. There is no difficulty here. In counterfactual situations the definition of information needs to be applied counterfactually. Thus to consider the information inherent in flipping one hundred heads in a row with a fair coin, we treat this event/possibility as though it were actualized. Information needs to be referenced not just to the actual world but also cross-referenced with all possible worlds.

2. Complex Information

How does our definition of information apply to biology, and to science more generally? To render information a useful concept for science we need to do two things: first, show how to measure information; second, introduce a crucial distinction--the distinction between specified and unspecified information. First, let us show how to measure information. In measuring information it is not enough to count the number of possibilities that were excluded, and offer this number as the relevant measure of information. The problem is that a simple enumeration of excluded possibilities tells us nothing about how those possibilities were individuated in the first place. Consider, for instance, the following individuation of poker hands:

  • (i) A royal flush.
  • (ii) Everything else.

To learn that something other than a royal flush was dealt (i.e., possibility (ii)) is clearly to acquire less information than to learn that a royal flush was dealt (i.e., possibility (i)). Yet if our measure of information is simply an enumeration of excluded possibilities, the same numerical value must be assigned in both instances since in both instances a single possibility is excluded.

It follows, therefore, that how we measure information needs to be independent of whatever procedure we use to individuate the possibilities under consideration. And the way to do this is not simply to count possibilities, but to assign probabilities to these possibilities. For a thoroughly shuffled deck of cards, the probability of being dealt a royal flush (i.e., possibility (i)) is approximately .000002 whereas the probability of being dealt anything other than a royal flush (i.e., possibility (ii)) is approximately .999998. Probabilities by themselves, however, are not information measures. Although probabilities properly distinguish possibilities according to the information they contain, nonetheless probabilities remain an inconvenient way of measuring information. There are two reasons for this. First, the scaling and directionality of the numbers assigned by probabilities need to be recalibrated. We are clearly acquiring more information when we learn someone was dealt a royal flush than when we learn someone wasn't dealt a royal flush. And yet the probability of being dealt a royal flush (i.e., .000002) is minuscule compared to the probability of being dealt something other than a royal flush (i.e., .999998). Smaller probabilities signify more information, not less.

The second reason probabilities are inconvenient for measuring information is that they are multiplicative rather than additive. If I learn that Alice was dealt a royal flush playing poker at Caesar's Palace and that Bob was dealt a royal flush playing poker at the Mirage, the probability that both Alice and Bob were dealt royal flushes is the product of the individual probabilities. Nonetheless, it is convenient for information to be measured additively so that the measure of information assigned to Alice and Bob jointly being dealt royal flushes equals the measure of information assigned to Alice being dealt a royal flush plus the measure of information assigned to Bob being dealt a royal flush.

Now there is an obvious way to transform probabilities which circumvents both these difficulties, and that is to apply a negative logarithm to the probabilities. Applying a negative logarithm assigns more information to less probable events and, because the logarithm of a product is the sum of the logarithms, transforms multiplicative probability measures into additive information measures. What's more, in deference to communication theorists, it is customary to use the logarithm to the base 2. The rationale for this choice of logarithmic base is as follows. The most convenient way for communication theorists to measure information is in bits. Any message sent across a communication channel can be viewed as a string of 0's and 1's. For instance, the ASCII code uses strings of eight 0's and 1's to represent the characters on a typewriter, with whole words and sentences in turn represented as strings of such character strings. In like manner all communication may be reduced to the transmission of sequences of 0's and 1's. Given this reduction, the obvious way for communication theorists to measure information is in number of bits transmitted across a communication channel. And since the negative logarithm to the base 2 of a probability corresponds to the average number of bits needed to identify an event of that probability, the logarithm to the base 2 is the canonical logarithm for communication theorists. Thus we define the measure of information in an event of probability p as -log2p (see Shannon and Weaver, 1949, p. 32; Hamming, 1986; or indeed any mathematical introduction to information theory).

What about the additivity of this information measure? Recall the example of Alice being dealt a royal flush playing poker at Caesar's Palace and Bob being dealt a royal flush playing poker at the Mirage. Let's call the first event A and the second B. Since randomly dealt poker hands are probabilistically independent, the probability of A and B taken jointly equals the product of the probabilities of A and B taken individually. Symbolically, P(A&B) = P(A)xP(B). Given our logarithmic definition of information we therefore define the amount of information in an event E as I(E) =def -log2P(E). It then follows that P(A&B) = P(A)xP(B) if and only if I(A&B) = I(A)+I(B). Since in the example of Alice and Bob P(A) = P(B) = .000002, I(A) = I(B) = 19, and I(A&B) = I(A)+I(B) = 19 + 19 = 38. Thus the amount of information inherent in Alice and Bob jointly obtaining royal flushes is 38 bits.
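A few lines of Python reproduce these numbers (a sketch of my own; the exact probability of a royal flush is 4 divided by the number of five-card hands, which rounds to the .000002 used above).

```python
import math

def info_bits(p):
    """Information in an event of probability p, in bits: I(E) = -log2 P(E)."""
    return -math.log2(p)

# Exact probability of a royal flush: 4 suits out of C(52, 5) possible hands.
print(4 / math.comb(52, 5))            # about 1.54e-06, i.e. roughly .000002

p = 0.000002                           # the rounded figure used in the text
I_A = info_bits(p)                     # Alice's royal flush
I_B = info_bits(p)                     # Bob's royal flush
print(round(I_A), round(I_B))          # 19 19

# Independent events: probabilities multiply, so information adds.
print(round(info_bits(p * p)))         # 38 = 19 + 19
```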

Since lots of events are probabilistically independent, information measures exhibit lots of additivity. But since lots of events are also correlated, information measures exhibit lots of non-additivity as well. In the case of Alice and Bob, Alice being dealt a royal flush is probabilistically independent of Bob being dealt a royal flush, and so the amount of information in Alice and Bob both being dealt royal flushes equals the sum of the individual amounts of information. But consider now a different example. Alice and Bob together toss a coin five times. Alice observes the first four tosses but is distracted, and so misses the fifth toss. On the other hand, Bob misses the first toss, but observes the last four tosses. Let's say the actual sequence of tosses is 11001 (1 = heads, 0 = tails). Thus Alice observes 1100* and Bob observes *1001. Let A denote the first observation, B the second. It follows that the amount of information in A&B is the amount of information in the completed sequence 11001, namely, 5 bits. On the other hand, the amount of information in A alone is the amount of information in the incomplete sequence 1100*, namely 4 bits. Similarly, the amount of information in B alone is the amount of information in the incomplete sequence *1001, also 4 bits. This time information doesn't add up: 5 = I(A&B) ≠ I(A)+I(B) = 4+4 = 8.

Here A and B are correlated. Alice knows all but the last bit of information in the completed sequence 11001. Thus when Bob gives her the incomplete sequence *1001, all Alice really learns is the last bit in this sequence. Similarly, Bob knows all but the first bit of information in the completed sequence 11001. Thus when Alice gives him the incomplete sequence 1100*, all Bob really learns is the first bit in this sequence. What appears to be four bits of information actually ends up being only one bit of information once Alice and Bob factor in the prior information they possess about the completed sequence 11001. If we introduce the idea of conditional information, this is just to say that 5 = I(A&B) = I(A)+I(B|A) = 4+1. I(B|A), the conditional information of B given A, is the amount of information in Bob's observation once Alice's observation is taken into account. And this, as we just saw, is 1 bit.

I(B|A), like I(A&B), I(A), and I(B), can be represented as the negative logarithm to the base two of a probability, only this time the probability under the logarithm is a conditional as opposed to an unconditional probability. By definition I(B|A) =def -log2P(B|A), where P(B|A) is the conditional probability of B given A. But since P(B|A) =def P(A&B)/P(A), and since the logarithm of a quotient is the difference of the logarithms, log2P(B|A) = log2P(A&B) - log2P(A), and so -log2P(B|A) = -log2P(A&B) + log2P(A), which is just I(B|A) = I(A&B) - I(A). This last equation is equivalent to

(*) I(A&B) = I(A)+I(B|A)

Formula (*) holds with full generality, reducing to I(A&B) = I(A)+I(B) when A and B are probabilistically independent (in which case P(B|A) = P(B) and thus I(B|A) = I(B)).
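The coin-toss example can be checked numerically against Formula (*). The sketch below (my own illustration) computes the relevant probabilities by enumerating the 32 equally likely five-toss sequences.

```python
import math
from itertools import product

def info_bits(p):
    """I(E) = -log2 P(E), in bits."""
    return -math.log2(p)

# All 2**5 equally likely toss sequences (1 = heads, 0 = tails).
sequences = list(product([0, 1], repeat=5))

def prob(event):
    """Probability of an event, given as a predicate over sequences."""
    return sum(1 for s in sequences if event(s)) / len(sequences)

A = lambda s: s[:4] == (1, 1, 0, 0)    # Alice saw 1100*
B = lambda s: s[1:] == (1, 0, 0, 1)    # Bob saw *1001

I_A   = info_bits(prob(A))                          # 4 bits
I_B   = info_bits(prob(B))                          # 4 bits
I_AB  = info_bits(prob(lambda s: A(s) and B(s)))    # 5 bits
I_BgA = I_AB - I_A                                  # conditional information, 1 bit

print(I_A, I_B, I_AB, I_BgA)           # 4.0 4.0 5.0 1.0
print(I_AB == I_A + I_BgA)             # True: Formula (*)
print(I_AB != I_A + I_B)               # True: A and B are correlated
```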

Formula (*) asserts that the information in both A and B jointly is the information in A plus the information in B that is not in A. Its point, therefore, is to spell out how much additional information B contributes to A. As such, this formula places tight constraints on the generation of new information. Does, for instance, a computer program, call it A, by outputting some data, call the data B, generate new information? Computer programs are fully deterministic, and so B is fully determined by A. It follows that P(B|A) = 1, and thus I(B|A) = 0 (the logarithm of 1 is always 0). From Formula (*) it therefore follows that I(A&B) = I(A), and therefore that the amount of information in A and B jointly is no more than the amount of information in A by itself.

For an example in the same spirit consider that there is no more information in two copies of Shakespeare's Hamlet than in a single copy. This is of course patently obvious, and any formal account of information had better agree. To see that our formal account does indeed agree, let A denote the printing of the first copy of Hamlet, and B the printing of the second copy. Once A is given, B is entirely determined. Indeed, the correlation between A and B is perfect. Probabilistically this is expressed by saying the conditional probability of B given A is 1, namely, P(B|A) = 1. In information-theoretic terms this is to say that I(B|A) = 0. As a result I(B|A) drops out of Formula (*), and so I(A&B) = I(A). Our information-theoretic formalism therefore agrees with our intuition that two copies of Hamlet contain no more information than a single copy.
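The same bookkeeping shows in miniature why a deterministic copy, or a deterministic program output, adds no information: with P(B|A) = 1, the conditional information I(B|A) is zero, so I(A&B) = I(A). A minimal sketch, using an arbitrary stand-in probability for A:

```python
import math

def info_bits(p):
    """I(E) = -log2 P(E); a certain event carries 0 bits."""
    return 0.0 if p == 1 else -math.log2(p)

# A is some event with probability p_A (an arbitrary stand-in value); B, the
# printing of a second copy or the output of a deterministic program, is
# certain once A has occurred, so P(B|A) = 1.
p_A = 2 ** -20
p_B_given_A = 1.0

I_A = info_bits(p_A)                    # 20 bits
I_B_given_A = info_bits(p_B_given_A)    # 0 bits
I_AB = I_A + I_B_given_A                # Formula (*): nothing new is added

print(I_A, I_B_given_A, I_AB)           # 20.0 0.0 20.0
```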

Information is a complexity-theoretic notion. Indeed, as a purely formal object, the information measure described here is a complexity measure (cf. Dembski, 1998, ch. 4). Complexity measures arise whenever we assign numbers to degrees of complication. A set of possibilities will often admit varying degrees of complication, ranging from extremely simple to extremely complicated. Complexity measures assign non-negative numbers to these possibilities so that 0 corresponds to the most simple and ∞ to the most complicated. For instance, computational complexity is always measured in terms of either time (i.e., number of computational steps) or space (i.e., size of memory, usually measured in bits or bytes) or some combination of the two. The more difficult a computational problem, the more time and space are required to run the algorithm that solves the problem. For information measures, degree of complication is measured in bits. Given an event A of probability P(A), I(A) = -log2P(A) measures the number of bits associated with the probability P(A). We therefore speak of the "complexity of information" and say that the complexity of information increases as I(A) increases (or, correspondingly, as P(A) decreases). We also speak of "simple" and "complex" information according to whether I(A) signifies few or many bits of information. This notion of complexity is important to biology since not just the origin of information stands in question, but the origin of complex information.

3. Complex Specified Information

Given a means of measuring information and determining its complexity, we turn now to the distinction between specified and unspecified information. This is a vast topic whose full elucidation is beyond the scope of this paper (the details can be found in my monograph The Design Inference). Nonetheless, in what follows I shall try to make this distinction intelligible, and offer some hints on how to make it rigorous. For an intuitive grasp of the difference between specified and unspecified information, consider the following example. Suppose an archer stands 50 meters from a large blank wall with bow and arrow in hand. The wall, let us say, is sufficiently large that the archer cannot help but hit it. Consider now two alternative scenarios. In the first scenario the archer simply shoots at the wall. In the second scenario the archer first paints a target on the wall, and then shoots at the wall, squarely hitting the target's bull's-eye. Let us suppose that in both scenarios where the arrow lands is identical. In both scenarios the arrow might have landed anywhere on the wall. What's more, any place where it might land is highly improbable. It follows that in both scenarios highly complex information is actualized. Yet the conclusions we draw from these scenarios are very different. In the first scenario we can conclude absolutely nothing about the archer's ability as an archer, whereas in the second scenario we have evidence of the archer's skill.

The obvious difference between the two scenarios is of course that in the first the information follows no pattern whereas in the second it does. Now the information that tends to interest us as rational inquirers generally, and scientists in particular, is not the actualization of arbitrary possibilities which correspond to no patterns, but rather the actualization of circumscribed possibilities which do correspond to patterns. There's more. Patterned information, though a step in the right direction, still doesn't quite get us specified information. The problem is that patterns can be concocted after the fact so that instead of helping elucidate information, the patterns are merely read off already actualized information.

To see this, consider a third scenario in which an archer shoots at a wall. As before, we suppose the archer stands 50 meters from a large blank wall with bow and arrow in hand, the wall being so large that the archer cannot help but hit it. And as in the first scenario, the archer shoots at the wall while it is still blank. But this time suppose that after having shot the arrow, and finding the arrow stuck in the wall, the archer paints a target around the arrow so that the arrow sticks squarely in the bull's-eye. Let us suppose further that the precise place where the arrow lands in this scenario is identical with where it landed in the first two scenarios. Since any place where the arrow might land is highly improbable, in this as in the other scenarios highly complex information has been actualized. What's more, since the information corresponds to a pattern, we can even say that in this third scenario highly complex patterned information has been actualized. Nevertheless, it would be wrong to say that highly complex specified information has been actualized. Of the three scenarios, only the information in the second scenario is specified. In that scenario, by first painting the target and then shooting the arrow, the pattern is given independently of the information. On the other hand, in this, the third scenario, by first shooting the arrow and then painting the target around it, the pattern is merely read off the information.

Specified information is always patterned information, but patterned information is not always specified information. For specified information not just any pattern will do. We therefore distinguish between the "good" patterns and the "bad" patterns. The "good" patterns will henceforth be called specifications. Specifications are the independently given patterns that are not simply read off information. By contrast, the "bad" patterns will be called fabrications. Fabrications are the post hoc patterns that are simply read off already existing information.

Unlike specifications, fabrications are wholly unenlightening. We are no better off with a fabrication than without one. This is clear from comparing the first and third scenarios. Whether an arrow lands on a blank wall and the wall stays blank (as in the first scenario), or an arrow lands on a blank wall and a target is then painted around the arrow (as in the third scenario), any conclusions we draw about the arrow's flight remain the same. In either case chance is as good an explanation as any for the arrow's flight. The fact that the target in the third scenario constitutes a pattern makes no difference since the pattern is constructed entirely in response to where the arrow lands. Only when the pattern is given independently of the arrow's flight does a hypothesis other than chance come into play. Thus only in the second scenario does it make sense to ask whether we are dealing with a skilled archer. Only in the second scenario does the pattern constitute a specification. In the third scenario the pattern constitutes a mere fabrication.

The distinction between specified and unspecified information may now be defined as follows: the actualization of a possibility (i.e., information) is specified if independently of the possibility's actualization, the possibility is identifiable by means of a pattern. If not, then the information is unspecified. Note that this definition implies an asymmetry between specified and unspecified information: specified information cannot become unspecified information, though unspecified information may become specified information. Unspecified information need not remain unspecified, but can become specified as our background knowledge increases. For instance, a cryptographic transmission whose cryptosystem we have yet to break will constitute unspecified information. Yet as soon as we break the cryptosystem, the cryptographic transmission becomes specified information.

What is it for a possibility to be identifiable by means of an independently given pattern? A full exposition of specification requires a detailed answer to this question. Unfortunately, such an exposition is beyond the scope of this paper. The key conceptual difficulty here is to characterize the independence condition between patterns and information. This independence condition breaks into two subsidiary conditions: (1) a condition of stochastic conditional independence between the information in question and certain relevant background knowledge; and (2) a tractability condition whereby the pattern in question can be constructed from the aforementioned background knowledge. Although these conditions make good intuitive sense, they are not easily formalized. For the details refer to my monograph The Design Inference.

If formalizing what it means for a pattern to be given independently of a possibility is difficult, determining in practice whether a pattern is given independently of a possibility is much easier. If the pattern is given prior to the possibility being actualized--as in the second scenario above where the target was painted before the arrow was shot--then the pattern is automatically independent of the possibility, and we are dealing with specified information. Patterns given prior to the actualization of a possibility are just the rejection regions of statistics. There is a well-established statistical theory that describes such patterns and their use in probabilistic reasoning. These are clearly specifications since having been given prior to the actualization of some possibility, they have already been identified, and thus are identifiable independently of the possibility being actualized (cf. Hacking, 1965).
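
For a schematic picture of a pattern given prior to actualization, consider the following Python sketch. The rejection region (at least 60 heads in 100 fair tosses) is an assumed threshold invented for the illustration; the point is only that the region is fixed before the tosses are observed, which is what qualifies it as a specification rather than a target painted around the arrow after the fact.

```python
import random

def in_rejection_region(heads):
    """Rejection region fixed in advance: 60 or more heads in 100 tosses (an assumed threshold)."""
    return heads >= 60

random.seed(0)
tosses = [random.randint(0, 1) for _ in range(100)]   # the possibility that gets actualized
heads = sum(tosses)

# Because the pattern was given prior to the tosses, a hit counts as specified information,
# not as a pattern merely read off the outcome.
print(heads, in_rejection_region(heads))
```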

Many of the interesting cases of specified information, however, are those in which the pattern is given after a possibility has been actualized. This is certainly the case with the origin of life: life originates first and only afterwards do pattern-forming rational agents (like ourselves) enter the scene. It remains the case, however, that a pattern corresponding to a possibility, though formulated after the possibility has been actualized, can constitute a specification. Certainly this was not the case in the third scenario above where the target was painted around the arrow only after it hit the wall. But consider the following example. Alice and Bob are celebrating their fiftieth wedding anniversary. Their six children all show up bearing gifts. Each gift is part of a matching set of china. There is no duplication of gifts, and together the gifts constitute a complete set of china. Suppose Alice and Bob were satisfied with their old set of china, and had no inkling prior to opening their gifts that they might expect a new set of china. Alice and Bob are therefore without a relevant pattern to which to refer their gifts prior to actually receiving the gifts from their children. Nevertheless, the pattern they explicitly formulate only after receiving the gifts could have been formulated independently of receiving the gifts--indeed, we all know about matching sets of china and how to distinguish them from unmatched sets. This pattern therefore constitutes a specification. What's more, there is an obvious inference connected with this specification: Alice and Bob's children were in collusion, and did not present their gifts as random acts of kindness.

But what about the origin of life? Is life specified? If so, to what patterns does life correspond, and how are these patterns given independently of life's origin? Obviously, pattern-forming rational agents like ourselves don't enter the scene till after life originates. Nonetheless, there are functional patterns to which life corresponds, and which are given independently of the actual living systems. An organism is a functional system comprising many functional subsystems. The functionality of organisms can be cashed out in any number of ways. Arno Wouters (1995) cashes it out globally in terms of viability of whole organisms. Michael Behe (1996) cashes it out in terms of the irreducible complexity and minimal function of biochemical systems. Even the staunch Darwinist Richard Dawkins will admit that life is specified functionally, cashing out the functionality of organisms in terms of reproduction of genes. Thus Dawkins (1987, p. 9) will write: "Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is . . . the ability to propagate genes in reproduction."

Information can be specified. Information can be complex. Information can be both complex and specified. Information that is both complex and specified I call "complex specified information," or CSI for short. CSI is what all the fuss over information has been about in recent years, not just in biology, but in science generally. It is CSI that for Manfred Eigen constitutes the great mystery of biology, and one he hopes eventually to unravel in terms of algorithms and natural laws. It is CSI that for cosmologists underlies the fine-tuning of the universe, and which the various anthropic principles attempt to understand (cf. Barrow and Tipler, 1986). It is CSI that David Bohm's quantum potentials are extracting when they scour the microworld for what Bohm calls "active information" (cf. Bohm, 1993, pp. 35-38). It is CSI that enables Maxwell's demon to outsmart a thermodynamic system tending towards thermal equilibrium (cf. Landauer, 1991, p. 26). It is CSI on which David Chalmers hopes to base a comprehensive theory of human consciousness (cf. Chalmers, 1996, ch. 8). It is CSI that within the Kolmogorov-Chaitin theory of algorithmic information takes the form of highly compressible, non-random strings of digits (cf. Kolmogorov, 1965; Chaitin, 1966).

Nor is CSI confined to science. CSI is indispensable in our everyday lives. The 16-digit number on your VISA card is an example of CSI. The complexity of this number ensures that a would-be thief cannot randomly pick a number and have it turn out to be a valid VISA card number. What's more, the specification of this number ensures that it is your number, and not anyone else's. Even your phone number constitutes CSI. As with the VISA card number, the complexity ensures that this number won't be dialed randomly (at least not too often), and the specification ensures that this number is yours and yours only. All the numbers on our bills, credit slips, and purchase orders represent CSI. CSI makes the world go round. It follows that CSI is a ripe field for criminality. CSI is what motivated the greedy Michael Douglas character in the movie Wall Street to lie, cheat, and steal. CSI's total and absolute control was the objective of the monomaniacal Ben Kingsley character in the movie Sneakers. CSI is the artifact of interest in most techno-thrillers. Ours is an information age, and the information that captivates us is CSI.
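
The rough arithmetic behind the card-number example runs as follows, treating all 16-digit strings as equally likely; this is a simplifying assumption, since real card numbers have fixed prefixes and a check digit.

```python
import math

p_guess = 10 ** -16            # chance of randomly picking one particular 16-digit number
print(-math.log2(p_guess))     # about 53.2 bits of complexity
```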

4. Intelligent Design

Whence the origin of complex specified information? In this section I shall argue that intelligent causation, or equivalently design, accounts for the origin of complex specified information. My argument focuses on the nature of intelligent causation, and specifically, on what it is about intelligent causes that makes them detectable. To see why CSI is a reliable indicator of design, we need to examine the nature of intelligent causation. The principal characteristic of intelligent causation is directed contingency, or what we call choice. Whenever an intelligent cause acts, it chooses from a range of competing possibilities. This is true not just of humans, but of animals as well as extra-terrestrial intelligences. A rat navigating a maze must choose whether to go right or left at various points in the maze. When SETI (Search for Extra-Terrestrial Intelligence) researchers attempt to discover intelligence in the extra-terrestrial radio transmissions they are monitoring, they assume an extra-terrestrial intelligence could have chosen any number of possible radio transmissions, and then attempt to match the transmissions they observe with certain patterns as opposed to others (patterns that presumably are markers of intelligence). Whenever a human being utters meaningful speech, a choice is made from a range of possible sound-combinations that might have been uttered. Intelligent causation always entails discrimination, choosing certain things, ruling out others.

Given this characterization of intelligent causes, the crucial question is how to recognize their operation. Intelligent causes act by making a choice. How then do we recognize that an intelligent cause has made a choice? A bottle of ink spills accidentally onto a sheet of paper; someone takes a fountain pen and writes a message on a sheet of paper. In both instances ink is applied to paper. In both instances one among an almost infinite set of possibilities is realized. In both instances a contingency is actualized and others are ruled out. Yet in one instance we infer design, in the other chance. What is the relevant difference? Not only do we need to observe that a contingency was actualized, but we ourselves need also to be able to specify that contingency. The contingency must conform to an independently given pattern, and we must be able independently to formulate that pattern. A random ink blot is unspecifiable; a message written with ink on paper is specifiable. Wittgenstein (1980, p. 1e) made the same point as follows: "We tend to take the speech of a Chinese for inarticulate gurgling. Someone who understands Chinese will recognize language in what he hears. Similarly I often cannot discern the humanity in man."

In hearing a Chinese utterance, someone who understands Chinese not only recognizes that one from a range of all possible utterances was actualized, but is also able to specify the utterance as coherent Chinese speech. Contrast this with someone who does not understand Chinese. In hearing a Chinese utterance, someone who does not understand Chinese also recognizes that one from a range of possible utterances was actualized, but this time, lacking the ability to understand Chinese, is unable to specify the utterance as coherent speech. To someone who does not understand Chinese, the utterance will appear to be gibberish. Gibberish--the utterance of nonsense syllables uninterpretable within any natural language--always actualizes one utterance from the range of possible utterances. Nevertheless, gibberish, by corresponding to nothing we can understand in any language, also cannot be specified. As a result, gibberish is never taken for intelligent communication, but always for what Wittgenstein calls "inarticulate gurgling."

The actualization of one among several competing possibilities, the exclusion of the rest, and the specification of the possibility that was actualized encapsulates how we recognize intelligent causes, or equivalently, how we detect design. Actualization-Exclusion-Specification, this triad constitutes a general criterion for detecting intelligence, be it animal, human, or extra-terrestrial. Actualization establishes that the possibility in question is the one that actually occurred. Exclusion establishes that there was genuine contingency (i.e., that there were other live possibilities, and that these were ruled out). Specification establishes that the actualized possibility conforms to a pattern given independently of its actualization.

Now where does choice, which we've cited as the principal characteristic of intelligent causation, figure into this criterion? The problem is that we never witness choice directly. Instead, we witness actualizations of contingency which might be the result of choice (i.e., directed contingency), but which also might be the result of chance (i.e., blind contingency). Now there is only one way to tell the difference--specification. Specification is the only means available to us for distinguishing choice from chance, directed contingency from blind contingency. Actualization and exclusion together guarantee we are dealing with contingency. Specification guarantees we are dealing with a directed contingency. The Actualization-Exclusion-Specification triad is therefore precisely what we need to identify choice and therewith intelligent causation.

Psychologists who study animal learning and behavior have known of the Actualization-Exclusion-Specification triad all along, albeit implicitly. For these psychologists--known as learning theorists--learning is discrimination (cf. Mazur, 1990; Schwartz, 1984). To learn a task an animal must acquire the ability to actualize behaviors suitable for the task as well as the ability to exclude behaviors unsuitable for the task. Moreover, for a psychologist to recognize that an animal has learned a task, it is necessary not only to observe the animal making the appropriate behavior, but also to specify this behavior. Thus to recognize whether a rat has successfully learned how to traverse a maze, a psychologist must first specify the sequence of right and left turns that conducts the rat out of the maze. No doubt, a rat randomly wandering a maze also discriminates a sequence of right and left turns. But by randomly wandering the maze, the rat gives no indication that it can discriminate the appropriate sequence of right and left turns for exiting the maze. Consequently, the psychologist studying the rat will have no reason to think the rat has learned how to traverse the maze. Only if the rat executes the sequence of right and left turns specified by the psychologist will the psychologist recognize that the rat has learned how to traverse the maze. Now it is precisely the learned behaviors we regard as intelligent in animals. Hence it is no surprise that the same scheme for recognizing animal learning recurs for recognizing intelligent causes generally, to wit, actualization, exclusion, and specification.

Now this general scheme for recognizing intelligent causes coincides precisely with how we recognize complex specified information: First, the basic precondition for information to exist must hold, namely, contingency. Thus one must establish that any one of a multiplicity of distinct possibilities might obtain. Next, one must establish that the possibility which was actualized after the others were excluded was also specified. So far the match between this general scheme for recognizing intelligent causation and how we recognize complex specified information is exact. Only one loose end remains--complexity. Although complexity is essential to CSI (corresponding to the first letter of the acronym), its role in this general scheme for recognizing intelligent causation is not immediately evident. In this scheme one among several competing possibilities is actualized, the rest are excluded, and the possibility which was actualized is specified. Where in this scheme does complexity figure in?

The answer is that it is there implicitly. To see this, consider again a rat traversing a maze, but now take a very simple maze in which two right turns conduct the rat out of the maze. How will a psychologist studying the rat determine whether it has learned to exit the maze? Just putting the rat in the maze will not be enough. Because the maze is so simple, the rat could by chance just happen to take two right turns, and thereby exit the maze. The psychologist will therefore be uncertain whether the rat actually learned to exit this maze, or whether the rat just got lucky. But contrast this now with a complicated maze in which a rat must take just the right sequence of left and right turns to exit the maze. Suppose the rat must take one hundred appropriate right and left turns, and that any mistake will prevent the rat from exiting the maze. A psychologist who sees the rat take no erroneous turns and in short order exit the maze will be convinced that the rat has indeed learned how to exit the maze, and that this was not dumb luck. With the simple maze there is a substantial probability that the rat will exit the maze by chance; with the complicated maze this is exceedingly improbable. The role of complexity in detecting design is now clear since improbability is precisely what we mean by complexity (cf. section 2).
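
The probabilities at issue are easy to compute, assuming for the sketch that each turn is an independent 50/50 choice.

```python
p_simple  = 0.5 ** 2      # two correct turns: 0.25, i.e., 2 bits -- well within reach of chance
p_complex = 0.5 ** 100    # one hundred correct turns: about 8e-31, i.e., 100 bits -- not credibly chance

print(p_simple, p_complex)
```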

This argument for showing that CSI is a reliable indicator of design may now be summarized as follows: CSI is a reliable indicator of design because its recognition coincides with how we recognize intelligent causation generally. In general, to recognize intelligent causation we must establish that one from a range of competing possibilities was actualized, determine which possibilities were excluded, and then specify the possibility that was actualized. What's more, the competing possibilities that were excluded must be live possibilities, sufficiently numerous so that specifying the possibility that was actualized cannot be attributed to chance. In terms of probability, this means that the possibility that was specified is highly improbable. In terms of complexity, this means that the possibility that was specified is highly complex. All the elements in the general scheme for recognizing intelligent causation (i.e., Actualization-Exclusion-Specification) find their counterpart in complex specified information--CSI. CSI pinpoints what we need to be looking for when we detect design.

As a postscript, I call the reader's attention to the etymology of the word "intelligent." The word "intelligent" derives from two Latin words, the preposition inter, meaning between, and the verb lego, meaning to choose or select. Thus according to its etymology, intelligence consists in choosing between. It follows that the etymology of the word "intelligent" parallels the formal analysis of intelligent causation just given. "Intelligent design" is therefore a thoroughly apt phrase, signifying that design is inferred precisely because an intelligent cause has done what only an intelligent cause can do--make a choice.

5. The Law of Conservation of Information

Evolutionary biology has steadfastly resisted attributing CSI to intelligent causation. Although Manfred Eigen recognizes that the central problem of evolutionary biology is the origin of CSI, he has no thought of attributing CSI to intelligent causation. According to Eigen, natural causes are adequate to explain the origin of CSI. The only question for Eigen is which natural causes explain the origin of CSI. The logically prior question of whether natural causes are even in principle capable of explaining the origin of CSI he ignores. And yet it is a question that undermines Eigen's entire project. Natural causes are in principle incapable of explaining the origin of CSI. To be sure, natural causes can explain the flow of CSI, being ideally suited for transmitting already existing CSI. What natural causes cannot do, however, is originate CSI. This strong proscriptive claim, that natural causes can only transmit CSI but never originate it, I call the Law of Conservation of Information. It is this law that gives definite scientific content to the claim that CSI is intelligently caused. The aim of this last section is briefly to sketch the Law of Conservation of Information (a full treatment will be given in Uncommon Descent, a book I am jointly authoring with Stephen Meyer and Paul Nelson).

To see that natural causes cannot account for CSI is straightforward. Natural causes comprise chance and necessity (cf. Jacques Monod's book by that title). Because information presupposes contingency, necessity is by definition incapable of producing information, much less complex specified information. For there to be information there must be a multiplicity of live possibilities, one of which is actualized, and the rest of which are excluded. This is contingency. But if some outcome B is necessary given antecedent conditions A, then the probability of B given A is one, and the information in B given A is zero. If B is necessary given A, Formula (*) reduces to I(A&B) = I(A), which is to say that B contributes no new information to A. It follows that necessity is incapable of generating new information. Observe that what Eigen calls "algorithms" and "natural laws" fall under necessity.

Since information presupposes contingency, let us take a closer look at contingency. Contingency can assume only one of two forms. Either the contingency is a blind, purposeless contingency--which is chance; or it is a guided, purposeful contingency--which is intelligent causation. Since we already know that intelligent causation is capable of generating CSI (cf. section 4), let us next consider whether chance might also be capable of generating CSI. First notice that pure chance, entirely unsupplemented and left to its own devices, is incapable of generating CSI. Chance can generate complex unspecified information, and chance can generate non-complex specified information. What chance cannot generate is information that is jointly complex and specified.
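
A toy sketch may help fix the distinction; the strings and patterns below are assumptions chosen purely for illustration. A random 100-bit string is complex but answers to no independently given pattern, while hitting a one-bit pattern fixed in advance is specified but not complex.

```python
import math
import random

random.seed(1)

# Complex but unspecified: a random 100-bit string carries 100 bits of information
# yet matches no pattern given independently of the outcome.
random_bits = [random.randint(0, 1) for _ in range(100)]
print(-math.log2(0.5 ** 100))           # 100.0 bits

# Specified but not complex: a single toss matching a pattern fixed in advance.
target = 1                              # pattern given before the toss
toss = random.randint(0, 1)
print(toss == target, -math.log2(0.5))  # a 1-bit event, easily reached by chance
```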

Biologists by and large do not dispute this claim. Most agree that pure chance--what Hume called the Epicurean hypothesis--does not adequately explain CSI. Jacques Monod (1972) is one of the few exceptions, arguing that the origin of life, though vastly improbable, can nonetheless be attributed to chance because of a selection effect. Just as the winner of a lottery is shocked at winning, so we are shocked to have evolved. But the lottery was bound to have a winner, and so too something was bound to have evolved. Something vastly improbable was bound to happen, and so, the fact that it happened to us (i.e., that we were selected--hence the name selection effect) does not preclude chance. This is Monod's argument and it is fallacious. It fails utterly to come to grips with specification. Moreover, it confuses a necessary condition for life's existence with its explanation. Monod's argument has been refuted by the philosophers John Leslie (1989), John Earman (1987), and Richard Swinburne (1979). It has also been refuted by the biologists Francis Crick (1981, ch. 7), Bernd-Olaf Küppers (1990, ch. 6), and Hubert Yockey (1992, ch. 9). Selection effects do nothing to render chance an adequate explanation of CSI.

Most biologists therefore reject pure chance as an adequate explanation of CSI. The problem here is not simply one of faulty statistical reasoning. Pure chance is also scientifically unsatisfying as an explanation of CSI. To explain CSI in terms of pure chance is no more instructive than pleading ignorance or proclaiming CSI a mystery. It is one thing to explain the occurrence of heads on a single coin toss by appealing to chance. It is quite another, as Küppers (1990, p. 59) points out, to follow Monod and take the view that "the specific sequence of the nucleotides in the DNA molecule of the first organism came about by a purely random process in the early history of the earth." CSI cries out for explanation, and pure chance won't do. As Richard Dawkins (1987, p. 139) correctly notes, "We can accept a certain amount of luck in our [scientific] explanations, but not too much."

If chance and necessity left to themselves cannot generate CSI, is it possible that chance and necessity working together might generate CSI? The answer is No. Whenever chance and necessity work together, the respective contributions of chance and necessity can be arranged sequentially. But by arranging the respective contributions of chance and necessity sequentially, it becomes clear that at no point in the sequence is CSI generated. Consider the case of trial-and-error (trial corresponds to necessity and error to chance). Once considered a crude method of problem solving, trial-and-error has so risen in the estimation of scientists that it is now regarded as the ultimate source of wisdom and creativity in nature. The probabilistic algorithms of computer science (e.g., genetic algorithms--see Forrest, 1993) all depend on trial-and-error. So too, the Darwinian mechanism of mutation and natural selection is a trial-and-error combination in which mutation supplies the error and selection the trial. An error is committed after which a trial is made. But at no point is CSI generated.
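
The sequential structure just described can be displayed in a minimal sketch, with mutation supplying the error and selection the trial. The bit-counting fitness criterion is an assumption adopted solely to make the loop runnable; the sketch illustrates the trial-and-error structure, not any particular biological or computational claim.

```python
import random

random.seed(0)

def fitness(bits):
    return sum(bits)                       # assumed toy criterion for the trial step

current = [random.randint(0, 1) for _ in range(20)]
for _ in range(200):
    variant = current[:]
    i = random.randrange(len(variant))
    variant[i] ^= 1                        # error: a blind, random mutation
    if fitness(variant) >= fitness(current):
        current = variant                  # trial: selection retains the variant

print(fitness(current), "".join(map(str, current)))
```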

Natural causes are therefore incapable of generating CSI. This broad conclusion I call the Law of Conservation of Information, or LCI for short. LCI has profound implications for science. Among its corollaries are the following: (1) The CSI in a closed system of natural causes remains constant or decreases. (2) CSI cannot be generated spontaneously, originate endogenously, or organize itself (as these terms are used in origins-of-life research). (3) The CSI in a closed system of natural causes either has been in the system eternally or was at some point added exogenously (implying that the system though now closed was not always closed). (4) In particular, any closed system of natural causes that is also of finite duration received whatever CSI it contains before it became a closed system.

This last corollary is especially pertinent to the nature of science for it shows that scientific explanation is not coextensive with reductive explanation. Richard Dawkins, Daniel Dennett, and many scientists are convinced that proper scientific explanations must be reductive, moving from the complex to the simple. Thus Dawkins (1987, p. 316) will write, "The one thing that makes evolution such a neat theory is that it explains how organized complexity can arise out of primeval simplicity." Thus Dennett (1995, p. 153) will view any scientific explanation that moves from simple to complex as "question-begging." Thus Dawkins (1987, p. 13) will explicitly equate proper scientific explanation with what he calls "hierarchical reductionism," according to which "a complex entity at any particular level in the hierarchy of organization" must properly be explained "in terms of entities only one level down the hierarchy." While no one will deny that reductive explanation is extremely effective within science, it is hardly the only type of explanation available to science. The divide-and-conquer mode of analysis behind reductive explanation has strictly limited applicability within science. In particular, this mode of analysis is utterly incapable of making headway with CSI. CSI demands an intelligent cause. Natural causes will not do.


William A. Dembski, presented at Naturalism, Theism and the Scientific Enterprise: An Interdisciplinary Conference at the University of Texas, Feb. 20-23, 1997.


References

Barrow, John D. and Frank J. Tipler. 1986. The Anthropic Cosmological Principle. Oxford: Oxford University Press.
Behe, Michael. 1996. Darwin's Black Box: The Biochemical Challenge to Evolution. New York: The Free Press.
Bohm, David. 1993. The Undivided Universe: An Ontological Interpretation of Quantum Theory. London: Routledge.
Chaitin, Gregory J. 1966. On the Length of Programs for Computing Finite Binary Sequences. Journal of the ACM, 13:547-569.
Chalmers, David J. 1996. The Conscious Mind: In Search of a Fundamental Theory. New York: Oxford University Press.
Crick, Francis. 1981. Life Itself: Its Origin and Nature. New York: Simon and Schuster.
Dawkins, Richard. 1987. The Blind Watchmaker. New York: Norton.
Dembski, William A. 1998. The Design Inference: Eliminating Chance through Small Probabilities. Forthcoming, Cambridge University Press.
Dennett, Daniel C. 1995. Darwin's Dangerous Idea: Evolution and the Meanings of Life. New York: Simon & Schuster.
Devlin, Keith J. 1991. Logic and Information. New York: Cambridge University Press.
Dretske, Fred I. 1981. Knowledge and the Flow of Information. Cambridge, Mass.: MIT Press.
Earman, John. 1987. The Sap Also Rises: A Critical Examination of the Anthropic Principle. American Philosophical Quarterly, 24(4): 307-317.
Eigen, Manfred. 1992. Steps Towards Life: A Perspective on Evolution, translated by Paul Woolley. Oxford: Oxford University Press.
Forrest, Stephanie. 1993. Genetic Algorithms: Principles of Natural Selection Applied to Computation. Science, 261:872-878.
Hacking, Ian. 1965. Logic of Statistical Inference. Cambridge: Cambridge University Press.
Hamming, R. W. 1986. Coding and Information Theory, 2nd edition. Englewood Cliffs, N. J.: Prentice-Hall.
Kolmogorov, Andrei N. 1965. Three Approaches to the Quantitative Definition of Information. Problemy Peredachi Informatsii (in translation), 1(1): 3-11.
Küppers, Bernd-Olaf. 1990. Information and the Origin of Life. Cambridge, Mass.: MIT Press.
Landauer, Rolf. 1991. Information is Physical. Physics Today, May: 23-29.
Leslie, John. 1989. Universes. London: Routledge.
Mazur, James E. 1990. Learning and Behavior, 2nd edition. Englewood Cliffs, N.J.: Prentice Hall.
Monod, Jacques. 1972. Chance and Necessity. New York: Vintage.
Schwartz, Barry. 1984. Psychology of Learning and Behavior, 2nd edition. New York: Norton.
Shannon, Claude E. and W. Weaver. 1949. The Mathematical Theory of Communication. Urbana, Ill.: University of Illinois Press.
Stalnaker, Robert. 1984. Inquiry. Cambridge, Mass.: MIT Press.
Swinburne, Richard. 1979. The Existence of God. Oxford: Oxford University Press.
Wittgenstein, Ludwig. 1980. Culture and Value, edited by G. H. von Wright, translated by P. Winch. Chicago: University of Chicago Press.
Wouters, Arno. 1995. Viability Explanation. Biology and Philosophy, 10:435-457.
Yockey, Hubert P. 1992. Information Theory and Molecular Biology. Cambridge: Cambridge University Press.
