
A Slightly Technical Introduction to Intelligent Design

Biology, Physics, Mathematics, and Information Theory

What is Intelligent Design?

Intelligent design — often called “ID” — is a scientific theory that holds that the emergence of some features of the universe and living things is best explained by an intelligent cause rather than an undirected process such as natural selection. ID theorists argue that design can be inferred by studying the informational properties of natural objects to determine if they bear the type of information that in our experience arises from an intelligent cause.

Proponents of neo-Darwinian evolution contend that the information in life arose via purposeless, blind, and unguided processes. ID proponents argue that this information arose via purposeful, intelligently guided processes. Both claims are scientifically testable using the standard methods of science. But ID theorists say that when we use the scientific method to explore nature, the evidence points away from unguided material causes, and reveals intelligent design.

Intelligent Design in Everyday Reasoning

Whether we realize it or not, we detect design constantly in our everyday lives. In fact, our lives often depend on inferring intelligent design. Imagine you are driving along a road and come to a place where the asphalt is covered by a random splatter of paint. You would probably ignore the paint and keep driving onward.

But what if the paint is arranged in the form of a warning? In this case, you would probably make a design inference that could save your life. You would recognize that an intelligent agent was trying to communicate an important message.

Only an intelligent agent can use foresight to accomplish an end-goal — such as building a car or using written words to convey a message. Recognizing this unique ability of intelligent agents allows scientists in many fields to detect design.

Intelligent Design in Archaeology and Forensics

ID is in the business of discriminating between strictly naturally/materially caused objects on the one hand, and intelligently caused objects on the other. A variety of scientific fields already use ID reasoning. For example, when archaeologists find an object, they must determine whether it received its shape through natural processes (in which case it is just another rock, say) or whether it was carved for a purpose by an intelligence. Likewise, forensic scientists distinguish between naturally caused deaths (by disease, for example) and intelligently caused deaths (murder). These are important distinctions for our legal system, drawing on science and logical inference. Intelligent design theorists go about their research using similar reasoning. They ask: If we can use science to detect design in other fields, why should it be controversial when we detect it in biology or cosmology?

Here is how ID works. Scientists interested in detecting design start by observing how intelligent agents act when they design things. What we know about human agents provides a large dataset for this. One of the things we find is that when intelligent agents act, they generate a great deal of information. As ID theorist Stephen Meyer says: “Our experience-based knowledge of information-flow confirms that systems with large amounts of specified complexity (especially codes and languages) invariably originate from an intelligent source—from a mind or personal agent.”1

Thus ID seeks to find in nature reliable indications of the prior action of intelligence—specifically it seeks to find the types of information which are known to be produced by intelligent agents. Yet not all “information” is the same. What kind of information is known to be produced by intelligence? The type of information that indicates design is generally called “specified complexity” or “complex and specified information” or “CSI” for short. I will briefly explain what these terms mean.

Something is complex if it is unlikely. But complexity or unlikelihood alone is not enough to infer design. To see why, imagine that you are dealt a five-card hand of poker. Whatever hand you receive is going to be a very unlikely set of cards. Even if you get a good hand, like a straight or a royal flush, you’re not necessarily going to say, “Aha, the deck was stacked.” Why? Because unlikely things happen all the time. We don't infer design simply because something is unlikely. We need more: specification. Something is specified if it matches an independent pattern.
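To put numbers on this intuition, here is a minimal Python sketch (the poker figures are standard combinatorics, not from the text):

```python
from math import comb

# Total number of distinct five-card poker hands from a 52-card deck.
total_hands = comb(52, 5)            # 2,598,960

# ANY specific hand, including a worthless one, is equally improbable.
p_specific_hand = 1 / total_hands    # about 3.8e-07

# A royal flush matches an independently given pattern; there are
# exactly four of them, one per suit.
p_royal_flush = 4 / total_hands      # about 1.5e-06

print(f"distinct hands:   {total_hands:,}")
print(f"P(specific hand): {p_specific_hand:.1e}")
print(f"P(royal flush):   {p_royal_flush:.1e}")
```

On these numbers, a royal flush is scarcely less probable than the junk hand you usually receive; what triggers the design inference is the match to an independently given pattern, not the improbability by itself.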

A Tale of Two Mountains

Imagine you are a tourist visiting the mountains of North America. You come across Mount Rainier, a huge dormant volcano not far from Seattle. There are features of this mountain that differentiate it from any other mountain on Earth. In fact, if all possible combinations of rocks, peaks, ridges, gullies, cracks, and crags are considered, this exact shape is extremely unlikely and complex. But you don't infer design simply because Mount Rainier has a complex shape. Why? Because you can easily explain its shape through the natural processes of erosion, uplift, heating, cooling, freezing, thawing, weathering, etc. There is no special, independent pattern to the shape of Mount Rainier. Complexity alone is not enough to infer design.

But now let's say you go to a different mountain—Mount Rushmore in South Dakota. This mountain also has a very unlikely shape, but its shape is special. It matches a pattern—the faces of four famous Presidents. With Mount Rushmore, you don’t just observe complexity, you also find specification. Thus, you would infer that its shape was designed.

ID theorists ask “How can we apply this kind of reasoning to biology?” One of the greatest scientific discoveries of the past fifty years is that life is fundamentally built upon information. It's all around us. As you read a book, your brain processes information stored in the shapes of ink on the page. When you talk to a friend, you communicate information using sound-based language, transmitted through vibrations in air molecules. Computers work because they receive information, process it, and then give useful output.

Everyday life as we know it would be nearly impossible without the ability to use information. But could life itself exist without it? Carl Sagan observed that the “information content of a simple cell” is “around 10^12 bits, comparable to about a hundred million pages of the Encyclopedia Britannica.”2 Information forms the chemical blueprint for all living organisms, governing the assembly, structure, and function at essentially all levels of cells. But where does this information come from?

As I noted previously, ID begins with the observation that intelligent agents generate large quantities of CSI. Studies of the cell reveal vast quantities of information in our DNA, stored biochemically through the sequence of nucleotide bases. No physical or chemical law dictates the order of the nucleotide bases in our DNA, and the sequences are highly improbable and complex. Yet the coding regions of DNA exhibit very unlikely sequential arrangements of bases that match the precise pattern necessary to produce functional proteins. Experiments have found that the sequence of nucleotide bases in our DNA must be extremely precise in order to generate a functional protein. The odds that a random sequence of amino acids will generate a functional protein are less than 1 in 10^70.3 In other words, our DNA contains high CSI.

Thus, as nearly all molecular biologists now recognize, the coding regions of DNA possess a high “information content”—where “information content” in a biological context means precisely “complexity and specificity.” Even the staunch Darwinian biologist Richard Dawkins concedes that “[b]iology is the study of complicated things that give the appearance of having been designed for a purpose.”4 Atheists like Dawkins believe that unguided natural processes did all the "designing," but intelligent design theorist Stephen C. Meyer notes, “in all cases where we know the causal origin of ‘high information content,’ experience has shown that intelligent design played a causal role.”5

A DVD in Search of a DVD Player

But just having the information in our DNA isn't enough. By itself, a DNA molecule is useless. You need some kind of machinery to read the information in the DNA and produce some useful output. A lone DNA molecule is like having a DVD—and nothing more. A DVD might carry information, but without a machine to read that information, it's all but useless (maybe you could use it as a Frisbee). To read the information in a DVD, we need a DVD player. In the same way, our cells are equipped with machinery to help process the information in our DNA.

That machinery reads the commands and codes in our DNA much as a computer processes commands in computer code. Many authorities have recognized the computer-like information processing of the cell and the computer-like information-rich properties of DNA's language-based code. Bill Gates observes, “Human DNA is like a computer program but far, far more advanced than any software we've ever created.”6 Biotech guru Craig Venter says that “life is a DNA software system,”7 containing “digital information” or “digital code,” and the cell is a “biological machine” full of “protein robots.”8 Richard Dawkins has written that “[t]he machine code of the genes is uncannily computer-like.”9 Francis Collins, the leading geneticist who headed the human genome project, notes, “DNA is something like the hard drive on your computer,” containing “programming.”10
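To make the machine-reading metaphor concrete, here is a minimal sketch that runs a DNA string against a small excerpt of the standard genetic code (the sequence is invented for the example, and only six of the sixty-four codons are shown):

```python
# A small excerpt of the standard genetic code (DNA codon -> amino acid).
# The real table maps all 64 codons; these six suffice for the demo.
GENETIC_CODE = {
    "ATG": "Met",   # methionine; also the usual "start" signal
    "TGG": "Trp",   # tryptophan
    "GGC": "Gly",   # glycine
    "AAA": "Lys",   # lysine
    "TTT": "Phe",   # phenylalanine
    "TAA": "STOP",  # a stop codon: halt translation
}

def translate(dna: str) -> list[str]:
    """Read the string three letters (one codon) at a time, appending
    amino acids until a stop codon is reached, much as a machine
    executes instructions until it hits a halt command."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = GENETIC_CODE[dna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

# An invented coding sequence: a start codon, four further codons, then stop.
print(translate("ATGTGGGGCAAATTTTAA"))  # ['Met', 'Trp', 'Gly', 'Lys', 'Phe']
```

Note that the mapping lives in the lookup table rather than in the chemistry of the letters, which is the sense in which the authorities quoted above describe DNA as a code.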

Cells are thus constantly performing computer-like information processing. But what is the result of this process? Machinery. The more we discover about the cell, the more we learn that it functions like a miniature factory, replete with motors, powerhouses, garbage disposals, guarded gates, transportation corridors, CPUs, and much more. Bruce Alberts, former president of the U.S. National Academy of Sciences, has stated:

[T]he entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of a set of large protein machines. ... Why do we call the large protein assemblies that underlie cell function protein machines? Precisely because, like machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts.11

There are hundreds, if not thousands, of molecular machines in living cells. In discussions of ID, the most famous example of a molecular machine is the bacterial flagellum. The flagellum is a micro-molecular propeller assembly driven by a rotary engine that propels bacteria toward food or a hospitable living environment. There are various types of flagella, but all function like a rotary engine made by humans, as found in some car and boat motors. Flagella also contain many parts that are familiar to human engineers, including a rotor, a stator, a drive shaft, a U-joint, and a propeller. As one molecular biologist writes, “More so than other motors the flagellum resembles a machine designed by a human.”12 But there's something else that's special about the flagellum.

Introducing "Irreducible Complexity"

In applying ID to biology, ID theorists often discuss “irreducible complexity,” a concept developed and popularized by Lehigh University biochemist Michael Behe. Irreducible complexity is a form of specified complexity, which exists in systems composed of “several interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to effectively cease functioning.”13 Because natural selection only preserves structures that confer a functional advantage to an organism, such systems would be unlikely to evolve through a Darwinian process. Why? Because there is no evolutionary pathway where they could remain functional during each small evolutionary step. According to ID theorists, irreducible complexity is an informational pattern that reliably indicates design, because in all irreducibly complex systems in which the cause of the system is known by experience or observation, intelligent design or engineering played a role in the origin of the system.

Microbiologist Scott Minnich has performed genetic knockout experiments where each gene encoding a flagellar part is mutated individually such that it no longer functions. His experiments show that the flagellum fails to assemble or function properly if any one of its approximately 35 different protein-components is removed.14 By definition, it is irreducibly complex. In this all-or-nothing game, mutations cannot produce the complexity needed to evolve a functional flagellum one step at a time. The odds are also too daunting for it to evolve in one great mutational leap.

The past fifty years of biological research have shown that life is fundamentally based upon:

  • A vast amount of complex and specified information encoded in a biochemical language.
  • A computer-like system of commands and codes that processes the information.
  • Irreducibly complex molecular machines and multi-machine systems.

Where, in our experience, do language, complex and specified information, programming code, and machines come from? They have only one known source: intelligence.

Intelligent Design Extends Beyond Biology

But there's much more to ID. Contrary to what many people suppose, ID is much broader than the debate over Darwinian evolution. That's because much of the scientific evidence for intelligent design comes from areas that Darwin's theory doesn't even address. In fact, much of that evidence comes from physics and cosmology.

The fine-tuning of the laws of physics and chemistry to allow for advanced life is an example of extremely high levels of CSI in nature. The laws of the universe are complex because they are highly unlikely. Cosmologists have calculated that the odds of a life-friendly universe appearing by chance are less than 1 in 10^(10^123). That's 10 raised to a power that is itself a 1 followed by 123 zeros—a number far too long to write out! The laws of the universe are specified in that they match the narrow band of parameters required for the existence of advanced life. This high CSI indicates design. Even the atheist cosmologist Fred Hoyle observed, “A common sense interpretation of the facts suggests that a super intellect has monkeyed with physics, as well as with chemistry and biology.”15 From the tiniest atom, to living organisms, to the architecture of the entire cosmos, the fabric of nature shows strong evidence that it was intelligently designed.

Using Mathematics to Detect Design

Intelligent design has its roots in information theory, and design can be detected via statistical and mathematical calculation.

As noted, ID theorists begin by observing the types of information produced by the action of intelligent agents vs. the types of information produced through purely natural processes. By making these observations, we can infer that intelligence is the best explanation for many information-rich features we see in nature. But can the inference to design be made rigorously using mathematics? ID theorists think we can, by mathematically quantifying the amount of information present and determining if it is the type of information which, in our experience, is only produced by intelligence.

The fact that information is a real entity is attested by scientists both inside and outside the ID movement. In his essay “Intelligent Design as a Theory of Information,” the pro-ID mathematician and philosopher William Dembski notes:

No one disputes that there is such a thing as information. As Keith Devlin remarks, “Our very lives depend upon it, upon its gathering, storage, manipulation, transmission, security, and so on. Huge amounts of money change hands in exchange for information. People talk about it all the time. Lives are lost in its pursuit. Vast commercial empires are created in order to manufacture equipment to handle it.”16

The fundamental intuition behind measuring information is a reduction in possibilities: the more possibilities you rule out, the more information is present. Thus Dembski uses accepted definitions from the field of information theory that define information as the occurrence of one event or scenario to the exclusion of others. In other words, information is what you get when you narrow down what you're talking about.

The amount of information in a system, or represented by some event, can be calculated by determining the probability of that scenario and converting that probability into units of information called “bits.” These are the same “bits” and “bytes” from the computer world. We can calculate bits according to the following equation:

Given an event or scenario of probability p, the information content is I = -log2(p).
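This conversion is a one-line computation; here is a minimal sketch (the paragraph that follows walks through the binary case in prose, and the poker and protein figures echo numbers used elsewhere in this article):

```python
from math import log2

def bits(p: float) -> float:
    """Information content I = -log2(p) for an event of probability p."""
    return -log2(p)

print(bits(0.5))            # one binary digit: 1.0 bit
print(bits(0.5 ** 5))       # the five-character string "00110": 5.0 bits
print(bits(1 / 2_598_960))  # one specific five-card poker hand: ~21.3 bits
print(70 * log2(10))        # the 1-in-10^70 protein estimate: ~232.5 bits
```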

For example, in binary code, each character has two possibilities—0 or 1—meaning the probability of any character is 0.5. Using the formula above, this leads to an information content of 1 bit for each binary digit. Thus, a binary string like “00110” contains 5 bits. But saying “this string carries 5 bits of information” says nothing about the meaning of the string! It only describes the likelihood of the string occurring. Nobel Prize-winning molecular biologist Jack Szostak explains that this classical method of measuring information via raw probabilities does not help us discern the functional meaning of an information-rich system:

[C]lassical information theory ... does not consider the meaning of a message, defining the information content of a string of symbols as simply that required to specify, store or transmit the string. ... A new measure of information—functional information—is required to account for all possible sequences that could potentially carry out an equivalent biochemical function, independent of the structure or mechanism used.17

Szostak suggests that we must look at more than just the likelihood (i.e., the probability or raw information content in bits) to understand the functional workings of natural systems. We must look at the meaning of the information as well. ID theorists feel the same way.

To measure both the information content and the meaning of some event, Dembski developed the concept of complex and specified information (CSI), which was discussed earlier. To review, this method of detecting design determines not only whether an event is unlikely (i.e., has high information content), but also whether it matches a pre-existing pattern or “specification” (i.e., has some functional meaning). This is seen in the diagram below:

[Figure: CSI plotted against the limits of natural processes; Points A and B and Curve C are described in the following paragraphs.]

In the figure, Point A, which bears low CSI, represents something best explained by natural processes. Point B, which has high CSI, represents something best explained by design. Curve C represents the upper limit to what natural processes can produce: the "universal probability bound." Anything far beyond Curve C is best explained by design; anything far within Curve C is best explained by natural processes.

As the figure illustrates, there is a limit to the amount of CSI that can be produced by natural processes (represented by Curve C). When we see a specified event that is highly unlikely—high CSI—we know that natural processes were not involved, and that intelligent design is the best explanation. When low information content is involved, natural causes can produce the feature in question, and the best explanation is some natural cause.

To help us discriminate between systems that could arise naturally and those that are best explained by design, ID proponents have developed the “universal probability bound,” a measure of the maximum amount of CSI that could be produced during the entire history of the universe. In essence, if the CSI content of a system exceeds the universal probability bound, then natural causes cannot explain that feature and it can only be explained by intelligent design. Dembski and Jonathan Witt explain it this way:

Scientists have learned that within the known physical universe there are about 10^80 elementary particles ... Scientists also have learned that a change from one state of matter to another can’t happen faster than what physicists call the Planck time. ... The Planck time is 1 second divided by 10^45 (1 followed by forty-five zeroes). ... Finally, scientists estimate that the universe is about fourteen billion years old, meaning the universe is itself millions of times younger than 10^25 seconds. If we now assume that any physical event in the universe requires the transition of at least one elementary particle (most events require far more, of course), then these limits on the universe suggest that the total number of events throughout cosmic history could not have exceeded 10^80 x 10^45 x 10^25 = 10^150.

This means that any specified event whose probability is less than 1 chance in 10^150 will remain improbable even if we let every corner and every moment of the universe roll the proverbial dice. The universe isn’t big enough, fast enough or old enough to roll the dice enough times to have a realistic chance of randomly generating specified events that are this improbable.18

Using our equation for calculating bits, an event whose probability is 1 in 10^150 carries about 500 bits of information. This means that if the CSI content of a system is greater than 500 bits, then we can rule out blind material causes and infer intelligent design. Dembski has applied this method to the bacterial flagellum, an irreducibly complex molecular machine which contains high CSI, and calculated that it contains a few thousand bits of information—far greater than what can be produced by natural causes according to the universal probability bound.
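Both the event count and the bit conversion are easy to verify; here is a minimal sketch that works with exponents of 10 to avoid floating-point range limits:

```python
from math import log2

# Upper bound on the number of physical events in cosmic history,
# working with exponents of 10: particles x maximum state changes
# per second (the Planck-time rate) x seconds available.
particles_exp, planck_exp, time_exp = 80, 45, 25
events_exp = particles_exp + planck_exp + time_exp
print(f"maximum number of events: 10^{events_exp}")  # 10^150

# Bits carried by a 1-in-10^150 event: -log2(10**-150) = 150 * log2(10).
print(f"universal probability bound: ~{150 * log2(10):.0f} bits")  # ~498 bits
```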

But ID theorists have developed other ways to research the limits of what can be produced by natural processes, especially in the context of Darwinian evolution.

Intelligent Design and the Limits of Natural Selection

Intelligent design does not reject all aspects of evolution. Evolution can mean something as benign as (1) “life has changed over time,” or it can entail more controversial ideas, like (2) “all living things share common ancestry,” or (3) “natural selection acting upon random mutations produced life’s diversity.”

ID does not conflict with the observation that natural selection causes small-scale changes over time (meaning 1), or the view that all organisms are related by common ancestry (meaning 2). However, the dominant evolutionary viewpoint today is neo-Darwinism (meaning 3), which contends that life’s entire history was driven by unguided natural selection acting on random mutations (as well as other forces like genetic drift)—a collection of blind, purposeless processes with no direction or goal. It is this specific neo-Darwinian claim that ID directly challenges.

Darwinian evolution can work fine when one small step (e.g., a single point mutation) along an evolutionary pathway gives an advantage that helps an organism survive and reproduce. The theory of ID has no problem with this, and acknowledges that there are many small-scale changes that Darwinian mechanisms can produce.

But what about cases where many steps, or multiple mutations, are necessary to gain some advantage? Here, Darwinian evolution faces limits on what it can accomplish. Evolutionary biologist Jerry Coyne affirms this when he states: “natural selection cannot build any feature in which intermediate steps do not confer a net benefit on the organism.”19 Likewise, Darwin wrote in The Origin of Species:

If it could be demonstrated that any complex organ existed, which could not possibly have been formed by numerous, successive, slight modifications, my theory would absolutely break down.

As Darwin’s quote suggests, natural selection gets stuck when a feature cannot be built through “numerous, successive, slight modifications”—that is, when a structure requires multiple mutations to be present before providing any advantage for natural selection to select. Proponents of intelligent design have done research showing that many such biological structures exist which would require multiple mutations before providing some advantage.

In 2004, biochemist Michael Behe co-published a study in Protein Science with physicist David Snoke demonstrating that if multiple mutations were required to produce a functional bond between two proteins, then “the mechanism of gene duplication and point mutation alone would be ineffective because few multicellular species reach the required population sizes.”20

In 2008, critics of Behe and Snoke, writing in the journal Genetics, tried to refute them, but failed. The critics found that, in a human population, obtaining a feature via Darwinian evolution that required only two mutations before providing an advantage “would take > 100 million years,” which they admitted was “very unlikely to occur on a reasonable timescale.”21 Such “multi-mutation features” are thus unlikely to evolve in humans, which have small population sizes and long generation times, reducing the efficiency of the Darwinian mechanism.

But can Darwinian processes produce complex multi-mutation features in bacteria, which have larger population sizes and reproduce rapidly? Even here, Darwinian evolution faces limits.

In a 2010 peer-reviewed study, molecular biologist Douglas Axe calculated that when a “multi-mutation feature” requires more than six mutations before giving any benefit, it is unlikely to arise even in the whole history of the Earth—even in the case of bacteria.22 He provided empirical backing for this conclusion from experimental research he had earlier published in the Journal of Molecular Biology, where he found that only one in 10^74 amino-acid sequences yields a functional protein fold.23 That implies that protein folds in general are multi-mutation features, requiring many amino acids to be in place before there is any functional advantage.
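For comparison with the bit units used earlier, Axe's one-in-10^74 estimate can be run through the same formula; a quick sketch (this conversion is my own arithmetic, not a figure from Axe's paper):

```python
from math import log2

# Bits implied by a 1-in-10^74 probability: -log2(10**-74) = 74 * log2(10).
print(f"~{74 * log2(10):.0f} bits per functional protein fold")  # ~246 bits
```

On these numbers, a single fold falls below the 500-bit universal probability bound discussed earlier, while a structure requiring several independent folds would exceed it.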

Another study, by Axe and biologist Ann Gauger, found that merely converting one enzyme to perform the function of a closely related enzyme—the kind of conversion that evolutionists claim can happen easily—would require a minimum of seven mutations.24 This exceeds the limit of what the Darwinian mechanism can produce over the Earth’s entire history, as calculated in Axe’s 2010 paper.

A later study, published in 2014 by Gauger, Axe, and biologist Mariclair Reeves, bolstered this finding. They examined additional proteins to determine whether they could be converted via mutation to perform the function of a closely related protein.25 After inducing all possible single mutations in the enzymes, and many other combinations of mutations, they found that evolving a protein, via Darwinian evolution, to perform the function of a closely related protein would take over 10^15 years—over 100,000 times longer than the age of the Earth!

Collectively, these research results indicate that many biochemical features would require many mutations before providing any advantage to an organism, and would thus be beyond the limit of what Darwinian evolution can do. If blind evolution cannot build these CSI-rich features, what can? Some non-random process is necessary that can “look ahead” and find the complex combinations of mutations to generate these high-CSI features. That process is intelligent design.

A Positive Argument or God of the Gaps?

When arguing against ID, some critics will contend that ID is merely a negative argument against evolution, what some will call a “God-of-the-gaps” argument. A “God-of-the-gaps” argument, critics observe, argues for God based upon gaps in our knowledge, rather than presenting a positive argument. Moreover, it is said that “God-of-the-gaps” arguments are dangerous to faith, because as our knowledge increases, our basis for believing in God is squeezed into smaller and smaller “gaps” in our knowledge. Eventually, the argument goes, there is no reason for believing in God at all. Does ID present a God-of-the-gaps argument? It does not, for many reasons.

First, ID refers to an intelligent cause and does not identify the designer as “God.” All ID scientifically detects is the prior action of an intelligent cause. ID respects the limits of scientific inquiry and does not attempt to address religious questions about the identity of the designer. Indeed, the ID movement includes people of many worldviews, including Christians, Jews, Muslims, people of Eastern religious views, and even agnostics. What unites them is not some religious view about the identity of the designer, but a conviction that there is scientific evidence for intelligent design in nature.

More to the point, the argument for design is not based on what we don’t know (i.e., gaps in our knowledge), but is rather based entirely on what we do know (evidence) about the known causes of information-rich systems. For example, irreducibly complex molecular machines contain high CSI, and we know from experience that high-CSI systems arise from the action of an intelligent agent. To elaborate on a quote given earlier from Stephen Meyer:

[W]e have repeated experience of rational and conscious agents—in particular ourselves—generating or causing increases in complex specified information, both in the form of sequence-specific lines of code and in the form of hierarchically arranged systems of parts. ... Our experience-based knowledge of information-flow confirms that systems with large amounts of specified complexity (especially codes and languages) invariably originate from an intelligent source—from a mind or personal agent.26

Similarly, Meyer and biochemist Scott Minnich explain that irreducibly complex systems in particular are always known to derive from an intelligent cause:

Molecular machines display a key signature or hallmark of design, namely, irreducible complexity. In all irreducibly complex systems in which the cause of the system is known by experience or observation, intelligent design or engineering played a role in the origin of the system. ... Indeed, in any other context we would immediately recognize such systems as the product of very intelligent engineering. Although some may argue this is merely an argument from ignorance, we regard it as an inference to the best explanation, given what we know about the powers of intelligent as opposed to strictly natural or material causes.27

It’s important to understand that when ID theorists argue that we can find in nature the kind of information and complexity that comes from intelligence, they are not making a mere argument from analogy. When natural systems are reduced to their raw informational properties, those properties are mathematically identical to the ones found in designed systems. Molecular biologist Hubert Yockey, though not an ID proponent, explains that the form of information in DNA is identical to what we find in language:

It is important to understand that we are not reasoning by analogy. The sequence hypothesis [that the exact order of symbols records the information] applies directly to the protein and the genetic text as well as to written language and therefore the treatment is mathematically identical.28

Yockey rightly observes that the informational properties of DNA are mathematically identical to those of language. Thus, the argument for design is much stronger than a mere appeal to analogy, and we don't infer design merely by finding and exploiting alleged “gaps” in our knowledge. Rather, ID is based upon the positive argument that nature contains the kind of information and complexity which, in our experience, comes only from the action of intelligence. Accordingly, intelligent design is, by standard scientific methods, the best explanation for high CSI in nature.

Using the Scientific Method to Positively Detect Design

As a final demonstration of how ID uses a positive scientific argument, consider how the scientific method can be used to detect design. The scientific method is commonly described as a four-step process involving observation, hypothesis, experiment, and conclusion. ID uses this precise scientific method to make a positive case for design in various scientific fields, including biochemistry, paleontology, systematics, and genetics:

Example 1—Using the Scientific Method to Detect Design in Biochemistry:

  • Observation: Intelligent agents solve complex problems by acting with an end goal in mind, producing high levels of CSI. In our experience, systems with large amounts of CSI—such as codes and languages—invariably originate from an intelligent source. Likewise, in our experience, intelligence is the cause of irreducibly complex machines.
  • Hypothesis (Prediction): Natural structures will be found that contain many parts arranged in intricate patterns that perform a specific function—indicating high levels of CSI, including irreducible complexity.
  • Experiment: Experimental investigations of DNA indicate that it is full of a CSI-rich, language-based code. Cells use computer-like information-processing systems to translate the genetic information in DNA into proteins. Biologists have performed mutational sensitivity tests on proteins and determined that their amino acid sequences are highly specified. The end result of the cell's information-processing system is a suite of protein-based molecular machines. Genetic knockout experiments and other studies show that some molecular machines, like the bacterial flagellum, are irreducibly complex.
  • Conclusion: The high levels of CSI—including irreducible complexity—in biochemical systems are best explained by the action of an intelligent agent.

Example 2—Using the Scientific Method to Detect Design in Paleontology:

  • Observation: Intelligent agents rapidly infuse large amounts of information into systems. As four ID theorists write: "intelligent design provides a sufficient causal explanation for the origin of large amounts of information… the intelligent design of a blueprint often precedes the assembly of parts in accord with a blueprint or preconceived design plan.”
  • Hypothesis (Prediction): Forms containing large amounts of novel information will appear in the fossil record suddenly and without similar precursors.
  • Experiment: Studies of the fossil record show that species typically appear abruptly without similar precursors. The Cambrian explosion is a prime example, although there are other examples of explosions in life’s history. Large amounts of CSI had to arise rapidly to explain the abrupt appearance of these forms.
  • Conclusion: The abrupt appearance of new fully formed body plans in the fossil record is best explained by intelligent design.

Example 3—Using the Scientific Method to Detect Design in Systematics:

  • Observation: Intelligent agents often reuse functional components in different designs. As Paul Nelson and Jonathan Wells explain: “An intelligent cause may reuse or redeploy the same module in different systems… [and] generate identical patterns independently.”
  • Hypothesis (Prediction): Genes and other functional parts will be commonly reused in different organisms.
  • Experiment: Studies of comparative anatomy and genetics have uncovered similar parts commonly existing in widely different organisms. Examples of extreme convergent evolution show reuse of functional genes and structures in a manner not predicted by common ancestry.
  • Conclusion: The reuse of highly similar and complex parts in widely different organisms in non-treelike patterns is best explained by the action of an intelligent agent.

Example 4—Using the Scientific Method to Detect Design in Genetics:

  • Observation: Intelligent agents construct structures with purpose and function. As William Dembski argues: “Consider the term ‘junk DNA.’… [O]n an evolutionary view we expect a lot of useless DNA. If, on the other hand, organisms are designed, we expect DNA, as much as possible, to exhibit function.”
  • Hypothesis (Prediction): Much so-called “junk DNA” will turn out to perform valuable functions.
  • Experiment: Numerous studies have discovered functions for “junk DNA.” Examples include functions for pseudogenes, introns, and repetitive DNA.
  • Conclusion: The discovery of function for numerous types of “junk DNA” was successfully predicted by intelligent design.

One might disagree with the conclusions of ID, but one cannot reasonably claim that these arguments for design are based upon religion, faith, or divine revelation. They are based upon science.

Follow the Evidence Where It Leads

There will, of course, always be gaps in scientific knowledge. But when critics accuse ID of being a “gaps-based” argument, they essentially insist that all gaps may only be filled with naturalistic explanations, and promote “materialism-of-the-gaps” thinking. This precludes scientists from fully seeking the truth and finding evidence for design in nature. ID rejects gaps-based reasoning of all kinds, and follows the motto that we should “follow the evidence wherever it leads.”

Adding ID to our explanatory toolkit leads to many advances in different scientific fields. In biochemistry, ID allows us to better understand the workings and origin of molecular machines. In paleontology, ID helps resolve long-standing questions about patterns of abrupt appearance—and disappearance—of species. In systematics, ID explains why studies of biomolecules and anatomy are failing to yield a grand “tree of life.” In genetics, ID leads biology into a new paradigm where life is full of functional, information-rich molecules containing new layers of code and regulation. In this way, ID is best poised to lead biology into an information age that uncovers the complex, information-based genetic and epigenetic workings of life.

ID has scientific merit because it uses well-accepted methods of historical sciences in order to detect in nature the types of complexity that we understand, from present-day observations, are derived from intelligent causes. From top to bottom, when we study nature through science, we find evidence of fine-tuning and planning—intelligent design—from the macro-architecture of the entire universe to the tiniest submicroscopic biomolecular machines. The more we understand nature, the more clearly we see it is filled with evidence for design.


References Cited

[1.] Stephen C. Meyer, “The origin of biological information and the higher taxonomic categories,” Proceedings of the Biological Society of Washington, 117(2):213-239 (2004).

[2.] Carl Sagan, “Life,” in Encyclopedia Britannica: Macropaedia Vol. 10 (Encyclopedia Britannica, Inc., 1984), 894.

[3.] Douglas D. Axe, “Extreme Functional Sensitivity to Conservative Amino Acid Changes on Enzyme Exteriors,” Journal of Molecular Biology, 301:585-595 (2000); Douglas D. Axe, “Estimating the Prevalence of Protein Sequences Adopting Functional Enzyme Folds,” Journal of Molecular Biology, 341: 1295–1315 (2004).

[4.] Richard Dawkins, The Blind Watchmaker (New York: W. W. Norton, 1986), 1.

[5.] Stephen C. Meyer et al., “The Cambrian Explosion: Biology's Big Bang,” in Darwinism, Design, and Public Education, J. A. Campbell and S. C. Meyer eds. (Michigan State University Press, 2003).

[6.] Bill Gates, N. Myhrvold, and P. Rinearson, The Road Ahead: Completely Revised and Up-To-Date (Penguin Books, 1996), 228.

[7.] J. Craig Venter, “The Big Idea: Craig Venter On the Future of Life,” The Daily Beast (October 25, 2013), accessed October 25, 2013, www.thedailybeast.com/articles/2013/10/25/the-big-idea-craig-venter-the-future-of-life.html.

[8.] J. Craig Venter, quoted in Casey Luskin, “Craig Venter in Seattle: ‘Life Is a DNA Software System’,” (October 24, 2013), www.evolutionnews.org/2013/10/craig_venter_in078301.html.

[9.] Richard Dawkins, River Out of Eden: A Darwinian View of Life (New York: Basic Books, 1995), 17.

[10.] Francis Collins, The Language of God: A Scientist Presents Evidence for Belief (New York: Free Press, 2006), 91.

[11.] Bruce Alberts, “The Cell as a Collection of Protein Machines: Preparing the Next Generation of Molecular Biologists,” Cell, 92: 291-294 (Feb. 6, 1998).

[12.] David J. DeRosier, “The Turn of the Screw: The Bacterial Flagellar Motor,” Cell, 93: 17-20 (April 3, 1998).

[13.] Michael J. Behe, Darwin's Black Box: The Biochemical Challenge to Darwinism (Free Press 1996), 39.

[14.] Transcript of testimony of Scott Minnich, Kitzmiller et al. v. Dover Area School Board (M.D. Pa., PM Testimony, November 3, 2005), 103-112. See also Table 1 in R. M. Macnab, “Flagella,” in Escherichia Coli and Salmonella Typhimurium: Cellular and Molecular Biology Vol. 1, eds. F. C. Neidhardt, J. L. Ingraham, K. B. Low, B. Magasanik, M. Schaechter, and H. E. Umbarger (Washington D.C.: American Society for Microbiology, 1987), 73-74.

[15.] Fred Hoyle, “The Universe: Past and Present Reflections,” Engineering and Science, pp. 8-12 (November, 1981).

[16.] William Dembski, “Intelligent Design as a Theory of Information,” Naturalism, Theism and the Scientific Enterprise: An Interdisciplinary Conference at the University of Texas, Feb. 20-23, 1997, http://www.discovery.org/a/118 (citations omitted).

[17.] Jack W. Szostak, “Molecular messages,” Nature, 423: 689 (June 12, 2003).

[18.] William Dembski and Jonathan Witt, Intelligent Design Uncensored, pp. 68-69 (InterVarsity Press, 2010).

[19.] Jerry Coyne, “The Great Mutator,” The New Republic (June 14, 2007).

[20.] Michael Behe and David Snoke, “Simulating Evolution by Gene Duplication of Protein Features That Require Multiple Amino Acid Residues,” Protein Science, 13: 2651-2664 (2004).

[21.] Rick Durrett and Deena Schmidt, “Waiting for Two Mutations: With Applications to Regulatory Sequence Evolution and the Limits of Darwinian Evolution,” Genetics, 180:1501-1509 (2008).

[22.] Douglas Axe, “The Limits of Complex Adaptation: An Analysis Based on a Simple Model of Structured Bacterial Populations,” BIO-Complexity, 2010 (4): 1-10.

[23.] Axe, “Estimating the Prevalence of Protein Sequences Adopting Functional Enzyme Folds”; Axe, “Extreme Functional Sensitivity to Conservative Amino Acid Changes on Enzyme Exteriors.”

[24.] Ann Gauger and Douglas Axe, "The Evolutionary Accessibility of New Enzyme Functions: A Case Study from the Biotin Pathway," BIO-Complexity, 2011 (1): 1-17.

[25.] Mariclair A. Reeves, Ann K. Gauger, Douglas D. Axe, “Enzyme Families—Shared Evolutionary History or Shared Design? A Study of the GABA-Aminotransferase Family,” BIO-Complexity, 2014 (4): 1-16.

[26.] Meyer, “The origin of biological information and the higher taxonomic categories.”

[27.] Scott A. Minnich and Stephen C. Meyer, “Genetic analysis of coordinate flagellar and type III regulatory circuits in pathogenic bacteria,” in Proceedings of the Second International Conference on Design & Nature, Rhodes, Greece, p. 8 (M. W. Collins and C. A. Brebbia eds., 2004).

[28.] Hubert P. Yockey, “Self Organization Origin of Life Scenarios and Information Theory,” Journal of Theoretical Biology, 91:13-31 (1981).


Copyright © 2016. Version 1.0 Permission Granted to Reproduce for Educational Purposes.

Christians who work in the natural sciences are dogged by a persistent bogeyman: a singular creature called the God of the gaps. Should a believer ever conclude that natural forces are inadequate to produce some phenomenon in the natural world, the bogeyman is poised to spring from the shadows. Its power to intimidate can be seen in an essay by philosopher Nancey Murphy in Perspectives on Science and Christian Faith in which she backs away from any suggestion that Christians ought to challenge naturalism in science. Murphy argues that believers are properly “wary of invoking divine action in any way in science, especially in biology, fearing that science will advance, providing the naturalistic explanations that will make God appear once again to have been an unnecessary hypothesis.” Such Christians fear being cast in the same category as primitives attributing thunder to the raging of their gods.

But in a deft turning of the tables, Stephen M. Barr conjured up a counter-spook in “The Atheism of the Gaps” (First Things, www.firstthings.com, November 1995). We are so conditioned to expect scientific breakthroughs that exceed our expectations, Barr observed, that we reflexively reject any idea that science has limits. Yet science reveals not only the rich possibilities of nature but also its limitations. To give obvious examples, we know that we will never fulfill the alchemists’ dream of chemically transmuting lead into gold. We know that a parent of one species will never give birth to offspring of another species. Science reveals consistent patterns that allow us to make negative statements about what natural forces cannot do. To persist in seeking natural laws in such cases, Barr suggested, is as irrational as any primitive myth of the thunder gods.

The example Barr considered was human consciousness, focusing on Roger Penrose’s argument that the mind cannot be explained by currently known laws of physics. As a convinced materialist, Penrose holds out the hope that new and unimaginable physical laws are nonetheless out there, just waiting to be discovered. As Barr comments, materialism clearly functions for Penrose as a faith of the gaps: When science reveals phenomena that surpass the explanatory power of known natural laws, materialism takes refuge in the hope of turning up “undiscovered and unprecedented” laws, different in kind from any currently known.

Another field that often elicits the materialist’s faith of the gaps is origin-of-life research. The discovery of DNA revealed that at the core of life is a molecular message that contains a staggering quantity of information. A single cell of the human body contains as much information as the Encyclopedia Britannica—all thirty volumes of it—three or four times over. As a result, the question of the origin of life must now be recast as the origin of biological information.

The materialist is committed to constructing an explanation that appeals solely to physical-chemical laws. And it is true that the bases, sugars, and phosphates comprising the nucleotides in DNA are ordinary chemicals that react according to ordinary laws. Yet those same laws do not explain how the chemicals came to function as a cellular language. We know, after all, the characteristic effects of physical forces: They create either random patterns, like the pile of leaves against my back fence, or else ordered, repetitive structures, like ripples on a beach or the molecular structure of crystals.
But information theory teaches us that neither random nor repetitive structures carry high levels of information. The information content of any structure is defined as the minimum number of instructions needed to specify it. For example, a random pattern of letters has a low information content because it requires very few instructions: 1) Select a letter of the English alphabet and write it down, and 2) Do it again. A highly ordered but repetitive pattern likewise has low information content. Wrapping paper with “Merry Christmas” printed all over in ornate gold letters is highly ordered, but it can be specified with very few instructions: 1) Write “M-e-r-r-y C-h-r-i-s-t-m-a-s,” and 2) Do it again. By contrast, a structure with high information content requires a large number of instructions. If you want your computer to print out the poem “‘Twas the Night Before Christmas,” you must specify every letter, one by one. There are no shortcuts. This is the kind of order we find in DNA. It would be impossible to produce a simple set of instructions telling a chemist how to synthesize the DNA of even the simplest bacterium. You would have to specify every chemical “letter,” one by one.

The high level of complexity in DNA has led researchers to abandon chance theories of life’s origin in favor of theories of spontaneous self-organization. The guiding principle in the field today is (in the words of chemist Cyril Ponnamperuma) that “there are inherent properties in the atoms and molecules which seem to direct the synthesis in the direction most favorable” for producing the macromolecules of life. But so far no one has been able to identify these mysterious self-organizing properties.

The best that scientists can do is draw analogies to spontaneous ordering in nonliving structures, such as crystals. The unique structure of any crystal is the result of what we might think of as the “shape” of its atoms (or ions), which causes them to slot into a particular position and to layer themselves in a fixed, orderly pattern. “If we could shrink ourselves to the atomic scale,” writes zoologist Richard Dawkins in The Blind Watchmaker, “we would see almost endless rows of atoms stretching to the horizon in straight lines—galleries of geometric repetition.”

Many scientists find it irresistible to draw an analogy between this example of spontaneous ordering and the origin of DNA. For example, chemist Graham Cairns-Smith proposes that DNA originated by sticking to the surface of crystals in certain clays, with the crystals acting as a template to organize life’s building blocks in precise arrays. In Darwin’s Dangerous Idea, philosopher Daniel Dennett goes so far as to speak of DNA as itself a carbon-based, self-replicating crystal.

But the fatal flaw in all such theories is that crystals, while highly ordered, are low in information content. The structure of a crystal is strictly repetitive—“galleries of geometric repetition.” If the forces that produced DNA were analogous to those that produce a crystal, then DNA would consist of a single or at most a few patterns repeating again and again—like Christmas wrapping paper—and it would be incapable of storing and transmitting large quantities of information. Nor is this problem solved by newer theories of complexity. In At Home in the Universe, Stuart Kauffman claims that complexity theory will uncover laws that make life inevitable.
But the ferns, swirls, and snowflakes that complexity theorists construct on their computer screens represent the same kind of order as crystals. In Kauffman’s words, they are constructed by the repeated application of a few “astonishingly simple rules.” Like crystals, these structures can be specified with only a few instructions, followed by “Do it again.”

The upshot is that DNA exhibits too much “design work” (as Cairns-Smith puts it) to be the product of mere chance, yet there are no known physical laws capable of doing the necessary work. Once we apply the tools of information theory, all the plausible candidates fall out of the race. No known physical laws produce the right kind of ordered structure: one with high information content.

This is not a statement about our ignorance—a “gap” in knowledge that one might be tempted to bridge with an appeal to the supernatural. Rather, it is a statement about what we know—about the consistent character of natural laws. If the structure of the DNA molecule were a regular, repeated pattern, then it would make sense to look for a general law of assembly to explain its origin. But instead we must look for something that specified each nucleotide one by one.

We also know, from information theory, how codes work. Encoded messages are independent of the physical medium used to store and transmit them. If we knew how to translate the message in a DNA molecule, we could write it out using ink or crayon or electronic impulses from a keyboard. We could even take a stick and write it in the sand—all without affecting its meaning. In other words, the sequence of “letters” in DNA is chemically arbitrary: There is nothing intrinsic in the chemicals themselves that explains why particular sequences carry a particular message. In the words of chemist-turned-philosopher Michael Polanyi, the sequence of nucleotides is “extraneous to” the physical and chemical properties within the molecule—which is to say, the sequence is not determined by inherent physical-chemical forces. In fact, it is precisely this “physical indeterminacy” (Polanyi’s phrase) that gives nucleotides the flexibility to function as letters in a message—to be arranged and rearranged in a host of unpredictable patterns, like the letters on a page.

But physical indeterminacy also implies that physical forces did not originate the pattern—any more than the text on this page originated from the physical properties of the paper and ink. If we consult everyday experience, we readily note that objects with a high information content—books, computer disks, musical scores—are products of intelligence. It is reasonable to conclude, by analogy, that the DNA molecule is likewise the product of an intelligent agent. This is a contemporary version of the design argument, and it does not rest on ignorance—on gaps in knowledge—but on the explosive growth in knowledge thanks to the revolution in molecular biology and the development of information theory.

In spite of this extensive new evidence, the materialist continues to hold out for the discovery of some new physical laws to explain the origin of biological information. As chemist Manfred Eigen writes in Steps Towards Life, “Our task is to find an algorithm, a natural law that leads to the origin of information.” Yet no known natural forces produce structures with high information content, and so the elusive law that Eigen hopes to find must be different in kind from any we currently know.
Surely that qualifies as an argument from ignorance—the materialist’s God of the gaps.
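The essay's "minimum number of instructions" measure can be loosely illustrated with off-the-shelf compression, since a compressor exploits exactly the "do it again" shortcuts described above. A minimal sketch (compressed size is only a rough proxy for description length, and the sample strings are my own):

```python
import zlib

# "Wrapping paper" repetition vs. a specific text of similar length.
wrapping = ("Merry Christmas! " * 12).encode()    # 204 bytes of pure repetition
poem = ("'Twas the night before Christmas, when all through the house "
        "not a creature was stirring, not even a mouse; the stockings "
        "were hung by the chimney with care, in hopes that St. Nicholas "
        "soon would be there.").encode()          # a specific text, no shortcuts

for label, data in [("repetitive pattern", wrapping), ("specific text", poem)]:
    compressed = len(zlib.compress(data, 9))
    print(f"{label:18s}: {len(data)} bytes -> {compressed} bytes compressed")
```

The repetitive string collapses to a small fraction of its length, while the specific text stays comparatively close to its original size, mirroring the wrapping-paper versus poem contrast.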

Editor’s note: The online journal Sapientia recently posed a good question to several participants in a forum: “Is Intelligent Design Detectable by Science?” This is one key issue on which proponents of ID and of theistic evolution differ. Stephen Meyer, philosopher of science and director of Discovery Institute's Center for Science & Culture, gave the following reply.


Biologists have long recognized that many organized structures in living organisms — the elegant form and protective covering of the coiled nautilus; the interdependent parts of the vertebrate eye; the interlocking bones, muscles, and feathers of a bird wing — “give the appearance of having been designed for a purpose.”1

Before Darwin, biologists attributed the beauty, integrated complexity, and adaptation of organisms to their environments to a powerful designing intelligence. Consequently, they also thought the study of life rendered the activity of a designing intelligence detectable in the natural world.

Yet Darwin argued that this appearance of design could be more simply explained as the product of a purely undirected mechanism, namely, natural selection and random variation. Modern neo-Darwinists have similarly asserted that the undirected process of natural selection and random mutation produced the intricate designed-like structures in living systems. They affirm that natural selection can mimic the powers of a designing intelligence without itself being guided by an intelligent agent. Thus, living organisms may look designed, but on this view, that appearance is illusory and, consequently, the study of life does not render the activity of a designing intelligence detectable in the natural world. As Darwin himself insisted, “There seems to be no more design in the variability of organic beings and in the action of natural selection, than in the course in which the wind blows.”2 Or as the eminent evolutionary biologist Francisco Ayala has argued, Darwin accounted for “design without a designer” and showed “that the directive organization of living beings can be explained as the result of a natural process, natural selection, without any need to resort to a Creator or other external agent.”3

But did Darwin explain away all evidence of apparent design in biology? Darwin attempted to explain the origin of new living forms starting from simpler pre-existing forms of life, but his theory of evolution by natural selection did not even attempt to explain the origin of life — the simplest living cell — in the first place. Yet there is now compelling evidence of intelligent design in the inner recesses of even the simplest living one-celled organisms. Moreover, there is a key feature of living cells — one that makes the intelligent design of life detectable — that Darwin didn’t know about and that contemporary evolutionary theorists have not explained away.

The Information Enigma

In 1953 when Watson and Crick elucidated the structure of the DNA molecule, they made a startling discovery. The structure of DNA allows it to store information in the form of a four-character digital code. Strings of precisely sequenced chemicals called nucleotide bases store and transmit the assembly instructions — the information — for building the crucial protein molecules and machines the cell needs to survive.

Francis Crick later developed this idea with his famous “sequence hypothesis” according to which the chemical constituents in DNA function like letters in a written language or symbols in a computer code. Just as English letters may convey a particular message depending on their arrangement, so too do certain sequences of chemical bases along the spine of a DNA molecule convey precise instructions for building proteins. The arrangement of the chemical characters determines the function of the sequence as a whole. Thus, the DNA molecule has the same property of “sequence specificity” that characterizes codes and language.

Moreover, DNA sequences do not just possess “information” in the strictly mathematical sense described by pioneering information theorist Claude Shannon. Shannon related the amount of information in a sequence of symbols to the improbability of the sequence (and the reduction of uncertainty associated with it). But DNA base sequences do not just exhibit a mathematically measurable degree of improbability. Instead, DNA contains information in the richer and more ordinary dictionary sense of “alternative sequences or arrangements of characters that produce a specific effect.” DNA base sequences convey instructions. They perform functions and produce specific effects. Thus, they not only possess “Shannon information,” but also what has been called “specified” or “functional information.”
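To make the quantitative notion concrete, here is a minimal sketch of the Shannon calculation, under the simplifying assumption that the four bases are independent and equiprobable (real genomes deviate from this, so treat the numbers as a first approximation):

```python
import math

# Shannon's measure ties information to improbability: I(s) = -log2 P(s).
# Assuming the four DNA bases (A, C, G, T) are independent and equiprobable,
# a sequence of length n has probability (1/4)**n, i.e. 2 bits per base.
def shannon_information_bits(sequence: str, alphabet_size: int = 4) -> float:
    return len(sequence) * math.log2(alphabet_size)

print(shannon_information_bits("ATGGCCCTGTGG"))  # 12 bases -> 24.0 bits
```

On this model a 300-base stretch carries 600 bits, an improbability of 4^-300 (roughly 1 in 10^180). But as the paragraph above emphasizes, such figures measure improbability only; they say nothing about whether the sequence performs a function.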

Like the precisely arranged zeros and ones in a computer program, the chemical bases in DNA convey instructions by virtue of their specific arrangement — and in accord with an independent symbol convention known as the “genetic code.” Thus, biologist Richard Dawkins notes that “the machine code of the genes is uncannily computer-like.”4 Similarly, Bill Gates observes that “DNA is like a computer program, but far, far more advanced than any software we’ve ever created.”5 And biotechnologist Leroy Hood describes the information in DNA as “digital code.”6
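To give a rough sense of what an “independent symbol convention” amounts to in practice, the toy translator below reads a DNA string three bases at a time and looks each codon up in a table. The handful of entries follow the standard genetic code, but the table is deliberately truncated and the function is illustrative only, not a bioinformatics tool:

```python
# The codon-to-amino-acid mapping is a convention carried by the cell's
# translation machinery, not a consequence of the chemistry of the bases.
# Table deliberately truncated to a few entries of the standard genetic code.
CODON_TABLE = {
    "ATG": "Met",  # methionine; also the usual start signal
    "GCC": "Ala",  # alanine
    "CTG": "Leu",  # leucine
    "TGG": "Trp",  # tryptophan
    "TAA": "STOP",
}

def translate(dna: str) -> list:
    # Read the sequence in three-base "words" and look each one up.
    codons = [dna[i:i + 3] for i in range(0, len(dna) - 2, 3)]
    return [CODON_TABLE.get(codon, "?") for codon in codons]

print(translate("ATGGCCCTGTGGTAA"))  # ['Met', 'Ala', 'Leu', 'Trp', 'STOP']
```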

After the early 1960s, further discoveries revealed that the digital information in DNA and RNA is only part of a complex information processing system — an advanced form of nanotechnology that both mirrors and exceeds our own in its complexity, design logic, and information storage density.

Where did the information in the cell come from? And how did the cell’s complex information processing system arise? These questions lie at the heart of contemporary origin-of-life research. Clearly, the informational features of the cell at least appear designed. And, as I show in extensive detail in my book Signature in the Cell, no theory of undirected chemical evolution explains the origin of the information needed to build the first living cell.7

Why? There is simply too much information in the cell to be explained by chance alone. And attempts to explain the origin of information as the consequence of pre-biotic natural selection acting on random changes inevitably presuppose precisely what needs explaining, namely, reams of pre-existing genetic information. The information in DNA also defies explanation by reference to the laws of chemistry. Saying otherwise is like saying a newspaper headline might arise from the chemical attraction between ink and paper. Clearly something more is at work.

Yet, the scientists who infer intelligent design do not do so merely because natural processes — chance, laws, or their combination — have failed to explain the origin of the information and information processing systems in cells. Instead, we think intelligent design is detectable in living systems because we know from experience that systems possessing large amounts of such information invariably arise from intelligent causes. The information on a computer screen can be traced back to a user or programmer. The information in a newspaper ultimately came from a writer — from a mind. As the pioneering information theorist Henry Quastler observed, “Information habitually arises from conscious activity.”8

This connection between information and prior intelligence enables us to detect or infer intelligent activity even from unobservable sources in the distant past. Archeologists infer ancient scribes from hieroglyphic inscriptions. SETI’s search for extraterrestrial intelligence presupposes that information embedded in electromagnetic signals from space would indicate an intelligent source. Radio astronomers have not found any such signal from distant star systems; but closer to home, molecular biologists have discovered information in the cell, suggesting — by the same logic that underwrites the SETI program and ordinary scientific reasoning about other informational artifacts — an intelligent source.

DNA functions like a software program and contains specified information just as software does. We know from experience that software comes from programmers. We know generally that specified information — whether inscribed in hieroglyphics, written in a book, or encoded in a radio signal — always arises from an intelligent source. So the discovery of such information in the DNA molecule provides strong grounds for inferring (or detecting) that intelligence played a role in the origin of DNA, even if we weren’t there to observe the system coming into existence.

The Logic of Design Detection

In The Design Inference, mathematician William Dembski explicates the logic of design detection. His work reinforces the conclusion that the specified information present in DNA points to a designing mind.

Dembski shows that rational agents often detect the prior activity of other designing minds by the character of the effects they leave behind. Archaeologists assume that rational agents produced the inscriptions on the Rosetta Stone. Insurance fraud investigators detect certain “cheating patterns” that suggest intentional manipulation of circumstances rather than a natural disaster. Cryptographers distinguish between random signals and those carrying encoded messages, the latter indicating an intelligent source. Recognizing the activity of intelligent agents constitutes a common and fully rational mode of inference.

More importantly, Dembski explicates criteria by which rational agents recognize or detect the effects of other rational agents, and distinguish them from the effects of natural causes. He demonstrates that systems or sequences with the joint properties of “high complexity” (or small probability) and “specification” invariably result from intelligent causes, not chance or physical-chemical laws.9 Dembski notes that complex sequences exhibit an irregular and improbable arrangement that defies expression by a simple rule or algorithm, whereas specification involves a match or correspondence between a physical system or sequence and an independently recognizable pattern or set of functional requirements.

By way of illustration, consider the following three sets of symbols:

nehya53nslbyw1`jejns7eopslanm46/J

TIME AND TIDE WAIT FOR NO MAN

ABABABABABABABABABABAB

The first two sequences are complex because both defy reduction to a simple rule. Each represents a highly irregular, aperiodic, improbable sequence. The third sequence is not complex, but is instead highly ordered and repetitive. Of the two complex sequences, only the second, however, exemplifies a set of independent functional requirements — i.e., is specified.

English has many such functional requirements. For example, to convey meaning in English one must employ existing conventions of vocabulary (associations of symbol sequences with particular objects, concepts, or ideas) and existing conventions of syntax and grammar. When symbol arrangements “match” existing vocabulary and grammatical conventions (i.e., functional requirements), communication can occur. Such arrangements exhibit “specification.” The sequence “Time and tide wait for no man” clearly exhibits such a match, and thus performs a communication function.

Thus, of the three sequences only the second manifests both necessary indicators of a designed system. The third sequence lacks complexity, though it does exhibit a simple periodic pattern, a specification of sorts. The first sequence is complex, but not specified. Only the second sequence exhibits both complexity and specification. Thus, according to Dembski’s theory of design detection, only the second sequence implicates an intelligent cause — as our uniform experience affirms.
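One informal way to see the “complexity” half of these joint criteria is to use compressibility as a stand-in for whether a sequence can be reduced to a simple rule. The sketch below is my own illustration, not Dembski’s actual (probabilistic) measure; it simply runs the three example strings through a general-purpose compressor:

```python
import zlib

# A highly ordered, repetitive string compresses well; an irregular one does
# not. zlib adds a fixed header overhead, so very short incompressible strings
# can come out *larger* than the input; the relative ordering is what matters.
samples = {
    "random":     "nehya53nslbyw1`jejns7eopslanm46/J",
    "sentence":   "TIME AND TIDE WAIT FOR NO MAN",
    "repetitive": "ABABABABABABABABABABAB",
}

for label, text in samples.items():
    ratio = len(zlib.compress(text.encode())) / len(text)
    print(f"{label:10s} length={len(text):2d}  compressed/original={ratio:.2f}")
```

The repetitive sequence yields by far the lowest ratio, mirroring the point that it can be specified by a short rule, while the other two cannot.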

In my book Signature in the Cell, I show that Dembski’s joint criteria of complexity and specification are equivalent to “functional” or “specified information.” I also show that the coding regions of DNA exemplify both high complexity and specification and, thus not surprisingly, also contain “specified information.” Consequently, Dembski’s scientific method of design detection reinforces the conclusion that the digital information in DNA indicates prior intelligent activity.

So, contrary to media reports, the theory of intelligent design is not based upon ignorance or “gaps” in our knowledge, but on scientific discoveries about DNA and on established scientific methods of reasoning in which our uniform experience of cause and effect guides our inferences about the kinds of causes that produce (or best explain) different types of events or sequences.

Anthropic Fine Tuning

The evidence of design in living cells is not the only such evidence in nature. Modern physics now reveals evidence of intelligent design in the very fabric of the universe. Since the 1960s physicists have recognized that the initial conditions and the laws and constants of physics are finely tuned, against all odds, to make life possible. Even extremely slight alterations in the values of many independent factors — such as the expansion rate of the universe, the speed of light, and the precise strength of gravitational or electromagnetic attraction — would render life impossible. Physicists refer to these factors as “anthropic coincidences” and to the fortunate convergence of all these coincidences as the “fine-tuning of the universe.”

Many have noted that this fine-tuning strongly suggests design by a pre-existent intelligence. Physicist Paul Davies has said that “the impression of design is overwhelming.”10 Fred Hoyle argued, “A common sense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as chemistry and biology.”11 Many physicists now concur. They would argue that — in effect — the dials in the cosmic control room appear finely tuned because someone carefully fine-tuned them.

To explain the vast improbabilities associated with these fine-tuning parameters, some physicists have postulated not a “fine-tuner” or intelligent designer, but the existence of a vast number of other parallel universes. This “multiverse” concept also necessarily posits various mechanisms for producing these universes. On this view, having some mechanism for generating new universes would increase the number of opportunities for a life-friendly universe such as our own to arise — making ours something like a lucky winner of a cosmic lottery.

But advocates of these multiverse proposals have overlooked an obvious problem. The speculative cosmologies (such as inflationary cosmology and string theory) they propose for generating alternative universes invariably invoke mechanisms that themselves require fine-tuning, thus begging the question as to the origin of that prior fine-tuning. Indeed, all the various materialistic explanations for the origin of the fine-tuning — i.e., the explanations that attempt to explain the fine-tuning without invoking intelligent design — invariably invoke prior unexplained fine-tuning.

Moreover, as Jay Richards has shown,12 the fine-tuning of the universe exhibits precisely those features — extreme improbability and functional specification — that invariably trigger an awareness of, and justify an inference to, intelligent design. Since the multiverse theory cannot explain fine-tuning without invoking prior fine-tuning, and since the fine-tuning of a physical system to accomplish a propitious end is exactly the kind of thing we know intelligent agents do, it follows that intelligent design stands as the best explanation for the fine-tuning of the universe.

And that makes intelligent design detectable in both the physical parameters of the universe and the information-bearing properties of life.


Notes

  1. Richard Dawkins, The Blind Watchmaker (New York, NY: Norton, 1986), 1.
  2. Charles Darwin, The Life and Letters of Charles Darwin, ed. Francis Darwin, vol. 1 (New York: Appleton, 1887), 278–279.
  3. Francisco J. Ayala, “Darwin’s Greatest Discovery: Design without Designer,” Proceedings of the National Academy of Sciences USA 104 (May 15, 2007): 8567–8573.
  4. Richard Dawkins, River out of Eden: A Darwinian View of Life (New York: Basic, 1995), 17.
  5. Bill Gates, The Road Ahead (New York: Viking, 1995), 188.
  6. Leroy Hood and David Galas, “The Digital Code of DNA,” Nature 421 (2003): 444–448.
  7. Stephen Meyer, Signature in the Cell: DNA and the Evidence for Intelligent Design (San Francisco: HarperOne, 2009), 173–323.
  8. Henry Quastler, The Emergence of Biological Organization (New Haven: Yale UP, 1964), 16.
  9. William Dembski, The Design Inference: Eliminating Chance Through Small Probabilities (Cambridge: Cambridge University Press, 1998), 36–66.
  10. Paul Davies, The Cosmic Blueprint (New York: Simon & Schuster, 1988), 203.
  11. Fred Hoyle, “The Universe: Past and Present Reflections,” Annual Review of Astronomy and Astrophysics 20 (1982): 16.
  12. Guillermo Gonzalez and Jay Richards, The Privileged Planet: How Our Place in the Cosmos is Designed for Discovery (Washington, DC: Regnery Publishing, 2004), 293-311.

Unless you've been hiding in a cave, you've heard of "intelligent design" (ID) and some of its leading proponents: Phillip Johnson, Michael Behe, William Dembski. Unfortunately, you probably got the mainstream media's spin. It's so predictable, I sometimes wonder if reporters aren't using computer macros.

The reporter types control-alt "CE" and out pops the witty headline: "Creationism Evolves." Control-alt "Scopes Trope" and out pops a lead referencing the old Spencer Tracy film "Inherit the Wind," a cartoon-like caricature of the 1925 Scopes Monkey Trial over evolution in the classroom.

Control-alt "Conspiracy" and, presto, a paragraph about the religious right and its scheme to smuggle Bibles into the science class as the first step toward establishing a theocracy. Next comes a quotation supposedly representing the view of all "serious scientists," with the phrase "overwhelming evidence" thrown in for good measure. The story practically writes itself, and it possesses this virtue: it saves the reporter the bother of actually investigating what design theory really is.

Victor Victorian

So what is ID, really? ID is not a deduction from religious dogma or scripture. It's simply the argument that certain features of the natural world — from miniature machines and digital information found in living cells, to the fine-tuning of physical constants — are best explained as the result of an intelligent cause. ID is thus a tacit rebuke of an idea inherited from the 19th century, called scientific materialism.

Natural science in the Victorian Age, or rather, its materialistic gloss, offered a radically different view of the universe: (1) The universe has always existed, so we need not explain its origin. (2) Everything in the universe submits to deterministic laws. (3) Life is the love child of luck and chemistry. (4) Cells, the basic units of life, are essentially blobs of Jell-O.

Onto this dubious edifice Charles Darwin added a fifth conjecture: All the sophisticated organisms around us grew from a process called natural selection, which seizes and passes along those minor, random variations in a population that provide a survival advantage. With this, Darwin explained away the apparent design in the biological world as just that: only apparent.

Each of these 19th-century assumptions has been undermined or discredited in the 20th century, but the materialist gloss remains: There is one god, matter, and science is its prophet. It hides behind its more modest cousin, methodological naturalism. According to this tidy dictum, scientists can believe whatever they want in their personal lives, but they must appeal only to impersonal causes when explaining nature. Accordingly, any who discuss purpose or design within science (the founders of modern science generously excepted) cease to be scientists.

The Universe Strikes Back

There was one problem with this tidy rule. Nature forgot to cooperate. The trouble started in the 1920s when astronomer Edwin Hubble discovered that the light from distant galaxies was "red-shifted." It had stretched during the course of its travels. This suggested the universe is expanding. Reversing the process in their minds, scientists were suddenly confronted with a universe that had come into existence in the finite past. Who knew! Hubble's discovery, confirmed by later evidence, flatly contradicted the earlier picture of an eternal and self-existing cosmos. The universe itself had re-introduced the question of its origin to a community bent on avoiding the question altogether.

This was just the beginning. In the 1960s and '70s, physicists found that the universal constants of physics (e.g., gravity, electromagnetism) appeared finely tuned for complex life. To astrophysicist and atheist Fred Hoyle, this fine-tuning suggested the work of a "superintellect."

Still more recently, growing evidence in astronomy has revealed that even in a finely tuned universe, dozens of local conditions have to go just right to build a single habitable planet. This growing list of unlikely requirements is only half the story. In "The Privileged Planet," astronomer Guillermo Gonzalez and I argue that those conditions for habitability also provide the best overall conditions for doing science. The very places where observers can exist are the same places that provide the best overall conditions for observing. For instance, the most life-friendly region of the galaxy is also the best place to be an astronomer and cosmologist. You might expect this if the universe were designed for discovery, but not if, as astronomer Carl Sagan put it, "The universe is all there is, ever was, or ever will be."

Information Plantation

Of course, even with a suitable environment, you don't automatically get man or even amoebas. Before the Darwinian mechanism can even get started, it needs a wealth of biological information as part of the first self-reproducing organism. For instance, there's the information encoded along the DNA molecule, often described as a sophisticated computer code for producing proteins, the three-dimensional building blocks of all life. These, in turn, need the right cellular hardware to function.

In recent years, philosophers William Dembski and Stephen Meyer have turned this evidence into a formidable argument for intelligent design. Dembski, also a mathematician, applies information and probability theory to the subject. Meyer argues that the usual aimless processes of chance and chemistry simply can't explain biological information and that, moreover, our everyday experience shows us where such information comes from: intelligent agents.

Moving up a level, we find complex and functionally integrated machines that are out of reach to the Darwinian mechanism. Biochemist Michael Behe immortalized some of these in his bestselling 1996 book, "Darwin's Black Box."

Behe argues that molecular machines like the bacterial flagellum are "irreducibly complex." They're like a mousetrap. Without all of their basic parts, they don't work. Natural selection can only build systems one small step at a time, where each step provides an immediate survival advantage for the organism. It can't select for a future function. To do that requires foresight, the exclusive jurisdiction of intelligent agents. That's the positive evidence for design: Such structures are the sort produced by intelligent agents, who can foresee a future function. If you get this point, you've already comprehended more than most journalists writing on the subject.
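The logic of the claim can be expressed as a toy search. In the sketch below (a deliberately simplified model of the argument as stated, assuming an all-or-nothing fitness function), a hill climber that accepts only immediately advantageous one-part changes never assembles a five-part system whose function requires all five parts at once:

```python
import random

random.seed(1)
N_PARTS = 5  # toy system: function requires all five parts (cf. the mousetrap)

def fitness(parts: tuple) -> int:
    # All-or-nothing landscape: no advantage until every part is present.
    return 1 if all(parts) else 0

def hill_climb(steps: int = 10_000) -> tuple:
    state = (0,) * N_PARTS
    for _ in range(steps):
        candidate = list(state)
        candidate[random.randrange(N_PARTS)] ^= 1  # one small variation
        # Keep the change only if it yields an *immediate* advantage.
        if fitness(tuple(candidate)) > fitness(state):
            state = tuple(candidate)
    return state

print(hill_climb())  # (0, 0, 0, 0, 0): no single addition is ever rewarded
```

The all-or-nothing landscape is the argument's own assumption; the sketch merely formalizes its structure.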

The New Zoo Review

Moving to the macroscopic world, we see the three-dimensional complexity of many diverse animal body plans (phyla). In the fossil record, these show up suddenly. The problem for Darwinism is not that there are "gaps." Of course there are. Rather, it's the entire fossil record's pattern of sudden appearance of new phyla and persistent morphological isolation between them. This is not the gradually branching tree of life the Darwinian story leads us to expect.

Nor is this an argument from ignorance. In our experience, sudden innovations and massive infusions of information come from intelligent agents. The primary innovations come first (e.g., car, airplane, a new Cambrian phylum) followed by variations on the original form. This is the story the fossil record tells.

The Definition or the Evidence?

At the beginning of the 21st century, we have new evidence and new intellectual tools at our disposal. Standing in the way is the materialistic definition of science inherited from the Victorian Age. If a definition of science conflicts with the scientific evidence, should we go with the definition or the evidence?

To ask the question is to answer it. "Scientia" means knowledge. If we are properly scientific, then we should be open to the natural world, not decide beforehand what it's allowed to reveal. Either the universe provides evidence for purpose and design or it doesn't. The way to resolve the question isn't to play definitional games but to look.

The G-word

Recently, Nobel Prize-winning physicist Charles Townes asked, "What is the purpose or meaning of life? Or of our universe? These are questions which should concern us all.... If the universe has a purpose, then its structure, and how it works, must reflect this purpose."

Townes continues: "Serious intellectual discussion of the possible meaning of our universe, or the nature of religion and philosophical views of religion and science, needs to be openly and carefully discussed."

Unfortunately, few are willing to follow Townes' advice. If we talk about ID, we're warned, someone, somewhere, will start talking about God.

But certain ideas in science will always have theological implications. As arch-Darwinist Richard Dawkins so memorably said, "Darwin made it possible to be an intellectually fulfilled atheist." Right.

Both Dawkins and Townes agree that ideas in science can have theological implications. Isn't that obvious? Yet in our current climate, even the bare rumor of God causes some to reach for their stash of derisive terms ("theocrat," "fundamentalist," "creationist"); they don't require much imagination.

But that response rings increasingly hollow. The genie is out of the bottle, and name-calling and misinformation won't put him back. The mandarins can no longer control the flow of information to those who seek it. The implications can take care of themselves. It's time to discuss the evidence.

Intelligent design begins with a seemingly innocuous question: Can objects, even if nothing is known about how they arose, exhibit features that reliably signal the action of an intelligent cause? To see what’s at stake, consider Mount Rushmore. The evidence for Mount Rushmore’s design is direct — eyewitnesses saw the sculptor Gutzon Borglum spend the better part of his life designing and building this structure. But what if there were no direct evidence for Mount Rushmore’s design? What if humans went extinct and aliens, visiting the earth, discovered Mount Rushmore in substantially the same condition as it is now? In that case, what about this rock formation would provide convincing circumstantial evidence that it was due to a designing intelligence and not merely to wind and erosion? Designed objects like Mount Rushmore exhibit characteristic features or patterns that point to an intelligence. Such features or patterns constitute signs of intelligence. Proponents of intelligent design, known as design theorists, purport to study such signs formally, rigorously, and scientifically. Intelligent design may therefore be defined as the science that studies signs of intelligence.

Because a sign is not the thing signified, intelligent design does not presume to identify the purposes of a designer. Intelligent design focuses not on the designer’s purposes (the thing signified) but on the artifacts resulting from a designer’s purposes (the sign). What a designer intends or purposes is, to be sure, an interesting question, and one may be able to infer something about a designer’s purposes from the designed objects that a designer produces. Nevertheless, the purposes of a designer lie outside the scope of intelligent design. As a scientific research program, intelligent design investigates the effects of intelligence and not intelligence as such.

Intelligent design is controversial because it purports to find signs of intelligence in nature, and specifically in biological systems. According to the evolutionary biologist Francisco Ayala, Darwin’s greatest achievement was to show how the organized complexity of organisms could be attained apart from a designing intelligence. Intelligent design therefore directly challenges Darwinism and other naturalistic approaches to the origin and evolution of life.

The idea that an intrinsic intelligence or teleology inheres in and is expressed through nature has a long history and is embraced by many religious traditions. The main difficulty with this idea since Darwin’s day, however, has been to discover a conceptually powerful formulation of design that can fruitfully advance science. What has kept design outside the scientific mainstream since the rise of Darwinism has been the lack of precise methods for distinguishing intelligently caused objects from unintelligently caused ones. For design to be a fruitful scientific concept, scientists have to be sure that they can reliably determine whether something is designed. Johannes Kepler, for instance, thought the craters on the moon were intelligently designed by moon dwellers. We now know that the craters were formed by purely material factors (like meteor impacts). This fear of falsely attributing something to design, only to have it overturned later, has hindered design from entering the scientific mainstream. But design theorists argue that they now have formulated precise methods for discriminating designed from undesigned objects.
These methods, they contend, enable them to avoid Kepler’s mistake and reliably locate design in biological systems. As a theory of biological origins and development, intelligent design’s central claim is that only intelligent causes adequately explain the complex, information-rich structures of biology and that these causes are empirically detectable. To say intelligent causes are empirically detectable is to say there exist well-defined methods that, based on observable features of the world, can reliably distinguish intelligent causes from undirected natural causes. Many special sciences have already developed such methods for drawing this distinction — notably forensic science, cryptography, archeology, and the search for extraterrestrial intelligence (SETI). Essential to all these methods is the ability to eliminate chance and necessity.

Astronomer Carl Sagan wrote a novel about SETI called Contact, which was later made into a movie. The plot and the extraterrestrials were fictional, but Sagan based the SETI astronomers’ methods of design detection squarely on scientific practice. Real-life SETI researchers have thus far failed to conclusively detect designed signals from distant space, but if they encountered such a signal, as the film’s astronomers did, they too would infer design.

Why did the radio astronomers in Contact draw such a design inference from the signals they monitored from space? SETI researchers run signals collected from distant space through computers programmed to recognize preset patterns. These patterns serve as a sieve. Signals that do not match any of the patterns pass through the sieve and are classified as random. After years of receiving apparently meaningless, random signals, the Contact researchers discovered a pattern of beats and pauses that corresponded to the sequence of all the prime numbers between two and one hundred and one. (Prime numbers are divisible only by themselves and by one.) That startled the astronomers, and they immediately inferred an intelligent cause.

When a sequence begins with two beats and then a pause, three beats and then a pause, and continues through each prime number all the way to one hundred and one beats, researchers must infer the presence of an extraterrestrial intelligence. Here’s the rationale for this inference: Nothing in the laws of physics requires radio signals to take one form or another. The prime sequence is therefore contingent rather than necessary. Also, the prime sequence is long and hence complex. Note that if the sequence were extremely short and therefore lacked complexity, it could easily have happened by chance. Finally, the sequence was not merely complex but also exhibited an independently given pattern or specification (it was not just any old sequence of numbers but a mathematically significant one — the prime numbers).

Intelligence leaves behind a characteristic trademark or signature — what within the intelligent design community is now called specified complexity. An event exhibits specified complexity if it is contingent and therefore not necessary; if it is complex and therefore not readily repeatable by chance; and if it is specified in the sense of exhibiting an independently given pattern. Note that a merely improbable event is not sufficient to eliminate chance — by flipping a coin long enough, one will witness a highly complex or improbable event. Even so, one will have no reason to attribute it to anything other than chance.
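The detection step in the Contact scenario is easy to mimic. The sketch below (illustrative only) encodes each prime from two through 101 as a group of beats separated by pauses, then checks an incoming pulse train against that preset pattern, in the spirit of the sieve described above:

```python
def primes_up_to(n: int) -> list:
    # Simple trial division; adequate for n <= 101.
    def is_prime(k: int) -> bool:
        return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))
    return [k for k in range(2, n + 1) if is_prime(k)]

def encode(groups: list) -> str:
    # Each number becomes that many beats ("1"); a pause ("0") separates groups.
    return "0".join("1" * g for g in groups)

EXPECTED = encode(primes_up_to(101))  # the pattern: 2, 3, 5, ..., 101 beats

def matches_prime_pattern(signal: str) -> bool:
    return signal == EXPECTED

print(matches_prime_pattern(encode([2, 3, 5, 7, 11])))   # False: wrong pattern
print(matches_prime_pattern(encode(primes_up_to(101))))  # True: inference fires
```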
The important thing about specifications is that they be objectively given and not arbitrarily imposed on events after the fact. For instance, if an archer fires arrows at a wall and then paints bull’s-eyes around them, the archer imposes a pattern after the fact. On the other hand, if the targets are set up in advance (“specified”), and then the archer hits them accurately, one legitimately concludes that it was by design.

The combination of complexity and specification convincingly pointed the radio astronomers in the movie Contact to an extraterrestrial intelligence. Note that the evidence was purely circumstantial — the radio astronomers knew nothing about the aliens responsible for the signal or how they transmitted it. Design theorists contend that specified complexity provides compelling circumstantial evidence for intelligence. Accordingly, specified complexity is a reliable empirical marker of intelligence in the same way that fingerprints are a reliable empirical marker of an individual’s presence. Moreover, design theorists argue that purely material factors cannot adequately account for specified complexity.

In determining whether biological organisms exhibit specified complexity, design theorists focus on identifiable systems (e.g., individual enzymes, metabolic pathways, and molecular machines). These systems are not only specified by their independent functional requirements but also exhibit a high degree of complexity.

In Darwin’s Black Box, biochemist Michael Behe connects specified complexity to biological design through his concept of irreducible complexity. Behe defines a system as irreducibly complex if it consists of several interrelated parts for which removing even one part renders the system’s basic function unrecoverable. For Behe, irreducible complexity is a sure indicator of design. One irreducibly complex biochemical system that Behe considers is the bacterial flagellum. The flagellum is an acid-powered rotary motor with a whip-like tail that spins at twenty thousand revolutions per minute and whose rotating motion enables a bacterium to navigate through its watery environment. Behe shows that the intricate machinery in this molecular motor — including a rotor, a stator, O-rings, bushings, and a drive shaft — requires the coordinated interaction of approximately forty complex proteins and that the absence of any one of these proteins would result in the complete loss of motor function. Behe argues that the Darwinian mechanism faces grave obstacles in trying to account for such irreducibly complex systems.

In No Free Lunch, William Dembski shows how Behe’s notion of irreducible complexity constitutes a particular instance of specified complexity. Once an essential constituent of an organism exhibits specified complexity, any design attributable to that constituent carries over to the organism as a whole. To attribute design to an organism one need not demonstrate that every aspect of the organism was designed. Organisms, like all material objects, are products of history and thus subject to the buffeting of purely material factors. Automobiles, for instance, get old and exhibit the effects of corrosion, hail, and frictional forces. But that doesn’t make them any less designed. Likewise design theorists argue that organisms, though exhibiting the effects of history (and that includes Darwinian factors such as genetic mutations and natural selection), also include an ineliminable core that is designed.
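The archer analogy has a direct statistical counterpart: a pattern fixed in advance licenses an inference that a pattern read off the data afterwards does not. The toy simulation below (my illustration of the general point) flips a fair coin thirty times; the pre-specified target almost never matches, while a post-hoc “target” matches by construction and so carries no evidential weight:

```python
import random

random.seed(0)
flips = "".join(random.choice("HT") for _ in range(30))

# Target announced *before* the data, like setting up the archer's targets in
# advance: any fixed 30-flip string has probability 2**-30 of being matched.
prespecified = "H" * 30
print(flips == prespecified)  # almost certainly False

# "Target" painted around the arrow after the fact: it matches by
# construction, so the match tells us nothing.
posthoc = flips
print(flips == posthoc)  # trivially True
```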
Intelligent design’s main tie to religion is through the design argument. Perhaps the best-known design argument is William Paley’s. Paley published his argument in 1802 in a book titled Natural Theology. The subtitle of that book is revealing: Evidences of the Existence and Attributes of the Deity, Collected from the Appearances of Nature. Paley’s project was to examine features of the natural world (what he called “appearances of nature”) and from there draw conclusions about the existence and attributes of a designing intelligence responsible for those features (whom Paley identified with the God of Christianity). According to Paley, if one finds a watch in a field (and thus lacks all knowledge of how the watch arose), the adaptation of the watch’s parts to telling time ensures that it is the product of an intelligence. So too, according to Paley, the marvelous adaptations of means to ends in organisms (like the intricacy of the human eye with its capacity for vision) ensure that organisms are the product of an intelligence.

The theory of intelligent design updates Paley’s watchmaker argument in light of contemporary information theory and molecular biology, purporting to bring this argument squarely within science. In arguing for the design of natural systems, intelligent design is more modest than the design arguments of natural theology. For natural theologians like Paley, the validity of the design argument did not depend on the fruitfulness of design-theoretic ideas for science but on the metaphysical and theological mileage one could get out of design. A natural theologian might point to nature and say, “Clearly, the designer of this ecosystem prized variety over neatness.” A design theorist attempting to do actual design-theoretic research on that ecosystem might reply, “Although that’s an intriguing theological possibility, as a design theorist I need to keep focused on the informational pathways capable of producing that variety.”

In his Critique of Pure Reason, Immanuel Kant claimed that the most the design argument can establish is “an architect of the world who is constrained by the adaptability of the material in which he works, not a creator of the world to whose idea everything is subject.” Far from rejecting the design argument, Kant objected to overextending it. For Kant, the design argument legitimately establishes an architect (that is, an intelligent cause whose contrivances are constrained by the materials that make up the world), but it can never establish a creator who originates the very materials that the architect then fashions.

Intelligent design is entirely consonant with this observation by Kant. Creation is always about the source of being of the world. Intelligent design, as the science that studies signs of intelligence, is about arrangements of preexisting materials that point to a designing intelligence. Creation and intelligent design are therefore quite different. One can have creation without intelligent design and intelligent design without creation. For instance, one can have a doctrine of creation in which God creates the world in such a way that nothing about the world points to design. The evolutionary biologist Richard Dawkins wrote a book titled The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design. Even if Dawkins is right about the universe revealing no evidence of design, it would not logically follow that it was not created.
It is logically possible that God created a world that provides no evidence of design. On the other hand, it is logically possible that the world is full of signs of intelligence but was not created. This was the ancient Stoic view, in which the world was eternal and uncreated, and yet a rational principle pervaded the world and produced marks of intelligence in it.

The implications of intelligent design for religious belief are profound. The rise of modern science led to a vigorous attack on all religions that treat purpose, intelligence, and wisdom as fundamental and irreducible features of reality. The high point of this attack came with Darwin’s theory of evolution. The central claim of Darwin’s theory is that an unguided material process (random variation and natural selection) could account for the emergence of all biological complexity and order. In other words, Darwin appeared to show that the design in biology (and, by implication, in nature generally) was dispensable. By showing that design is indispensable to the scientific understanding of the natural world, intelligent design is reinvigorating the design argument and at the same time overturning the widespread misconception that the only tenable form of religious belief is one that treats purpose, intelligence, and wisdom as byproducts of unintelligent material processes.

Bibliography

  • Beckwith, Francis J. Law, Darwinism, and Public Education: The Establishment Clause and the Challenge of Intelligent Design. Lanham, Md., 2003.
  • Behe, Michael J. Darwin’s Black Box: The Biochemical Challenge to Evolution. New York, 1996.
  • Dawkins, Richard. The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design. New York, 1986.
  • Dembski, William A. No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence. Lanham, Md., 2002.
  • Forrest, Barbara. “The Wedge at Work: How Intelligent Design Creationism Is Wedging Its Way into the Cultural and Academic Mainstream.” In Intelligent Design Creationism and Its Critics: Philosophical, Theological, and Scientific Perspectives, edited by Robert T. Pennock, pp. 5–53, Cambridge, Mass., 2001.
  • Giberson, Karl W. and Donald A. Yerxa. Species of Origins: America’s Search for a Creation Story. Lanham, Md., 2002.
  • Hunter, Cornelius G. Darwin’s God: Evolution and the Problem of Evil. Grand Rapids, Mich., 2002.
  • Manson, Neil A., ed. God and Design: The Teleological Argument and Modern Science. London, 2003.
  • Miller, Kenneth R. Finding Darwin’s God: A Scientist’s Search for Common Ground between God and Evolution. San Francisco, 1999.
  • Rea, Michael C. World without Design: The Ontological Consequences of Naturalism. Oxford, 2002.
  • Witham, Larry. By Design: Science and the Search for God. San Francisco, 2003.
  • Woodward, Thomas. Doubts about Darwin: A History of Intelligent Design. Grand Rapids, Mich., 2003.

Bibliographic Essay

Larry Witham provides the best overview of intelligent design, even-handedly treating its scientific, cultural, and religious dimensions. As a journalist, Witham has personally interviewed all the main players in the debate over intelligent design and allows them to tell their story. For intelligent design’s place in the science and religion dialogue, see Giberson and Yerxa. For histories of the intelligent design movement, see Woodward (a supporter) and Forrest (a critic). See Behe and Dembski to overview intelligent design’s scientific research program. For a critique of that program, see Miller. For an impassioned defense of Darwinism against any form of teleology or design, see Dawkins. Manson’s anthology situates intelligent design within broader discussions about teleology. Rea probes intelligent design’s metaphysical underpinnings. Hunter provides an interesting analysis of how intelligent design and Darwinism play off the problem of evil. Beckwith examines whether intelligent design is inherently religious and thus, on account of church-state separation, must be barred from public school science curricula.

In 2009, I discussed a paper in BioEssays titled “MicroRNAs and metazoan macroevolution: insights into canalization, complexity, and the Cambrian explosion” which stated that “elucidating the materialistic basis of the Cambrian explosion has become more elusive, not less, the more we know about the event itself, and cannot be explained away by coupling extinction of intermediates with long stretches of geologic time, despite the contrary claims of some modern neo-Darwinists.”


The Current Landscape

In December of 2004, the renowned British philosopher Antony Flew made worldwide news when he repudiated a lifelong commitment to atheism, citing, among other factors, evidence of intelligent design in the DNA molecule. In that same month, the American Civil Liberties Union filed suit to prevent a Dover, Pennsylvania school district from informing its students that they could learn about the theory of intelligent design from a supplementary science textbook in their school library. The following February, The Wall Street Journal (Klinghoffer 2005) reported that an evolutionary biologist with two doctorates at the Smithsonian Institution had been punished for publishing a peer-reviewed scientific article making a case for intelligent design.

Since 2005, the theory of intelligent design has been the focus of a frenzy of international media coverage, with prominent stories appearing in The New York Times, Nature, The London Times, The Independent (London), Sekai Nippo (Tokyo), The Times of India, Der Spiegel, The Jerusalem Post and Time magazine, to name just a few. And recently, a major conference about intelligent design was held in Prague (attended by some 700 scientists, students and scholars from Europe, Africa and the United States), further signaling that the theory of intelligent design has generated worldwide interest.

But what is this theory of intelligent design, and where did it come from? And why does it arouse such passion and inspire such apparently determined efforts to suppress it?

According to a spate of recent media reports, intelligent design is a new “faith-based” alternative to evolution – one based on religion rather than scientific evidence. As the story goes, intelligent design is just biblical creationism repackaged by religious fundamentalists in order to circumvent a 1987 United States Supreme Court prohibition against teaching creationism in the U.S. public schools. Over the past two years, major newspapers, magazines and broadcast outlets in the United States and around the world have repeated this trope.

But is it accurate? As one of the architects of the theory of intelligent design and the director of a research center that supports the work of scientists developing the theory, I know that it isn't.

The modern theory of intelligent design was not developed in response to a legal setback for creationists in 1987. Instead, it was first proposed in the late 1970s and early 1980s by a group of scientists – Charles Thaxton, Walter Bradley and Roger Olson – who were trying to account for an enduring mystery of modern biology: the origin of the digital information encoded along the spine of the DNA molecule. Thaxton and his colleagues came to the conclusion that the information-bearing properties of DNA provided strong evidence of a prior but unspecified designing intelligence. They wrote a book proposing this idea in 1984, three years before the U.S. Supreme Court decision (in Edwards v. Aguillard) that outlawed the teaching of creationism.

Earlier, in the 1960s and 1970s, physicists had already begun to reconsider the design hypothesis. Many were impressed by the discovery that the laws and constants of physics are improbably “finely tuned” to make life possible. As British astrophysicist Fred Hoyle put it, the fine-tuning of the laws and constants of physics suggested that a designing intelligence “had monkeyed with physics” for our benefit.

Contemporary scientific interest in the design hypothesis not only predates the U.S. Supreme Court ruling against creationism, but the formal theory of intelligent design is clearly different from creationism in both its method and content. The theory of intelligent design, unlike creationism, is not based upon the Bible. Instead, it is based on observations of nature, which the theory attempts to explain in light of what we know about the cause and effect structure of the world and the patterns that generally indicate intelligent causes. Intelligent design is an inference from empirical evidence, not a deduction from religious authority.

The propositional content of the theory of intelligent design also differs from that of creationism. Creationism or Creation Science, as defined by the U.S. Supreme Court, defends a particular reading of the book of Genesis in the Bible, typically one that asserts that the God of the Bible created the earth in six literal twenty-four hour periods a few thousand years ago. The theory of intelligent design does not offer an interpretation of the book of Genesis, nor does it posit a theory about the length of the Biblical days of creation or even the age of the earth. Instead, it posits a causal explanation for the observed complexity of life.

But if the theory of intelligent design is not creationism, what is it? Intelligent design is an evidence-based scientific theory about life's origins that challenges strictly materialistic views of evolution. According to Darwinian biologists such as Oxford's Richard Dawkins (1986: 1), living systems “give the appearance of having been designed for a purpose.” But, for modern Darwinists, that appearance of design is entirely illusory. Why? According to neo-Darwinism, wholly undirected processes such as natural selection and random mutations are fully capable of producing the intricate designed-like structures in living systems. In their view, natural selection can mimic the powers of a designing intelligence without itself being directed by an intelligence of any kind.

In contrast, the theory of intelligent design holds that there are tell-tale features of living systems and the universe – for example, the information-bearing properties of DNA, the miniature circuits and machines in cells and the fine-tuning of the laws and constants of physics – that are best explained by an intelligent cause rather than an undirected material process. The theory does not challenge the idea of “evolution” defined as either change over time or common ancestry, but it does dispute Darwin's idea that the cause of biological change is wholly blind and undirected. Either life arose as the result of purely undirected material processes or a guiding intelligence played a role. Design theorists affirm the latter option and argue that living organisms look designed because they really were designed.

A Brief History of the Design Argument

By making a case for design based on observations of natural phenomena, advocates of the contemporary theory of intelligent design have resuscitated the classical design argument. Prior to the publication of The Origin of Species by Charles Darwin in 1859, many Western thinkers, for over two thousand years, had answered the question “how did life arise?” by invoking the activity of a purposeful designer. Design arguments based on observations of the natural world were made by Greek and Roman philosophers such as Plato (1960: 279) and Cicero (1933: 217), by Jewish philosophers such as Maimonides and by Christian thinkers such as Thomas Aquinas1 (see Hick 1970: 1).

The idea of design also figured centrally in the modern scientific revolution (1500-1700). As historians of science (see Gillespie 1987: 1-49) have often pointed out, many of the founders of early modern science assumed that the natural world was intelligible precisely because they also assumed that it had been designed by a rational mind. In addition, many individual scientists – Johannes Kepler in astronomy (see Kepler 1981: 93-103; Kepler 1995: 170, 240),2 John Ray in biology (see Ray 1701) and Robert Boyle in chemistry (see Boyle 1979: 172) – made specific design arguments based upon empirical discoveries in their respective fields. This tradition attained an almost majestic rhetorical quality in the writing of Sir Isaac Newton, who made both elegant and sophisticated design arguments based upon biological, physical and astronomical discoveries. Writing in the General Scholium to the Principia, Newton (1934: 543-44) suggested that the stability of the planetary system depended not only upon the regular action of universal gravitation, but also upon the very precise initial positioning of the planets and comets in relation to the sun. As he explained:

[T]hough these bodies may, indeed, continue in their orbits by the mere laws of gravity, yet they could by no means have at first derived the regular position of the orbits themselves from those laws [...] [Thus] [t]his most beautiful system of the sun, planets and comets, could only proceed from the counsel and dominion of an intelligent and powerful Being.

Or as he wrote in the Opticks:

How came the Bodies of Animals to be contrived with so much Art, and for what ends were their several parts? Was the Eye contrived without Skill in Opticks, and the Ear without Knowledge of Sounds? [...] And these things being rightly dispatch’d, does it not appear from Phænomena that there is a Being incorporeal, living, intelligent, omnipresent [...]. (Newton 1952: 369-70.)

Scientists continued to make such design arguments well into the early nineteenth century, especially in biology. By the latter part of the 18th century, however, some enlightenment philosophers began to express skepticism about the design argument. In particular, David Hume, in his Dialogues Concerning Natural Religion (1779), argued that the design argument depended upon a flawed analogy with human artifacts. He admitted that artifacts derive from intelligent artificers, and that biological organisms have certain similarities to complex human artifacts. Eyes and pocket watches both depend upon the functional integration of many separate and specifically configured parts. Nevertheless, he argued, biological organisms also differ from human artifacts – they reproduce themselves, for example – and the advocates of the design argument fail to take these dissimilarities into account. Since experience teaches that organisms always come from other organisms, Hume argued that the analogical argument really ought to suggest that organisms ultimately come from some primeval organism (perhaps a giant spider or vegetable), not a transcendent mind or spirit.

Despite this and other objections, Hume’s categorical rejection of the design argument did not prove entirely decisive with either theistic or secular philosophers. Thinkers as diverse as the Scottish Presbyterian Thomas Reid (1981: 59), the Enlightenment deist Thomas Paine (1925: 6) and the rationalist philosopher Immanuel Kant continued to affirm3 various versions of the design argument after the publication of Hume’s Dialogues. Moreover, with the publication of William Paley’s Natural Theology, science-based design arguments would achieve new popularity, both in Britain and on the continent. Paley (1852: 8-9) catalogued a host of biological systems that suggested the work of a superintending intelligence. Paley argued that the astonishing complexity and superb adaptation of means to ends in such systems could not originate strictly through the blind forces of nature, any more than could a complex machine such as a pocket watch. Paley also responded directly to Hume’s claim that the design inference rested upon a faulty analogy. A watch that could reproduce itself, he argued, would constitute an even more marvelous effect than one that could not. Thus, for Paley, the differences between artifacts and organisms only seemed to strengthen the conclusion of design. And indeed, despite the widespread currency of Hume’s objections, many scientists continued to find Paley’s watch-to-watchmaker reasoning compelling well into the 19th century.

Darwin and the Eclipse of Design

Acceptance of the design argument began to abate during the late 19th century with the emergence of increasingly powerful materialistic explanations of apparent design in biology, particularly Charles Darwin’s theory of evolution by natural selection. Darwin argued in 1859 that living organisms only appeared to be designed. To make this case, he proposed a concrete mechanism, natural selection acting on random variations, that could explain the adaptation of organisms to their environment (and other evidences of apparent design) without actually invoking an intelligent or directing agency. Darwin saw that natural forces would accomplish the work of a human breeder and thus that blind nature could come to mimic, over time, the action of a selecting intelligence – a designer. If the origin of biological organisms could be explained naturalistically,4 as Darwin (1964: 481-82) argued, then explanations invoking an intelligent designer were unnecessary and even vacuous.

Thus, it was not ultimately the arguments of the philosophers that destroyed the popularity of the design argument, but a scientific theory of biological origins. This trend was reinforced by the emergence of other fully naturalistic origins scenarios in astronomy, cosmology and geology. It was also reinforced (and enabled) by an emerging positivistic tradition in science that increasingly sought to exclude appeals to supernatural or intelligent causes from science by definition (see Gillespie 1979: 41-66, 82-108 for a discussion of this methodological shift). Natural theologians such as Robert Chambers, Richard Owen and Asa Gray, writing just prior to Darwin, tended to oblige this convention by locating design in the workings of natural law rather than in the complex structure or function of particular objects. While this move certainly made the natural theological tradition more acceptable to shifting methodological canons in science, it also gradually emptied it of any distinctive empirical content, leaving it vulnerable to charges of subjectivism and vacuousness. By locating design more in natural law and less in complex contrivances that could be understood by direct comparison to human creativity, later British natural theologians ultimately made their research program indistinguishable from the positivistic and fully naturalistic science of the Darwinians (Dembski 1996). As a result, the notion of design, to the extent it maintained any intellectual currency, soon became relegated to a matter of subjective belief. One could still believe that a mind superintended over the regular law-like workings of nature, but one might just as well assert that nature and its laws existed on their own. Thus, by the end of the nineteenth century, natural theologians could no longer point to any specific artifact of nature that required intelligence as a necessary explanation. As a result, intelligent design became undetectable except “through the eyes of faith.”

Though the design argument in biology went into retreat after the publication of The Origin, it never quite disappeared. Darwin was challenged by several leading scientists of his day, most forcefully by the great Harvard naturalist Louis Agassiz, who argued that the sudden appearance of the first complex animal forms in the Cambrian fossil record pointed to “an intellectual power” and attested to “acts of mind.” Similarly, the co-founder of the theory of evolution by natural selection, Alfred Russel Wallace (1991: 33-34), argued that some things in biology were better explained by reference to the work of a “Higher intelligence” than by reference to Darwinian evolution. There seemed to him “to be evidence of a Power” guiding the laws of organic development “in definite directions and for special ends.” As he put it, “[S]o far from this view being out of harmony with the teachings of science, it has a striking analogy with what is now taking place in the world.” And in 1897, Oxford scholar F.C.S. Schiller argued that “it will not be possible to rule out the supposition that the process of Evolution may be guided by an intelligent design” (Schiller 1903: 141).

This continued interest in the design hypothesis was made possible in part because the mechanism of natural selection had a mixed reception in the immediate post-Darwinian period. As the historian of biology Peter Bowler (1986: 44-50) has noted, classical Darwinism entered a period of eclipse during the late 19th and early 20th centuries mainly because Darwin lacked an adequate theory for the origin and transmission of new heritable variation. Natural selection, as Darwin well understood, could accomplish nothing without a steady supply of genetic variation, the ultimate source of new biological structure. Nevertheless, both the blending theory of inheritance that Darwin had assumed and the classical Mendelian genetics that soon replaced it implied limitations on the amount of genetic variability available to natural selection. This in turn implied limits on the amount of novel structure that natural selection could produce.

By the late 1930s and 1940s, however, natural selection was revived as the main engine of evolutionary change as developments in a number of fields helped to clarify the nature of genetic variation. The resuscitation of the variation / natural selection mechanism by modern genetics and population genetics became known as the neo-Darwinian synthesis. According to the new synthetic theory of evolution, the mechanism of natural selection acting upon random variations (especially including small-scale mutations) sufficed to account for the origin of novel biological forms and structures. Small-scale “microevolutionary” changes could be extrapolated indefinitely to account for large-scale “macroevolutionary” development. With the revival of natural selection, the neo-Darwinists would assert, like Darwinists before them, that they had found a “designer substitute” that could explain the appearance of design in biology as the result of an entirely undirected natural process.5 As Harvard evolutionary biologist Ernst Mayr (1982: xi-xii) has explained, “[T]he real core of Darwinism [...] is the theory of natural selection. This theory is so important for the Darwinian because it permits the explanation of adaptation, the ‘design’ of the natural theologian, by natural means.” By the centennial celebration of Darwin’s Origin of Species in 1959, it was assumed by many scientists that natural selection could fully explain the appearance of design and that, consequently, the design argument in biology was dead.

Problems with the Neo-Darwinian Synthesis

Since the late 1960s, however, the modern synthesis that emerged during the 1930s, 1940s and 1950s has begun to unravel in the face of new developments in paleontology, systematics, molecular biology, genetics and developmental biology. Since then a series of technical articles and books – including such recent titles as Evolution: A Theory in Crisis (1986) by Michael Denton, Darwinism: The Refutation of a Myth (1987) by Soren Lovtrup, The Origins of Order (1993) by Stuart A. Kauffman, How The Leopard Changed Its Spots (1994) by Brian C. Goodwin, Reinventing Darwin (1995) by Niles Eldredge, The Shape of Life (1996) by Rudolf A. Raff, Darwin’s Black Box (1996) by Michael Behe, The Origin of Animal Body Plans (1997) by Wallace Arthur, and Sudden Origins: Fossils, Genes, and the Emergence of Species (1999) by Jeffrey H. Schwartz – has cast doubt on the creative power of neo-Darwinism’s mutation/selection mechanism. As a result, a search for alternative naturalistic mechanisms of innovation has ensued with, as yet, no apparent success or consensus. So common are doubts about the creative capacity of the selection / mutation mechanism, neo-Darwinism’s “designer substitute,” that prominent spokesmen for evolutionary theory must now periodically assure the public that “just because we don’t know how evolution occurred, does not justify doubt about whether it occurred.”6 As Niles Eldredge (1982: 508-9) wrote, “Most observers see the current situation in evolutionary theory – where the object is to explain how, not if, life evolves – as bordering on total chaos.” Or as Stephen Gould (1980: 119-20) wrote, “The neo-Darwinian synthesis is effectively dead, despite its continued presence as textbook orthodoxy.” (See also Müller and Newman 2003: 3-12.)

Soon after Gould and Eldredge acknowledged these difficulties, the first important books (Thaxton et al. 1984; Denton 1985) advocating the idea of intelligent design as an alternative to neo-Darwinism began to appear in the United States and Britain.7 But the scientific antecedents of the modern theory of intelligent design can be traced back to the beginning of the molecular biological revolution. In 1953, when Watson and Crick elucidated the structure of the DNA molecule, they made a startling discovery. The structure of DNA allows it to store information in the form of a four-character digital code. (See Figure 1). Strings of precisely sequenced chemicals called nucleotide bases store and transmit the assembly instructions – the information – for building the crucial protein molecules and machines the cell needs to survive.

Francis Crick later developed this idea with his famous “sequence hypothesis,” according to which the chemical constituents in DNA function like letters in a written language or symbols in a computer code. Just as English letters may convey a particular message depending on their arrangement, so too do certain sequences of chemical bases along the spine of a DNA molecule convey precise instructions for building proteins. The arrangement of the chemical characters determines the function of the sequence as a whole. Thus, the DNA molecule has the same property of “sequence specificity” or “specified complexity” that characterizes codes and language. As Richard Dawkins has acknowledged, “the machine code of the genes is uncannily computer-like” (Dawkins 1995: 11). As Bill Gates has noted, “DNA is like a computer program but far, far more advanced than any software ever created” (Gates 1995: 188). After the early 1960s, further discoveries made clear that the digital information in DNA and RNA is only part of a complex information processing system – an advanced form of nanotechnology that both mirrors and exceeds our own in its complexity, design logic and information storage density.

Thus, even as the design argument was being declared dead at the Darwinian centennial at the close of the 1950s, evidence that many scientists would later see as pointing to design was being uncovered in the nascent discipline of molecular biology. In any case, discoveries in this field would soon generate a growing rumble of voices dissenting from neo-Darwinism. In By Design, a history of the current design controversy, journalist Larry Witham (2003) traces the immediate roots of the theory of intelligent design in biology to the 1960s, at which time developments in molecular biology were generating new problems for the neo-Darwinian synthesis. At this time, mathematicians, engineers and physicists were beginning to express doubts that random mutations could generate the genetic information needed to produce crucial evolutionary transitions in the time available to the evolutionary process. Among the most prominent of these skeptical scientists were several from the Massachusetts Institute of Technology.

Figure 1: Digital Information

These researchers might have gone on talking among themselves about their doubts but for an informal gathering of mathematicians and biologists in Geneva in the mid-1960s at the home of MIT physicist Victor Weisskopf. During a picnic lunch the discussion turned to evolution, and the mathematicians expressed surprise at the biologists’ confidence in the power of mutations to assemble the genetic information necessary to evolutionary innovation. Nothing was resolved during the argument that ensued, but those present found the discussion stimulating enough that they set about organizing a conference to probe the issue further. This gathering occurred at the Wistar Institute in Philadelphia in the spring of 1966 and was chaired by Sir Peter Medawar, Nobel Laureate and director of the Medical Research Council laboratories in North London. In his opening remarks at the meeting, he said that the “immediate cause of this conference is a pretty widespread sense of dissatisfaction about what has come to be thought of as the accepted evolutionary theory in the English-speaking world, the so-called neo-Darwinian theory” (Taylor 1983: 4).

The mathematicians were now in the spotlight, and they took the opportunity to argue that neo-Darwinism faced a formidable combinatorial problem (see Moorhead and Kaplan 1967 for the seminar proceedings).8 In their view, the ratio of the number of functional genes and proteins, on the one hand, to the enormous number of possible sequences corresponding to a gene or protein of a given length, on the other, seemed so small as to preclude the origin of genetic information by a random mutational search. Any particular protein one hundred amino acids in length represents an extremely unlikely occurrence. There are roughly 10^130 possible amino acid sequences of this length, if one considers only the 20 protein-forming amino acids as possibilities. The vast majority of these sequences – it was (correctly) assumed – perform no biological function (see Axe 2004: 1295-1314 for a rigorous experimental evaluation of the rarity of functional proteins within the “sequence space” of possible combinations). Would an undirected search through this enormous space of possible sequences have a realistic chance of finding a functional sequence in the time allotted for crucial evolutionary transitions? To many of the Wistar mathematicians and physicists, the answer seemed clearly ‘no.’ The distinguished French mathematician M. P. Schützenberger (1967: 73-5) noted that in human codes, randomness is never the friend of function, much less of progress. When we make changes randomly to computer programs, “we find that we have no chance (i.e. less than 1/10^1000) even to see what the modified program would compute: it just jams.” MIT’s Murray Eden illustrated the problem with reference to an imaginary library evolving by random changes to a single phrase: “Begin with a meaningful phrase, retype it with a few mistakes, make it longer by adding letters, and rearrange subsequences in the string of letters; then examine the result to see if the new phrase is meaningful. Repeat until the library is complete” (Eden 1967: 110). Would such an exercise have a realistic chance of succeeding, even granting it billions of years? At Wistar, the mathematicians, physicists and engineers argued that it would not. And they insisted that a similar problem confronts any mechanism that relies on random mutations to search large combinatorial spaces for sequences capable of performing novel function – even if, as is the case in biology, some mechanism of selection can act after the fact to preserve functional sequences once they have arisen.
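To make the scale of this combinatorial problem concrete, the arithmetic can be sketched in a few lines of Python. This is a back-of-the-envelope illustration only, using the round figures cited above (20 amino acids, a 100-residue protein); the number of trials granted to the hypothetical search is an arbitrary placeholder, not an estimate from the literature.

    from math import log10

    AMINO_ACIDS = 20   # the 20 protein-forming amino acids
    LENGTH = 100       # residues in a short protein

    # Size of the space of all possible 100-residue sequences: 20^100 ~ 10^130.
    sequence_space = AMINO_ACIDS ** LENGTH
    print(f"sequence space ~ 10^{log10(sequence_space):.0f}")

    # Even granting a blind search an (arbitrarily generous) 10^40 trials,
    # it samples only a vanishingly small fraction of the space.
    trials = 10 ** 40
    print(f"fraction sampled ~ 10^{log10(trials) - log10(sequence_space):.0f}")

Nothing in this sketch models selection, of course; it merely exhibits the ratio of searched sequences to possible sequences that the Wistar participants found so troubling.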

Just as the mathematicians at Wistar were casting doubt on the idea that chance (i.e., random mutations) could generate genetic information, another leading scientist was raising questions about the role of law-like necessity. In 1967 and 1968, the Hungarian chemist and philosopher of science Michael Polanyi published two articles suggesting that the information in DNA was “irreducible” to the laws of physics and chemistry (Polanyi 1967: 21; Polanyi 1968: 1308-12). In these papers, Polanyi noted that DNA conveys information in virtue of very specific arrangements of the nucleotide bases (that is, the chemicals that function as alphabetic or digital characters) in the genetic text. Yet Polanyi also noted that the laws of physics and chemistry allow for a vast number of other possible arrangements of these same chemical constituents. Since chemical laws allow a vast number of possible arrangements of nucleotide bases, Polanyi reasoned that no specific arrangement was dictated or determined by those laws. Indeed, the chemical properties of the nucleotide bases allow them to attach themselves interchangeably at any site on the (sugar-phosphate) backbone of the DNA molecule. (See Figure 1). Thus, as Polanyi (1968: 1309) noted, “As the arrangement of a printed page is extraneous to the chemistry of the printed page, so is the base sequence in a DNA molecule extraneous to the chemical forces at work in the DNA molecule.” Polanyi argued that it is precisely this chemical indeterminacy that allows DNA to store information and which also shows the irreducibility of that information to physical-chemical laws or forces. As he explained:

Suppose that the actual structure of a DNA molecule were due to the fact that the bindings of its bases were much stronger than the bindings would be for any other distribution of bases, then such a DNA molecule would have no information content. Its code-like character would be effaced by an overwhelming redundancy. [...] Whatever may be the origin of a DNA configuration, it can function as a code only if its order is not due to the forces of potential energy. It must be as physically indeterminate as the sequence of words is on a printed page (Polanyi 1968:1309).

The Mystery of Life’s Origin

As more scientists began to express doubts about the ability of undirected processes to produce the genetic information necessary to living systems, some began to consider an alternative approach to the problem of the origin of biological form and information. In 1984, after seven years of writing and research, chemist Charles Thaxton, polymer scientist Walter Bradley and geochemist Roger Olsen published a book proposing “an intelligent cause” as an explanation for the origin of biological information. The book was titled The Mystery of Life’s Origin and was published by The Philosophical Library, then a prestigious New York scientific publisher that had previously published more than twenty Nobel laureates.

Thaxton, Bradley and Olsen’s work directly challenged reigning chemical evolutionary explanations of the origin of life, and old scientific paradigms do not, to borrow from a Dylan Thomas poem, “go gentle into that good night.” Aware of the potential opposition to their ideas, Thaxton flew to California to meet with one of the world’s top chemical evolutionary theorists, San Francisco State University biophysicist Dean Kenyon, co-author of a leading monograph on the subject, Biochemical Predestination. Thaxton wanted to talk with Kenyon to ensure that Mystery’s critiques of leading origin-of-life theories (including Kenyon’s) were fair and accurate. But Thaxton also had a second and more audacious motive: he planned to ask Kenyon to write the foreword to the book, even though Mystery critiqued the very origin-of-life theory that had made Kenyon famous in his field.

One can imagine how such a meeting might have unfolded, with Thaxton’s bold plan quietly dying in a corner of Kenyon’s office as the two men came to loggerheads over their competing theories. But fortunately for Thaxton, things went better than expected. Before he had worked his way around to making his request, Kenyon volunteered for the job, explaining that he had been moving toward Thaxton’s position for some time (Charles Thaxton, interview by Jonathan Witt, August 16, 2005; Jon Buell, interview by Jonathan Witt, September 21, 2005).

Kenyon’s bestselling origin-of-life text, Biochemical Predestination, had outlined what was then arguably the most plausible evolutionary account of how a living cell might have organized itself from chemicals in the “primordial soup.” Already by the 1970s, however, Kenyon was questioning his own hypothesis. Experiments (some performed by Kenyon himself) increasingly suggested that simple chemicals do not arrange themselves into complex information-bearing molecules such as proteins and DNA without guidance from human investigators. Thaxton, Bradley and Olsen appealed to this fact in constructing their argument, and Kenyon found their case both well-reasoned and well-researched. In the foreword he went on to pen, he described The Mystery of Life’s Origin as “an extraordinary new analysis of an age-old question” (Kenyon 1984: v).

The book eventually became the best-selling advanced college-level work on chemical evolution, with sales fueled by endorsements from leading scientists such as Kenyon, Robert Shapiro and Robert Jastrow and by favorable reviews in prestigious journals such as the Yale Journal of Biology and Medicine.9 Others dismissed the work as going beyond science.

What was their idea, and why did it generate interest among leading scientists? First, Mystery critiqued all of the current, purely materialistic explanations for the origin of life. In the process, its authors showed that the famous Miller-Urey experiment did not simulate early Earth conditions, that the existence of an early Earth pre-biotic soup was a myth, that important chemical evolutionary transitions were subject to destructive interfering cross-reactions, and that neither chance nor energy-flow could account for the information in biopolymers such as proteins and DNA. But it was in the book’s epilogue that the three scientists proposed a radically new hypothesis. There they suggested that the information-bearing properties of DNA might point to an intelligent cause. Drawing on the work of Polanyi and others, they argued that chemistry and physics alone couldn’t produce information any more than ink and paper could produce the information in a book. Instead, they argued that our uniform experience suggests that information is the product of an intelligent cause:

We have observational evidence in the present that intelligent investigators can (and do) build contrivances to channel energy down nonrandom chemical pathways to bring about some complex chemical synthesis, even gene building. May not the principle of uniformity then be used in a broader frame of consideration to suggest that DNA had an intelligent cause at the beginning? (Thaxton et al. 1984: 211.)

Mystery also made the radical claim that intelligent causes could be legitimately considered as scientific hypotheses within the historical sciences, a mode of inquiry they called origins science.

Their book marked the beginning of interest in the theory of intelligent design in the United States, inspiring a generation of younger scholars (see Denton 1985; Denton 1986; Kenyon and Mills 1996: 9-16; Behe 2004: 352-370; Dembski 2002; Dembski 2004: 311-330; Morris 2000: 1-11; Morris 2003a: 13-32; Morris 2003b: 505-515; Lönnig 2001; Lönnig and Saedler 2002: 389-410; Nelson and Wells 2003: 303-322; Meyer 2003a: 223-285; Meyer 2003b: 371-391; Bradley 2004: 331-351) to investigate the question of whether there is actual design in living organisms rather than, as neo-Darwinian biologists and chemical evolutionary theorists had long claimed, the mere appearance of design. At the time the book appeared, I was working as a geophysicist for the Atlantic Richfield Company in Dallas, where Charles Thaxton happened to live. I later met him at a scientific conference and became intrigued with the radical idea he was developing about DNA. I began dropping by his office after work to discuss the arguments made in his book. Intrigued, but not yet fully convinced, I left my job as a geophysicist the next year to pursue a Ph.D. at the University of Cambridge in the history and philosophy of science. During my Ph.D. research, I investigated several questions that had emerged in my discussions with Thaxton. What methods do scientists use to study biological origins? Is there a distinctive method of historical scientific inquiry? After completing my Ph.D., I would take up another question: Could the argument from DNA to design be formulated as a rigorous historical scientific argument?

Of Clues and Causes

During my Ph.D. research at Cambridge, I found that historical sciences (such as geology, paleontology and archeology) do employ a distinctive method of inquiry. Whereas many scientific fields involve an attempt to discover universal laws, historical scientists attempt to infer past causes from present effects. As Stephen Gould (1986: 61) put it, historical scientists are trying to “infer history from its results.” Visit the Royal Tyrrell Museum in Alberta, Canada, and you will find there a beautiful reconstruction of the Cambrian seafloor with its stunning assemblage of phyla. Or read the fourth chapter of Simon Conway Morris’s book on the Burgess Shale and you will be taken on a vivid guided tour of that long-ago place. But what Morris (1998: 63-115) and the museum scientists did in both cases was to imaginatively reconstruct the ancient Cambrian site from an assemblage of present-day fossils. In other words, paleontologists infer a past situation or cause from present clues.

A key figure in elucidating the special nature of this mode of reasoning was a contemporary of Darwin, the polymath William Whewell, master of Trinity College, Cambridge, and best known for two books about the nature of science, History of the Inductive Sciences (1837) and The Philosophy of the Inductive Sciences (1840). Whewell distinguished inductive sciences like mechanics (physics) from what he called palaetiology – historical sciences that are defined by three distinguishing features. First, the palaetiological or historical sciences have a distinctive object: to determine “ancient condition[s]” (Whewell 1857, vol. 3: 397) or past causal events. Second, palaetiological sciences explain present events (“manifest effects”) by reference to past (causal) events rather than by reference to general laws (though laws sometimes play a subsidiary role). And third, in identifying a “more ancient condition,” Whewell believed palaetiology utilized a distinctive mode of reasoning in which past conditions were inferred from “manifest effects” using generalizations linking present clues with past causes (Whewell 1840, vol. 2: 121-22, 101-103).

Inference to the Best Explanation

This type of inference is called abductive reasoning. It was first described by the American philosopher and logician C.S. Peirce. He noted that, unlike inductive reasoning, in which a universal law or principle is established from repeated observations of the same phenomena, and unlike deductive reasoning, in which a particular fact is deduced by applying a general law or rule to another particular fact or case, abductive reasoning infers unseen facts, events or causes in the past from clues or facts in the present.

As Peirce himself showed, however, there is a problem with abductive reasoning. Consider the following syllogism:

If it rains, the streets will get wet.
The streets are wet.
Therefore, it rained.

This syllogism infers a past condition (i.e., that it rained) but it commits a logical fallacy known as affirming the consequent. Given that the street is wet (and without additional evidence to decide the matter), one can only conclude that perhaps it rained. Why? Because there are many other possible ways by which the street may have gotten wet. Rain may have caused the streets to get wet; a street cleaning machine might have caused them to get wet; or an uncapped fire hydrant might have done so. It can be difficult to infer the past from the present because there are many possible causes of a given effect.

Peirce’s question was this: how is it that, despite the logical problem of affirming the consequent, we nevertheless frequently make reliable abductive inferences about the past? He noted, for example, that no one doubts the existence of Napoleon. Yet we use abductive reasoning to infer Napoleon’s existence. That is, we must infer his past existence from present effects. But despite our dependence on abductive reasoning to make this inference, no sane or educated person would doubt that Napoleon Bonaparte actually lived. How could this be if the problem of affirming the consequent bedevils our attempts to reason abductively? Peirce’s answer was revealing: “Though we have not seen the man [Napoleon], yet we cannot explain what we have seen without” the hypothesis of his existence (Peirce 1932, vol. 2: 375). Peirce’s words imply that a particular abductive hypothesis can be strengthened if it can be shown to explain a result in a way that other hypotheses do not, and that it can be reasonably believed (in practice) if it explains in a way that no other hypotheses do. In other words, an abductive inference can be enhanced if it can be shown that it represents the best or the only adequate explanation of the “manifest effects” (to use Whewell’s term).

As Peirce pointed out, the problem with abductive reasoning is that there is often more than one cause that can explain the same effect. To address this problem, the pioneering geologist Thomas Chamberlin (1965: 754-59) delineated a method of reasoning that he called “the method of multiple working hypotheses.” Geologists and other historical scientists use this method when there is more than one possible cause or hypothesis to explain the same evidence. In such cases, historical scientists carefully weigh the evidence and what they know about various possible causes to determine which best explains the clues before them. Contemporary philosophers of science have called this the method of inference to the best explanation. That is, when trying to explain the origin of an event or structure in the past, historical scientists compare various hypotheses to see which would, if true, best explain it. They then provisionally affirm the hypothesis that best explains the data as the one most likely to be true.

Causes Now in Operation

But what constitutes the best explanation for the historical scientist? My research showed that among historical scientists it’s generally agreed that best doesn’t mean ideologically satisfying or mainstream; instead, best generally has been taken to mean, first and foremost, most causally adequate. In other words, historical scientists try to identify causes that are known to produce the effect in question. In making such determinations, historical scientists evaluate hypotheses against their present knowledge of cause and effect; causes that are known to produce the effect in question are judged to be better causes than those that are not. For instance, a volcanic eruption is a better explanation for an ash layer in the earth than an earthquake because eruptions have been observed to produce ash layers, whereas earthquakes have not.

This brings us to the great geologist Charles Lyell, a figure who exerted a tremendous influence on 19th century historical science generally and on Charles Darwin specifically. Darwin read Lyell’s magnum opus, The Principles of Geology, on the voyage of the Beagle and later appealed to its uniformitarian principles to argue that observed micro-evolutionary processes of change could be used to explain the origin of new forms of life. The subtitle of Lyell’s Principles summarized the geologist’s central methodological principle: “Being an Attempt to Explain the Former Changes of the Earth's Surface, by Reference to Causes now in Operation.” Lyell argued that when historical scientists are seeking to explain events in the past, they should not invoke unknown or exotic causes, the effects of which we do not know, but instead they should cite causes that are known from our uniform experience to have the power to produce the effect in question (i.e., “causes now in operation”).

Darwin subscribed to this methodological principle. His term for a “presently acting cause” was a vera causa, that is, a true or actual cause. In other words, when explaining the past, historical scientists should seek to identify established causes – causes known to produce the effect in question. For example, Darwin tried to show that the process of descent with modification was the vera causa of certain kinds of patterns found among living organisms. He noted that diverse organisms share many common features. He called these homologies and noted that we know from experience that descendants, although they differ from their ancestors, also resemble them in many ways, usually more closely than they resemble more distantly related organisms. So he proposed descent with modification as a vera causa for homologous structures. That is, he argued that our uniform experience shows that the process of descent with modification from a common ancestor is “causally adequate” or capable of producing homologous features.

And Then There Was One

Contemporary philosophers agree that causal adequacy is the key criterion by which competing hypotheses are adjudicated, but they also show that this process leads to secure inferences only where it can be shown that there is just one known cause for the evidence in question. Philosophers of science Michael Scriven and Elliott Sober, for example, point out that historical scientists can make inferences about the past with confidence when they discover evidence or artifacts for which there is only one cause known to be capable of producing them. When historical scientists infer to a uniquely plausible cause, they avoid the fallacy of affirming the consequent and the error of ignoring other possible causes with the power to produce the same effect. It follows that the process of determining the best explanation often involves generating a list of possible hypotheses, comparing their known or theoretically plausible causal powers with respect to the relevant data, and then, like a detective attempting to identify the murderer, progressively eliminating potential but inadequate explanations until, finally, one remaining causally adequate explanation can be identified as the best. As Scriven (1966: 250) explains, such abductive reasoning (or what he calls “Reconstructive causal analysis”) “proceeds by the elimination of possible causes,” a process that is essential if historical scientists are to overcome the logical limitations of abductive reasoning.

The matter can be framed in terms of formal logic. As C.S. Peirce noted, arguments of the form:

if X, then Y
Y
therefore X

commit the fallacy of affirming the consequent. Nevertheless, as Michael Scriven (1959: 480), Elliott Sober (1988: 1-5), W.P. Alston (1971: 23) and W.B. Gallie (1959: 392) have observed, such arguments can be restated in a logically acceptable form if it can be shown that Y has only one known cause (i.e., X) or that X is a necessary condition (or cause) of Y. Thus, arguments of the form:

X is antecedently necessary to Y,
Y exists,
Therefore, X existed

are accepted as logically valid by philosophers and regarded as persuasive by historical and forensic scientists. Scriven especially emphasized this point: if scientists can discover an effect for which there is only one plausible cause, they can infer the presence or action of that cause in the past with great confidence. For instance, the archaeologist who knows that human scribes are the only known cause of linguistic inscriptions will infer scribal activity upon discovering tablets containing ancient writing.

In many cases, of course, the investigator will have to work his way to a unique cause one painstaking step at a time. For instance, both wind shear and compressor blade failure could explain an airline crash, but the forensic investigator will want to know which one did, or if the true cause lies elsewhere. Ideally, the investigator will be able to discover some crucial piece of evidence or suite of evidences for which there is only one known cause, allowing him to distinguish between competing explanations and eliminate every explanation but the correct one.
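The eliminative procedure just described can be sketched schematically in Python. The sketch below is a toy illustration under invented assumptions – the candidate causes and the table of effects each is “known” to produce are made up for the example – but it encodes the core rule: a hypothesis survives only if it is causally adequate for every piece of evidence in hand.

    # Toy sketch of eliminative reasoning over multiple working hypotheses.
    # The causes and their "known effects" below are invented for illustration.
    known_effects = {
        "volcanic eruption": {"ash layer", "lava flow"},
        "earthquake": {"fault displacement"},
        "flood": {"sediment layer"},
    }

    evidence = {"ash layer"}  # the "manifest effects" to be explained

    # Retain only causes known to be adequate to produce all of the evidence.
    adequate = [c for c, effects in known_effects.items() if evidence <= effects]

    if len(adequate) == 1:
        print(f"uniquely adequate cause: {adequate[0]}")
    else:
        print(f"evidence does not yet discriminate among: {adequate}")

When exactly one causally adequate candidate remains, the inference takes the logically valid form given above; when more than one remains, the investigator must, as in the airline-crash example, seek further evidence that discriminates between the survivors.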

In my study of the methods of the historical sciences, I found that historical scientists, like detectives and forensic experts, routinely employ this type of abductive and eliminative reasoning in their attempts to infer the best explanation.10 In fact, Darwin himself employed this method in The Origin of Species. There he argued for his theory of Universal Common Descent, not because it could predict future outcomes under controlled experimental conditions, but because it could explain already known facts better than rival hypotheses. As he explained in a letter to Asa Gray:

I [...] test this hypothesis [Universal Common Descent] by comparison with as many general and pretty well-established propositions as I can find – in geographical distribution, geological history, affinities &c., &c. And it seems to me that, supposing that such a hypothesis were to explain such general propositions, we ought, in accordance with the common way of following all sciences, to admit it till some better hypothesis be found out. (Darwin 1896, vol. 1: 437.)

DNA by Design: Developing the Argument from Information

What does this investigation into the nature of historical scientific reasoning have to do with intelligent design, the origin of biological information and the mystery of life’s origin? For me, it was critical to deciding whether the design hypothesis could be formulated as a rigorous scientific explanation, as opposed to just an intriguing intuition. I knew from my study of origin-of-life research that the central question facing scientists trying to explain the origin of the first life was this: how did the sequence-specific digital information (stored in DNA and RNA) necessary to building the first cell arise? As Bernd-Olaf Küppers (1990: 170-172) put it, “the problem of the origin of life is clearly basically the equivalent to the problem of the origin of biological information.” My study of the methodology of the historical sciences then led me to ask a series of questions: What is the presently acting cause of the origin of digital information? What is the vera causa of such information? Or: what is the “only known cause” of this effect? Whether I used Lyell’s, Darwin’s or Scriven’s terminology, the question was the same: what type of cause has demonstrated the power to generate information? Based upon both common experience and my knowledge of the many failed attempts to solve the problem with “unguided” pre-biotic simulation experiments and computer simulations, I concluded that there is only one sufficient or “presently acting” cause of the origin of such functionally-specified information. And that cause is intelligence. In other words, I concluded, based on our experience-based understanding of the cause-and-effect structure of the world, that intelligent design is the best explanation for the origin of the information necessary to build the first cell. Ironically, I discovered that if one applies Lyell’s uniformitarian method – a practice much maligned by young earth creationists – to the question of the origin of biological information, the evidence from molecular biology supports a new and rigorous scientific argument for design.

What is Information?

In order to develop this argument and avoid equivocation, it was necessary to carefully define what type of information was present in the cell (and what type of information might, based upon our uniform experience, indicate the prior action of a designing intelligence). Indeed, part of the historical scientific method of reasoning involves first defining what philosophers of science call the explanandum – the entity that needs to be explained. As the historian of biology Harmke Kamminga (1986: 1) has observed, “At the heart of the problem of the origin of life lies a fundamental question: What is it exactly that we are trying to explain the origin of?” Contemporary biology had shown that the cell was, among other things, a repository of information. For this reason, origin-of-life studies had focused increasingly on trying to explain the origin of that information. But what kind of information is present in the cell? This was an important question to answer because the term “information” can be used to denote several theoretically distinct concepts.

In developing a case for design from the information-bearing properties of DNA, it was necessary to distinguish two key notions of information from one another: mere information-carrying capacity, on the one hand, and functionally-specified information, on the other. It was important to make this distinction because the kind of information that is present in DNA (like the information present in machine code or written language) has a feature that the well-known Shannon theory of information does not encompass or describe.

During the 1940s, Claude Shannon at Bell Laboratories developed a mathematical theory of information (1948: 379–423, 623–56) that equated the amount of information transmitted with the amount of uncertainty reduced or eliminated by a series of symbols or characters (Dretske, 1981: 6–10). In Shannon’s theory, the more improbable an event, the more uncertainty it eliminates, and thus the more information it conveys. Shannon generalized this relationship by stating that the amount of information conveyed by an event is inversely proportional to the prior probability of its occurrence. The greater the number of possibilities, the greater the improbability of any one being actualized, and thus the more information is transmitted when a particular possibility occurs.11

Shannon’s theory applies easily to sequences of alphabetic symbols or characters that function as such. Within a given alphabet of x possible characters, the occurrence or placement of a specific character eliminates x-1 other possibilities and thus a corresponding amount of uncertainty. Or put differently, within any given alphabet or ensemble of x possible characters (where each character has an equi-probable chance of occurring), the probability of any one character occurring is 1/x. In systems where the value of x can be known (or estimated), as in a code or language, mathematicians can easily generate quantitative estimates of information-carrying capacity. The greater the number of possible characters at each site, and the longer the sequence of characters, the greater is the information-carrying capacity – or Shannon information – associated with the sequence.
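These relationships are easy to check numerically. The short Python sketch below computes the per-character information-carrying capacity, log2(x), for a few familiar alphabets, assuming, as in the discussion above, that every character is equiprobable:

    from math import log2

    # Per-character information-carrying capacity I = log2(x) for an
    # alphabet of x equiprobable characters.
    for name, x in [("binary digit", 2), ("DNA base", 4), ("English letter", 26)]:
        print(f"{name}: log2({x}) = {log2(x):.2f} bits per character")

Running this prints 1.00 bits for a binary digit, 2.00 bits for a DNA base and about 4.70 bits for an English letter, illustrating how capacity grows with the size of the alphabet.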

The way that nucleotide bases in DNA function as alphabetic or digital characters enabled molecular biologists to calculate the information-carrying capacity of those molecules using the new formalism of Shannon’s theory. Since at any given site along the DNA backbone any one of four nucleotide bases may occur with equal probability (Küppers, 1987: 355-369), the probability of the occurrence of a specific nucleotide at that site equals 1/4 or .25. The information-carrying capacity of a sequence of a specific length n can then be calculated using Shannon’s familiar expression (I = –log₂p) once one computes a probability value (p) for the occurrence of a particular sequence n nucleotides long, where p = (1/4)^n. The probability value thus yields a corresponding measure of information-carrying capacity for a sequence of n nucleotide bases (Schneider 1997: 427-441; Yockey 1992: 246-258).
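A quick numerical check of this expression, offered as a sketch only and assuming equiprobable bases as in the text: for a sequence of n bases, p = (1/4)^n, so I = –log₂p works out to 2n bits.

    from math import log2

    def capacity_bits(n: int) -> float:
        """Shannon information-carrying capacity of n equiprobable DNA bases."""
        p = 0.25 ** n        # probability of one particular n-base sequence
        return -log2(p)      # I = -log2(p), which equals 2n bits

    print(capacity_bits(10))   # 20.0
    print(capacity_bits(100))  # 200.0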

Though Shannon’s theory and equations provided a powerful way to measure the amount of information that could be transmitted across a communication channel, they had important limits. In particular, the theory did not and could not distinguish merely improbable (or complex) sequences of symbols from those that conveyed a message or performed a function. As Warren Weaver made clear in 1949, “The word information in this theory is used in a special mathematical sense that must not be confused with its ordinary usage. In particular, information must not be confused with meaning” (Shannon and Weaver 1949: 8). Information theory could measure the information-carrying capacity of a given sequence of symbols, but it could not distinguish the presence of a meaningful or functional arrangement of symbols from a random sequence.
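Weaver’s caveat can be seen directly in the formalism: under the equiprobable model described above, information-carrying capacity depends only on alphabet size and sequence length, so a meaningful string and gibberish of the same length receive identical measures. A minimal sketch (the two example strings are arbitrary):

    from math import log2

    def capacity_bits(text: str, alphabet_size: int = 26) -> float:
        """Information-carrying capacity; blind to meaning and function."""
        return len(text) * log2(alphabet_size)

    meaningful = "timeandtidewaitfornoman"   # conveys a message
    gibberish = "qxzkvjwpqhdmrtlsbcngyfa"    # same length, no message
    print(capacity_bits(meaningful))  # ~108.1 bits
    print(capacity_bits(gibberish))   # ~108.1 bits - indistinguishable

Both strings measure identically, which is precisely why a further concept – specification – is needed to capture what distinguishes functional sequences, as the following paragraphs explain.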

As scientists applied Shannon information theory to biology, it enabled them to render rough quantitative measures of the information-carrying capacity (or brute complexity or improbability) of DNA sequences and their corresponding proteins. As such, information theory did help to refine biologists’ understanding of one important feature of the crucial biomolecular components on which life depends: DNA and proteins are highly complex, and quantifiably so. Nevertheless, the ease with which information theory applied to molecular biology (to measure information-carrying capacity) created confusion about the sense in which DNA and proteins contain “information.”

Information theory strongly suggested that DNA and proteins possess vast information-carrying capacities, as defined by Shannon’s theory. When molecular biologists have described DNA as the carrier of hereditary information, however, they have meant much more than that technically limited sense of the term. Instead, leading molecular biologists defined biological information so as to incorporate the notion of specificity of function (as well as complexity) as early as 1958 (Crick, 1958: 144, 153). Molecular biologists such as Monod and Crick understood biological information – the information stored in DNA and proteins – as something more than mere complexity (or improbability). Crick and Monod also recognized that sequences of nucleotides and amino acids in functioning bio-macromolecules possessed a high degree of specificity relative to the maintenance of cellular function. As Crick explained in 1958, “By information I mean the specification of the amino acid sequence in protein [...] Information means here the precise determination of sequence, either of bases in the nucleic acid or of amino acid residues in the protein” (1958: 144, 153).

Since the late 1950s, biologists have equated the “precise determination of sequence” with the extra-information-theoretic property of “specificity” or “specification.” Biologists have defined specificity tacitly as ‘necessary to achieving or maintaining function.’ They have determined that DNA base sequences are specified, not by applying information theory, but by making experimental assessments of the function of those sequences within the overall apparatus of gene expression (Judson, 1979: 470-487). Similar experimental considerations established the functional specificity of proteins.

In developing an argument for intelligent design based upon the information present in DNA and other bio-macromolecules, I emphasized that the information in these molecules was functionally-specified and complex, not just complex. Indeed, to avoid equivocation, it was necessary to distinguish:

“information content” from mere “information-carrying capacity,”
“specified information” from mere “Shannon information,” and
“specified complexity” from mere “complexity.”

The first term in each of these couplets refers to sequences in which the function of the sequence depends upon the precise sequential arrangement of the constituent characters or parts, whereas the second terms refer to sequences that do not necessarily perform functions or convey meaning at all. The second terms denote sequences that may be merely improbable or complex; the first terms denote sequences that are both complex and functionally-specified.

In developing an argument for intelligent design from the information-bearing properties of DNA, I acknowledged that merely complex or improbable phenomena or sequences might arise by undirected natural processes. Nevertheless, I argued – based upon our uniform experience – that sequences that are both complex and functionally-specified (rich in information content or specified information) invariably arise only from the activity of intelligent agents. Thus, I argued that the presence of specified information provides a hallmark or signature of a designing intelligence. In making these analytical distinctions in order to apply them to an analysis of biological systems, I was greatly assisted by my conversations and collaboration with William Dembski, who was at the same time (1992-1997) developing a general theory of design detection, which I discuss in detail below.

In the years that followed, I published a series of papers (see Meyer 1998a: 519-56; Meyer 1998b: 117-143; Meyer 2000a: 30-38; Meyer 2003a: 225-285) arguing that intelligent design provides a better explanation than competing chemical evolutionary models for the origin of biological information. To make this argument, I followed the standard method of historical scientific reasoning that I had studied in my doctoral work. In particular, I evaluated the causal adequacy of various naturalistic explanations for the origin of biological information, including those based on chance, law-like necessities and the combination of the two. In each case, I showed (or the scientific literature showed) that such naturalistic models failed to explain the origin of specified information (or specified complexity or information content) starting from purely physical / chemical antecedents. Instead, I argued, based on our experience, that there is a cause – namely, intelligence – that is known to be capable of producing such information. As the pioneering information theorist Henry Quastler (1964: 16) pointed out, “Information habitually arises from conscious activity.” Moreover, based upon our experience (and the findings of contemporary origin-of-life research) it is clear that intelligent design or agency is the only type of cause known to produce large amounts of specified information. Therefore, I argued that the theory of intelligent design provides the best explanation for the information necessary to build the first life.12

Darwin on Trial and Phillip Johnson

While I was still studying historical scientific reasoning in Cambridge in 1987, I had a fateful meeting with a prominent University of California, Berkeley law professor named Phillip Johnson, whose growing interest in the subject of biological origins would transform the contours of the debate over evolution. Johnson and I met at a small Greek restaurant on Free School Lane next to the Old Cavendish Laboratory in Cambridge. The meeting had been arranged by a fellow graduate student who knew Johnson from Berkeley. My friend had told me only that Johnson was “a quirky but brilliant law professor” who “was on sabbatical studying torts,” and he “had become obsessed with evolution.” “Would you talk to him?” he asked. His description and the tone of his request led me to expect a very different figure than the one I encountered. Though my own skepticism about Darwinism had been well cemented by this time, I knew enough of the stereotypical evolution-basher to be skeptical that a late-in-career nonscientist could have stumbled onto an original critique of contemporary Darwinian theory.

Only later did I learn of Johnson’s intellectual pedigree: Harvard B.A., University of Chicago law school graduate at the top of his class, law clerk for Supreme Court Chief Justice Earl Warren, leading constitutional scholar, occupant of a distinguished chair at the University of California, Berkeley. In Johnson, I encountered a man of supple and prodigious intellect who seemed in short order to have found the pulse of the origins issue. Johnson told me that his doubts about Darwinism had started with a visit to the British Natural History Museum, where he learned about the controversy that had raged there earlier in the 1980s. At that time, the museum paleontologists presented a display describing Darwin’s theory as “one possible explanation” of origins. A furor ensued, resulting in the removal of the display when the editors of the prestigious journal Nature and others in the scientific establishment denounced the museum for its ambivalence about accepted fact. Intrigued by the response to such an apparently innocuous exhibit, Johnson decided to investigate further.

Soon thereafter, as Johnson was still casting about for a research topic early in his sabbatical year in London, he stepped off the bus and followed his usual route to his visiting faculty office. Along the way, he passed by a large science bookstore and, glancing in, noticed a pair of books about evolution, The Blind Watchmaker by Richard Dawkins and Evolution: A Theory in Crisis by Michael Denton. Historian of science Thomas Woodward recounts the episode:

His curiosity aroused, he entered the store, picked up copies of both books from a table near the door, and studied the dust jacket blurbs. The two biologists were apparently driving toward diametrically opposite conclusions. Sensing a delicious scientific dialectic, he bought both books and tucked them under his arm as he continued on to his office. (Woodward 2003: 69.)

The rest, as they say, is history. Johnson began to read whatever he could find on the issue: Gould, Ruse, Ridley, Dawkins, Denton and many others. What he read made him even more suspicious of evolutionary orthodoxy. “Something about the Darwinists’ rhetorical style,” he told me later, “made me think they had something to hide.”

An extensive examination of evolutionary literature confirmed this suspicion. Darwinist polemic revealed a surprising reliance upon arguments that seemed to assume rather than demonstrate the central claim of neo-Darwinism, namely, that life had evolved via a strictly undirected natural process. Johnson also observed an interesting contrast between biologists' technical papers and their popular defenses of evolutionary theory. He discovered that biologists acknowledged many significant difficulties with both standard and newer evolutionary models when writing in scientific journals. Yet, when defending basic Darwinist commitments (such as the common ancestry of all life and the creative power of the natural selection / mutation mechanism) in popular books or textbooks, Darwinists employed an evasive and moralizing rhetorical style to minimize problems and belittle critics. Johnson began to wonder why, given mounting difficulties, Darwinists remained so confident that all organisms had evolved naturally from simpler forms.

In the book Darwin on Trial, Johnson (1991) argued that evolutionary biologists remain confident about neo-Darwinism, not because empirical evidence generally supports the theory, but instead because their perception of the rules of scientific procedure virtually prevents them from considering any alternative view. Johnson cited, among other things, a communiqué from the National Academy of Sciences (NAS) issued to the Supreme Court during the Louisiana “creation science” trial. The NAS insisted that “the most basic characteristic of science” is a “reliance upon naturalistic explanations.”

While Johnson accepted this convention, called “methodological naturalism,” as an accurate description of how much of science operates, he argued that treating it as a normative rule when seeking to establish that natural processes alone produced life assumes the very point that neo-Darwinists are trying to establish. Johnson reminded readers that Darwinism does not just claim that evolution (in the sense of change over time) has occurred. Instead, it purports to establish that the major innovations in the history of life arose by purely natural mechanisms – that is, without any intelligent direction or design. Thus, Johnson distinguished the various meanings of the term “evolution” (such as change over time or common ancestry) from the central claim of Darwinism, namely, the claim that a purely undirected and unguided process had produced the appearance of design in living organisms. Following Richard Dawkins, a staunch modern defender of Darwinism, Johnson called this latter idea “the Blind Watchmaker thesis” to make clear that Darwinism as a theory is incompatible with the design hypothesis. In any case, he argued, modern Darwinists refuse to consider the possibility of design because they think the rules of science forbid it.

Yet if the design hypothesis must be denied consideration from the outset, and if, as the U.S. National Academy of Sciences also asserted, exclusively negative argumentation against evolutionary theory is “unscientific,” then, Johnson (1991: 8) observed, “the rules of argument [...] make it impossible to question whether what we are being told about evolution is really true.” Defining opposing positions out of existence “may be one way to win an argument,” but, said Johnson, it scarcely suffices to demonstrate the superiority of a protected theory.

When I first met Johnson at the aforementioned Greek restaurant it was not long after he had started his investigation of Darwinism. Nevertheless, we came to an immediate meeting of minds, albeit from different starting points. Johnson saw that, as a matter of logic, the convention of methodological naturalism forced scientists into a question-begging affirmation of the proposition that life and humankind had arisen “by a purposeless and natural process that did not have him in mind,” as the neo-Darwinist George Gaylord Simpson (1967: 45) had phrased it. For my part, I had come to question methodological naturalism because it seemed to prevent historical scientists from considering all the possible hypotheses that might explain the evidence – despite a clear methodological desideratum to consider them all. How could an historical scientist claim that he or she had inferred the best explanation if the causal adequacy of some hypotheses were arbitrarily excluded from consideration? For the method of multiple competing hypotheses to work, hypotheses must be allowed to compete without artificial restrictions on the competition.

In any case, when Darwin on Trial was published in 1991 it created a minor media sensation with magazines and newspapers all over America either reviewing the book or profiling the eccentric Berkeley professor who had dared to take on Darwin. Major science journals including Nature, Science and Scientific American also reviewed Darwin on Trial. The reviews, including one by Stephen J. Gould, were uniformly critical and even hostile. Yet these reviews helped publicize Johnson’s critique and attracted many scientists who shared Johnson’s skepticism about neo-Darwinism. This allowed Johnson to do something that, until that time, hadn’t been done: to bring together dissenting scientists from around the world.

Darwin’s Black Box and Michael Behe

One of those scientists, a tenured biochemist at Lehigh University named Michael Behe, had come to doubt Darwinian evolution in the same way that Johnson had – by reading Denton’s Evolution: A Theory in Crisis. Behe was a Roman Catholic and had been raised to accept Darwinism as the way God chose to create life. Thus, he had no theological objections to Darwinian evolution. For years he had accepted it without question. When he finished Denton’s book, he still had no theological objections to evolution, but he did have serious scientific doubts. He soon began to investigate what the evidence from his own field of biochemistry had to say about the plausibility of the neo-Darwinian mechanism. Although he saw no reason to doubt that natural selection could produce relatively minor biological changes, he became extremely skeptical that the Darwinian mechanism could produce the kind of functionally integrated complexity that characterizes the inner workings of the cell. Intelligent design, he concluded, must also have played a role.

As his interest grew, he began teaching a freshman course on the evolution controversy. Later, in 1992, he wrote a letter to Science defending Johnson’s new book after it had been panned in the review that appeared there. When Johnson saw the letter in Science, he contacted Behe and eventually invited him to a symposium at Southern Methodist University in Texas, where Johnson debated the Darwinist philosopher of science Michael Ruse. The meeting was significant for two reasons. First, as Behe (2006: 37-47) explained, the scientists skeptical of Darwin who were present at the debate were able to experience what they already believed intellectually – that they had strong arguments that could withstand high-level scrutiny from their peers. Second, at SMU, many of the leaders of the intelligent design research community would meet together for the first time in one place. Before, we had each been solitary skeptics, unsure of how to proceed against an entrenched scientific paradigm. Now we understood that we were part of an interdisciplinary intellectual community. After the symposium, Johnson arranged a larger meeting the following year for a core group of dissidents at Pajaro Dunes, California (shown in the film Unlocking the Mystery of Life). There we talked science and strategy and, at Johnson’s prompting, joined an e-mail listserv so that we would remain in contact and hone our ideas. At Pajaro Dunes, “the movement” congealed.

Behe, in particular, used the new listserv to test and refine the various arguments for a book he was working on. Within three years, Darwin’s Black Box appeared with The Free Press, a major New York trade publisher. The book went on to sell a quarter million copies.

In Darwin’s Black Box, Behe pointed out that over the last 30 years, biologists have discovered an exquisite world of nanotechnology within living cells – complex circuits, molecular motors and other miniature machines. For example, bacterial cells are propelled by tiny rotary engines called flagellar motors that rotate at speeds up to 100,000 rpm. These engines look as if they were designed by the Mazda corporation, with many distinct mechanical parts (made of proteins) including rotors, stators, O-rings, bushings, U-joints and drive shafts. (See Figure 2). Behe noted that the flagellar motor depends on the coordinated function of 30 protein parts. Remove one of these necessary proteins and the rotary motor simply doesn't work. The motor is, in Behe's terminology, “irreducibly complex.”

This, he argued, creates a problem for the Darwinian mechanism. Natural selection preserves or “selects” functional advantages. If a random mutation helps an organism survive, it can be preserved and passed on to the next generation. Yet the flagellar motor does not function unless all of its thirty parts are present. Thus, natural selection can “select” or preserve the motor once it has arisen as a functioning whole, but it can't produce the motor in a step-by-step Darwinian fashion.

Natural selection purportedly builds complex systems from simpler structures by preserving a series of intermediate structures, each of which must perform some function. In the case of the flagellar motor, most of the critical intermediate stages – such as the 29- or 28-part versions of the flagellar motor – perform no function for natural selection to preserve. This leaves the origin of the flagellar motor, and many complex cellular machines, unexplained by the mechanism – natural selection – that Darwin specifically proposed to replace the design hypothesis.
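This argument can be restated in computational terms: cumulative selection is a search that keeps a change only if the system still functions afterward. The following toy Python model is a minimal sketch of that logic – the thirty-part target and the all-or-nothing function test are simplifying assumptions of the illustration, not a simulation of real biochemistry:

    import random

    N_PARTS = 30  # toy stand-in for the motor's thirty protein parts

    def functional(parts):
        # All-or-nothing test: the "motor" works only when every part is
        # present -- the irreducible-complexity premise.
        return all(parts)

    def cumulative_selection(steps=100_000):
        # Selection keeps a mutation only if the system functions
        # afterward; partial assemblies confer no advantage here.
        state = [False] * N_PARTS
        for _ in range(steps):
            candidate = state[:]
            candidate[random.randrange(N_PARTS)] = True  # add one part
            if functional(candidate):
                state = candidate  # preserved only when functional
        return sum(state)

    print(cumulative_selection())  # 0: no functional intermediates to preserve

Under the all-or-nothing premise the search never leaves its starting point; whether real molecular systems actually satisfy that premise is, of course, precisely what the co-option debate below contests.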

Is there a better explanation? Based upon our uniform experience, we know of only one type of cause that produces irreducibly complex systems – namely, intelligence. Indeed, whenever we encounter such complex systems – whether integrated circuits or internal combustion engines – and we know how they arose, invariably a designing intelligence played a role.

Figure 2: Flagellar Motor

The strength of Behe's argument can be judged in part by the responses of his critics. The neo-Darwinists have had ten years to respond and have so far mustered only vague stories about natural selection building irreducibly complex systems (like the flagellar motor) by “co-opting” simpler functional parts from other systems. For example, some of Behe’s critics, such as Kenneth Miller of Brown University, have suggested that the flagellar motor might have arisen from the functional parts of other simpler systems or from simpler subsystems of the motor. He and others have pointed to a tiny molecular syringe called the type III secretory system (or TTSS) – one that is sometimes found in bacteria without the other parts of the flagellar motor present – to illustrate this possibility. Since the type III secretory system is made of ten or so proteins that are also found in the thirty-protein motor, and since this tiny pump does perform a function, Professor Miller (2004: 81-97) has intimated13 that the bacterial flagellar motor might have arisen from this smaller pump.

While it’s true that the type III secretory system can function separately from the other parts of the flagellar motor, attempts to explain the origin of the flagellar motor by co-option of the TTSS face at least three key difficulties. First, the other twenty or so proteins in the flagellar motor are unique to it and are not found in any other living system. This raises the question: from where were these other protein parts co-opted? Second, as microbiologist Scott Minnich (Minnich and Meyer 2004: 295-304) of the University of Idaho points out, even if all the genes and protein parts were somehow available to make a flagellar motor during the evolution of life, the parts would need to be assembled in a specific temporal sequence, similar to the way an automobile is assembled in a factory. Yet, in order to choreograph the assembly of the flagellar motor, present-day bacteria need an elaborate system of genetic instructions as well as many other protein machines to regulate the timing of the expression of these assembly instructions. Arguably, this system is itself irreducibly complex. Thus, advocates of co-option tacitly presuppose the need for the very thing that the co-option hypothesis seeks to explain: a functionally interdependent system of proteins (and genes). Co-option only explains irreducible complexity by presupposing irreducible complexity. Third, analyses of the gene sequences of the two systems (Saier 2004: 113-115) suggest that the flagellar motor arose first and the pump came later. In other words, if anything, the syringe evolved from the motor, not the motor from the syringe. (See Behe 2006b: 255-272 for Behe’s response to his critics.)

An Institutional Home

In the same year, 1996, that Behe’s book appeared, the Center for Science and Culture was launched as part of the Seattle-based Discovery Institute. The Center began with a research fellowship program to support the work of scientists and scholars such as Michael Behe, Jonathan Wells and David Berlinski who were challenging neo-Darwinism or developing the alternative theory of intelligent design. The Center has since become the institutional hub for an international group of scientists and scholars who are challenging scientific materialism or developing the theory of intelligent design.

William Dembski and The Design Inference

One of the first Center-supported research projects came to fruition two years later, when mathematician and probability theorist William Dembski (1998) completed a monograph for Cambridge University Press titled The Design Inference. In this book, Dembski argued that rational agents often infer or detect the prior activity of other designing minds by the character of the effects they leave behind. Archaeologists assume, for example, that rational agents produced the inscriptions on the Rosetta Stone. Insurance fraud investigators detect certain “cheating patterns” that suggest intentional manipulation of circumstances rather than natural disasters. Cryptographers distinguish between random signals and those that carry encoded messages. Dembski’s work showed that recognizing the activity of intelligent agents constitutes a common and fully rational mode of inference.

More importantly, Dembski’s work explicated criteria by which rational agents recognize the effects of other rational agents, and distinguish them from the effects of natural causes. He argued that systems or sequences that have the joint properties of “high complexity” (or low probability) and “specification” invariably result from intelligent causes, not chance or physical-chemical laws (see Dembski 1998: 36-66). Dembski noted that complex sequences are those that exhibit an irregular and improbable arrangement that defies expression by a simple rule or algorithm. According to Dembski, a specification, on the other hand, is a match or correspondence between a physical system or sequence and a set of independent functional requirements or constraints. To illustrate these concepts (of complexity and specification), consider the following three sets of symbols:

“inetehnsdysk]idfawqnz,mfdifhsnmcpew,ms.s/a”
“Time and tide wait for no man.”
“ABABABABABABABABABABABABAB”

Both the first and second sequences shown above are complex because both defy reduction to a simple rule. Each represents a highly irregular, aperiodic and improbable sequence of symbols. The third sequence is not complex, but is instead highly ordered and repetitive. Of the two complex sequences, only one exemplifies a set of independent functional requirements – i.e., is specified. English has a number of such functional requirements. For example, to convey meaning in English one must employ existing conventions of vocabulary (associations of symbol sequences with particular objects, concepts or ideas) and existing conventions of syntax and grammar (such as “every sentence requires a subject and a verb”). When arrangements of symbols “match” or utilize existing vocabulary and grammatical conventions (i.e., functional requirements), communication can occur. Such arrangements exhibit “specification.” The second sequence (“Time and tide wait for no man”) clearly exhibits such a match between itself and the preexisting requirements of vocabulary and grammar. It has employed these conventions to express a meaningful idea.

Of the three sequences above, only the second (“Time and tide wait for no man”) manifests both of the jointly necessary indicators of a designed system. The third sequence lacks complexity, though it does exhibit a simple periodic pattern, a specification of sorts. The first sequence is complex, but, as we have seen, not specified. Only the second sequence exhibits both complexity and specification. Thus, according to Dembski’s theory, only the second sequence – not the first or the third – implicates an intelligent cause, as indeed our intuition tells us. (See Dembski 1998).
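Dembski's twin criteria can be mimicked in a few lines of code. In the following Python sketch, incompressibility serves as a rough proxy for complexity, and a match against a small English word list serves as a rough proxy for specification; both proxies and both thresholds are illustrative assumptions of mine, not Dembski's formal measures:

    import zlib

    # A small "independent functional requirement": a toy English vocabulary.
    VOCABULARY = {"time", "and", "tide", "wait", "for", "no", "man"}

    def is_complex(seq, threshold=0.8):
        # Irregular, aperiodic strings resist compression (ratio near or
        # above 1.0); repetitive ones like "ABAB..." compress well.
        return len(zlib.compress(seq.encode())) / len(seq.encode()) > threshold

    def is_specified(seq):
        # "Specified" here means most tokens match the independent
        # vocabulary -- a crude stand-in for functional fit.
        words = [w.strip(".,").lower() for w in seq.split()]
        return bool(words) and sum(w in VOCABULARY for w in words) / len(words) > 0.5

    for s in ("inetehnsdysk]idfawqnz,mfdifhsnmcpew,ms.s/a",
              "Time and tide wait for no man.",
              "ABABABABABABABABABABABABAB"):
        print(repr(s), "complex:", is_complex(s), "specified:", is_specified(s))

Run on the three sequences, the sketch classifies only the English sentence as both complex and specified, mirroring the classification described above.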

As it turns out, these criteria are equivalent (or “isomorphic”) to the notion of specified complexity or information content. Thus, Dembski’s work suggested that “high information content” or “specified information” or “specified complexity” indicates prior intelligent activity. This theoretical insight comported with common, as well as scientific, experience. Few rational people would, for example, attribute hieroglyphic inscriptions to natural forces such as wind or erosion; instead, they would immediately recognize the activity of intelligent agents. Dembski’s work shows why: Our reasoning involves a comparative evaluation process that he represents with a device he calls “the explanatory filter.” The filter outlines a formal method by which scientists (as well as ordinary people) decide among three different types of explanations: chance, necessity and design. (See Figure 3). His “explanatory filter” constituted, in effect, a scientific method for detecting the effects of intelligence.

Figure 3: Explanatory Filter
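The filter's decision logic amounts to a short chain of eliminations. The sketch below is a minimal Python rendering of that control flow; the three boolean inputs stand in for the substantive analyses (of law-like regularity, probability and specification) that an actual application of the filter requires:

    def explanatory_filter(explained_by_law, is_complex, is_specified):
        # Dembski's three-way decision, reduced to its control flow.
        if explained_by_law:      # regular, high-probability events
            return "necessity"
        if not is_complex:        # probable enough to ascribe to chance
            return "chance"
        if is_specified:          # complex AND specified
            return "design"
        return "chance"           # complex but unspecified

    # Inputs keyed (as assumptions) to the three example sequences:
    print(explanatory_filter(False, True, False))  # gibberish -> chance
    print(explanatory_filter(True, False, True))   # "ABAB..." -> necessity
    print(explanatory_filter(False, True, True))   # English sentence -> design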

Dembski’s academic credentials were impeccable, and since the book had been published after a rigorous peer review process as part of the prestigious Cambridge University Press monograph series, his argument was difficult to ignore. Dembski’s formal method also reinforced the argument that I was making simultaneously, namely, that the specified information in DNA is best explained by reference to an intelligent cause rather than by reference to chance, necessity or a combination of the two (Meyer 1998a; Meyer 1998b; Meyer 2003a; Meyer et al. 2003). Indeed, the coding regions of the nucleotide base sequences in DNA manifest both complexity and specification just as does the second of the three symbol strings in the preceding illustration.

Design Beyond Biology

Meanwhile, the fledgling Center for Science and Culture was working with scientists and scholars around the world to develop the case for intelligent design not only in biology but also in the physical sciences. Since then, its fellows have written more than sixty books and hundreds of articles (including many peer-reviewed scientific articles challenging Darwinian evolution or, in some cases, explicitly arguing for intelligent design [see Meyer 2004: 213-239; see http://www.discovery.org/csc for other peer-reviewed books and articles supporting intelligent design]), and have appeared on hundreds of television and radio broadcasts, many of them national or international. In addition, the Center co-produced four science documentaries and helped improve science education policy in seven states and in the U.S. Congress. These efforts have generated an international discussion about the growing evidence for design in nature.

Since so much of the intelligent design debate concerns biology, many journalists covering the debate – particularly those guided by the boilerplate of the 1925 Scopes Monkey Trial and its Hollywood embodiment, Inherit the Wind – fail to mention that the theory of intelligent design is larger than biology. In recent decades, molecular and cell biology have provided powerful evidence of design, but so too have chemistry, astronomy and physics.

Consider, for example, the role that physics has played in reviving the case for intelligent design. Since Fred Hoyle’s prediction and discovery of the resonance levels of carbon in 1954 (Hoyle 1954: 121-146), physicists have discovered that the existence of life in the universe depends upon a number of precisely balanced physical factors (see Giberson 1997: 63-90; Yates 1997: 91-104). The constants of physics, the initial conditions of the universe and many of its other contingent features appear delicately balanced to allow for the possibility of life. Even very slight alterations in the values of many independent factors – such as the expansion rate of the universe, the speed of light, or the precise strength of gravitational or electromagnetic attraction – would render life impossible. Physicists now refer to these factors as “anthropic coincidences” and to the fortunate convergence of all these coincidences as the “fine-tuning of the universe.” Many have noted that this fine-tuning strongly suggests design by a pre-existent intelligence. As physicist Paul Davies (1988: 203) has put it, “The impression of design is overwhelming.”

To see why, consider the following illustration. Imagine a cosmic explorer has just stumbled into the control room for the whole universe. There he discovers an elaborate “universe creating machine,” with rows and rows of dials each with many possible settings. As he investigates, he learns that each dial represents some particular parameter that has to be calibrated with a precise value in order to create a universe in which life can survive. One dial represents the possible settings for the strong nuclear force, one for the gravitational constant, one for Planck’s constant, one for the speed of light, one for the ratio of the neutron mass to the proton mass, one for the strength of electromagnetic attraction and so on. As our cosmic explorer examines the dials, he finds that the dials can be easily spun to different settings – that they could have been set otherwise. Moreover, he determines by careful calculation (he is a physicist) that even slight alterations in any of the dial settings would alter the architecture of the universe such that life would cease to exist. Yet for some reason each dial sits with just the exact value necessary to keep the universe running – like an already-opened bank safe with multiple dials in which every dial is found set to just the right value. What should one infer about how these dial settings came to be set?

Not surprisingly, many physicists have been asking the same question about the anthropic coincidences. And for many,14 the design hypothesis seems the most obvious and intuitively plausible answer to this question. As George Greenstein (1988: 26-27) muses, “the thought insistently arises that some supernatural agency, or rather Agency, must be involved.” As Fred Hoyle (1982: 16) commented, “a commonsense interpretation of the facts suggests that a superintellect has monkeyed with physics, as well as chemistry and biology, and that there are no blind forces worth speaking about in nature.” Or as he put it in his book The Intelligent Universe, “A component has evidently been missing from cosmological studies. The origin of the Universe, like the solution of the Rubik cube, requires an intelligence” (Hoyle 1983: 189). Many physicists now concur. They would argue that – in effect – the dials in the cosmic control room appear finely-tuned because someone carefully set them that way.

In the 2004 book The Privileged Planet, astronomer Guillermo Gonzalez and philosopher Jay Richards extended this fine-tuning argument to planet earth (Gonzalez and Richards 2004). They showed first that the earth’s suitability as a habitable planet depends on a host of very improbable conditions – conditions so improbable, in fact, as to call into question the widespread assumption that habitable planets are common in our galaxy or even the universe. Further, by drawing on a host of recent astronomical discoveries, Gonzalez and Richards also showed that the set of improbable conditions that render the earth habitable also make it an optimal place for observing the cosmos and making various scientific discoveries. As they put it, habitability correlates with discoverability. They argued that the best explanation for this correlation is that the earth was intelligently designed to be a habitable planet and a platform for making scientific discovery. The Privileged Planet makes a nuanced and cumulative argument15 – one that resists easy summation – but their groundbreaking advance of the fine-tuning argument for design proved persuasive enough that such scientists as Cambridge’s Simon Conway Morris and Harvard’s Owen Gingerich endorsed the book, and David Hughes (2005: 113), a vice-president of the Royal Astronomical Society, gave it an enthusiastic review in the pages of The Observatory.

Three Philosophical Objections

On this and other fronts, advocates of the theory of intelligent design have stirred up debate at the highest levels of the scientific community. Opponents have often responded with philosophical rather than evidential objections. Three of the most common are: (1) that the theory of intelligent design is an argument from ignorance, (2) that it represents the same kind of fallacious argument from analogy that David Hume criticized in the 18th century and (3) that the theory of intelligent design is not “scientific.” Let us examine each of these arguments in turn.

An Argument from Knowledge

Opponents of intelligent design frequently characterize the theory as an argument from ignorance. According to this criticism, anyone who makes a design inference from the presence of information or irreducible complexity in the biological world uses our present ignorance of an adequate materialistic cause of these phenomena as the sole basis for inferring an intelligent cause. Since, the objection goes, ‘design advocates can’t imagine a natural process that can produce biological information or irreducibly complex systems, they resort to invoking the mysterious notion of intelligent design.’ In this view, intelligent design functions not as an explanation, but as a placeholder for ignorance.

On the contrary, the arguments for intelligent design described in this essay do not constitute fallacious arguments from ignorance. Arguments from ignorance occur when evidence against a proposition is offered as the sole grounds for accepting another, alternative proposition. The inferences and arguments to design made by contemporary design theorists don’t commit this fallacy. True, the design arguments employed by contemporary advocates of intelligent design do depend in part upon negative assessments of the causal adequacy of competing materialistic hypotheses. And clearly, the lack of an adequate materialistic cause does provide part of the grounds for inferring design from information or irreducibly complex structures in the cell. Nevertheless, this lack is only part of the basis for inferring design. Advocates of the theory of intelligent design also infer design because we know that intelligent agents can and do produce information-rich and irreducibly complex systems. In other words, we have positive experience-based knowledge of an alternative cause that is sufficient to have produced such effects. That cause is intelligence. Thus, design theorists infer design not just because natural processes do not or cannot explain the origin of specified information or irreducible complexity in biological systems, but also because we know based upon our uniform experience that only intelligent agents produce these effects. In other words, biological systems manifest distinctive and positive hallmarks of intelligent design – ones that in any other realm of experience would trigger the recognition of an intelligent cause.

Thus, Michael Behe has inferred design not only because the mechanism of natural selection cannot (in his judgment) produce “irreducibly complex” systems, but also because in our experience “irreducible complexity” is a feature of systems known always to result from intelligent design. That is, whenever we see systems that have the feature of irreducible complexity and we know the causal story about how such systems originated, invariably “intelligent design” played a role in the origin of such systems. Thus, Behe infers intelligent design as the best explanation for the origin of irreducible complexity in cellular molecular motors and circuits based upon what we know, not what we do not know, about the causal powers of intelligent agents and natural processes, respectively.

Similarly, the “specified complexity” or “specified information” of DNA implicates a prior intelligent cause, not only because (as I have argued) materialistic scenarios based upon chance, necessity and the combination of the two fail to explain the origin of such information, but also because we know that intelligent agents can and do produce information of this kind. In other words, we have positive experience-based knowledge of an alternative cause that is sufficient to have produced such effects, namely, intelligence. To quote Henry Quastler again, “Information habitually arises from conscious activity” (Quastler 1964: 16). For this reason, specified information also constitutes a distinctive hallmark (or signature) of intelligence. Indeed, in all cases where we know the causal origin of such information, experience has shown that intelligent design played a causal role. Thus, when we encounter such information in the bio-macromolecules necessary to life, we may infer – based upon our knowledge of established cause-effect relationships (i.e., “presently acting causes”) – that an intelligent cause operated in the past to produce the information necessary to the origin of life.

Thus, contemporary design advocates employ the standard uniformitarian method of reasoning used in all historical sciences. That contemporary arguments for design necessarily include critical evaluations of the causal adequacy of competing hypotheses is entirely appropriate. All historical scientists must compare the causal adequacy of competing hypotheses in order to make a judgment as to which hypothesis is best. We would not say, for example, that an archeologist had committed a “scribe of the gaps” fallacy simply because – after rejecting the hypothesis that an ancient hieroglyphic inscription was caused by a sand storm – he went on to conclude that the inscription had been produced by a human scribe. Instead, we recognize that the archeologist has made an inference based upon his experience-based knowledge that information-rich inscriptions invariably arise from intelligent causes, not solely upon his judgment that there are no suitably efficacious natural causes that could explain the inscription.

Not Analogy but Identity

Nor does the design argument from biological information depend on the analogical reasoning that Hume critiqued, since it does not depend upon assessments of degree of similarity. The argument does not depend upon the similarity of DNA to a computer program or human language, but upon the presence of an identical feature (“information” defined as “complexity and specification”) in both DNA and all other designed systems, languages or artifacts. For this reason, the design argument from biological information does not represent an argument from analogy of the sort that Hume criticized, but an “inference to the best explanation.” Such arguments turn not on assessments of the degree of similarity between effects, but instead on an assessment of the adequacy of competing possible causes for the same effect. Because we know intelligent agents can (and do) produce complex and functionally specified sequences of symbols and arrangements of matter (information so defined), intelligent agency qualifies as a sufficient causal explanation for the origin of this effect. In addition, since naturalistic scenarios have proven universally inadequate for explaining the origin of such information, mind or creative intelligence now stands as the best explanation for the origin of this feature of living systems.

But Is It Science?

Of course, many simply refuse to consider the design hypothesis on grounds that it does not qualify as “scientific.” Such critics (see Ruse 1988: 103) affirm the extra-evidential principle mentioned above known as methodological naturalism or methodological materialism. Methodological naturalism asserts that, as a matter of definition, for a hypothesis, theory or explanation to qualify as “scientific,” it must invoke only materialistic entities. Thus, critics say, the theory of intelligent design does not qualify. Yet, even if one grants this definition, it does not follow that some nonscientific (as defined by methodological naturalism) or metaphysical hypothesis couldn’t constitute a better, more causally adequate, explanation of some phenomena than competing materialistic hypotheses. Design theorists argue that, whatever its classification, the design hypothesis does constitute a better explanation than its materialistic rivals for the origin of biological information, irreducibly complex systems and the fine-tuning of the constants of physics. Surely, simply classifying an argument as “not scientific” does not refute it.

In any case, methodological materialism now lacks justification as a normative definition of science. First, attempts to justify methodological materialism by reference to metaphysically neutral (that is, non-question-begging) demarcation criteria have failed (see Meyer 2000b; Meyer 2000c; Laudan 2000a: 337-350; Laudan 2000b: 351-355; Plantinga 1986a: 18-26; Plantinga 1986b: 22-34). Second, to assert methodological naturalism as a normative principle for all of science has a negative effect on the practice of certain scientific disciplines, especially the historical sciences. In origin-of-life research, for example, methodological materialism artificially restricts inquiry and prevents scientists from considering some hypotheses that might provide the best, most causally adequate explanations. If it is to be a truth-seeking endeavor, origin-of-life research must address not the question “Which materialistic scenario seems most adequate?” but rather “What actually caused life to arise on Earth?” Clearly, it is at least logically possible that the answer to the latter question is this: “Life was designed by an intelligent agent that existed before the advent of humans.” If one accepts methodological naturalism as normative, however, scientists may never consider the design hypothesis as possibly true. Such an exclusionary logic diminishes the significance of any claim of theoretical superiority for any remaining hypothesis and raises the possibility that the best “scientific” explanation (as defined by methodological naturalism) may not be the best in fact.

As many historians and philosophers of science now recognize, scientific theory-evaluation is an inherently comparative enterprise. Theories that gain acceptance in artificially constrained competitions can claim to be neither ‘most probably true’ nor ‘most empirically adequate.’ At best, such theories can be considered the ‘most probably true or adequate among an artificially limited set of options.’ Thus, an openness to the design hypothesis would seem necessary to any fully rational historical science – that is, to one that seeks the truth, “no holds barred” (Bridgman 1955: 535). An historical science committed to following the evidence wherever it leads will not exclude hypotheses a priori on metaphysical grounds. Instead, it will employ only metaphysically neutral criteria – such as explanatory power and causal adequacy – to evaluate competing hypotheses. This more open (and seemingly rational) approach to scientific theory evaluation suggests the theory of intelligent design as the best, most causally adequate explanation for the origin of certain features of the natural world, especially including the origin of the specified information necessary to build the first living organism.

Conclusion

Of course, many continue to dismiss intelligent design as nothing but “religion masquerading as science.” They point to the theory’s obviously friendly implications for theistic belief as a justification for classifying and dismissing the theory as “religion.” But such critics confuse the implications of the theory of intelligent design with its evidential basis. The theory of intelligent design may well have theistic implications. But that is not grounds for dismissing it. Scientific theories must be judged by their ability to explain evidence, not by whether they have undesirable implications. Those who say otherwise flout logic and overlook the clear testimony of the history of science. For example, many scientists initially rejected the Big Bang theory because it seemed to challenge the idea of an eternally self-existent universe and pointed to the need for a transcendent cause of matter, space and time. But scientists eventually accepted the theory despite such apparently unpleasant implications because the evidence strongly supported it. Today a similar metaphysical prejudice confronts the theory of intelligent design. Nevertheless, it too must be evaluated on the basis of the evidence, not our philosophical preferences or concerns about its possible religious implications. As Antony Flew, the long-time atheist philosopher who has come to accept the case for design, advises: we must “follow the evidence wherever it leads.”

Acknowledgement: The author would like to acknowledge the assistance of Dr. Jonathan Witt in the preparation of parts of this article.


Endnotes

  1. Aquinas used the argument from design as one of his proofs for the existence of God.
  2. Kepler’s belief that the work of God is evident in nature is illustrated by his statement in the Harmonies of the World that God “the light of nature promote[s] in us the desire for the light of grace, that by its means [God] ma[y] transport us into the light of glory” (Kepler 1995: 240. See also Kline 1980: 39).
  3. Kant sought to limit the scope of the design argument, but did not reject it wholesale. Though he rejected the argument as a proof of the transcendent and omnipotent God of Judeo-Christian theology, he still accepted that it could establish the reality of a powerful and intelligent author of the world. In his words, “physical-theological argument can indeed lead us to the point of admiring the greatness, wisdom, power, etc., of the Author of the world, but can take us no further” (Kant 1963: 523).
  4. The effort to explain biological organisms was reinforced by a trend in science to provide fully naturalistic accounts for other phenomena such as the precise configuration of the planets in the solar system (Laplace) and the origin of geological features (Lyell and Hutton). It was also reinforced (and in large part made possible) by an emerging positivistic tradition in science that increasingly sought to exclude appeals to supernatural or intelligent causes from science by definition (see Gillespie 1987: 1-49).
  5. “[T]he fact of evolution was not generally accepted until a theory had been put forward to suggest how evolution had occurred, and in particular how organisms could become adapted to their environment; in the absence of such a theory, adaptation suggested design, and so implied a creator. It was this need which Darwin's theory of natural selection satisfied” (Smith, 1975: 30).
  6. “There is absolutely no disagreement among professional biologists on the fact that evolution has occurred. [...] But the theory of how evolution occurs is quite another matter, and is the subject of intense dispute” (Futuyma 1985: 3-13). Of course, to admit that natural selection cannot explain the appearance of design is in effect to admit that it has failed to perform the role that is claimed for it as a “designer substitute.”
  7. Note that similar developments were already taking place in Germany, starting with W.-E. Lönnig’s Auge – widerlegt Zufalls-Evolution [=The Eye Disproves Accidental Evolution] (Stuttgart: Selbstverlag, 1976) and Henning Kahle's book, Evolution – Irrweg moderner Wissenschaft? [=Evolution – Error of Modern Science?] (Bielefeld: Moderner Buch Service, 1980).
  8. Commenting on events at this symposium, mathematician David Berlinski writes, “However it may operate in life, randomness in language is the enemy of order, a way of annihilating meaning. And not only in language, but in any language-like system—computer programs, for example. The alien influence of randomness in such systems was first noted by the distinguished French mathematician M. P. Schützenberger, who also marked the significance of this circumstance for evolutionary theory.”
  9. For instance, it also received praise in the Journal of College Science Teaching and in a major review essay by Klaus Dose, “The Origin of Life: More Questions than Answers,” Interdisciplinary Science Reviews, 13.4, 1988.
  10. Gian Capretti (1983: 143) has developed the implications of Peircian abduction. Capretti and others explore the use of abductive reasoning by Sherlock Holmes in detective fiction of Sir Arthur Conan Doyle. Capretti attributes the success of Holmesian abductive “reconstructions” to a willingness to employ a method of “progressively eliminating hypotheses.”
  11. Moreover, information increases as improbabilities multiply. The probability of getting four heads in a row when flipping a fair coin is 1/2 × 1/2 × 1/2 × 1/2, or (1/2)⁴. Thus, the probability of attaining a specific sequence of heads and/or tails decreases exponentially as the number of trials increases. The quantity of information increases correspondingly. Even so, information theorists found it convenient to measure information additively rather than multiplicatively. Thus, the common mathematical expression (I = –log₂p) for calculating information converts probability values into informational measures through a negative logarithmic function, where the negative sign expresses an inverse relationship between information and probability. (A short numerical sketch of this conversion follows these endnotes.)
  12. I later extended this information argument to an analysis of the geologically sudden appearance of animal body plans in the Cambrian period. In a peer-reviewed article published in 2004 in the Proceedings of the Biological Society of Washington, a journal published out of the Smithsonian Institution, I argued that intelligent design provided the best explanation of the quantum increase in biological information that was necessary to build the Cambrian animals. In constructing this case, I again self-consciously followed the method of multiple competing hypotheses by showing that neither the neo-Darwinian mechanism, nor structuralism, nor self-organizational models, nor other materialistic models offered an adequate causal explanation for the origin of the Cambrian explosion in biological form and information (see Meyer 2004: 213-239; Meyer et al. 2003). Instead, I argued that, based upon our uniform and repeated experience, only intelligent agency (mind, not a material process) has demonstrated the power to produce the large amounts of specified information such as that which arose with the Cambrian animals.
  13. Kenneth Miller carefully avoids saying that the bacterial flagellar motor actually did evolve from the type III secretory system. Instead, he insists that the TTSS simply refutes Behe’s claim that the flagellar motor is irreducibly complex. But as Behe has made clear, his definition of “irreducible complexity” (IC) does not entail the claim that the parts of an irreducibly complex system perform no other function, only that the loss of parts from an irreducibly complex system destroys the function of that system. Systems that are IC even by this less restrictive definition still pose formidable obstacles to co-option scenarios, even granting that some of their parts may have had some other selectable function in the past. For co-option scenarios to be plausible, natural selection must build complex systems from simpler structures by preserving a series of intermediate structures, each of which must perform some function. For this reason, it is not enough for advocates of co-option to point to a single possible ancestral structure; instead, they must show that a plausible series of such structures existed and could have maintained function at each stage. In the case of the flagellar motor, co-option scenarios lack such plausibility in part because experimental research has shown that the presumed precursor stages to a fully functional flagellar motor (for example, the 29-, 28- and 27-part versions of the motor) have no motor function. If the last stages in a hypothetical series of functional intermediates are not functional, then it follows that the series as a whole is not. For this and other reasons, co-option does not presently provide either an adequate explanation of the origin of the flagellar motor or a better explanation than Behe’s design hypothesis.
  14. Greenstein himself does not favor the design hypothesis. Instead, he favors the so-called “participatory universe principle” or “PAP.” PAP attributes the apparent design of the fine tuning of the physical constants to the universe’s (alleged) need to be observed in order to exist. As he says, the universe “brought forth life in order to exist [...] that the very Cosmos does not exist unless observed.” See Greenstein 1988: 223.
  15. In arguing that our place in the cosmos is optimized for life and discovery, they introduce a concept from engineering, constrained optimization, offering the example of a notebook computer. Yes, a notebook computer’s screen could be substantially bigger, but that would compromise its effectiveness as a lightweight, portable computer. The best notebook computer is the best compromise among a range of sometimes competing qualities. In the same way, Earth’s situation in the cosmos might be improved in this or that way, but these improvements would involve tradeoffs. For instance, if we were near the center of our galaxy, we might be able to learn more about the black hole posited to rest there, but the bright galactic core would greatly compromise our ability to observe distant galaxies. Our actual viewing position, while perhaps not ideal in any one respect, possesses the same quality of constrained optimization that a well-designed notebook computer possesses.
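The numerical sketch promised in endnote 11: a few lines of Python showing that while the probabilities of independent outcomes multiply, the corresponding information measures, computed as I = –log₂p, simply add.

    from math import log2

    def information_bits(p):
        # Surprisal: I = -log2(p). Improbable outcomes carry more
        # information; the negative sign inverts the relationship.
        return -log2(p)

    p_one_flip = 1 / 2                      # fair coin
    p_four_heads = p_one_flip ** 4          # (1/2)^4 = 1/16
    print(information_bits(p_one_flip))     # 1.0 bit
    print(information_bits(p_four_heads))   # 4.0 bits = 1 + 1 + 1 + 1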

References

  • Alston, W. P. (1971): The place of the explanation of particular facts in science, in: Philosophy of science 38, 13-34.
  • Axe, D. (2004): Estimating the prevalence of protein sequences adopting functional enzyme folds, in: Journal of Molecular Biology, 341, 1295-1315.
  • Behe, M. (2004): Irreducible complexity: Obstacle to Darwinian evolution, in: W. A. Dembski/M. Ruse (eds.), Debating design: from Darwin to DNA, Cambridge, 352-370.
  • – (2006a): From muttering to mayhem: How Phillip Johnson got me moving, in: W. A. Dembski (ed.), Darwin’s nemesis: Phillip Johnson and the intelligent design movement, Downers Grove, IL, 37-47.
  • – (2006b): Darwin’s black box: The biochemical challenge to evolution. Afterword, New York, 255-272.
  • Berlinski, D. (1996): The deniable Darwin, in: Commentary 101.6, 19-29.
  • Bowler, P. J. (1986): Theories of human evolution: A century of debate, 1844-1944, Baltimore, 44-50.
  • Boyle, R. (1979): Selected philosophical papers of Robert Boyle, edited by M. A. Stewart, Manchester, 172.
  • Bradley, W. (2004): Information, entropy and the origin of life, in: W. A. Dembski / M. Ruse (eds.), Debating design: from Darwin to DNA, Cambridge, 331-351.
  • Bridgman, P. W. (1955): Reflections of a physicist, 2nd edition, New York, 535.
  • Capretti, G. (1983): Peirce, Holmes, Popper, in: U. Eco and T. Sebeok (eds.), The sign of three, Bloomington, IN, 135-153.
  • Chamberlin, T. C. (1965): The method of multiple working hypotheses, in: Science 148, 754-59.
  • Cicero (1933): De natura deorum, translated by Harris Rackham, Cambridge, MA, 217.
  • Crick, F. (1958): On Protein Synthesis, in: Symposium for the Society of Experimental Biology 12, 138-63.
  • Darwin, C. (1896): Life and letters of Charles Darwin, 2 volumes, edited by Francis Darwin, London, vol. 1, 437.
    • – (1964): On the origin of species, Cambridge, MA, 481-82.
  • Dawkins, R. (1986): The blind watchmaker, London, 1.
    • – (1995): River out of Eden, New York, 11.
  • Davies, P. (1988): The cosmic blueprint, New York, 203.
  • Dembski, W. A. (1996): Demise of British natural theology. Unpublished paper presented to Philosophy of Religion seminar, University of Notre Dame, fall.
    • – (1998): The design inference: Eliminating chance through small probabilities. Cambridge.
    • – (2002): No free lunch: why specified complexity cannot be purchased without intelligence. Lanham, Maryland.
    • – (2004): The logical underpinnings of intelligent design, in: W. A. Dembski / M. Ruse (eds.), Debating design: from Darwin to DNA, Cambridge, 311-440.
  • Denton, M. (1985): Evolution: a theory in crisis, London.
    • – (1986): Nature’s destiny, New York.
  • Dretske, F. (1981): Knowledge and the flow of information, Cambridge, MA, 6-10.
  • Eden, M. (1967): Inadequacies of neo-Darwinian evolution as a scientific theory, in: P. S. Moorhead / M.M. Kaplan (eds.), Mathematical challenges to the neo-Darwinian interpretation of evolution, Philadelphia, 109-111.
  • Eldredge, N. (1982): An ode to adaptive transformation, in: Nature 296, 508-9.
  • Futuyma, D. (1985): Evolution as fact and theory, in: Bios 56, 3-13.
  • Gallie, W. B. (1959): Explanations in history and the genetic sciences, in: P. Gardiner (ed.), Theories of history: Readings from classical and contemporary sources, Glencoe, IL, 386-402.
  • Gates, B. (1995): The road ahead, New York, 188.
  • Giberson, K. (1997): The anthropic principle, in: Journal of interdisciplinary studies 9, 63-90.
  • Gillespie, N. (1979): Charles Darwin and the problem of creation, Chicago, 41-66, 82-108.
    • – (1987): Natural history, natural theology, and social order: John Ray and the “Newtonian Ideology”, in: Journal of the History of Biology 20, 1-49.
  • Gonzalez, G. and Richards, J. W. (2004): The privileged planet: How our place in the cosmos was designed for discovery. Washington, D.C.
  • Gould, S. J. (1986): Evolution and the triumph of homology: Or, why history matters, in: American scientist 74, 61.
    • – (2003): Is a new and general theory of evolution emerging? In: Paleobiology 119, 119-20.
  • Greenstein, G. (1988): The symbiotic universe: Life and mind in the cosmos, New York, 26-27; 223.
  • Hick, J. (1970): Arguments for the existence of God, London, 1.
  • Hoyle, F. (1954): On nuclear reactions occurring in very hot stars. I. The synthesis of elements from carbon to nickel, in: Astrophysical journal supplement 1, 121-146.
    • – (1982): The universe: Past and present reflections, in: Annual Review of Astronomy and Astrophysics 20, 16.
    • – (1983): The intelligent universe, New York, 189.
  • Hughes, D. (2005): The observatory, 125.1185, 113.
  • Judson, H. (1979): Eighth day of creation, New York.
  • Johnson, P. E. (1991): Darwin on trial, Washington, D.C., 8.
  • Kamminga, H. (1986): Protoplasm and the Gene, in: A. G. Cairns-Smith / H. Hartman (eds.), Clay Minerals and the Origin of Life, Cambridge, 1-10.
  • Kant, I. (1963): Critique of pure reason, translated by Norman Kemp Smith, London, 523.
  • Kenyon, D. (1984): Foreword to The mystery of life’s origin, New York, v-viii.
  • Kenyon, D. / Gordon, M. (1996): The RNA world: A critique, in: Origins & Design 17 (1), 9-16.
  • Kepler, J. (1981): Mysterium cosmographicum [The secret of the universe], translated by A. M. Duncan, New York, 93-103.
  • Kepler, J. (1995): Harmonies of the world, translated by Charles Glen Wallis, Amherst, NY, 170, 240.
  • Kline, M. (1980): Mathematics: The loss of certainty, New York, 39.
  • Klinghoffer, D. (2005): The Branding of a Heretic, in: The Wall Street Journal, 28 January, W11.
  • Küppers, B.-O. (1987): On the Prior Probability of the Existence of Life, in: L. Krüger et al. (eds.), The Probabilistic revolution, Cambridge, MA, 355–69.
    • – (1990): Information and the origin of life, Cambridge, MA, 170-172.
  • Laudan, L. (2000a): The demise of the demarcation problem, in: M. Ruse (ed.), But is it science?, Amherst, NY, 337-350.
    • – (2000b): Science at the bar – causes for concern, in: M. Ruse (ed.), But is it science?, Amherst, NY, 351-355.
  • Lönnig, W.-E. (2001): Natural selection, in: W. E. Craighead / C. B. Nemeroff (eds.), The Corsini encyclopedia of psychology and behavioral sciences, 3rd edition, New York, vol. 3, 1008-1016.
  • Lönnig, W.-E. / Saedler, H. (2002): Chromosome rearrangements and transposable elements, in: Annual review of genetics 36, 389-410.
  • Mayr, E. (1982): Foreword to Darwinism defended, by Michael Ruse, Reading, MA, xi-xii.
  • Meyer, S. C. (1998a): DNA by design: An inference to the best explanation for the origin of biological information, in: Journal of rhetoric and public affairs 4.1, 519-556.
    • – (1998b): The Explanatory power of design: DNA and the origin of information, in: W. A. Dembski (ed.), Mere creation: science, faith and intelligent design, Downers Grove, IL, 114-147.
    • – (2000a): DNA & other designs, in: First things 102 (April 2000), 30-38.
    • – (2000b): The scientific status of intelligent design: The methodological equivalence of naturalistic and non-naturalistic origins theories, in: M. J. Behe / W. A. Dembski / S. C. Meyer (eds.), Science and evidence for design in the universe, San Francisco, 151-211.
    • – (2000c): The demarcation of science and religion, in: G. B. Ferngren et al. (eds.), The history of science and religion in the western tradition, New York, 12-23.
    • – (2003a): DNA and the origin of life: information, specification and explanation, in: J. A. Campbell / S. C. Meyer (eds.), Darwinism, design and public education, Lansing, MI, 223-285.
    • – (2004): The Cambrian information explosion: evidence for intelligent design, in: W. A. Dembski / M. Ruse (eds.), Debating design, Cambridge, 371-391.
    • – (2004): The origin of biological information and the higher taxonomic categories, in: Proceedings of the Biological Society of Washington 117, 213-239.
  • Meyer, S. C. / Ross, M. / Nelson, P. / Chien, P. (2003): The Cambrian explosion: Biology’s big bang, in: J. A. Campbell / S. C. Meyer (eds.), Darwinism, design and public education, Lansing, MI, 323-402.
  • Miller, K. (2004): The bacterial flagellum unspun, in: W. A. Dembski / M. Ruse (eds.), Debating design: from Darwin to DNA, Cambridge, 81-97.
  • Minnich, S. A. / Meyer, S. C. (2004): Genetic analysis of coordinate flagellar and type III regulatory circuits in pathogenic bacteria, in: M. W. Collins / C. A. Brebbia (eds.), Design and nature II: Comparing design in nature with science and engineering, Southampton, 295-304.
  • Moorhead, P. S. / Kaplan, M. M. (eds.) (1967): Mathematical challenges to the neo-Darwinian interpretation of evolution, Philadelphia.
  • Morris, S. C. (1998): The crucible of creation: The Burgess Shale and the rise of animals, Oxford, 63-115.
    • – (2000): Evolution: bringing molecules into the fold, in: Cell 100, 1-11.
    • – (2003a): The Cambrian “explosion” of metazoans, in: Origination of organismal form, 13-32.
    • – (2003b): Cambrian “explosions” of metazoans and molecular biology: would Darwin be satisfied?, in: International journal of developmental biology 47 (7-8), 505-515.
  • Müller, G. B. / Newman, S. A. (2003): Origination of organismal form: The forgotten cause in evolutionary theory, in: G. B. Müller / S. A. Newman (eds.), Origination of organismal form: Beyond the gene in developmental and evolutionary biology, Cambridge, MA, 3-12.
  • Nelson, P. / Wells, J. (2003): Homology in biology: problem for naturalistic science and prospect for intelligent design, in: J. A. Campbell / S. C. Meyer (eds.), Darwinism, design and public education, Lansing, MI, 303-322.
  • Newton, I. (1934): Newton’s Principia: Motte’s translation revised (1686), translated by A. Motte, revised by F. Cajori, Berkeley, 543-44.
    • – (1952): Opticks, New York, 369-70.
  • Paine, T. (1925): The life and works of Thomas Paine, vol. 8: The age of reason, New Rochelle, NY, 6.
  • Paley, W. (1852): Natural theology, Boston, 8-9.
  • Peirce, C. S. (1932): Collected papers, Vols. 1-6, edited by C. Hartshorne and P. Weiss, Cambridge, MA, vol. 2, 375.
  • Plantinga, A. (1986a): Methodological naturalism?, in: Origins and design 18.1, 18-26.
    • – (1986b): Methodological naturalism?, in: Origins and design 18.2, 22-34.
  • Plato (1960): The laws, translated by A. E. Taylor, London, 279.
  • Polanyi, M. (1967): Life transcending physics and chemistry, in: Chemical and engineering news 45(35), 21.
    • – (1968): Life’s irreducible structure, in: Science 160, 1308-12.
  • Ray, J. (1701): The wisdom of God manifested in the works of the creation, 3rd edition, London.
  • Quastler, H. (1964): The emergence of biological organization, New Haven, CT, 16.
  • Reid, T. (1981): Lectures on natural theology (1780), edited by E. Duncan and W. R. Eakin, Washington, D.C., 59.
  • Ruse, M. (1988): McLean v. Arkansas: Witness testimony sheet, in: M. Ruse (ed.), But is it science?, Amherst, NY, 103.
  • Saier, M. H. (2004): Evolution of bacterial type III protein secretion systems, in: Trends in microbiology 12, 113-115.
  • Shannon, C. E. (1948): A Mathematical theory of communication, in: Bell System Technical Journal, 27, 379–423; 623–56.
  • Shannon, C. E. / Weaver, W. (1949): The Mathematical theory of communication. Urbana, IL.
  • Schiller, F. C. S. (1903): Darwinism and design argument, in: Humanism: Philosophical essays, New York, 141.
  • Schneider, T. D. (1997): Information content of individual genetic sequences, in: Journal of Theoretical Biology, 189, 427–41.
  • Schützenberger, M. (1967): Algorithms and neo-Darwinian theory, in: P. S. Moorhead / M. M. Kaplan (eds.), Mathematical challenges to the neo-Darwinian interpretation of evolution, Philadelphia, 73-5.
  • Scriven, M. (1959): Explanation and prediction in evolutionary theory, in: Science 130, 477-82.
    • – (1966): Causes, connections and conditions in history, in: W. H. Dray (ed.), Philosophical analysis and history, New York, 238-64.
  • Simpson, G. G. (1978): The meaning of evolution, Cambridge, MA, 45.
  • Smith, J. M. (1975): The theory of evolution, 3rd edition, London, 30.
  • Sober, E. (1988): Reconstructing the past: parsimony, evolution, and inference, Cambridge, MA, 1-5.
  • Taylor, G. R. (1983): The great evolution mystery, New York, 4.
  • Thaxton, C. / Bradley, W. / Olsen, R. L. (1984): The mystery of life’s origin, New York.
  • Wallace, A. R. (1991): Sir Charles Lyell on geological climates and the origin of species, in: C. H. Smith (ed.), An anthology of his shorter writings, Oxford, 33-34.
  • Whewell, W. (1840): The philosophy of the inductive sciences, 2 vols., London, vol. 2, 121-22; 101-03.
    • – (1857): History of the inductive sciences, 3 vols., London, vol. 3, 397.
  • Witham, L. (2003): By design, San Francisco, chapter 2.
  • Woodward, T. (2003): Doubts about Darwin: A history of intelligent design, Grand Rapids, Michigan, 69.
  • Yates, S. (1997): Postmodern creation myth? A response, in: Journal of interdisciplinary studies 9, 91-104.
  • Yockey, H. P. (1992): Information theory and molecular biology, Cambridge.