ID-101 (Part One)

by Brian 11. April 2011 17:22

What is Intelligent Design? If you ask the average proponent of Darwinian evolution, the answer is Creationism. He or she will say ID is nothing more than a god-of-the-gaps scheme concocted by a bunch of fundamentalist Christians. Ironically, if you ask the average Christian the same question, you’ll get a complementary version of the same answer! When it comes to a real understanding of ID, neither side has much incentive to do the heavy lifting. It is easier for opponents to excommunicate Intelligent Design from science, and those who believe in a Creator do not require it as confirmation: ID is a given. Yet those at the forefront of Intelligent Design are adding to our understanding of the world, certainly more than critics give them credit for and probably less than most theists think. The high road in this debate is neither ad hominem attack nor tacit support.

What is Intelligent Design?

Straight from the Discovery Institute, the leading ID think-tank, Intelligent Design is defined as follows:

The theory of intelligent design holds that certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection[i]

Immediately the skeptics’ dander is up. They will say: ID invokes an intelligent cause, which we all know is God; since science deals only with natural causes, ID is not science. Of course, this line of reasoning conflates the methodology ID scientists might employ with the potential outcome of their research. But critics do not stop here. It is not uncommon for them to introduce two more red herrings. First, invoking an intelligent cause for life and the universe hinders scientific inquiry and discovery. Second, an intelligent cause is beyond scientific investigation and therefore adds nothing to our understanding of the world. I will show why these accusations are false using the following analogy.

Imagine a forensic scientist who is asked to examine a deceased man in order to find the cause of death. The cause may be natural, or it may be the result of foul play – an intelligent cause. Let’s further imagine the man died from a rare toxin that entered his bloodstream and worked its way up to his heart, causing cardiac arrest. Finally, let’s assume the conclusion from forensics, in this case, is murder. If correct, then clearly the efficient and final cause leading to the man’s death was human intelligence and not natural processes. Does this mean the methods employed by the forensic scientist to determine the cause were unscientific? Of course not. Does this mean further studies in medicine, heart disease or the circulatory system should grind to a halt because of his findings? Obviously not. What about our understanding of the world? We might not gain scientific knowledge in this case, but we certainly learned something very important – the cause of the man’s death.

The attempts by critics to cut ID off at the knees are hardly convincing. But perhaps the work ID proponents are doing really isn’t science. So let’s take a closer look and delve into ID theory to see if we can find something substantive. Foundational to ID is William Dembski’s concept of Specified Complexity, which essentially denotes the two hallmarks of design: complexity and a specified pattern. Before I go into this in more detail, it is worth noting there is a vast amount of criticism, disinformation, and polemics on the web from those who loathe anything ID. But what you will not find in the criticism is any recognition of the fact that design-detection is something all of us do regularly. If you find an arrowhead in the woods, you immediately recognize it as something manmade and not the product of natural forces and erosion. Since in the average critic’s universe our minds are nothing more than biochemical computers, what sort of processing do you suppose goes on when we see an effect and infer a design cause? Perhaps the process could be discovered, understood and formulated. That is precisely what Dembski and others are trying to do.

The Explanatory Filter


Dembski’s explanatory filter is configured to prevent false positives by giving necessity and chance the benefit of the doubt. This does mean the filter allows false negatives through, where design is not detected. A good bit of modern art might not make it past chance, for example. That is, the filter might not distinguish an intentional set of splashes of paint on a canvas from several buckets of paint falling off a ladder onto a canvas. Even given this limitation, a false negative is better than a false positive. The following, which I call the mountain archer analogy, explains how the filter works. Imagine an archer shooting an arrow off the top of a mountain down into a valley ten square miles in size. Further, imagine the archer is so high up the mountain, the arrow could reach any spot in the valley below…

·         Hitting the valley is a high probability (HP) and follows necessarily from initial conditions and the law of gravity. The archer could fire over his shoulder, blindfolded, and still hit the valley.


·         Hitting one of a small number of trees in the valley the archer was not aiming for is an intermediate probability (IP) – not exactly what one might expect, but certainly within the reach of chance.

·         Hitting a stream running through the valley the archer was aiming for is a specified intermediate probability (Spec + IP) – the filter would chalk this up to chance and register a false negative, even though this was a good shot and involved an intelligent cause. But the archer could have been blindfolded and gotten lucky.

·         Hitting a particular pebble the archer was not aiming for would be a small probability (SP) – but unspecified. There are lots of pebbles in the valley, and even though hitting a particular one is a small-probability event, hitting some pebble or other is not unlikely.

·         Hitting a particular pebble that you had earlier painted a bulls-eye on is a specified small probability (Spec + SP) and would make it through the filter to design. The archer is either an incredible shot or a good magician – either way, we have a design-cause.[ii] No one in their right mind would attribute such an event to chance.
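The bullets above can be sketched as a simple decision procedure. This is my own illustrative Python sketch, not Dembski’s formalism: the cutoff values are hypothetical placeholders for the bounds the filter actually derives from probabilistic resources.

```python
def explanatory_filter(probability, specified, chance_bound=1e-150):
    """Toy sketch of Dembski's explanatory filter.

    Necessity and chance get the benefit of the doubt, so false
    negatives (missed design) are possible, but false positives
    are guarded against.
    """
    if probability > 0.5:            # high probability (HP): attribute to necessity
        return "necessity"
    if probability > chance_bound:   # intermediate/small but within reach of chance
        return "chance"              # even specified events are chalked up to chance here
    if specified:                    # small probability (SP) AND a specified pattern
        return "design"
    return "chance"                  # unspecified small probability: still chance

# The mountain-archer cases, using a loose practical bound (1e-9 here,
# chosen for illustration) rather than the universal probability bound:
print(explanatory_filter(0.999, specified=False, chance_bound=1e-9))  # valley -> necessity
print(explanatory_filter(1e-3,  specified=True,  chance_bound=1e-9))  # stream -> chance (false negative)
print(explanatory_filter(4e-11, specified=False, chance_bound=1e-9))  # unmarked pebble -> chance
print(explanatory_filter(4e-11, specified=True,  chance_bound=1e-9))  # marked pebble -> design
```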

Probabilistic Resources

Dembski introduces the concept of probabilistic resources, which include replicational and specificational resources. Probabilistic resources comprise the relevant ways an event can occur.[iii] Replicational resources are basically the number of samples taken. In the above analogy, it could be the number of shots fired. Specificational resources refer to the number of opportunities or ways to specify an event. Using the same analogy, it could be the number of pebbles with bulls-eyes (or some other mark indicating a target). Obviously, the greater the number of pebbles with targets and the greater the number of shots fired, the greater the probability of hitting a target.
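As a rough illustration (my own sketch, with an assumed per-shot probability), the way both kinds of resources inflate the odds can be computed directly:

```python
def hit_probability(shots, marked_pebbles, p_single=2.5e-11):
    """Chance of at least one arrow hitting at least one marked pebble.

    shots          -> replicational resources (number of samples taken)
    marked_pebbles -> specificational resources (ways to specify a hit)
    p_single is an assumed per-shot probability of hitting one given pebble.
    """
    p_any_target = min(1.0, marked_pebbles * p_single)  # one shot, any marked pebble
    return 1.0 - (1.0 - p_any_target) ** shots

print(hit_probability(1, 1))          # one shot at one marked pebble: ~2.5e-11
print(hit_probability(10**6, 10**3))  # a million shots, a thousand targets: ~2.5%
```

More shots or more marked pebbles each raise the probability; together they multiply.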

Universal Probability Bound

If the marked pebble has a surface area of one square inch, then the odds of hitting it at random are roughly 1 in 4e10, or one in 40 billion[iv] - a few hundred times less likely than winning the Power Ball lottery with a single ticket. Impractical as it may seem, critics would argue this is still within the reach of chance. This is where Dembski introduces his universal probability bound (UPB) - a degree of improbability below which a specified event of that probability cannot reasonably be attributed to chance, regardless of whatever probabilistic resources from the known universe are factored in.[v] The UPB is 1 in 1e150 (a one with a hundred and fifty zeros after it). This probability is so small that it is about as likely as winning the Power Ball eighteen times in a row with one ticket each! Even the contrarian realizes such a streak would be the result of intelligence and not luck (i.e. someone is cheating).
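The arithmetic behind the pebble odds (assuming, as footnote [iv] cautions is unrealistic, a uniform hit distribution over the valley) is straightforward:

```python
# Area of a ten-square-mile valley, expressed in square inches.
feet_per_mile = 5280
sq_inches_per_sq_mile = (feet_per_mile ** 2) * 144  # 144 square inches per square foot
valley_sq_inches = 10 * sq_inches_per_sq_mile

print(valley_sq_inches)  # 40144896000, roughly 4e10
# So the odds of hitting a particular one-square-inch pebble are about 1 in 40 billion.
```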

But what does Dembski mean by: regardless of whatever probabilistic resources from the known universe are factored in? Here he is basing his probabilistic-resources limit on the product of the number of elementary particles in the known universe (1e80), repeating every instant (1e45 per second [based on Planck time]), since the beginning of time in seconds (1e25) = 1e150. This seems like overkill, but apparently you need this to overcome skepticism.[vi] But do we really need this much overhead? Take, for example, the estimated number of grains of sand on all the beaches on earth. Say I traveled to a random beach, dug down, and marked a single grain of sand. Now if you go to a random beach anywhere on earth, to a random spot, dig to a random depth (up to 5 meters), and grab a random grain, the odds of it being the same grain as the one I marked are estimated at one in 7.5e18. A rational person would never believe this would happen by chance. Even so, those odds are 131 orders of magnitude better than one in 1e150. The rational position is to realize there comes a point where theoretical possibility must give way to practical impossibility. The odds of one in 1e150 are not zero, so a specified, small-probability event at this scale is not theoretically impossible, but it is rational to conclude it is practically impossible.
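Dembski’s product, and the gap between it and the sand-grain odds, can be checked with integer exponents (my own back-of-the-envelope sketch):

```python
import math

# Dembski's probabilistic-resources product, kept as base-10 exponents.
particles_exp = 80   # ~1e80 elementary particles in the known universe
rate_exp = 45        # ~1e45 Planck-time intervals per second
age_exp = 25         # 1e25 seconds, a generous upper bound on elapsed time
upb_exp = particles_exp + rate_exp + age_exp
print(upb_exp)  # 150

# How far the sand-grain odds (1 in 7.5e18) fall short of the UPB:
sand_exp = math.log10(7.5e18)
print(round(upb_exp - sand_exp))  # 131 orders of magnitude
```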

So far Dembski’s filter appears to be sound. But there is another criticism from detractors: affinities and constraints in the probability landscape can create the appearance of design entirely by chance. Say, for example, the archer shot multiple arrows at random, each tethered by a string of equal length. The resulting semicircular pattern in the valley below might be mistaken for a design-cause, since it is unlikely such a pattern would emerge at random. But this criticism fails to recognize that, in this case, the probability landscape is greatly reduced by the constraint (the string), so that each arrow necessarily falls within a semicircular swath in the valley below. But perhaps there are laws governing the universe where affinities and constraints shape chaos into order. In a future post, I will try to tackle this and the other foundational principle of ID – irreducible complexity. It is when specified complexity meets the real world that things get tricky.
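The string example can be made concrete with a tiny simulation (my own sketch): even with each landing angle chosen completely at random, the constraint forces every arrow onto the same semicircle.

```python
import math
import random

random.seed(0)
string_length = 100.0  # the constraint: every arrow is tethered at the same anchor

landings = []
for _ in range(1000):
    angle = random.uniform(0.0, math.pi)  # random direction into the downhill half
    x = string_length * math.cos(angle)
    y = string_length * math.sin(angle)
    landings.append((x, y))

# Every landing lies exactly on a semicircle of radius string_length:
# the "pattern" is produced by the constraint, not by aim.
for x, y in landings:
    assert abs(math.hypot(x, y) - string_length) < 1e-9
print("all", len(landings), "arrows landed on the semicircle")
```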

[ii] This analogy does not take into account Dembski’s universal probability bound of 1 in 1e150, which is over 138 orders of magnitude more stringent than the odds in this analogy.

[iii] William Dembski, The Design Inference, p. 181

[iv] This assumes an equal probability of hitting any location in the valley below, which in real life would not be the case – for example, if you could hit the corners, you could likely land outside the valley as well.

[v] ISCID Encyclopedia of Science and Philosophy (1999)

[vi] This seems straightforward in terms of replicational resources, but I question the validity of also including specificational resources here. Samples repeated as quickly as physically possible in every conceivable location in the universe since the big bang does seem to set an upper limit for replicational resources, but I do not see how this bounds the ways specifications can be varied. Imagine every elementary particle in the universe has a piggyback random number generator cranking out 200-digit numbers, one every Planck time since the big bang. One would reasonably expect that the first 200 digits of the square root of two had not been generated. But what about the square roots of all the other non-square positive integers, each taken to 200 digits? I’m sure SETI would consider a binary transmission of 200 digits of the square root of two to have an intelligent cause – but what about the square root of 3 or 7, etc.?

A Test for Unguided versus Guided Mechanisms

by Brian 30. January 2010 21:22

Darwinists falsely accuse Intelligent Design (ID) theorists of promoting non-science. ID proponents have shown certain evolutionary theories lack the hallmarks of a good scientific theory (e.g. verifiability, falsifiability). They ask how large-scale change in complex specified information can be shown in a lab if, by definition, the material mechanisms require small changes over vast time periods. Darwinists point to the fossil record and put their science on par with forensics. Yet interestingly, design theorists appeal to forensics as well, and somehow this is unacceptable science. It seems to me the methodologies for testing the opposing views need improvement. I propose a possible candidate for testing unguided and guided (designed) mechanisms. I am an electrical and software engineer, so this test would have to be adapted by experts in the field. But essentially, the test would require cataloging functions and associated schemas into two categories:

1.       Independently arising function employing different schemas: examples catalogued in this category support the unguided view

2.       Design-reuse function employing comparable schemas: examples catalogued in this category support the guided view

A conclusion drawn from the results would be inductive. If one category received far more cataloged examples than the other, one could reasonably conclude the associated view was the better explanation. The terms used in this test are as follows:

1.       Complex Function: biological systems requiring input and producing beneficial output for the survival of the organism. Optimal candidates would be more complex than mere building blocks (e.g. individual proteins) and less so than large-scale systems (e.g. an eye)

2.       Difference in Complex Function: a methodology would have to be developed to quantify function so that functions could be compared and identified as “minimally different.” For example, comparing the human eye with the eye of an eagle would show sufficiently large differences in function (size, acuity, articulation, spectral response, etc.) to rule them out as minimally different.

3.       Schema: this refers to the information originating a function (e.g. sequence of genetic information)

4.       Difference in Schema: difference in the information originating a function. Here again, a methodology would have to be developed to quantify this.

5.       Common Ancestor: for the sake of this analysis, this would be the current scientific genealogy of the organisms employing the functions under test. In other words, one would suspend any sort of spontaneous-creation assumption and instead assume something akin to the neo-Darwinian account.


Cases catalogued in the first category would support the unguided view. The assumption: unguided mechanisms should lead to the origination of novel schemas for minimally-different complex functions. Since natural selection is blind to engineering best practices, one should expect random mutation to produce varying schemas for minimally-different complex functions. Now, some may argue there are yet-to-be-discovered affinities and constraints within the material universe limiting the gamut of possible schemas. Two things can be said here:

1.       These affinities and constraints have not been discovered and one should not appeal to future scientific discovery.

2.       If found, the metaphysical implication smacks of purpose (telos) and would likely harmonize better with a guided view anyway.


Cases catalogued in the second category would support the guided view. The assumption: guided mechanisms should lead to the reuse of schemas for minimally-different complex functions. From the perspective of a proponent of the guided view, a lack of schema reuse in comparable functions would indicate poor design skill or showiness on the part of the designer. Of course the Designer reserves the right to be showy! But it seems reasonable to grant the unguided view the benefit of the doubt here. Furthermore, it seems highly unlikely random mutation would lead to the same or very similar complex schemas in independently arising functions. Natural selection cares nothing about schema, only function.
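To make the proposal concrete, here is a toy sketch in Python (entirely my own illustration). The `schema_similarity` score stands in for the quantitative methodologies the post says would still have to be developed, and the example entries and threshold are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class CasePair:
    function_a: str           # complex function in organism A
    function_b: str           # minimally-different complex function in organism B
    schema_similarity: float  # 0.0 = wholly novel schemas, 1.0 = identical schemas

def categorize(pair, reuse_threshold=0.8):
    """Guided category for schema reuse; unguided category for novel schemas."""
    return "guided" if pair.schema_similarity >= reuse_threshold else "unguided"

# Hypothetical catalog entries, for illustration only:
catalog = [
    CasePair("camera eye (vertebrate)", "camera eye (octopus)", 0.3),
    CasePair("echolocation (bat)", "echolocation (dolphin)", 0.9),
]

tallies = {"unguided": 0, "guided": 0}
for pair in catalog:
    tallies[categorize(pair)] += 1
print(tallies)  # whichever category dominates lends inductive support to its view
```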

Of course, as I noted, a test like this would not be definitive but could be part of a cumulative case for a particular view. And, having very little expertise in this field, I cannot tell you whether this test has been tried and, if so, what the results might be. I hope someone reading this might shed some light.


Dogmatism (part II) - Inflexibility

by Brian 5. September 2009 18:37

The dogmatist is often charged with holding beliefs stubbornly, even in light of undercutting evidence. The term dogmatic seems to imply inflexibility these days. But why should this be a problem, or even surprising? There is a disingenuous notion that inflexibility is a particularly Christian trait when it is in fact a normal human condition, and in many cases desirable. Consider what I refer to as a substantive-worldview: an ingrained, comprehensive, momentous and cohesive framework of belief defining one’s overall view of the world and the basis of many of our actions. A substantive-worldview deals with life’s most important subjects, including origin, purpose, destiny and morality. In my experience, many so-called freethinkers demonstrate a substantive-worldview just as much as people of faith do.


Of course, as a Christian, I expect our cognitive faculties are designed properly for the purpose of obtaining true belief. After all, we are created in God’s image and God is rational [Genesis 1, Isaiah 1:18]. But setting the Christian perspective temporarily aside: can you honestly imagine a well-functioning cognitive system where foundational beliefs supporting other well-established beliefs are casually discarded? What about a cognitive system where new ideas contradicting other well-accepted ideas are casually adopted? We all know from experience that the more foundational and momentous a belief is, the more impact it would have on our worldview if suddenly found false. Likewise, integrating a new idea contradicting a core belief may not be possible without dismantling the worldview. I know this from personal experience, having gone through an extreme worldview makeover from nontheism to Christianity.


If the resurrection of Jesus is a cornerstone belief in my Christian worldview, other beliefs will follow, some of which I will hold inflexibly: Jesus’ authority was confirmed by God’s action; God has the power to overcome death; God acts in the physical world; etc. It is unreasonable to think I should suddenly become flexible toward the precepts of philosophical naturalism. To reject the view that supernatural causes act in the physical world would be to turn my Christian worldview on its head. Likewise, if philosophical naturalism is a cornerstone of the freethinker’s foundation, she will also hold other related beliefs inflexibly: God does not act in the physical world, or He does not exist; Jesus was only a man, if he existed at all; etc. How could one in this case suddenly accept the Resurrection without a complete rework of one’s foundation? Is it really any wonder that those with a substantive-worldview are inflexible?


Here the freethinker is likely to claim they are more apt to modify their view based on evidence, and that this flexibility is what differentiates them from the Christian dogmatist. But where does the evidence really lead? The worldview of the freethinking nontheist (freeNT) does not appear to shift as new evidence is uncovered. Does the evidence always point in their direction? As the static universe theory died in the mid-twentieth century and science moved to a model astonishingly similar to the Creation account, did the freeNT budge? Is the freeNT open to new ideas such as those offered by Intelligent Design (ID), or do they excommunicate scientists with contrary perspectives? One popular hangout for freeNT Darwinists has yet to show, on its website, any interest whatsoever in what ID has to say. They seem inflexible and dogmatic, to me at least. A large number of freeNT speculators would rather turn to cosmic ancestry and the panspermia theory than consider a divine biogenesis theory. I still recall their false hope and zeal for what might be found in the scrapings of space dust from the NASA Genesis probe [1]. Theirs is clearly a faith looking for facts to support it. The bottom line: the freeNT is flexible as long as the change harmonizes with their worldview. But that is just the same sort of flexibility we see in the Christian.

Several years ago, I engaged a colleague and professed agnostic on a flight back from a business trip. This was the first time I had a chance to discuss core worldview issues with him. We talked about our beliefs and his skepticism was apparent. We discussed origins, neo-Darwinian evolution, cosmology, etc. Our conversation was very amiable and pleasant. My co-worker clearly had a good grasp of the subjects we discussed, and he demonstrated a substantive-worldview from my observation. Although skeptical, he expressed views on origin, purpose and death. He even conceded that evidence such as the Cambrian Explosion [2] did not support the contemporary neo-Darwinian view. But it was what he told me at the end of our discussion that was astonishing. He said: “I am comfortable with my agnosticism” and “suddenly ceasing to exist [at death] is actually appealing to me.” These two statements go to the core of his worldview, and I should not expect much flexibility in his position on God, even given good supporting arguments and evidence. As I conclude, consider the words of G. K. Chesterton, who sums it up so well: “There are two kinds of people in the world: the conscious and the unconscious dogmatists. I have always found that the unconscious dogmatists were by far the most dogmatic.” [3]

[1] The NASA Genesis mission returned (crashed) on September 8, 2004, with the hope of learning more about how our solar system was formed. Although NASA officially states on its website that there are no life-origin motives involved in the project, others disagree. "Our mission is to gain a greater understanding of the origin and evolution of organic material on Earth," said Michael Mumma, a comet expert and director of the Goddard Center for Astrobiology, NASA Astrobiology Institute, who is leading the research. "The key question is: Were water and organic molecules delivered to Earth by cometary impact and does [that process] extend to planets elsewhere?" In other words, panspermia.

[2] The Cambrian Explosion is the radiation of animal phyla that began about 570 million years ago. The famous paleontologist Stephen Jay Gould (1941-2002) referred to this as the reverse cone of diversity. Evolutionary theory implies life becomes more and more complex and diverse from one origin, but based on the fossil record the pattern appears reversed.

[3] Gilbert Keith Chesterton (1874-1936), Generally Speaking, 1928

About the author

I am a Christian, husband, father of two daughters, a partner and lead architect of EasyTerritory, armchair apologist and philosopher, writer of hand-crafted electronic music, avid kiteboarder and a kid around anything that flies (rockets, planes, copters, boomerangs)
