ID-101 (Part One)

by Brian 11. April 2011 17:22

What is Intelligent Design? If you ask the average proponent of Darwinian evolution, the answer is Creationism. He or she will say ID is nothing more than a god-of-the-gaps scheme concocted by a bunch of fundamentalist Christians. Ironically, if you ask the average Christian the same question, you’ll get a complimentary version of the same answer! When it comes to a real understanding of ID, neither side has much incentive to do the heavy lifting: it’s easier for opponents to excommunicate Intelligent Design from science, and those who believe in a Creator do not require it as confirmation - for them, ID is a given. Yet those at the forefront of Intelligent Design are adding to our understanding of the world - certainly more than critics give them credit for, and probably less than most theists think. The high road in this debate is neither ad hominem attack nor tacit support.

What is Intelligent Design?

Straight from the Discovery Institute, the leading ID think-tank, Intelligent Design is defined as follows:

The theory of intelligent design holds that certain features of the universe and of living things are best explained by an intelligent cause, not an undirected process such as natural selection.[i]

Immediately the skeptic’s dander is up. The skeptic will say: ID invokes an intelligent cause, which we all know is God; since science only deals with natural causes, ID is not science. Of course, this line of reasoning confuses the methodology ID scientists might employ with the potential outcome of their research. But critics do not stop here. It is not uncommon for them to introduce two more red herrings: first, that invoking an intelligent cause for life and the universe hinders scientific inquiry and discovery; second, that an intelligent cause is beyond scientific investigation and therefore adds nothing to our understanding of the world. I will show why these accusations are false using the following analogy.

Imagine a forensic scientist who is asked to examine a deceased man in order to find the cause of death. The cause may be natural, or it may be the result of foul play – an intelligent cause. Let’s further imagine the man died from a rare toxin that entered his bloodstream and worked its way up to his heart causing cardiac arrest. Finally, let’s assume the conclusion from forensics, in this case, is murder. If correct, then clearly the efficient and final cause leading to the man’s death was human intelligence and not natural processes. Does this mean the methods employed by the forensic scientist to determine the cause were unscientific? - Of course not. Does this mean further studies in medicine, heart disease or the circulatory system should grind to a halt because of his findings? – Obviously not. What about our understanding of the world? We might not gain scientific knowledge in this case, but we certainly learned something very important – the cause of the man's death.

The attempts by critics to cut ID off at the knees are hardly convincing. But perhaps the work ID proponents are doing really isn’t science. So let’s take a closer look and delve into ID theory to see if we can find something substantive. Foundational to ID is William Dembski’s concept of Specified Complexity, which essentially denotes the two hallmarks of design: complexity and a specified pattern. Before I go into this in more detail, it is worth noting there is a vast amount of criticism, disinformation, and polemics on the web from those who loathe anything ID. But what you will not find in the criticism is any recognition of the fact that design detection is something all of us do regularly. If you find an arrowhead in the woods, you immediately recognize it as something manmade and not the product of natural forces and erosion. Since, in the average critic’s universe, our minds are nothing more than biochemical computers, what sort of processing do you suppose goes on when we see an effect and infer a design cause? Perhaps the process could be discovered, understood, and formulated. That is precisely what Dembski and others are trying to do.

The Explanatory Filter

 

Dembski’s explanatory filter is configured to prevent false positives by giving necessity and chance the benefit of the doubt. This does mean the filter lets false negatives through, where design goes undetected. A good bit of modern art might not make it past chance, for example: the filter might not distinguish an intentional set of paint splashes on a canvas from several buckets of paint falling off a ladder onto one. Even with this limitation, a missed design is better than a false positive. The following, which I call the mountain archer analogy, explains how the filter works (a short code sketch of the filter’s decision logic follows the list). Imagine an archer shooting an arrow off the top of a mountain down into a valley ten square miles in size. Further, imagine the archer is so high up the mountain that the arrow could reach any spot in the valley below…

·         Hitting the valley is a high probability (HP) and follows necessarily from initial conditions and the law of gravity. The archer could fire over his shoulder, blindfolded, and still hit the valley.

 

·         Hitting one of a small number of trees in the valley the archer was not aiming for is an intermediate probability (IP) – not exactly what one might expect, but certainly within the reach of chance.

·         Hitting a stream running through the valley the archer was aiming for is a specified intermediate probability (Spec + IP) – the filter would chalk this up to chance and register a false negative even though this was a good shot and involved an intelligent cause. But the archer could have been blindfolded and got lucky.

·         Hitting a particular pebble the archer was not aiming for would be a small probability (SP) – but unspecified. There are lots of pebbles in the valley, and even though hitting any particular one is a small-probability event, hitting some pebble is not unlikely.

·         Hitting a particular pebble that you had earlier painted a bulls-eye on is a specified small probability (Spec + SP) and would make it through the filter to design. The archer is either an incredible shot or a good magician – either way, we have a design-cause.[ii] No one in their right mind would attribute such an event to chance.
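For readers who think in code, here is a minimal sketch of how I read the filter’s decision logic. The cutoff values and the specification test are placeholders of my own choosing, not Dembski’s formal apparatus.

```python
# A minimal sketch of the explanatory filter's decision logic (illustration only).
def explanatory_filter(probability, is_specified,
                       high_prob=0.5, small_prob=1e-150):
    """Classify an event as necessity, chance, or design.

    probability  -- probability of the event under natural causes
    is_specified -- does it match an independently given pattern?
    high_prob    -- assumed cutoff for attributing the event to law/necessity
    small_prob   -- small-probability bound (Dembski's UPB by default)
    """
    if probability >= high_prob:
        return "necessity"   # HP: follows from law and initial conditions
    if probability > small_prob:
        return "chance"      # IP: give chance the benefit of the doubt, even if specified
    if not is_specified:
        return "chance"      # small probability but no independent pattern
    return "design"          # specified small probability passes the filter

# The archer cases above, with rough stand-in probabilities and a looser bound
# suited to the analogy's scale (see note [ii]):
bound = 1e-10
print(explanatory_filter(0.999, False, small_prob=bound))    # valley            -> necessity
print(explanatory_filter(1e-3, False, small_prob=bound))     # some tree         -> chance
print(explanatory_filter(1e-3, True, small_prob=bound))      # aimed-for stream  -> chance (false negative)
print(explanatory_filter(2.5e-11, False, small_prob=bound))  # unmarked pebble   -> chance
print(explanatory_filter(2.5e-11, True, small_prob=bound))   # marked pebble     -> design
```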

Probabilistic Resources

Dembski introduces the concept of probabilistic resources, which include replicational and specificational resources. Probabilistic resources comprise the relevant ways an event can occur.[iii] Replicational resources are basically the number of samples taken. In the above analogy, it could be the number of shots fired. Specificational resources refer to the number of opportunities or ways to specify an event. Using the same analogy, it could be the number of pebbles with bulls-eyes (or some other mark indicating a target). Obviously, the greater the number of pebbles with targets and the greater the number of shots fired, the greater the probability of hitting a target.
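A rough back-of-the-envelope model (my own illustration, not Dembski’s notation) shows how the two kinds of resources inflate the odds. Here p is the chance of a single shot hitting one marked pebble, m is the number of marked pebbles, and n is the number of shots.

```python
def chance_of_some_specified_hit(p, m, n):
    """Probability that at least one of n shots hits one of m marked pebbles,
    assuming each shot lands uniformly and the pebbles do not overlap."""
    p_per_shot = min(1.0, m * p)          # any marked pebble on a single shot
    return 1 - (1 - p_per_shot) ** n      # at least one success in n shots

p = 2.5e-11  # one-square-inch pebble in the valley (see the next section)
print(chance_of_some_specified_hit(p, m=1, n=1))           # ~2.5e-11
print(chance_of_some_specified_hit(p, m=1_000, n=10_000))  # ~2.5e-4: resources matter
```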

Universal Probability Bound

If the marked pebble has a surface area of one square inch, then the odds of hitting it at random are roughly 1 in 4e10, or one in 40 billion[iv] - about 200 times less likely than winning the Power Ball jackpot with a single ticket (roughly 1 in 195 million at the time). However impractical that may seem, critics would argue it is still within the reach of chance. This is where Dembski introduces his universal probability bound (UPB): a degree of improbability below which a specified event of that probability cannot reasonably be attributed to chance regardless of whatever probabilistic resources from the known universe are factored in.[v] The UPB is one in 1e150 (a 1 followed by 150 zeros). Odds that small are roughly comparable to winning the Power Ball eighteen times in a row with one ticket each - something even the contrarian realizes would be the result of intelligence and not luck (i.e., someone is cheating).
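The arithmetic behind those figures is easy to check. The sketch below assumes a uniform chance of landing on any square inch of the valley (see note [iv]) and takes the circa-2011 Powerball jackpot odds of roughly 1 in 195 million as given.

```python
# Back-of-the-envelope arithmetic for the pebble odds.
valley_sq_miles = 10
inches_per_mile = 5280 * 12                    # 63,360 inches in a mile
valley_sq_inches = valley_sq_miles * inches_per_mile ** 2
print(f"{valley_sq_inches:.2e}")               # ~4.01e10 one-square-inch landing spots

powerball_odds = 195_000_000                   # assumed jackpot odds, ~1 in 195 million
print(valley_sq_inches / powerball_odds)       # ~206: a couple hundred times less likely
```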

But what does Dembski mean by: regardless of whatever probabilistic resources from the known universe are factored in? Here he bases his limit on probabilistic resources on the product of the number of elementary particles in the known universe (1e80), repeating every possible instant (1e45 per second, based on the Planck time), over a generous allowance of time in seconds (1e25, far longer than the universe’s actual age of roughly 4e17 seconds): 1e80 × 1e45 × 1e25 = 1e150. This seems like overkill, but apparently you need this to overcome skepticism.[vi] But do we really need this much overhead? Take, for example, the estimated number of grains of sand on all of the beaches on earth. Say I traveled to a random beach, dug down, and marked a single grain of sand. Now if you go to a random beach anywhere on earth, to a random spot, dig to a random depth (up to 5 meters), and grab a random grain, the odds of it being the same grain I marked are estimated at one in 7.5e18. A rational person would never believe this would happen by chance. Even so, those odds are 131 orders of magnitude better than one in 1e150. The rational position is to recognize there comes a point where theoretical possibility must give way to practical impossibility. Odds of one in 1e150 are not zero, so a specified, small-probability event at this scale is not theoretically impossible, but it is rational to conclude it is practically impossible.
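Here is the same arithmetic laid out explicitly, using the figures above (and, as an assumption of mine, the roughly 1-in-195-million Powerball jackpot odds for the comparison).

```python
import math

# Dembski's UPB as a product of generous upper bounds (values from the text):
particles       = 1e80   # elementary particles in the known universe
states_per_sec  = 1e45   # state changes per second, from the ~5.4e-44 s Planck time
seconds_allowed = 1e25   # far more than the universe's ~4e17-second age
print(f"{particles * states_per_sec * seconds_allowed:.0e}")   # 1e+150

# The beach-sand comparison (~7.5e18 grains):
print(150 - math.log10(7.5e18))      # ~131 orders of magnitude short of the UPB

# Eighteen consecutive Powerball jackpots at an assumed ~1 in 195 million each:
print(18 * math.log10(195e6))        # ~149: about the UPB's 150 orders of magnitude
```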

So far Dembski’s filter appears to be sound. But there is another criticism from detractors: affinities and constraints in the probability landscape can create the appearance of design completely by chance. Say, for example, the archer shot multiple arrows at random, each tethered by a string of equal length. The resulting semicircle pattern in the valley below might be mistaken for design, since such a pattern is unlikely to emerge from unconstrained random shots. But this criticism fails to recognize that the constraint (the string) greatly reduces the probability landscape, so each arrow necessarily falls within a semicircular swath of the valley; given the constraint, the pattern is a high-probability outcome and belongs to necessity, not design. But perhaps there are laws governing the universe where affinities and constraints shape chaos into order. In a future post, I will try to tackle this and the other foundational principle of ID - irreducible complexity. It is when specified complexity meets the real world that things get tricky.
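To make the point concrete, here is a tiny simulation of my own devising: when every arrow is tethered, the landing points cannot help but trace the semicircle, so the pattern is a near-certainty given the constraint rather than evidence of design.

```python
import math, random

# Toy model of the tethered-arrow objection: each arrow flies at a random angle
# but is stopped by a string of fixed length, so every landing point sits on
# the same arc. The "pattern" is forced by the constraint.
random.seed(0)
string_length = 100.0                                          # arbitrary units
angles = [random.uniform(0.0, math.pi) for _ in range(1000)]   # downhill half-plane
landings = [(string_length * math.cos(a), string_length * math.sin(a)) for a in angles]

radii = {round(math.hypot(x, y), 6) for x, y in landings}
print(radii)   # {100.0}: every arrow lands the same distance away, tracing the semicircle
```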



[i] http://www.discovery.org/csc/topQuestions.php#questionsAboutIntelligentDesign
[ii] This analogy does not take into account Dembski’s universal probability bound of 1e-150, which is roughly 139 orders of magnitude more stringent than the one-in-4e10 odds in this analogy.

[iii] William Dembski, The Design Inference, p. 181.

[iv] This assumes an equal probability of hitting any location across the valley below, which in real life would not be the case – for example, if you could hit the corners, you could likely land outside the valley as well.

[v] ISCID Encyclopedia of Science and Philosophy (1999)

[vi] This seems straightforward in terms of replicational resources, but I question the validity of also including specificational resources here. Samples repeated as quickly as physically possible in every conceivable location in the universe since the big bang do seem to set an upper limit for replicational resources, but I do not see how that relates to the number of ways a specification can be varied. Imagine every elementary particle in the universe has a piggyback random number generator cranking out one 200-digit number every Planck time since the big bang. One would reasonably expect that the first 200 digits of the square root of two had never been generated. But what about the square roots of all the other non-square positive integers, each taken to 200 digits? I’m sure SETI would consider a binary transmission of the first 200 digits of the square root of two to have an intelligent cause – but what about the square root of 3 or 7, etc.?
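A quick bit of arithmetic behind that thought experiment (my own rough numbers): even with a draw at every particle every Planck time since the big bang, any particular 200-digit pattern remains wildly out of reach, and allowing a generous family of specifications barely moves the needle.

```python
# Roughly 1e150 random 200-digit strings drawn from about 1e200 possibilities,
# so any one specified pattern (say, the first 200 digits of sqrt(2)) has about
# a 1e150 / 1e200 = 1e-50 chance of ever appearing.
draws, possibilities = 1e150, 1e200
patterns = 1e6        # assumed number of specifications (sqrt(n) to 200 digits, etc.)
print(draws * patterns / possibilities)   # ~1e-44: still hopeless, even with many patterns
```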


About the author

I am a Christian, husband, father of two daughters, an owner of ISC, lead architect of MapDotNet, armchair apologist and philosopher, writer of hand-crafted electronic music, and a kid around anything that flies (rockets, planes, copters, boomerangs, hot air balloons, lawn furniture)
