## Dembski’s Impossible Assumptions

One of the things that gets me when reading just about anything by William Dembski is his continual use of probability calculations to try to support his claims about evolution. For example, in this paper by Dembski on his “displacement problem” we get the following:

> Take the search for a very modest protein, one that is, say, 100 amino acids in length (most proteins are at least 250 to 300 amino acids in length). The space of all possible protein sequences that are 100 amino acids in length has size 20^100, or approximately 1.27×10^130. Exhaustively searching a space this size to find a target this small is utterly beyond not only present computational capacities but also the computational capacities of the universe as we know it. [snip]
>
> When it comes to locating small targets in large spaces, random sampling and random walks are equally ineffective.
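For reference, Dembski’s figure for the size of the sequence space checks out, and is easy to reproduce (a quick sketch using only the standard library):

```python
import math

# Number of protein sequences of length 100 over the 20 standard amino acids.
space_size = 20 ** 100

# Express it in scientific notation: 20^100 = 10^(100 * log10(20)).
exponent = 100 * math.log10(20)                      # ≈ 130.103
mantissa = 10 ** (exponent - math.floor(exponent))   # ≈ 1.27

print(f"20^100 ≈ {mantissa:.2f} x 10^{int(exponent)}")  # 20^100 ≈ 1.27 x 10^130
```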

There are two problems here. The first is Dembski’s assumption that there is a target that has to be found. This assumes the very thing that Dembski is trying to prove: teleology, the idea that there is a purpose or goal being worked towards. The problem Dembski has with evolutionary theory is precisely that it posits no such goal. In short, Dembski is smuggling in (and not very well) the conclusion he wants.

The second problem is that this notion of a target also implicitly assumes a highly improbable outcome, when in fact that needn’t be the case at all. Evolutionary theory, despite Dembski’s caricature of it, does not work towards a target but merely towards what works. That is, to use Dembski’s language, suppose there are two ideal targets for his protein of length 100, and call these two targets *T_a* and *T_b*. Further, suppose that around each of these two targets there is a neighborhood, *B(T_a)* and *B(T_b)*, such that any protein in that neighborhood would be sufficient for whatever organism needs this protein to go on living. Depending on how large these neighborhoods are, the probability of landing inside one of them might be much, much higher than Dembski’s calculations imply. Then factor in that you might have, say, 1 million organisms, each trying a different path into one of these neighborhoods. Suddenly what appears to be highly unlikely might be much more likely. And finally, suppose that instead of searching for a complete protein of length 100 amino acids we only need to go from 98 amino acids to 100.

Dembski’s calculation for his caricature is that a sample of 10^130 is needed. Indeed a very, very large number. But what is the number when we take into consideration all of the above extensions to Dembski’s basic caricature? What if it is brought down to something like 10^10? Still a very large number, but there is also another assumption thrown in by Dembski. He thinks that the probability of success has to be about 0.63 for some reason. Why? The only thing Dembski writes about that number is that the probability of hitting his target in *m* independent trials is 1 − (1 − *p*)^*m*, which approaches 1 − 1/*e* ≈ 0.63 as *m* approaches 1/*p*. But why should this be the default cutoff for finding our way into the target neighborhoods? What if we make it 0.25 or 0.10 instead? After all, evolutionary theory is perfectly fine with organisms going extinct; if the target isn’t found, too bad for whatever was looking for these proteins. Including this change, the required sample size drops down to 1,338,079,092. Again, still a very large number, but note that it is about 121 orders of magnitude smaller than Dembski’s number for the sample size.
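The arithmetic behind these sample sizes is just an inversion of the formula 1 − (1 − *p*)^*m*. A sketch (the neighborhood probability of 1e-10 and the 0.25 cutoff below are illustrative assumptions, not Dembski’s values):

```python
import math

def trials_needed(p, success_prob):
    """Number of trials m with 1 - (1 - p)^m = success_prob,
    i.e. m = ln(1 - success_prob) / ln(1 - p)."""
    # math.log1p keeps precision when p is astronomically small.
    return math.log(1.0 - success_prob) / math.log1p(-p)

# Dembski's setup: exact target in a space of 20^100 sequences,
# demanding success probability 1 - 1/e ≈ 0.63.
p_exact = 20.0 ** -100
m_exact = trials_needed(p_exact, 1 - math.exp(-1))   # ≈ 1/p ≈ 1.27e130

# Relaxed setup (illustrative numbers): a whole neighborhood of
# workable proteins, say p = 1e-10, and a 0.25 success cutoff.
p_neigh = 1e-10
m_neigh = trials_needed(p_neigh, 0.25)               # ≈ 2.9e9

print(f"exact target, 0.63 cutoff: {m_exact:.2e} trials")
print(f"neighborhood, 0.25 cutoff: {m_neigh:.2e} trials")
```

The point isn’t the particular numbers but the shape of the dependence: the required sample size scales as 1/*p*, so enlarging the target from a single sequence to a survivable neighborhood collapses the figure by exactly as many orders of magnitude as the neighborhood is larger than the point target.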

In short, Dembski’s conclusions are highly dependent on his assumptions, and his assumptions, as has been shown (and pointed out to Dembski) many times, are quite false. This is why many consider Dembski a dishonest hack when it comes to his writings on evolution and Intelligent Design. I tend to agree, and I think he dresses up his discussions with mathematics to hide this from his average reader.

Dishonest is right. It’s a similar pattern to the b.s. about how, if you dump clock parts into a paper bag & shake, you’ll never get a clock. Well, *duh*.

Wouldn’t it be more accurate to say that evolutionary theory would have a range of acceptable solutions (i.e. the creature doesn’t die) for the protein? Then, based on environmental events, we find one subset of that range more likely to successfully pass on its solution. Repeated winnowing (what worked well for the ice age doesn’t work as well when the glaciers retreat) would further reduce the solution sets available. Further, random changes (either mutations or differences in the ‘normal’ range, such as people’s hair color) that still met the first goal (i.e. they don’t kill the organism before it can reproduce) would allow the solution set to broaden again after any environmental events pass (again, think ice age). So the issue for evolutionary theory wouldn’t be any targeted goal, but rather the winnowing and expansion that supports the solution or solutions produced.
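This winnowing-and-expansion picture can be caricatured in a few lines of code. Everything here — the one-dimensional “genotype”, the survival band, the mutation step — is a toy assumption for illustration, not a model of real biology:

```python
import random

random.seed(1)

def generation(pop, lo, hi, mut=3):
    """Kill genotypes outside the survival band [lo, hi] (winnowing),
    then let survivors breed with random mutation (expansion)."""
    survivors = [g for g in pop if lo <= g <= hi]
    if not survivors:
        return []                       # extinction is allowed
    return [random.choice(survivors) + random.randint(-mut, mut)
            for _ in range(len(pop))]

pop = [random.randint(40, 60) for _ in range(200)]

# A stable environment: survival band [45, 65] for 30 generations...
for _ in range(30):
    pop = generation(pop, 45, 65)

# ...then the "glaciers retreat": the band shifts to [60, 80].
for _ in range(30):
    pop = generation(pop, 60, 80)

survivors = [g for g in pop if 60 <= g <= 80]
print(f"{len(survivors)} of {len(pop)} genotypes now sit in the new band")
```

No genotype is aiming at anything; the population tracks the shifting band simply because whatever falls outside it stops reproducing.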

I think you are letting him set the equation terms and you are just quibbling over the values used for the terms.

I’m not sure, YAJ, but I don’t see a big difference from what I’ve written. Your description sounds somewhat similar to my concept of a neighborhood in which we find something that allows the organism to survive (and hence breed, and pass on the genes necessary for survival, while those that don’t die off). Obviously, as the environment changes, the neighborhood would change as well. What previously worked (as you note) might no longer work, hence extinction or a major die-off.

I’ve a post on a related topic – the alleged need for “target specifications” in genetic algorithms – over on the Panda’s Thumb.

“Target? TARGET? We don’t need no stinkin’ Target!”

Cheers, Dave Thomas