Saturday, May 13, 2006

Irreducible complexity in Scientific American

Scientific American has published an article that talks about irreducible complexity.

Okay, so it's not exactly the ID variety. It's in the March 2006 issue, and it's on "The Limits of Reason". The author, Gregory Chaitin, points out that Gödel's Incompleteness Theorem "did for" the idea of maths as a unifying principle for knowledge, by demonstrating that there are true mathematical statements that can't be proved within any given formal system. He goes on to explore the idea that there are also numbers that are perfectly well defined, but that can't be written down or computed by any program.

"Hang on," I hear you say, "aren't the square root of two and pi and other irrational numbers like that?" No - the point he is getting at is that you can write a short algorithm that will derive each of these. But you can't write a short algorithm that will derive other classes of numbers. So not only are they irrational, they also contain a lot of algorithmic information.

In actual fact, this article does have some relevance to ID. For example, in debates here a few months ago, we talked about whether random processes could generate meaningful amounts of information. Chaitin points out that "a useful theory is a compression of the data; comprehension is compression. You compress things into computer programs, into concise algorithmic descriptions. The simpler the theory, the better you understand something."

So take a random sequence of letters. The specification "means something in English" is fairly concise, even though implementing it as an algorithm would be complex. But to argue that "we can't exclude the possibility that this contains information in some conceivable code" doesn't mean that the letters contain, or even may contain, more information. If the algorithmic complexity is high (that is, if the shortest program able to reproduce the sequence is about as long as the sequence itself), then the stream of letters is incompressible and has no redundancy; as Chaitin puts it, "the best one can do is transmit them directly. They are called irreducible or algorithmically random."
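Chaitin's compression framing can be made concrete with an off-the-shelf compressor. Again, this is my own sketch rather than anything from the article: zlib is only a crude stand-in for algorithmic complexity, but the contrast still shows up. Redundant English-like text collapses to a fraction of its length, while a random stream of letters of the same length barely shrinks at all.

```python
import random
import string
import zlib

def compressed_size(text: str) -> int:
    """Bytes after zlib compression: a rough upper bound on the text's information content."""
    return len(zlib.compress(text.encode("utf-8"), 9))

# Highly redundant English-like text: the same sentence repeated over and over.
english = ("a useful theory is a compression of the data; "
           "comprehension is compression. ") * 30

# A random stream of letters and spaces of exactly the same length.
random.seed(0)
alphabet = string.ascii_lowercase + " "
gibberish = "".join(random.choice(alphabet) for _ in range(len(english)))

print(len(english), compressed_size(english))
# the redundant text collapses to a tiny fraction of its length
print(len(gibberish), compressed_size(gibberish))
# the random letters shrink only slightly (the compressor can exploit the small
# alphabet, but nothing else); most of the stream has to be sent as-is
```

A general-purpose compressor obviously isn't the shortest possible program, so this only gives an upper bound on the algorithmic information in each string; but the asymmetry is exactly Chaitin's point. The redundant text has a short description, and the random letters, to all intents and purposes, don't.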

I think this article effectively endorses some aspects of Dembski's arguments about the nature of complexity, and it certainly seems to weaken some of the counter-analyses.