Here are three Gregory Chaitin lectures on his algorithmic complexity theory and the halting probability Omega. The lectures are "Randomness in arithmetic and the decline and fall of reductionism in pure mathematics", "Elegant LISP programs", "An invitation to algorithmic information theory". There is also "The limits of mathematics".
As far as I understand it [after reading this book, and then having a long conversation with a more mathematically experienced colleague], Chaitin’s argument is as follows. Omega is the probability that a program chosen at random halts (modulo some technicalities to make this definition well behaved). We would like to find the value of Omega. It is possible to increase the lower bound on Omega’s value by observing that particular programs halt. But clearly it is not possible to decrease the upper bound except by reasoning that particular programs do not halt, for it is not possible to observe a program not halting. Algorithmic information theory shows that it is not possible to reason about programs that are much larger than one’s reasoning system. So the only way to tighten the upper bound on Omega is to keep increasing the size of one’s reasoning system, by adding more axioms.
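This lower-bound process can be illustrated with a toy model. A hedged sketch only: the real Omega is defined over a universal prefix-free machine and is uncomputable, so the "halting" rule below is a trivially decidable stand-in (a program "halts" once it contains the substring "11"), chosen just to show how observing halting programs pushes the lower bound up.

```python
from itertools import product

def halts(bits):
    # Toy, decidable stand-in for a real halting test: a "program"
    # halts as soon as it contains the substring "11".
    return "11" in bits

def omega_lower_bound(max_len):
    # Sum 2**-len over halting programs, keeping the set prefix-free:
    # a program counts only if no proper prefix of it already halted.
    total = 0.0
    for n in range(1, max_len + 1):
        for p in product("01", repeat=n):
            bits = "".join(p)
            if halts(bits) and not any(halts(bits[:k]) for k in range(1, n)):
                total += 2.0 ** -n
    return total
```

Enumerating longer programs only ever adds terms, so the bound increases monotonically, exactly as described above; what the toy cannot show is the hard part, namely that for the real Omega no computable procedure tells you how close the bound has got.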
There is some overlap and repetition between the chapters, but this is no bad thing, as he gets to explain the same ideas from different angles. These are edited transcripts of lectures, and have a rather chatty, even exuberant, style, but lack some of the polish that might be expected from a written essay. Never mind – this is a clear explanation of some deep results, from the original inventor himself. Mind-expanding stuff.
Chaitin produced his first theoretical results in the 1970s.
What he has done since is program up these theorems, in a form of Lisp. This has allowed him to determine the actual (surprisingly small) values of numerical constants in some of his undecidability theorems, and lower bounds on the halting probability. Probably only a mathematician would make the following definition: a program is "elegant" if no smaller program produces the same output.
(Although, to be fair, he does say that these "elegant" programs are not the kind of things you would want to write for real!) He goes on to explain the conclusions of his work.
However, this is not necessarily a problem.
But what has made a difference to the working mathematician is the computer. A whole new subject is opening up: empirical, or experimental, mathematics.
Chaitin’s revolutionary discovery, the Omega number, is an exquisitely complex representation of unknowability in mathematics. His investigations shed light on what we can ultimately know about the universe and the very nature of life. In an account filled with passion, Chaitin delineates the specific intellectual and intuitive steps he took toward the discovery. He takes us to the very frontiers of scientific thinking, and helps us to appreciate the art—and the sheer beauty—in the science of math.
This is a more accessible version of Chaitin’s work than his earlier, more academic, book. He explains the background leading up to the definition and properties of a rather fascinating number: Omega, the uncomputable Halting probability. On the way, he leads us through discussions about the value of multiple proofs of deep theorems (they highlight different aspects of the theorems), the unspeakability of the majority of real numbers, and the role of randomness in mathematics.
This is a nice little book, spoiled for me by only one thing! A small thing, admittedly! But annoying nonetheless! I refer to the profusion of exclamation marks!!! At the time of reading, it felt like nearly every sentence ended with one. On looking back, I see that they are not quite that frequent, but still frequent enough to be irritating. The editor should have removed all of them – the material is exciting enough in its own right without the need to keep telling us so.
The Gödel incompleteness phenomenon – the usual formal mathematical systems can neither prove nor disprove all true mathematical sentences – is frequently presented in textbooks as something that happens in the rarefied realms of mathematical logic, and that has nothing to do with the real world. Practice shows the contrary, though: one can demonstrate the validity of the phenomenon in various areas, ranging from chaos theory and physics to economics, and even ecology. In this lively treatise, based on Chaitin’s groundbreaking work and on the da Costa-Doria results in physics, ecology, economics and computer science, the authors show that the Gödel incompleteness phenomenon can directly bear on the practice of science and perhaps on our everyday life.
This accessible book gives a new, detailed and elementary explanation of the Gödel incompleteness theorems, and presents the Chaitin results and their relation to the da Costa-Doria results, which are given in full, but with no technicalities. Besides the theory, the historical notes and personal stories about the main characters and about this book’s writing process make it appealing leisure reading for those interested in mathematics, logic, physics, philosophy and computer science.
For years it has been received wisdom among most scientists that, just as Darwin claimed, all of the Earth’s life-forms evolved by blind chance. But does Darwin’s theory function on a purely mathematical level? Has there been enough time for evolution to produce the remarkable biological diversity we see around us? It’s a question no one has yet answered—in fact, no one has even attempted to answer it until now.
In this illuminating and provocative book, Gregory Chaitin argues that we can’t be sure evolution makes sense without a mathematical theory. He elucidates the mathematical scheme he’s developed that can explain life itself, and examines the works of mathematical pioneers John von Neumann and Alan Turing through the lens of biology. Chaitin presents an accessible introduction to metabiology, a new way of thinking about biological science that highlights the mathematical structures underpinning the biological world. Fascinating and thought-provoking, Proving Darwin makes clear how biology may have found its greatest ally in mathematics.
This slim book (120pp) recounts a set of lectures Chaitin gave where he describes a mathematical model he uses to ‘prove Darwin’, that is, to show that evolutionary search is exponentially faster than exhaustive search, and only polynomially slower than ‘intelligent design’ (guessing the right answer in one go).
The entities being evolved are (mathematical abstractions of) computer programs. The fitness is how large an integer the program calculates: for a program of a given size, this has the upper bound of the (uncomputable) Busy Beaver number. The evolutionary approach used is what is known as a ‘(1+1) evolutionary strategy’: each generation there is one ‘parent’, which is mutated to form a child, and the fitter of the parent and child becomes the parent of the next generation. (Chaitin [p.44] explicitly says he keeps the mutant only if it is fitter, not the usual approach of also keeping it if it has the same fitness, which allows neutral drift.) The mutations are themselves programs; parent A is mutated to child A’ by applying the mutation program M: A’ = M(A). Bigger mutation programs have exponentially lower chances of being applied.
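The (1+1) strategy itself is easy to sketch. This is only a hedged toy, not Chaitin's model: his Busy-Beaver-bounded fitness is uncomputable and his mutations are programs, so here the fitness is a simple bit-count stand-in and the mutations are plain bit-flips. What the sketch does capture is the strict acceptance rule from p.44: the mutant replaces the parent only if it is strictly fitter.

```python
import random

def fitness(genome):
    # Toy stand-in for Chaitin's fitness measure (the integer a program
    # names, bounded by the uncomputable Busy Beaver): count of 1-bits.
    return sum(genome)

def one_plus_one_es(length=64, generations=2000, seed=0):
    # (1+1) evolutionary strategy with the strict acceptance rule:
    # the mutant replaces the parent only if it is strictly fitter,
    # so there is no neutral drift.
    rng = random.Random(seed)
    parent = [0] * length
    for _ in range(generations):
        # Flip each bit independently with probability 1/length.
        child = [b ^ (rng.random() < 1.0 / length) for b in parent]
        if fitness(child) > fitness(parent):
            parent = child
    return parent
```

Changing the acceptance test from `>` to `>=` gives the more usual variant that permits neutral drift, which is exactly the design choice Chaitin says he rejects.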
With this setup, Chaitin proves his result: the evolutionary form of the search is exponentially faster than exhaustive search. On the way, he also has many interesting things to say about modelling, ‘metabiology’, and software.
Now, mathematically, this is all very interesting. But I have a few issues with the physical implementation of this scheme, and it would need to be physically implementable in order to demonstrate that physical Darwinian evolution has the properties being discussed.
My first issue is with the need for an uncomputable ‘halting oracle’ in the proof. He needs to use this oracle twice each generation. The first use is to check that the mutation M, a random bitstring, is itself a valid program that actually produces a child A’: does M(A) halt? The second use is to check that the resulting child program A’ halts. Chaitin [p.46] says that this halting oracle is where all the ‘creativity’ is coming from in the mathematical model. He also explains how the Busy Beaver problem is related to his uncomputable halting probability Omega (described in more detail in his book Meta Math!), and how his model rapidly converges to Omega.
So, given an uncomputable halting oracle, here is a way to use it to find a good approximation to the uncomputable halting probability. Maybe not very surprising, and maybe not that related to how Darwinian evolution works? All the ‘creativity’ comes from the uncomputability.
My second issue is with the run time of these evolving programs. Unlike living organisms, whose fitness is evaluated as they live, and for whom halting is dying, these programs don’t deliver their solution until they halt. And how long does that take? In the mathematical model, no time at all: Chaitin [p.19] says his organisms have no hardware, only software. But an actually executing program does need hardware, and its execution would take a ridiculously long time. The Busy Beaver output grows inconceivably fast with program size: very soon, even a computer performing one operation per Planck time would take longer than the age of the universe to execute, halt, and deliver its output. Darwinian evolution does not have this freedom.
I found the ideas here interesting, and well explained. But I don’t think Chaitin has succeeded in Proving Darwin.