- *Algorithmic Information Theory*. 1987
- *Information, Randomness and Incompleteness: 2nd edn*. 1990
- *Information-Theoretic Incompleteness*. 1992
- *The Limits of Mathematics*. 1998
- *The Unknowable*. 1999
- *Exploring Randomness*. 2001
- *Conversations with a Mathematician*. 2002
- *Meta Math!*. 2005
- *Gödel's Way* (with Newton C. A. da Costa and Francisco Antonio Doria). 2012
- *Proving Darwin*. 2012

- Randomness and mathematical proof. 1975. (In *From Complexity to Life*)
- An algebraic equation for the halting probability. 1988. (In *The Universal Turing Machine*)
- A random walk in arithmetic. 1992. (In *The New Scientist Guide to Chaos*)

This book presents the final version of Chaitin’s course on the limits of mathematical reasoning.
This course uses algorithmic information theory to show that mathematics has serious limitations,
and features a new, more didactic approach to algorithmic information theory, using LISP and Mathematica software.
The thesis of the book is that the incompleteness phenomenon discovered by Gödel
is much more widespread and serious than hitherto suspected.
Gödel’s and Einstein’s views on the foundations of mathematics are also discussed,
and it is suggested that mathematics is quasi-empirical
and that experimental mathematics should be used more freely.

Here are three Gregory Chaitin lectures on his algorithmic complexity theory and the halting probability Omega. The lectures are "Randomness in arithmetic and the decline and fall of reductionism in pure mathematics", "Elegant LISP programs", "An invitation to algorithmic information theory". There is also "The Limits of Mathematics".

As far as I understand it [after reading this book, and then having
a long conversation with a more mathematically experienced colleague],
Chaitin’s argument is as follows. Omega is the probability that a
program chosen at random halts (modulo some technicalities to make
this definition well behaved). We would like to find the value of
Omega. It is possible to increase the lower bound on Omega’s value by
observing that particular programs halt. But clearly it is not
possible to decrease the upper bound except by *reasoning* that
particular programs do *not* halt, for it is not possible to
observe a program not halting. Algorithmic information theory shows
that it is not possible to reason about programs that are much larger
than one’s reasoning system. So the *only* way to tighten the
upper bound on Omega is to keep increasing the size of one’s reasoning
system, by adding more axioms.
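The lower-bound half of that argument can be sketched in a few lines of Python. The "machine" below is my own toy stand-in, not Chaitin's universal machine: its programs form a prefix-free set and its halting rule is decidable, purely so the sketch can run. What it illustrates is the asymmetry above: observed halts push the lower bound up, but nothing you can *observe* ever pulls the upper bound down.

```python
from fractions import Fraction

def toy_halts_within(program: str, steps: int) -> bool:
    """One step-bounded run on a toy machine (an assumed stand-in, not
    Chaitin's machine).  Valid programs are '0'*k + '1' -- a prefix-free
    set -- and such a program halts, after k steps, iff k is even.
    Real halting is only semi-decidable; here it is decidable so that
    the sketch terminates."""
    if not program or "1" in program[:-1] or program[-1] != "1":
        return False  # not a valid program
    k = len(program) - 1
    return k % 2 == 0 and k <= steps

def omega_lower_bound(budget: int) -> Fraction:
    """Dovetail: run every program of length <= budget for <= budget
    steps, adding 2**-len(p) for each program observed to halt.  The
    bound can only increase; a program *not* halting is never observed,
    which is why tightening the upper bound needs proofs, not runs."""
    bound = Fraction(0)
    for k in range(budget):
        if toy_halts_within("0" * k + "1", budget):
            bound += Fraction(1, 2 ** (k + 1))
    return bound
```

For this toy machine the true Omega is the geometric sum 1/2 + 1/8 + 1/32 + … = 2/3, and the computed lower bounds creep up towards it from below, exactly as in the argument above.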

There is some overlap and repetition between the chapters, but this is no bad thing, as he gets to explain the same ideas from different angles. These are edited transcripts of lectures, and have a rather chatty, even exuberant, style, but lack some of the polish that might be expected from a written essay. Never mind – this is a clear explanation of some deep results, from the original inventor himself. Mind expanding stuff.

Chaitin produced his first theoretical results in the 1970s.

You can’t prove
that an *N*-bit string is algorithmically incompressible if *N* is
larger than the complexity of your axioms. You can’t prove that
an *N*-bit string is algorithmically irreducible if it has more
bits than the axioms you’re using for the proof.

What he has done since is program up these theorems, in a form of Lisp. This has allowed him to determine the actual (surprisingly small) values of numerical constants in some of his undecidability theorems, and lower bounds on the halting probability. Probably only a mathematician would make the following definition:

Call a program "elegant"
if no smaller program has the same output.
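The definition can be made concrete in a few lines of Python (my own illustration; Chaitin works in LISP). The toy language here is just arithmetic expressions, and brute force works only because every toy program halts; for real programs halting is undecidable, and proving a particular large program elegant is exactly what his incompleteness result forbids.

```python
from itertools import product

ALPHABET = "0123456789+*"  # toy language: arithmetic expressions only

def output(program: str):
    """Run a toy program; None if it is not a valid expression."""
    try:
        return eval(program, {"__builtins__": {}})
    except Exception:
        return None

def elegant_program_for(value: int, max_len: int = 5):
    """Return a shortest toy program computing `value` -- 'elegant' in
    Chaitin's sense, relative to this toy language.  Exhaustive search
    is feasible here only because every toy program halts and the
    language is tiny."""
    for n in range(1, max_len + 1):
        for chars in product(ALPHABET, repeat=n):
            program = "".join(chars)
            if output(program) == value:
                return program
    return None
```

For instance, the shortest toy program with output 1073741824 is not the 10-digit literal but the 5-character expression `2**30`.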

(Although, to be fair, he does say that these "elegant" programs are not the kind of things you would want to write for real!) He goes on to explain the conclusions of his work.

The existence
of irreducible mathematical facts shows that in some cases there is
absolutely no compression, no structure or pattern at all in
mathematical truth.

However, this is not necessarily a problem.

I don’t believe
that all this work on the foundations of mathematics has changed at
all the way mathematicians actually work. Gödel’s incompleteness
theorem was initially very shocking. But then mathematicians noticed
that the kind of assertion that Gödel constructed that’s true but
unprovable is not the kind of assertion that you deal with in your
everyday work as a mathematician. And I think it’s fair to say that
about Omega too.

But what has made a difference to the working mathematician is the computer. A whole new subject is opening up: empirical, or experimental, mathematics.

This essential companion volume to Chaitin’s highly successful “The Limits of Mathematics”,
also published by Springer, gives a brilliant historical survey of the work this century
on the foundations of mathematics, in which the author was a major participant.
“The Unknowable” is a very readable and concrete introduction to Chaitin’s ideas,
and it includes a detailed explanation of the programming language used by Chaitin in both volumes.
It will enable computer users to interact with the author’s proofs and discover for
themselves how they work.
The software for “The Unknowable” can be downloaded from the author’s Web site.

This essential companion volume to Chaitin’s highly successful books
*The Unknowable* and *The Limits of Mathematics*, also published by Springer,
presents the technical core of his theory of program-size complexity,
also known as algorithmic information theory.
(The two previous volumes are more concerned with applications to metamathematics.)
LISP is used to present the key algorithms
and to enable computer users to interact with the author’s proofs and discover for themselves how they work.
The LISP code for this book is available at the author’s Web site
together with a Java applet LISP interpreter.

G.J. Chaitin is at the IBM Thomas J. Watson Research Center in New York.
He has shown that God plays dice not only in quantum mechanics,
but even in the foundations of mathematics,
where Chaitin discovered mathematical facts that are true for no reason, that are true by accident.
This book collects his most wide-ranging and non-technical lectures and interviews,
and it will be of interest to anyone concerned with the philosophy of mathematics,
with the similarities and differences between physics and mathematics,
or with the creative process and mathematics as an art.

Chaitin’s revolutionary discovery, the Omega number, is an exquisitely complex representation of unknowability in mathematics. His investigations shed light on what we can ultimately know about the universe and the very nature of life. In an account filled with passion, Chaitin delineates the specific intellectual and intuitive steps he took toward the discovery. He takes us to the very frontiers of scientific thinking, and helps us to appreciate the art—and the sheer beauty—in the science of math.

This is a more accessible version of Chaitin’s work than his earlier, more academic, book. He explains the background leading up to the definition and properties of a rather fascinating number: Omega, the uncomputable Halting probability. On the way, he leads us through discussions about the value of multiple proofs of deep theorems (they highlight different aspects of the theorems), the unspeakability of the majority of real numbers, and the role of randomness in mathematics.

This is a nice little book, spoiled for me by only one thing! A small
thing, admittedly! But annoying nonetheless! I refer to the profusion of
exclamation marks!!! At the time of reading, it felt like nearly every
sentence ended with one. On looking back, I see that they are not *quite*
that frequent, but still frequent enough to be irritating. The editor
should have removed all of them – the material is exciting enough in its
own right without the need to keep telling us so.

Kurt Gödel (1906-1978) was an Austrian-American mathematician,
who is best known for his incompleteness theorems.
He was the greatest mathematical logician of the 20th century,
with his contributions extending to Einstein’s general relativity,
as he proved that Einstein’s theory allows for time machines.

The Gödel incompleteness phenomenon –
*the usual formal mathematical systems cannot prove or disprove all true mathematical sentences* –
is frequently presented in textbooks as something that happens in the rarefied realms of mathematical logic,
and that has nothing to do with the real world.
Practice shows the contrary though;
one can demonstrate the validity of the phenomenon in various areas,
ranging from chaos theory and physics to economics, and even ecology.
In this lively treatise, based on Chaitin’s groundbreaking work and
on the da Costa-Doria results in physics, ecology, economics and computer science,
the authors show that the Gödel incompleteness phenomenon
can directly bear on the practice of science and perhaps on our everyday life.

This accessible book gives a new, detailed and elementary explanation of the Gödel incompleteness theorems and presents the Chaitin results and their relation to the da Costa-Doria results, which are given in full, but with no technicalities. Besides the theory, the historical report and personal stories about the main character and about this book’s writing process make it appealing leisure reading for those interested in mathematics, logic, physics, philosophy and computer science.

Groundbreaking mathematician Gregory Chaitin gives us
the first book to posit that we can prove how
Darwin's theory of evolution works on a mathematical level.

For years it has been received wisdom among most scientists that, just as Darwin claimed, all of the Earth’s life-forms evolved by blind chance. But does Darwin’s theory function on a purely mathematical level? Has there been enough time for evolution to produce the remarkable biological diversity we see around us? It’s a question no one has yet answered—in fact, no one has even attempted to answer it until now.

In this illuminating and provocative book,
Gregory Chaitin argues that we can’t be sure
evolution makes sense without a mathematical theory.
He elucidates the mathematical scheme he’s developed
that can explain life itself, and examines the works
of mathematical pioneers John von Neumann and Alan Turing through the lens of biology.
Chaitin presents an accessible introduction to metabiology,
a new way of thinking about biological science
that highlights the mathematical structures underpinning the biological world.
Fascinating and thought-provoking,
*Proving Darwin* makes clear how biology may have found
its greatest ally in mathematics.

This slim book (120pp) recounts a set of lectures Chaitin gave where he describes a mathematical model he uses to ‘prove Darwin’, that is, to show that evolutionary search is exponentially faster than exhaustive search, and only polynomially slower than ‘intelligent design’ (guessing the right answer in one go).

The entities being evolved are (mathematical abstractions of) computer programs. The fitness is how large an integer the program calculates: for a program of a given size, this is bounded above by the (uncomputable) Busy Beaver number. The evolutionary approach used is what is known as a ‘(1+1) evolutionary strategy’: each generation there is one ‘parent’ which is mutated to form a child, and the fitter of the parent and child becomes the parent of the next generation. (Chaitin [p.44] explicitly says he keeps the mutant only if it is fitter, not the usual approach of keeping it even if it is the same fitness, which allows neutral drift.) The mutations are themselves programs; parent A is mutated to child A’ by applying the mutation program M: A’ = M(A). Bigger mutation programs have exponentially lower chances of being applied.
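The shape of this setup can be sketched as a (1+1) evolution strategy in Python. Everything concrete below is my own computable stand-in: in Chaitin’s model the fitness is bounded by the uncomputable Busy Beaver function and the mutations are arbitrary programs, so take this only as the skeleton of the algorithm, not his construction.

```python
import random

DIGITS = "0123456789+"  # toy organisms: arithmetic sums, not real programs

def fitness(organism: str):
    """Toy fitness: the integer the 'program' computes (None if invalid
    or over-long).  In Chaitin's model this is bounded by the
    uncomputable Busy Beaver function; here it is a computable toy."""
    if len(organism) > 12:
        return None
    try:
        value = eval(organism, {"__builtins__": {}})
        return value if isinstance(value, int) else None
    except Exception:
        return None

def mutate(organism: str) -> str:
    """A random 'mutation': insert a geometric number of random
    characters, so bigger mutations are exponentially less likely,
    echoing the 2**-size weighting of mutation programs."""
    size = 1
    while random.random() < 0.5:  # P(size = k) = 2**-k
        size += 1
    out = list(organism)
    for _ in range(size):
        out.insert(random.randrange(len(out) + 1), random.choice(DIGITS))
    return "".join(out)

def evolve(generations: int = 300, seed: int = 0) -> str:
    """(1+1) evolution strategy as described above: one parent, one
    child per generation, and the child takes over only if it is
    *strictly* fitter (no neutral drift)."""
    random.seed(seed)
    parent = "1"
    for _ in range(generations):
        child = mutate(parent)
        f_child = fitness(child)
        if f_child is not None and f_child > fitness(parent):
            parent = child
    return parent
```

With this computable fitness the search soon plateaus at the 12-character cap; the interesting behaviour in Chaitin’s model comes precisely from the uncomputable Busy Beaver bound, which is where his halting oracle enters.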

With this setup, Chaitin proves his result: the evolutionary form of the search is exponentially faster than exhaustive search. On the way, he also has many interesting things to say about modelling, ‘metabiology’, and software.

Now, mathematically, this is all very interesting. But I have a few issues with the physical implementation of this scheme, and it would need to be physically implementable in order to demonstrate that physical Darwinian evolution has the properties being discussed.

My first issue is with the need for an uncomputable ‘halting oracle’ in the proof.
He needs to use this oracle twice each generation.
The first use is to check that the mutation M, a random bitstring,
is itself a valid program that actually produces a child A’: does M(A) halt?
The second use is to check that the resulting child program A’ halts.
Chaitin [p.46] says that this halting oracle is where all
the ‘creativity’ is coming from in the mathematical model.
He also explains how the Busy Beaver problem is related
to his uncomputable halting probability Omega
(described in more detail in his book *Meta Math!*),
and how his model rapidly converges to Omega.

So, given an uncomputable halting *oracle*,
here is a way to use it to find a good approximation to the uncomputable halting *probability*.
Maybe not very surprising, and maybe not that related to how Darwinian evolution works?
All the ‘creativity’ comes from the uncomputability.

My second issue is with the run time of these evolving programs. Unlike living organisms, whose fitness is evaluated as they live, and where halting is dying, these programs don’t deliver their solution until they halt. And how long does that take? In the mathematical model, no time at all: Chaitin [p.19] says his organisms have no hardware, only software. But an actually executing program does need hardware, and its execution would take a ridiculously long time. The Busy Beaver output grows inconceivably fast as the size of the program grows, and very soon even a computer performing one operation per Planck time would need longer than the age of the universe to execute the program, halt, and deliver its output. Darwinian evolution does not have this freedom.

I found the ideas here interesting, and well explained.
But I don’t think Chaitin has succeeded in *Proving Darwin*.