Science in the 20th century has become focussed on the what, with scant regard for the why. Deutsch wants explanation put back into our way of doing science -- science is our way of understanding the world, not just tersely describing what it does. This book is his attempt to describe what An Explanation of Everything might look like (as contrasted with physicists' quest for a theory of everything, by which they mean one single equation to describe fundamental physics). This explanation is structured around four theories central to modern science:
He argues these theories should be 'taken seriously'; that is, scientists should be exploring their (possibly extreme) logical consequences, not just applying them in narrow domains.
Even though these theories are not taken seriously, usually because their consequences are disliked, there are no better alternatives. Deutsch argues that this rejection leads some people to support worse alternatives, resulting in a lot of wasted effort.
I like the rigorously rational view Deutsch takes:
He also has little time for those who artificially handicap themselves:
He makes it clear that what we like to think of as purely abstract computation is in fact deeply grounded in physics, with the physics affecting the kinds of computers we can build, and therefore the kinds of models of computation we devise. Analogue classical physics, for example, would give us a fundamentally different model of computation.
[This focus on what we mean by proof in 'finite proof' contrasts rather nicely with Lavine's focus on what we mean by finite.] Similarly, the discrete 'classical' physics assumed by Turing machines gives us an incorrect model of computation.
Quantum computation is fundamentally different from classical computation. Deutsch states quite clearly that there are quantum programs that cannot be run on a classical Turing machine.
This statement confused me when I first read it: I have listened to quantum computing researchers describe their emulations of quantum computations on classical computers; they just require exponentially increasing resources (either processors, or time). These two statements seem incompatible. But then, a few pages later, I came across a paragraph that clarifies his meaning of 'in principle' in the quote above.
Now, a universal computer must, by definition, be able to emulate any other computer using only a 'similar' amount of resources (where 'similar' has a technical meaning that excludes 'exponentially more'). But a 'Universal' Turing Machine does need exponentially more resources than a quantum computer in order to emulate one, and so is not truly 'Universal' by this definition.
So Deutsch, being firmly grounded in physical law and the universe we are living in, argues that there are certain computations that cannot be performed classically in a tractable time, because there are insufficient resources in our single universe, but that can be performed quantumly, when the resources of exponentially many parallel universes can be brought to bear. There are computations that require exponential resources on classical computers, and so are intractable there, but are tractable on quantum computers. One such computation is calculating the state of a multi-particle quantum system itself.
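To make that resource argument concrete, here is a small sketch of my own (not from the book, and purely illustrative): a classical emulation has to track a vector of 2^n complex amplitudes for n qubits, so its memory, and the work done by even a single gate, doubles with every qubit added.

```python
# A minimal sketch (my own illustration) of why classical emulation of a
# quantum computation needs exponential resources: the state of n qubits is a
# vector of 2**n complex amplitudes, so storage and per-gate work double with
# every qubit added.

import numpy as np

def apply_hadamard(state, target, n_qubits):
    """Apply a Hadamard gate to one qubit of an n-qubit state vector."""
    h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    # Reshape the 2**n vector into n binary axes and act on the target axis.
    state = state.reshape([2] * n_qubits)
    state = np.tensordot(h, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)
    return state.reshape(-1)

for n in range(2, 22, 4):
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                       # start in |00...0>
    state = apply_hadamard(state, 0, n)  # one gate already touches 2**n numbers
    print(f"{n:2d} qubits -> {state.size:>9,} amplitudes "
          f"({state.nbytes / 1e6:.1f} MB)")
```

Twenty or so qubits is still comfortable on a laptop; a few hundred is beyond any conceivable classical memory, which is the sense in which the emulation is "in principle" possible but not tractable.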
Deutsch weaves together his four strands, and comes up with some rather interesting, and sometimes startling, conclusions. In particular, his use of the Turing principle to define a universal virtual reality renderer, and then to infer from it consequences for the laws of physics (in particular, for time travel), is quite ingenious. And one has to admire an author who can conclude that his view
a mere 15 pages after describing Tipler's omega-point argument that it is possible to perform an infinite amount of computation in a universe with a particular configuration, inferring that we are in such a universe, and further inferring
This is an excellent book, with some fascinating ideas. It is particularly nice to find a real practicing physicist who is willing to come out and admit that explanation is what it's all about. Most of the book is solid scientific extrapolation, but his description, in the last chapter, of a universe full of beings who must continue to evolve and grow in knowledge for ever, is particularly exhilarating. (I found it made an optimistic contrast to Chaitin's slightly gloomy view of the place of randomness in increasing knowledge.)
There is one area, however, where I remain somewhat confused, and would have liked more explanation. That is the Many Worlds interpretation itself. It certainly does give an intuitive explanation of how quantum computers work (or rather, where they do all their work), but I am less convinced that it is the inevitable explanation of quantum interference experiments. (Deutsch is a much better physicist than I am, however.) Cramer's Transactional interpretation, in particular, seems to offer equally plausible explanations of these experiments, without being so profligate with universes. I would also have liked a little more detail about what the Many Worlds interpretation is: before reading this I had a vague picture of a universe branching at every decision point; Deutsch talks of reams of pre-existing identical universes subsequently evolving in different ways depending on the choice.
Despite my area of doubt and uncertainty, I highly recommend this book. It is very well written (the dialogue between Deutsch and the crypto-inductivist is particularly fun), brings together some important ideas, avoids the excesses of Penrose and Tipler (whilst exploiting their good parts), and gives a view onto a humane and rational explanation of the world.
In this profound and seminal book, David Deutsch explores the furthest reaches of our current understanding, taking in the Infinity Hotel, supernovae and the nature of optimism, to instill in all of us a wonder at what we have achieved – and the fact that this is only the beginning of humanity’s infinite possibility.
Fourteen years after his previous plea for the return of explanation to science, Deutsch makes the plea again. Last time, the focus was on the science and the effect of taking certain theories seriously. This time, the focus is more on the underlying philosophy, and some of the consequences of that.
The philosophy he champions is that of fallibilism, as opposed to any of the dreadful contortions that the philosophy of science has undergone in its abject failure to respond to the challenges of quantum mechanics. There is an "evolution" of knowledge, with many and varied ideas being conjectured, and the better of these selected through a process of criticism and experimentation. Fallibilism applies more widely than science: it is the process of gaining good knowledge in all areas of life. And hence it is imperative that criticism and experiment be available, used, supported, and encouraged in all areas, to whittle away the bad ideas. A "good" idea is hard to vary without making it an inferior explanation. Deutsch has many strong words to say about the "bad philosophies" (a philosophy that is not merely false, but actively prevents the growth of other knowledge [p308]) that currently infest science and other disciplines. These include logical positivism, behaviourism, and, of course, post-modernism:
He defines knowledge, whether it be found in human brains or DNA, as information physically embodied in a suitable environment that tends to cause itself to remain so [p78].
And this knowledge has to come from somewhere: it evolves through the process of variation and selection. Hence explanations such as "spontaneous generation" for life are bad explanations: they do not explain where the knowledge embodied in the complex living organism comes from. Hence also there is no "inductivism": ideas are first conjectured, then tested, not the other way round. He is making a very deep claim here: knowledge cannot be derived, or predicted, or otherwise deduced; it has to be developed through "guess and check". [The second step is crucial of course: pseudoscience is all guess and no check.]
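As a toy illustration of my own (not Deutsch's, and deliberately simplistic), "guess and check" can be mimicked in a few lines: conjectures are generated blindly, criticised against observations, and only the survivors are varied further. Nothing is derived from the data; the guess always comes first.

```python
# A toy sketch (my own) of variation and selection as "guess and check":
# conjecture blindly, criticise against observations, keep what survives.

import random

# The "world": hypothetical observations secretly generated by y = 3x + 7.
observations = [(x, 3 * x + 7) for x in range(10)]

def criticise(conjecture):
    """Total error of a conjectured (slope, intercept) explanation."""
    a, b = conjecture
    return sum(abs((a * x + b) - y) for x, y in observations)

best = (random.uniform(-10, 10), random.uniform(-10, 10))  # an initial blind guess
for _ in range(5000):
    # Variation: a blind tweak of the current conjecture.
    variant = (best[0] + random.gauss(0, 0.5), best[1] + random.gauss(0, 0.5))
    # Selection: keep the variant only if it survives criticism better.
    if criticise(variant) < criticise(best):
        best = variant

print("surviving conjecture:", best)  # should end up close to slope 3, intercept 7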
And we can always do better. Not only do explanations get better, and problems get solved, but the solution to a problem opens the way to discovering new, different, better problems. So there are always unsolved problems, and we should not be surprised or concerned by this.
The creation of scientific and other knowledge can be faster and more efficient than the acquisition of knowledge through biological evolution. And it has another crucial feature: its progress can leap across the ideas landscape without being constrained by the viability of intermediates.
Deutsch’s requirement for good explanations, and good science, does not lead him to reductionism; in fact, it leads him away. He gives an example of a domino computer that calculates primality, where the "output" domino falls only if the input is not prime. The machine is set running (or falling) to test the primality of 641; the output domino does not fall. Why? The argument is about the quality of the explanation: "because 641 is prime". He contrasts his view with that of Hofstadter (who introduces this example in I Am a Strange Loop)
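For concreteness, here is a trivial sketch of my own of the behaviour the domino machine embodies (the real machine does this with falling tiles rather than arithmetic): the output fires only when the input is composite, so for 641 nothing happens.

```python
# A sketch (mine, not Hofstadter's or Deutsch's construction) of what the
# domino computer computes: the output falls only if the input is not prime.

def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def output_domino_falls(n: int) -> bool:
    # The machine signals compositeness, not primality.
    return not is_prime(n)

print(output_domino_falls(641))  # False: 641 is prime, the output domino stays up
print(output_domino_falls(645))  # True:  645 = 3 * 5 * 43
```

The reductionist can trace every domino (or every loop iteration) and still not have the best explanation of why the output does not fall; "because 641 is prime" is the explanation with reach.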
That this philosophical approach based on "good explanations" does not reduce to "fundamental physical theories" gives it more "reach". This is shown when Deutsch applies the philosophy of good explanations to moral theory:
He also has a stab at aesthetics, to do with the beauty of flowers. Flowers and insects communicate across a wide species gap:
I’m doubtful about this guess, not least because so many flowers are cultivated, and have been evolved through artificial selection to meet human standards of beauty. However, fallibilism has the answer: this guess should be checked. Are there other species that have to communicate across a wide species gap, and have they evolved beauty in order to do so? Are there flowers we do not find beautiful, and what are they communicating with? What about things we find beautiful that have not evolved to communicate across a wide species gap (trees, sunsets, starry night skies, ...)? Kevin Kelly instead conjectures that "It may be that any highly evolved form is beautiful"; that covers the trees and maybe (if Smolin is right) the starry skies: what explains sunsets?
Deutsch’s argument that criticism is the essential path to knowledge is enlivened by a particularly fine Socratic dialogue, between Socrates and Hermes, contrasting the rigid static Sparta and the more flexible, critical Athens.
(There are, of course, analogues of the Spartan and Athenian philosophies in existence today.) The dialogue ends with a delicious little scene where Socrates is recounting what he learned from Hermes to his followers, and a puppyishly enthusiastic Plato keeps misunderstanding and misrecording everything he says. (This fits perfectly with Popper’s excoriation of Plato as one of the enemies of the Open Society.)
This imperative not to destroy the means of correcting mistakes leads to an interesting viewpoint on voting systems. "First Past the Post" (FPTP) leads to a government that does not represent a large proportion of the electorate, and other voting systems are suggested. Deutsch argues in favour of FPTP, not because it results in representative government (it doesn’t necessarily), but because it makes it easier to remove a bad government. This is fallibilism at work: what is key is not getting things right, but removing what is wrong. This is an intriguing viewpoint, although I’m not totally convinced, since FPTP can bias towards extreme governments trying to differentiate themselves from other parties (so no "better conjectures" are ever possible). However, it shows how a different viewpoint about what elections are for can lead to a different conclusion.
One feature he notes in explanations is that some are universal: they have unbounded "reach", much greater than the domain they initially described. Achieving universality is thus the beginning of infinity. Mathematics, DNA, and computation are universal in this sense.
This infinite reach is the underlying result of fallibilism. In fact, the book starts off looking at the consequences of this reach, with a scenario of far-future manipulation of matter in cubic light-years of space that makes some extropians look positively humble. This actually put me off for a while, but the rest of the book settles down after that. (This viewpoint wasn’t described until the end of the previous book.)
Although the book is mainly about the philosophy of fallibilism and good explanations, there are two rather more technical chapters. The first is on infinity; the second (which will come as no surprise to those familiar with Deutsch’s work) is on the many-worlds interpretation of quantum mechanics. And the chapter on infinity is there to support the many-worlds explanation, and also the idea of infinite reach. He starts off the infinity chapter with the familiar story of Hilbert’s Infinity Hotel, with its (countably) infinite number of rooms, and some of the counter-intuitive things that can happen there. (One nice example given is: guests in low room numbers are better off; every room number is unusually close to the beginning, so every guest is better off than almost all other guests!) This story is a preamble to showing the difficulties of defining probabilities over infinite sets, and how you need to define an order in which to traverse an infinite set of universes to make such probabilities well-defined, which is all needed for the many-worlds chapter. It is also needed to unpick arguments about fine-tuning, and that we are "probably" living in a computer simulation. Most of this explanation is in terms of countable infinities, whereas the many-worlds model appears to require an uncountable infinity, from the way it is described. Uncountable infinities are even more counter-intuitive.
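The "unusually close to the beginning" observation can be made concrete with a couple of lines of arithmetic (my own sketch, not from the book): whichever room you occupy, the fraction of rooms before yours among the first N shrinks towards zero as N grows, which is also a hint of why there is no uniform probability over a countably infinite set of rooms (or universes).

```python
# A small sketch (mine) of "every room is unusually close to the beginning":
# whatever room n you occupy, the fraction of the first N rooms lying before
# yours tends to zero as N grows, so almost all guests are behind you.

def fraction_before(n: int, N: int) -> float:
    """Fraction of rooms 1..N with a number smaller than n."""
    return (n - 1) / N

room = 1_000_000
for N in (10**7, 10**9, 10**12):
    print(f"first {N:>14,} rooms: {fraction_before(room, N):.6%} are before room {room:,}")
```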
How does fallibilism work when we can calculate things about physical laws, and hence predict? Deutsch points out that this is conflating the mathematical model and the physical reality it models. And he goes further: even mathematical proofs are embodied in physical devices, and so depend on physical laws.
So Deutsch thinks computer science is a physical science. I suspect that not all of my colleagues would agree with him, although it is a stance that resonates with me. He also has arguments against the laws of physics being generated by some external universal computation, since the argument from "computational universality" is thereby circular:
For the many-worlds chapter, Deutsch uses a concept I haven’t seen used before: the idea of fungible entities. The whole approach is illustrated with a science fictional parable, and gives a clear explanation of the concepts. There is a slight wobble at the crucial point of explaining how two separated worlds can be reunited back into a single one (needed to understand quantum interference): this is done too fast. Everything else is given an intuition, but I felt the intuition here was much weaker. But the whole vision of the multiverse as an infinite set of universes gradually diverging is clearly laid out. Then, at the end, Deutsch brings in knowledge again:
Deutsch emphasises that fallibilism is an essentially optimistic viewpoint, and that many others are inherently pessimistic. Pessimistic philosophies include Utopias, since their perfection does not admit any improvement or progress. Since new knowledge requires a creative step in the "guess and check" approach, it is unknowable before it has been discovered; attempts to "prophesy" through this unknowability can be a route to pessimism, such as prophesies about the "end of physics" (which occurred both at the end of the 19th century, just before quantum mechanics and relativity, and more foolishly at the end of the 20th century, since we know that we don’t know how to unify these two theories). Deutsch brings these arguments together in his Principle of Optimism: All evils are caused by insufficient knowledge [p212]. This is optimistic because it implies that all evils can be overcome by sufficient knowledge. Of course, we have to obtain that knowledge.
This leads to his penultimate chapter, where he talks about "sustainability". All through he has been criticising static societies, because in order to remain static, they must suppress change and criticism. Any such society is unsustainable, because there is always some problem never encountered before [invaders, a new disease or plague, resource collapse, earthquake, climate change (anthropogenic or not), supervolcano eruption, meteor strike, ...], and a static society does not have the resources needed to create a novel solution to the novel problem. So static societies inevitably eventually collapse.
He argues that the only sustainability is indefinite progress in an optimistic, dynamic society. The future is unknowable, because it will have new knowledge that cannot be predicted, and new problems caused by solutions to previous problems, and new potential disasters. Society needs to be structured to cope with such unknowability.
This is a description of "resilience", rather than of the more static "sustainability". The philosophy of fallibilism, that mistakes are inevitable so we need mechanisms to recover from them, gives a very different perspective on how we should order our societies. The final message is simultaneously upbeat and a warning: not only can we continue to gain new knowledge without bound, but we must, in order to survive.