The Liar paradox is an old chestnut: consider “This sentence is false”. The naive analysis runs thus: assume it is true; then it must be as it says, that is, false: a contradiction. So assume instead that it is false; then it is exactly as it says, that is, true: also a contradiction! How to solve this? The classical mathematical approach is to ban such self-referential sentences. But that throws the baby out with the bathwater: there’s nothing paradoxical about “This sentence has five words”, or even about the merely false “This sentence has one hundred words”.
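(A toy rendering of the naive analysis, mine rather than the book’s: the sentence asserts its own falsity, so a consistent truth value v would have to satisfy v == (not v), and neither candidate does.)

```python
# Toy sketch (not from the book): the Liar asserts its own falsity,
# so a consistent classical truth value v must satisfy v == (not v).
for v in (True, False):
    print(f"assume it is {v}: consistent? {v == (not v)}")
# Both lines print False: no classical truth value fits.
```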
The Liar is about an approach to solving the paradox, rather than simply banning it. The usual way to analyse such cases is to build a mathematical model in set theory, then use it to define and analyse the truth values of the constructs modelled this way. In the case of the Liar paradox, this involves modelling self-referential propositions. But classical (ZF) set theory is formulated to avoid self-referential sets: the Axiom of Foundation is there to prevent them. Sets have members; these members can themselves be sets, with members of their own. What the Axiom of Foundation says is that if you follow this membership relation downwards, you always eventually reach the “bottom”: atomic members that are not sets, or the empty set, which has no members at all. This means there are no infinitely descending chains of membership (it isn’t “turtles all the way down”), and there are no circular membership relations: a set cannot be a member of itself. And that ban is precisely what makes self-reference hard to model.
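(For reference, and not quoted from the book: the standard first-order statement of Foundation says every non-empty set has a member disjoint from itself. A self-membered set x would violate this via {x}, and so would any infinite descending membership chain.)

```latex
\forall x \,\bigl( x \neq \varnothing \;\rightarrow\; \exists y \,( y \in x \wedge y \cap x = \varnothing ) \bigr)
```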
So, when your mathematics isn’t up to the job – use different mathematics! In this case, the authors use Aczel’s brand of nonwellfounded set theory as a basis for building their models (despite what it might sound like from its name, it is a perfectly well-defined and consistent mathematical theory). In chapter 3, the authors summarise this theory in enough detail to understand how it is used in their subsequent modelling of the paradox. The approach has a visual representation, modelling sets as graphs (of the membership relation): wellfounded sets must have acyclic graphs; nonwellfounded sets can have cycles in their graphs. This gives a nice intuition for what’s going on, and the explanations have a good mix of English text and mathematical rigour. It can sometimes be a bit confusing, however. For example:
Here “Aczel’s conception of a set” refers to these nonwellfounded sets (or hypersets), now defined in terms of graphs. A graph is “a set of nodes”. What kind of set are these nodes? Wellfounded? Nonwellfounded (relying on a circular definition)? Does it make a difference?
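To make the graph picture concrete, here is a small sketch in my own invented notation (not the book’s): a hyperset as a pointed graph, a node’s children being its members, with a naive bisimulation check to decide when two graphs depict the same set. For what it’s worth, the nodes in this sketch are opaque labels; only the arrows between them matter.

```python
# Sketch (my notation, not the book's): hypersets as pointed graphs.
# A graph maps each set-node to the list of its members; anything
# with no entry is an atom.

def bisimilar(g1, n1, g2, n2, assumed=frozenset()):
    """Two nodes depict the same hyperset iff their members can be
    matched up bisimilarly; cycles are handled coinductively, by
    assuming pairs already under consideration are equal."""
    if (n1, n2) in assumed:
        return True
    if n1 not in g1 or n2 not in g2:            # at least one atom
        return n1 not in g1 and n2 not in g2 and n1 == n2
    assumed = assumed | {(n1, n2)}
    return (all(any(bisimilar(g1, c1, g2, c2, assumed) for c2 in g2[n2])
                for c1 in g1[n1])
        and all(any(bisimilar(g1, c1, g2, c2, assumed) for c1 in g1[n1])
                for c2 in g2[n2]))

# Omega = {Omega}: one node with a self-loop...
omega = {"O": ["O"]}
# ...and a two-node picture of the very same set: P = {Q}, Q = {P}.
unfolded = {"P": ["Q"], "Q": ["P"]}
print(bisimilar(omega, "O", unfolded, "P"))    # True: same hyperset
```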
(A very minor problem with the exposition is due to the example atomic members chosen. On p40 we get the equation a = {Max, a}. I had a moment’s confusion, trying to work out what was being maximised, before I remembered that the atoms in the example language include the authors’ children’s names Max and Claire. Moral of this tale: if you are a logician, do not name your children after mathematical functions!)
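(Incidentally, and as a toy of my own: Python’s mutable lists can be tied into exactly this shape, and its printer even has a notation for the knot.)

```python
# a = {Max, a}, rendered as a Python list that contains itself:
a = ["Max"]
a.append(a)            # now a == ["Max", a]
print(a)               # ['Max', [...]]  -- the [...] marks the cycle
print(a[1][1][1][0])   # Max: you can chase the loop as far as you like
```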
Now, these new hypersets look strange. In fact, they initially look so counter-intuitive that they must be “wrong”. But that’s because we have been brought up on wellfounded set theory, with its assumptions now bedded into our intuition. We have “got used to it”. But we can get used to nonwellfounded sets, too:
So, once they have a mathematical toolkit up to the job, the authors go ahead with a traditional approach: use this set theory to model the various self-referential sentences, statements and propositions; give this a semantics or two; analyse the resulting systems. They analyse the system in two different ways, which they call “Russellian” and “Austinian”. (They emphasise that these are not actually the approaches that Russell and Austin advocated, but that they are in the spirit of their approaches.) The analyses give different answers. (What, you wanted the answer? But why are you surprised that the answer depends on how you formulate the question?)
Summarising brutally, and inevitably misleadingly, the analyses run as follows.
The Russellian analysis rests on a subtle distinction between denial and negation. Negation is a “positive” statement: it states that there are facts of the world that make proposition p false. Denial is a “negative” statement: it denies that there are facts that make p true. And these are not (in this formulation) the same thing (p79: the fact of it being false is not a fact of the world). The analysis shows that the naive formulation conflates these two.
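In symbols of my own devising (a caricature, not the book’s notation): writing W for the facts of the world, negation asserts that a falsifying fact exists, while denial asserts that no verifying fact exists; the two coincide only if W decides every proposition.

```latex
\text{negation of } p:\quad \exists f \in W \;(f \models \neg p)
\qquad\qquad
\text{denial of } p:\quad \neg\, \exists f \in W \;(f \models p)
```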
The Austinian analysis rests on the view that propositions are made in the context of situations, and can have different truth values in different situations. The analysis shows that the naive formulation confuses different situations. It takes the form of a “diagonalisation” argument: assume you know all the facts of the world, then construct a new fact that is true, but is not in your original set.
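Schematically (my own compression, not the book’s notation): let s be a situation supposed to contain all the facts, and let f_s be the Liar proposition about s. The analysis makes f_s false; but then the fact of its falsehood is a perfectly good fact that cannot itself be in s, so s did not contain all the facts after all.

```latex
f_s = \text{``} f_s \text{ is not true in } s \text{''}
\;\Longrightarrow\;
f_s \text{ is false}, \;\text{ yet } \bigl[ f_s \text{ is false} \bigr] \notin s
```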
This is all fascinating and informative. As usual, naive analysis breaks down at extreme or “edge” cases. What we have here is a thorough analysis of these edges that does not shirk the problem by simply banning it, but takes it seriously, and applies a mathematical approach that fits the problem, thereby shedding light, and uncovering assumptions.
Interesting as all this is, I didn’t initially set out to read this to discover more about the Liar paradox: I read it to find out more about nonwellfounded set theory. That is because a delegate’s throwaway remark at a conference made me wonder if it might be useful for thinking about emergent properties. I hadn’t previously come across it (despite the fact that ideas like bisimulation, and some hairier branches of computer science, are apparently based on it, or equivalent to it). So I could have stopped reading after chapter 3, but it was too interesting! Anyway, I was heartened to see the pictorial approach, and even more heartened to see that much of my existing intuition would probably be fine:
But that comment about induction is interesting. I’m glad I carried on reading, because in chapter 4 we get:
The hairs on the back of my neck rose when I read that. It seems to imply that the whole basis of reductionism is wellfoundedness. (It might be true that the axiom of foundation has played almost no role in mathematics outside of set theory itself, but set theory has had an enormous impact on the way scientists model the world.) In this view, we start at the bottom, with the “atoms”, and construct things inductively. Everything not required (constructible this way) is forbidden. Because wellfoundedness is so strongly entrenched, this seems like the natural, maybe the only, way to do it. But nonwellfoundedness isn’t like this. There are nonwellfounded things that just cannot be built this way: there are things with no “bottom” or “beginning” (it can be turtles all the way down, or all the way back in time, in this model!); there are things that are intrinsically circular, self-referential, chicken and egg, strange loops even. Now everything not forbidden is compulsory; now there is so much more room to have the kind of things we need. But, what would it mean to have, even to “construct”, material things with such nonwellfounded structure?
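A programmer’s analogue (mine, and only an analogue): inductive, wellfounded data must be built leaves-first, each whole from parts that already exist; a cyclic structure can only be made by laying down placeholders and then tying the knot, so there is no bottom-up order of construction at all.

```python
# Wellfounded construction: parts strictly before wholes.
leaf = ()                    # the "atom" at the bottom
pair = (leaf, leaf)          # built only from things already built
tower = (pair, leaf)         # and so on, strictly upwards

# Nonwellfounded construction: no bottom to start from.
chicken, egg = {}, {}        # placeholders first...
chicken["came_from"] = egg
egg["laid_by"] = chicken     # ...then tie the knot by mutation
assert chicken["came_from"]["laid_by"] is chicken   # a genuine cycle
```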
So, I’m off to read more about this, to see if it might be a better way to model emergent and self-organising systems than classical wellfounded set theory.