Trust keeps the wheels of civilised society turning smoothly, and other authors have decried its gradual erosion in today's world. But is it even rational to trust a rational being to keep a promise when it is not to their benefit to do so? Does the Enlightenment ideal of moving to more rational behaviour mean the end of trust, the end of civilised behaviour? Hollis explores this dilemma.
He illustrates this dilemma, and various "solutions", in terms of a recurring puzzle, dubbed the "Enlightenment Trail". Two friends, Adam and Eve, go for a hike along the trail, which has various possible finishing points, assigned game-theoretic numerical values according to how much each would like to stop there (incorporating an amount for pleasing the other, if necessary). The rankings, as (Adam, Eve), are: A=(1,0), B=(0,2), C=(3,1), D=(2,4), E=(5,3), F=(4,5); the two take it in turns to decide whether to stop or to walk on. Where should they stop? Unfortunately, if they are truly "rational", they will stop at A. The reasoning is as follows. If they get as far as E, Adam will stop there, because he likes E better than F. But Eve realises this, so she will stop at D, which she likes better than E. But Adam realises this, so he will stop at C, which he likes better than D. But Eve realises this, so... Working back along this "centipede" of preferences, they get no further than A, despite the fact that they could both do (much) better. The argument goes that they can't even agree, or promise, at the beginning, to go to the end, because, no matter how far they are along the path, the argument above holds, and it is "rational" to stop there, rather than keep a promise that will result in doing less well.
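For concreteness, here is a minimal backward-induction sketch of this unravelling, in Python (assuming, as the reasoning implies, that Adam decides at A, C and E, Eve at B and D, and that the trail simply ends at F):

    # Stops A..F with (Adam, Eve) payoffs as given above.
    stops = [("A", (1, 0)), ("B", (0, 2)), ("C", (3, 1)),
             ("D", (2, 4)), ("E", (5, 3)), ("F", (4, 5))]

    def backward_induction(stops):
        # If they get all the way to the last stop, the hike ends there.
        outcome = stops[-1]
        # Work backwards: the mover at each earlier stop compares
        # stopping now with the already-computed result of walking on.
        for i in range(len(stops) - 2, -1, -1):
            name, payoff = stops[i]
            mover = i % 2                    # 0 = Adam, 1 = Eve
            if payoff[mover] >= outcome[1][mover]:
                outcome = (name, payoff)     # stopping is at least as good
        return outcome

    print(backward_induction(stops))         # ('A', (1, 0))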
The resulting discussion is interesting, and ranges over many Enlightenment philosophers' positions, but without, I think, coming to a satisfactory conclusion. And I believe that is because the problem is set up in too artificial a way to begin with.
Hollis touches on "iterated" games, where the maximisation of value occurs over several iterations of the game (Adam and Eve take many hikes, so it is worth keeping a promise this time, because of its value in a later game). However, he does not take the idea very seriously, and says only a little about Axelrod's The Evolution of Cooperation. Yet cooperation really does evolve in similarly simple settings, successfully maximising value, so maybe there is something wrong with the one-shot analyses?
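To see cooperation evolve, here is a toy tournament in the spirit of Axelrod's iterated Prisoner's Dilemma (the payoffs are the standard ones, and the strategies merely illustrative; none of this is from Hollis's book). The forgiving-but-retaliatory "tit for tat" strategy loses almost nothing to a relentless defector, and does far better against itself:

    # Payoffs (mine, theirs) for each pair of moves: C = cooperate, D = defect.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def always_defect(mine, theirs):
        return "D"

    def tit_for_tat(mine, theirs):
        # Cooperate first, then copy the opponent's previous move.
        return theirs[-1] if theirs else "C"

    def play(s1, s2, rounds=200):
        h1, h2, total1, total2 = [], [], 0, 0
        for _ in range(rounds):
            m1, m2 = s1(h1, h2), s2(h2, h1)
            p1, p2 = PAYOFF[(m1, m2)]
            h1.append(m1); h2.append(m2)
            total1 += p1; total2 += p2
        return total1, total2

    print(play(tit_for_tat, tit_for_tat))      # (600, 600)
    print(play(always_defect, always_defect))  # (200, 200)
    print(play(always_defect, tit_for_tat))    # (204, 199)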
Even more than this, Hollis neglects the qualitative difference between infinite and finite "centipedes". He points out that, no matter how long the centipede, if it has the structure above, it is always "rational" to go no further than the first stop. He does mention that infinite centipedes are different, because there is no end point from which to start the backward chaining, but again gives the notion rather short shrift. Yet such infinite (or rather, unbounded) cases are much more typical than the artificially simple bounded games he analyses. The game may not actually be infinite, but we don't know when it will end, whether there will be another round, or whether we will get a further chance to improve our choice (and when we do know it is ending, our behaviour may well change: witness various deathbed ploys).
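The standard way to model an unknown end (again a sketch, not Hollis's own analysis) is a continuation probability: after each round, the game carries on with probability delta, so there is no known last round from which to start the backward chaining. With the same Prisoner's Dilemma payoffs as above, and assuming defection is met with permanent retaliation, cooperating indefinitely beats grabbing a one-off gain whenever delta is large enough:

    # T = temptation, R = mutual reward, P = mutual punishment (as above).
    T, R, P = 5, 3, 1

    def cooperate_forever(delta):
        return R / (1 - delta)               # R now, R + R*delta + ... in expectation

    def defect_once(delta):
        return T + delta * P / (1 - delta)   # T now, then punished with P thereafter

    for delta in (0.3, 0.5, 0.7, 0.9):
        print(delta, cooperate_forever(delta) >= defect_once(delta))
    # False, True, True, True: cooperation pays once
    # delta >= (T - R) / (T - P) = 0.5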
And in reality, those all-important game-theoretic values aren't fixed at the start (and the longer the centipede, the less fixed they are): the world is complicated, we change as we hike the trail, and the trail itself changes. James P. Carse, in Finite and Infinite Games, explores some of these points. And Simple Heuristics That Make Us Smart examines the fact that even if we are rational, we aren't unboundedly rational: we have only limited predictive abilities, and so can't in practice perform the kind of calculations that some of Hollis's arguments require.
So I believe Hollis has set up a partial straw man: his definition of "rational" is simultaneously too simple (the world is more complicated and open than the simple bounded game-theoretic approach allows) and too strict (people are not unboundedly rational). Keeping promises, and trusting, may well still be rational in this more complex world, and the Enlightenment vision may be achievable after all. (Yet even game theory can end up with apparently irrational behaviour: Steven J. Brams applies it to certain biblical events, showing why it may be rational to appear irrational when playing against a rational opponent!)