I’m breaking one of my own rules and reviewing a book I didn’t finish, though not because it is "unfinishable": I think it’s good, but after reading half of it, I feel I’ve got enough from it.
Economic models assume that agents are "rational", that they make the best choice among those available. More sophisticated models assume "bounded rationality" -- that the agents have only a fixed amount of computational power available. The authors point out that the usual definition of bounded rationality actually requires more computational power than full rationality, because the agent must also compute when to stop computing!
The thesis argued here is that agents do not use any of these forms of "rational" decision making: we humans instead make our decisions based on "fast and frugal heuristics", and moreover these heuristics are very effective indeed, even ignoring the benefits of them being so lightweight. The reason for the need for such heuristics is simple: we don’t have much time or brainpower to dedicate to careful consideration and weighing of all the alternatives; survival in the wild often requires snap decisions. The reason for their surprising effectiveness is more subtle: the heuristics are adapted to the structure of information in the environment where they are used.
One example of the adaptation to the environment is the "larger town" test. Given the names of two towns in Germany (for an American test sample), say which is larger. Use the heuristic that, if you’ve heard of one town and not the other, answer the one you’ve heard of. It is quite likely to be the larger, simply because you are more likely to have heard of the larger -- and so the heuristic is adapted to the structure of the information in the relevant environment. And it is best not to know too much, because if you have heard of nearly all the towns, you can’t use the heuristic!
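As an illustration (my sketch, not from the book), here is the recognition heuristic in a few lines of Python, with a hypothetical set standing in for the towns a test subject happens to have heard of:

```python
# Sketch of the recognition heuristic. The "recognised" set is a
# hypothetical stand-in for a test subject's memory.
recognised = {"Munich", "Hamburg", "Cologne"}

def guess_larger(town_a, town_b):
    """Guess which town is larger: if exactly one town is recognised,
    guess that one; otherwise the heuristic cannot discriminate."""
    a_known = town_a in recognised
    b_known = town_b in recognised
    if a_known != b_known:
        return town_a if a_known else town_b
    return None  # both or neither recognised: fall back on other cues

print(guess_larger("Munich", "Herne"))    # -> Munich (only Munich recognised)
print(guess_larger("Munich", "Hamburg"))  # -> None (heuristic not applicable)
```

Note that a subject who recognises every town gets None every time: knowing too much disables the heuristic.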
The authors don’t merely claim this to be the case: they have performed careful experiments to demonstrate that it works. Not only that: this heuristic, and other simple ones, can be more robust than "cleverer" choice methods, because by avoiding over-fitting sparse data they avoid seeing patterns where none actually exist.
Various other heuristics are discussed, including techniques of satisficing: accepting the first alternative that meets an aspiration level, rather than searching for the best of all alternatives.
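A minimal sketch of such a satisficing search, with hypothetical option scores and an assumed aspiration level:

```python
# Sketch of satisficing: examine options in the order they arrive,
# and stop at the first one that meets the aspiration level.
def satisfice(options, aspiration):
    for option, score in options:
        if score >= aspiration:
            return option  # good enough: stop searching here
    return None  # nothing met the aspiration level

flats = [("Flat A", 5.1), ("Flat B", 8.2), ("Flat C", 9.9)]
print(satisfice(flats, aspiration=8.0))  # -> "Flat B", not the optimal "Flat C"
```

The frugality is the point: the search stops early, and never needs to see, let alone compare, all the alternatives.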
All in all this is fascinating: careful experiments showing that amazingly simple heuristics work very well -- because they exploit structure in the information environment. So this gives both a collection of heuristics to exploit, and indications of where they will and will not be effective.
[22 September 2004: many thanks to John Beattie, who emailed me a useful reference for a view on when such simple heuristics are insufficient: Ian McCammon’s "Evidence of Heuristic Traps in Recreational Avalanche Accidents", as referenced in Risks 23.48.]
There are two main points made in this book.
Firstly, you need to use different reasoning processes for situations of risk, where you know the possible outcomes and their odds, and situations of uncertainty, where you don’t; and you need to be able to distinguish these cases. Under uncertainty, rules of thumb are usually better than trying to calculate unknown odds. Gigerenzer gives some examples. I particularly liked his discussion of the real Monty Hall problem, rather than the “tidied up” version used for probability calculations. The real situation is much messier, and, as I have pointed out, you need to know the full rules beforehand: the stated solution works only if the host doesn’t cheat.
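To make the contrast concrete, here is a small simulation (my sketch, not Gigerenzer’s) of the tidied-up version, under the standard assumptions that the host always opens a losing door and always offers the switch; relax those assumptions and the 2/3 answer no longer follows:

```python
# Simulation of the tidied-up Monty Hall problem, assuming the host
# always opens a losing door and always offers the switch.
import random

def play(switch):
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that hides a goat and is not the pick.
    opened = random.choice([d for d in doors if d not in (pick, car)])
    if switch:
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == car

trials = 100_000
print(sum(play(True) for _ in range(trials)) / trials)   # ~0.67: switching wins
print(sum(play(False) for _ in range(trials)) / trials)  # ~0.33: sticking loses
```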
Secondly, even when the odds and risks are known, most statistics are so badly presented, possibly to make better headlines, that even the experts don’t understand what they say; you need to look at the real underlying rates. “Behaviour X doubles the chance of cancer Y” may not be a problem if the chance of cancer Y is extremely small in the first place: doubling a 1 in 100,000 risk merely raises it to 2 in 100,000. Gigerenzer gives examples of a way to present rates rather than conditional probabilities that makes it much easier to see and understand the true risks.

There are many good cases discussed in here, with a large chunk of the book given over to healthcare. For example, there is a lot about medical screening, false positives, and increased “survival” rates being due entirely to earlier diagnosis, and nothing to do with living longer in total if diagnosed earlier (“lead time bias”). Survival rates are different from mortality rates.
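As an illustration of the natural-frequencies style of presentation (my sketch, with made-up screening numbers, not figures from the book):

```python
# Natural-frequencies view of a screening test, with hypothetical
# numbers: 1% base rate, 90% sensitivity, 9% false-positive rate.
population = 1000
with_disease = population * 0.01          # 10 people have the disease
true_positives = with_disease * 0.90      # 9 of them test positive
false_positives = (population - with_disease) * 0.09  # ~89 healthy positives

ppv = true_positives / (true_positives + false_positives)
print(f"Out of {population} people: {true_positives:.0f} true positives, "
      f"{false_positives:.0f} false positives")
print(f"Chance of disease given a positive test: {ppv:.0%}")  # ~9%, not 90%
```

Stated as a conditional probability (“the test is 90% accurate”) a positive result sounds alarming; stated as natural frequencies (9 true positives among roughly 98 positives) it is immediately clear that most positives are false.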
Some of the discussions do feel a little disjointed. In particular, there is early emphasis on how most real-world issues deal with uncertainty (rules of thumb) rather than risk (calculating odds), yet much of the book is about increasing statistical literacy. No matter; there is much good material in here.