Donella H. Meadows, Dennis L. Meadows, Jorgen Randers, William W. Behrens.
The Limits to Growth.
Earth Island. 1972

Donella H. Meadows.
Thinking in Systems: a primer.
Earthscan. 2008

rating : 2.5 : great stuff
review : 5 April 2011

Thinking in Systems is a concise and crucial book offering insight for problem-solving on scales ranging from the personal to the global. This essential primer brings systems thinking out of the realm of computers and equations and into the tangible world, showing readers how to develop the systems-thinking skills that thought leaders across the globe consider critical for 21st-century life. While readers will learn the conceptual tools and methods of systems thinking, the heart of the book is grander than methodology.

Donella H. Meadows was known as much for nurturing positive outcomes as she was for delving into the science behind global dilemmas. She reminds readers to pay attention to what is important, not just what is quantifiable, to stay humble and to continue to learn.

In a world growing ever more complicated, crowded, and interdependent, Thinking in Systems helps readers avoid confusion and helplessness, the first step toward finding proactive and effective solutions. A vital read for students, professionals and all those concerned with economics, business, sustainability and the environment.

It is strange to see a book dedicated to the author. The reason here is that this is a collection of notes, drafts and talks made by Meadows before she died in 2001, subsequently edited into book form by Diana Wright. As such it is probably less detailed and elaborated than a “real” book would have been, but it does not suffer: it is nicely distilled.

This is an introduction to and overview of Systems Theory: ways of thinking about complex non-linear systems infested with flows, feedback loops and information delays. That is, any real world system of interest.

p2. A system is a set of things—people, cells, molecules, or whatever—interconnected in such a way that they produce their own pattern of behavior over time. The system may be buffeted, constricted, triggered, or driven by outside forces. But the system’s response to these forces is characteristic of itself, and that response is seldom simple in the real world.

[fig 42] Systems are not easy to understand with linear textual descriptions, so Meadows introduces a diagrammatic notation to map out the relationships and feedbacks between the stocks and flows in the system.

p5. … there is a problem in discussing systems only with words. Words and sentences must, by necessity, come only one at a time in linear, logical order. Systems happen all at once. They are connected not just in one direction, but in many directions simultaneously. To discuss them properly, it is necessary somehow to use a language that shares some of the same properties as the phenomena under discussion.
      Pictures work for this language better than words, because you can see all the parts of a picture at once.

Back to definitions, now with a highlighting of the three main types of components of systems:

p11. A system isn’t just any old collection of things. A system is an interconnected set of elements that is coherently organized in a way that achieves something. … a system has three aspects: elements, interconnections, and a function or purpose.

Meadows elaborates this definition with a couple of examples, of a digestive system and a football team, showing how they fit the definition. She then lists a bunch of other systems, including a tree, a forest, the earth, a galaxy. As I find all too often with examples, the easy ones are described in detail, while the harder ones are not, despite not fitting so obviously. For example, Meadows suggests that a galaxy is a system: but what is the “function or purpose” of a galaxy? Having an answer to this question would help explore the difficult corner cases.

Not everything is a system, of course:

p12. Sand scattered on a road by happenstance is not, itself, a system. You can add sand or take away sand and you still have just sand on the road. Arbitrarily add or take away football players, or pieces of your digestive system, and you quickly no longer have the same system.

Okay, let’s look for corner cases again. Football defined with one fewer or one more player is certainly a different game. But you can chop bits out of your digestive system, and still have a digestive system: it has a degree of redundancy. And what about a tree-as-system that loses a branch? or an anthill that loses several hundred ants? Are they still the “same” system? There seems to be a difference between designed and evolved systems, or possibly between “fragile” and “robust” systems. In fact, the very next paragraph goes on to explore this robustness and integrity of systems:

p12. When a living creature dies, it loses its “system-ness.” The multiple interrelations that held it together no longer function, and it dissipates, although its material remains part of a larger food-web system. Some people say that an old city neighborhood where people know each other and communicate regularly is a social system, and that a new apartment block full of strangers is not—not until new relationships arise and a system forms. … there is an integrity or wholeness about a system and an active set of mechanisms to maintain that integrity. Systems can change, adapt, respond to events, seek goals, mend injuries, and attend to their own survival in lifelike ways, although they may contain or consist of nonliving things. Systems can be self-organizing, and often are self-repairing over at least some range of disruptions. They are resilient, and many of them are evolutionary. Out of one system other completely new, never-before-imagined systems can arise.

The anthill seems to fit this vision of system better than the football team. Certainly, it can be the “same” team even after all the players have changed, but there are some changes (like the number of players) that can’t be made while still leaving it a football game (well, modulo designed-in sendings-off). However, this emphasis on interrelationships and adaptability is where we want to be: the interrelationships hold the elements of the system together, and allow it to adapt. How can we “engineer” artificial systems to have these properties? How can we make existing systems adapt in helpful directions? Answering these questions requires us to be able to understand such systems.

This requires understanding the three main aspects, but some are harder to identify than others:

pp12-15. The elements of a system are often the easiest parts to notice, because many of them are visible, tangible things. …
     … It’s easier to learn about a system’s elements than about its interconnections.
     Some interconnections in systems are actual physical flows … Many interconnections are flows of information—signals that go to decision points or action points within a system. These kinds of interconnections are often harder to see …
     If information-based relationships are hard to see, functions or purposes are even harder. A system’s function or purpose is not necessarily … expressed explicitly, except through operation of the system. The best way to deduce the system’s purpose is to watch for a while to see how the system behaves.
     … An important function of almost every system is to ensure its own perpetuation.
     System purposes need not be human purposes and are not necessarily those intended by any single actor within the system. … the purposes of subunits may add up to an overall behaviour that no one wants.

This point that purposes at one level may not support those at another is crucial:

pp15-16. Systems can be nested within systems. Therefore, there can be purposes within purposes. … Any of these sub-purposes could come into conflict with the overall purpose … Keeping sub-purposes and overall system purposes in harmony is an essential feature of successful systems.

Of the three main aspects, some are more crucial to the system-ness than others. Ironically, the ones easiest to identify are the ones that have the least effect on the system.

pp16-17. Changing elements usually has the least effect on the system. … A system generally goes on being itself, changing only slowly if at all, even with complete substitution of its elements—as long as its interconnections and purposes remain intact.
     If the interconnections change, the system may be greatly altered. …
     … A change in purpose changes the system profoundly, even if every element and interconnection remains the same.
     To ask whether elements, interconnections, or purposes are most important in a system is to ask an unsystemic question. All are essential. All interact. All have their roles. But the least obvious part of the system, its function or purpose, is often the most crucial determinant of the system’s behavior. Interconnections are also critically important. Changing relationships usually changes system behavior. The elements, the parts of systems we are most likely to notice, are often (not always) least important in defining the unique characteristics of the system—unless changing an element also results in changing relationships or purpose.

The elements are the stocks (stores of stuff within the system) and flows (that raise and lower stock levels). Interconnections connect flows and stocks in a network of positive and negative feedback loops. The presence of feedback introduces non-linearities, and changes the linear notion of causation.
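
To make the vocabulary concrete, here is a minimal sketch of my own (not from the book, which uses diagrams rather than code): one stock, two flows, and a balancing feedback loop that adjusts the inflow towards a target level.

```python
# A single stock (water in a tub), a constant outflow (the drain), and an
# inflow (the faucet) driven by a balancing feedback loop: the further the
# level is below the target, the wider the faucet is opened.
level = 20.0      # the stock
target = 60.0     # desired level of the stock
drain = 5.0       # outflow per minute

for minute in range(1, 11):
    faucet = drain + 0.5 * (target - level)   # feedback: the flow responds to the stock
    level += faucet - drain                    # flows raise and lower the stock
    print(f"minute {minute:2d}: level = {level:.1f}")
```

Because the loop is balancing (negative feedback), the gap shrinks each step and the level homes in on the target rather than growing without limit.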

p34. The concept of feedback opens up the idea that a system can cause its own behavior.

Systems are dynamic. Given a flow, it takes time to change a stock level. And a flow out from one stock into another doesn’t happen instantaneously. There are delays in the system. The size of these delays is crucial to the overall behaviour of the system. For example, they can cause oscillations (and, somewhat counterintuitively, reducing a delay can sometimes amplify an oscillation: think stock-market panics amplified by faster computerised trading).
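
A rough sketch of the more common direction of that effect (mine, not the book’s): the same kind of balancing loop as above, but the correcting decision is based on a delayed perception of the stock. With no delay the stock settles smoothly; with a delay it overshoots and oscillates, and longer delays give bigger swings.

```python
def simulate(delay_steps, steps=40, target=100.0, outflow=10.0, start=80.0):
    history = [start] * (delay_steps + 1)   # the decision-maker sees the oldest entry
    stock, trace = start, []
    for _ in range(steps):
        perceived = history[0]                                    # out-of-date information
        inflow = max(0.0, outflow + 0.5 * (target - perceived))   # try to close the perceived gap
        stock += inflow - outflow
        history = history[1:] + [stock]                           # the perception delay line
        trace.append(stock)
    return trace

for d in (0, 3, 6):
    trace = simulate(delay_steps=d)
    print(f"delay={d}: peak={max(trace):.0f}, trough={min(trace):.0f}")
```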

p58. some delays can be powerful policy levers. Lengthening or shortening them can produce major changes in the behavior of systems.

Meadows identifies resilience, self-organisation, and (possibly surprisingly if you think all this systems-speak is just woolly hippy thinking) hierarchy as three key properties that can be promoted and managed to help dynamic systems to work well.

Resilience allows a system to maintain itself, even in the face of perturbations.

p76. Resilience arises from a rich structure of many feedback loops that can work in different ways to restore a system even after a large perturbation. A single balancing loop brings a system stock back to its desired state. Resilience is provided by several such loops, operating through different mechanisms, at different time scales, and with redundancy—one kicking in if another one fails.
     A set of feedback loops that can restore or rebuild feedback loops is resilience at a still higher level—meta-resilience, if you will. Even higher meta-meta-resilience comes from feedback loops that can learn, create, design, and evolve ever more complex restorative structures. Systems that can do this are self-organizing.
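
A toy illustration of the redundancy point (my own sketch, not the book’s): a stock held at a target by two independent balancing loops. Knock one out and the other still restores the stock after a disturbance; knock both out and the disturbance just persists.

```python
def run(loop_gains, steps=30, target=50.0, stock=50.0):
    for t in range(steps):
        if t == 5:
            stock -= 30.0                          # a large one-off perturbation
        for gain in loop_gains:                    # each balancing loop closes part of the gap
            stock += gain * (target - stock)
    return stock

print(f"{run(loop_gains=[0.3, 0.3]):.1f}")   # both loops working: recovers to ~50
print(f"{run(loop_gains=[0.3]):.1f}")        # one loop has failed: slower, but still recovers
print(f"{run(loop_gains=[]):.1f}")           # no balancing loops left: stuck at 20
```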

With self-organisation, the system is changing, adapting, growing, in order to maintain itself. This “higher-level” resilience might seem desirable, but it comes at a cost, mainly of uncontrollability and unpredictability.

p79-80. The most marvelous characteristic of some complex systems is their ability to learn, diversify, complexify, evolve. …
     This capacity for a system to make its own structure more complex is called self-organization. …
     Like resilience, self-organization is often sacrificed for purposes of short-term productivity and stability. …
     Self-organization produces heterogeneity and unpredictability. It is likely to come up with whole new structures, whole new ways of doing things. It requires freedom and experimentation, and a certain amount of disorder.

Hierarchy is a way of structuring a system, building a bigger system out of smaller sub-systems. The hierarchy described here isn’t a rigid, military-style, only up-and-down communication structure. There are still sideways communications, but they are weaker.

p83-84. Hierarchies are brilliant systems inventions, not only because they give a system stability and resilience, but also because they reduce the amount of information that any part of the system has to keep track of.
     In hierarchical systems relationships within each subsystem are denser and stronger than relationships between subsystems. Everything is still connected to everything else, but not equally strongly. … If these differential information links within and between each level of the hierarchy are designed right, feedback delays are minimized. No level is overwhelmed with information. The system works with efficiency and resilience.
     Hierarchical systems are partially decomposable. They can be taken apart and the subsystems with their especially dense information links can function, at least partially, as systems in their own right. When hierarchies break down, they usually split along their subsystem boundaries. Much can be learned by taking apart systems at different hierarchical levels … and studying them separately. Hence, systems thinkers would say, the reductionist dissection of regular science teaches us a lot. However, one should not lose sight of the important relationships that bind each subsystem to the others and to the higher levels of the hierarchy, or one will be in for surprises.

Given that systems tend to perpetuate themselves, it is maybe not surprising that hierarchies tend to “forget” their larger purpose.

p84-85. Hierarchies evolve from the lowest level up—from the pieces to the whole … The original purpose of a hierarchy is always to help its originating subsystems to do their jobs better. This is something, unfortunately, that both the higher and the lower levels of a greatly articulated hierarchy easily can forget.
     To be a highly functional system, hierarchy must balance the welfare, freedoms, and responsibilities of the subsystems and total system—there must be enough central control to achieve coordination toward the large-system goal, and enough autonomy to keep all subsystems flourishing, functioning, and self-organizing.

Systems are pervasive, and we need to develop a systems worldview just to navigate the complexities of the modern-day world. Systems, with their interconnected feedback loops, are not simple, intuitive, or readily understandable.

p87. our mental models fail to take into account the complications of the real world … You can’t navigate well in an interconnected, feedback-dominated world unless you take your eyes off short-term events and look for long-term behavior and structure; unless you are aware of false boundaries and bounded rationality; unless you take into account limiting factors, nonlinearities and delays. You are likely to mistreat, misdesign, or misread systems if you don’t respect their properties of resilience, self-organization, and hierarchy.

So how do we go about understanding systems? First, we need to move up a level, from looking at events (isolated static points in time) to behaviours (dynamic sequences of linked events).

p88. Systems fool us by presenting themselves—or we fool ourselves by seeing the world—as a series of events. … Events are the outputs, moment by moment, from the black box of the system.
     … Like the tip of an iceberg rising above the water, events are the most visible aspect of a larger complex—but not always the most important.
     We are less likely to be surprised if we can see how events accumulate into dynamic patterns of behavior. …
     The behavior of a system is its performance over time—its growth, stagnation, decline, oscillation, randomness, or evolution. If the news did a better job of putting events into historical context, we would have better behavior-level understanding, which is deeper than event-level understanding.

This understanding of dynamics is crucial. A picture of the elements and interconnections is just a static diagram. We need to understand how the stocks change over time (up, down, oscillating), and which parts of the system dominate that behaviour and which are secondary.

p89. Systems thinking goes back and forth constantly between structure (diagrams of stocks, flows, and feedback) and behavior (time graphs).

Understanding both structure and behaviour is essential. Behaviour alone is not enough.

p90. [econometric] behavior-based models are more useful than event-based ones, but they still have fundamental problems. First, they typically overemphasize system flows and underemphasize stocks. Economists follow the behavior of flows, because that’s where the interesting variations and most rapid changes in systems show up. … But without seeing how stocks affect their related flows through feedback processes, one cannot understand the dynamics of economic systems or the reasons for their behavior.
     Second, and more seriously, in trying to find statistical links that relate flows to each other, econometricians are searching for something that does not exist. There’s no reason to expect any flow to bear a stable relationship to any other flow. Flows go up and down, on and off, in all sorts of combinations, in response to stocks, not to other flows.

Nonlinearities skew our intuition. There’s the obvious case: just because a little of what you fancy does you good doesn’t mean that a lot of what you fancy does you better. But non-linearity embedded in a complicated network of dynamic feedback loops goes one better.

p92. Nonlinearities are important not only because they confound our expectations about the relationship between action and response. They are even more important because they change the relative strengths of feedback loops. They can flip a system from one mode of behavior to another.
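
One classic way to see such a flip, sketched here as my own toy example (not taken from the book): a renewable stock with nonlinear regrowth and a constant harvest. Below a threshold harvest the stock settles to a steady level; nudge the harvest a little higher and the regrowth loop can no longer keep up, so the stock collapses completely.

```python
def simulate(harvest, years=100, stock=50.0, r=0.2, capacity=100.0):
    for _ in range(years):
        regrowth = r * stock * (1 - stock / capacity)   # nonlinear (logistic) regrowth
        stock = max(0.0, stock + regrowth - harvest)    # harvest drains the stock
    return stock

for harvest in (4.0, 6.0):   # just below vs just above the maximum regrowth (r * capacity / 4 = 5)
    print(f"harvest={harvest}: stock after 100 years = {simulate(harvest):.1f}")
```

The small change in the harvest parameter does not just shift the outcome a little; it flips the system from one mode of behaviour (settling) to another (collapse).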

Again, the route to understanding is raising the description and focus up a level.

p102. There are layers of limits around every [growth process]. Insight comes not only from recognizing which factor is limiting, but from seeing that growth itself depletes or enhances limits and therefore changes what is limiting. … Whenever one factor ceases to be limiting, growth occurs, and the growth itself changes the relative scarcity of factors until another becomes limiting. To shift attention from the abundant factors to the next potential limiting factor is to gain real understanding of, and control over, the growth process.
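
A toy sketch of shifting limits (my own invention; the factor names and numbers are made up): growth each season is capped by whichever factor is currently scarcest, growth draws all the factors down, and the “obvious” limit keeps being relieved. The binding constraint nevertheless moves on to the next factor.

```python
factors = {"nitrogen": 20.0, "water": 60.0, "light": 150.0}
biomass = 10.0

for season in range(1, 7):
    limit = min(factors, key=factors.get)                    # the scarcest factor caps growth
    growth = min(0.5 * biomass, factors[limit])              # grow, but never past the limit
    biomass += growth
    for name in factors:
        factors[name] = max(0.0, factors[name] - growth)     # growth depletes every factor
    factors["nitrogen"] += 15.0                              # policy: keep relieving the obvious limit
    print(f"season {season}: limited by {limit}, biomass = {biomass:.1f}")
```

Even though nitrogen keeps being added, water takes over as the limit and growth stalls: the useful shift of attention is to the next potential limiting factor, not the one that limited last time.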

If we want to modify, engineer, control, or just guide such systems, we need to have good measures of what we want from them. Get the wrong measure, and we’ll get the wrong result.

p140. Although there is every reason to want a thriving economy, there is no particular reason to want the GNP to go up. …
     If you define the goal of a society as GNP, that society will do its best to produce GNP. It will not produce welfare, equity, justice, or efficiency unless you define a goal and regularly measure and report the state of welfare, equity, justice, or efficiency. The world would be a different place if instead of competing to have the highest per capita GNP, nations competed to have the highest per capita stocks of wealth with the lowest throughput, or the lowest infant mortality, or the greatest political freedom, or the cleanest environment, or the smallest gap between the rich and the poor.
     … In seeking the wrong goal, the system obediently follows the rule and produces its specified result—which is not necessarily what anyone actually wants.

Despite their resilience, these complex systems can be affected. There are “leverage points”, where a (relatively) small change can have a (relatively) big effect. But they are hard to find, and hard to understand.

p146. Leverage points frequently are not intuitive. Or if they are, we too often use them backward, systematically worsening whatever problems we are trying to solve.
     I have come up with no quick or easy formulas for finding leverage points in complex and dynamic systems. … And I know from bitter experience that, because they are so counterintuitive, when I do discover a system’s leverage points, hardly anybody will believe me.

Because things are so non-intuitive, we have to be very careful in how we go about understanding systems. But it can be worthwhile: the process can lead to a change in understanding and action, a paradigm shift in the way we think about certain systems.

p164. we change paradigms by building a model of the system, which takes us outside the system and forces us to see it whole.

So how do we go about building a model? Meadows is very strong on valuing facts above theory (not surprising if you are after a paradigm shift in understanding, as this will require a new theory).

p170-2. Before you disturb the system in any way, watch how it behaves. … study its beat. … watch it work. Learn its history. Ask people who’ve been around a long time to tell you what has happened. … make a time graph of actual data from the system …
     … Starting with the behavior of the system forces you to focus on facts, not theories. It keeps you from falling too quickly into your own beliefs or misconceptions, or those of others.
     … Watching what really happens, instead of listening to people’s theories of what happens, can explode many careless causal hypotheses. …
     Starting with the behavior of the system directs one’s thoughts to dynamic, not static, analysis… looking to the strengths of the system, one can ask “What’s working well here?” Starting with the history of several variables plotted together begins to suggest not only what elements are in the system, but how they might be interconnected.
     … starting with history discourages the common and distracting tendency we all have to define a problem not by the system’s actual behavior, but by the lack of our favorite solution. … Listen to any discussion … and watch people leap to solutions, usually solutions in “predict, control, or impose your will” mode, without having paid any attention to what the system is doing and why it’s doing it.

However, a focus on facts does not mean an inhumane neglect of unquantifiable qualities.

p176. Pretending that something doesn’t exist if it’s hard to quantify leads to faulty models. … Human beings have been endowed not only with the ability to count, but also with the ability to assess quality. Be a quality detector.

But the main quality in a modeller is the willingness to admit ignorance, and to learn.

p180. The thing to do, when you don’t know, is not to bluff and not to freeze, but to learn.

p183. In spite of what you majored in, or what the textbooks say, or what you think you’re an expert at, follow a system wherever it leads. It will be sure to lead across traditional disciplinary lines. …
     Seeing systems whole requires more than being “interdisciplinary,” if that word means, as it usually does, putting together people from different disciplines and letting them talk past each other. Interdisciplinary communication works only if there is a real problem to be solved, and if the representatives from the various disciplines are more committed to solving the problem than to being academically correct. They will have to go into learning mode. They will have to admit ignorance and be willing to be taught, by each other and by the system.
     It can be done. It’s very exciting when it happens.

(I have been involved in interdisciplinary work -- the learning kind, not the talking past kind. No-one is an expert in all areas of the work: everyone has something to learn from someone else. This tends to promote a degree of humility: no domineering “experts”, no monster egos, just the excitement of building a shared understanding. It’s not all queasily humble, though: continually admitting ignorance takes a certain degree of confidence. It is very exciting, and very refreshing.)

This is a great book, and I wanted to quote a lot more of it here. Go and read it. What I need now is the second volume, that tells me in more detail how to build these models, and what to do with them next. Like all good books, it has an interesting bibliography, which I’m typing into Amazon. Like all good books, reading it worsens my backlog, by turning “unknown unreads” into “known unreads”.