Systems Thinking

Guidelines for thinking about networks of interaction.

Lean Logic makes a distinction between two kinds of system: the complex system and the modular system.

It also recognises two more kinds of system which are special applications of these: the complicated system and the ecological system.

The four are summarised below.

 

COMPLEXITY, MODULARITY, COMPLICATION AND ECOLOGY
Defining properties

COMPLEX SYSTEM (Example: you)

MODULAR SYSTEM (Example: a herd of antelopes)

COMPLICATED SYSTEM (Example: globalised society)

ECOLOGICAL SYSTEM (Example: a woodland)

FORM

Complex system: A system whose diverse parts have little or no individual independence but take on specialised roles and interact for the collective purpose.

Modular system: A system whose parts (holons) have a high degree of similarity and independence. They all have essentially the same role, but may interact for collective purposes if the need arises.

Complicated system: A modular system which—by force or evolution—has acquired some properties of a complex system, although the transformation remains incomplete and is unstable.

Ecological system: A panarchy; a mixture of complex and modular systems, and systems with both properties. This is typical of a natural ecology.

DIVERSITY

Complex system (strong diversity: structural diversity): The system’s parts are radically diverse in form, reflecting their specialist roles. The parts of a human body—and those of a complex, self-reliant community—are diverse in this sense.

Modular system (weak diversity: textural diversity): The differences between the system’s parts are relatively minor; they are variations around a theme. But this weak diversity is sufficient to sustain identity and meaning, and to enable evolution.

Complicated system (weak diversity: textural diversity): The system requires its parts to take on profoundly diverse roles, but the essential modular form remains. As a consequence, parts have to adapt to functions to which they are not ideally suited.

Ecological system (a mix of strong and weak diversity): This is diversity in depth, applying not only to individual organisms or parts but to whole systems or subsystems (holons).

CONNECTEDNESS

Complex system (taut links: interdependence): Taut links and interdependence between the parts largely determine each part’s actions. If any one of those taut links breaks, the system as a whole is in trouble.

Modular system (slack links: independence): The parts may be mutually supportive, but they are not mutually dependent. There is substantial freedom for the system’s (complex) parts to act as they choose without damage to the system itself.

Complicated system (taut links: interdependence): In addition to the usual vulnerabilities of a taut, complex system, inflexible, formulaic, top-down control may constrain the freedom of parts to play their roles as effectively as possible, producing ultimately damaging outcomes.

Ecological system (a mix of taut and slack links): The ecology has connections throughout its functions and energy flows, but there is extreme variety, including taut and slack links, occasional links, disconnections and opportunism.

RESILIENCE

Complex system (preventive resilience): The system uses its well-developed competence to keep out of trouble, but if it fails to do so, it has poor chances of survival.

Modular system (recovery-elastic resilience): The system has a well-developed ability to recover from shock—or to experience a shock while limiting the damage—but it is poorly equipped to stay out of trouble in the first place.

Complicated system (intensification, leading to lower efficiency): The system responds to challenges through greater complication—making additional provisions which involve more work, and lead to greater elaboration and more challenges—right up to the point of collapse.

Ecological system (a mix of preventive and recovery-elastic resilience): There is an extreme variety of responses that evolves creatively. It does not stay still long enough to be described, but it has harmonic order.

 

This entry points to some typical properties of systems which it may be helpful to be aware of when thinking about them. The first thing to note about systems is that, in general, they are not what they seem. Most of what matters is invisible. Here are some of the things you may not see:

• Their small-scale subsystems and components.
• The linkages and interactions between them.
• The large-scale context on which systems depend.
• The linkages between systems and that wider context.
• The rapid, but small, changes that take place inside them.
• The large, but slow, changes that will in due course transform them beyond recognition.

We cannot control a system—practically everything we ever do has unintended consequences—but we can look about us before rushing into the indignant campaign or the technical fix. This is called manners.

Going beyond that—tracing through some, at least, of the often hard-to-understand causes and effects, and having some skill in thinking through the consequences—is called systems thinking. And even that can be inconclusive, in the sense that systems thinking, no matter how clear-sighted, may succeed in no more than improving a situation, falling far short of solving it. Or it may improve matters for some of the interests involved but leave things unaffected (or perhaps even worse) for others. This recognition of the limitations that are present as we move from thinking about systems to living with them is explained in some depth by “Soft Systems Methodology”, and has its own literature.S147

But that should not put us off thinking about the essence of systems—what they are and what they do. The following summary of systems’ properties is set out in three groups, covering: the form taken by systems and their parts, the feedback they provide and respond to, and the usefulness of their function.

 

Form (see also Resilience > Resilient Systems)

1. Holons and hierarchy

Holons are the parts, or subassemblies, that make up complex and modular structures. An antelope is a holon, in that it is both complete in itself—looking after its own interests—and also part of a herd, with its place in the wider ecology, whose balance and diversity it helps to sustain. Organs and cells in the antelope’s body are holons in this same sense.

Holons join together in subassemblies and larger assemblies—for example, liver cells cooperate to make a functioning liver; the body’s organs cooperate to make an antelope. And in these cases the sense of hierarchy is clear—the holons are, in a sense, pointing in the same direction.

Other systems contain holons and hierarchies which also belong to different systems with different missions, pointing in different directions—here there are often many hierarchies and it is hard to say which is at the “top”. For example, a woodland consists of a host of interacting holons: the cooperative, hostile, parasitic, complementary, benign, essential or irrelevant coexist in the same system and the same space. Lumping all this lot together into one overarching super-hierarchy called the ecosystem is not convincing, so the collective name for the complex and seemingly disorderly muddle of many hierarchies that make up an ecosystem is “panarchy”.S148

And these holons, hierarchies and panarchies change over time. Systems change in response to events—they are dynamic—and this means that observations we make about the present form of a system may be true only temporarily. In particular, systems change as they move through the adaptive cycle discussed in the Wheel of Life. Many of the most profound changes will be slow-acting, but in some cases the pressure, having built slowly and over a long period, may break suddenly and violently (Kaikaku, Climacteric, Resilience). The more effectively the change is held back, the more sudden will the breakage be, when it comes.S149

2. Networks

Think of a network of connections between centres of activity—almost anything will do: towns, airports, film stars, academic researchers, websites. These centres are called “nodes” in the literature, and there are links between them. Depending on the kind of system, the links may take the form of (for instance) roads, air routes, friendship networks, co-casting (actors appearing in the same film), citations of academic papers, or links between websites. Those links come about almost by accident as a result of the nodes needing to be in touch with each other; they tend not to be planned in advance. And two kinds of pattern develop.

First, there is the “random” network:

Imagine a scatter of nodes. Now gradually join them up with random links. At the early stages, you still have a lot of disconnected nodes, with just a few joined up: it is not yet a network. But as further links are added, clusters of linked nodes merge with each other until, rather abruptly, the whole thing becomes a connected network: a system (see first diagram). Now you can reach any part of it via the links between the nodes; each node has roughly the same number of links, and in that sense all the nodes are on the same scale. Neighbouring towns and villages linked up by roads are a network of this kind.S150
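The abruptness of that joining-up can be seen in a small simulation. The sketch below is illustrative only: 50 nodes, links added in a random order, and cluster sizes tracked with a simple union-find; all the numbers are arbitrary.

```python
import random

def largest_component(n, links):
    """Size of the largest connected cluster among n nodes, via union-find."""
    parent = list(range(n))

    def find(x):
        # Follow parent pointers to the cluster's root, halving the path as we go.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in links:
        parent[find(a)] = find(b)   # merge the two clusters

    sizes = {}
    for v in range(n):
        root = find(v)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values())

random.seed(1)
n = 50
links = []
for step in range(1, 101):
    links.append(tuple(random.sample(range(n), 2)))  # one new random link
    if step in (10, 30, 50, 100):
        print(step, "links -> largest cluster:", largest_component(n, links))
```

With only a few links the largest cluster stays small; past a threshold it comes, quite suddenly, to span most of the scatter.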

The second kind of network is the “scale-free” network:

Imagine the same scatter of nodes, but this time a small number of them begin to be better known and more visited than the others (marked in black on the second diagram). Once that happens, people choose them for preference. For instance, some airports achieve a superstar—hub—status, as do some towns, and a few academics are hubs in that it would be foolish not to cite their papers if you are working in their field. Film stars are hubs in the sense that other actors want to be seen with them. These networks are scale-free in the sense that there is no limit to the number of links that those few dominant nodes can have. For example, on the internet, while there is a long tail of sites with few links and visitors or none at all, others have millions of links: there is no sense of shared scale. Links are distributed between hubs according to the “Power Law” (discussed below), with a few hubs having many links, and many hubs having few.S151
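The way such hubs emerge can be sketched with a toy “rich-get-richer” (preferential attachment) model. The node count and random seed here are arbitrary: each newcomer links to an existing node chosen in proportion to the links that node already holds.

```python
import random

def preferential_attachment(n, seed=0):
    """Grow a network one node at a time; each newcomer attaches to an
    existing node chosen in proportion to that node's current degree."""
    random.seed(seed)
    targets = [0, 1]           # node ids, repeated once per link they hold
    degree = {0: 1, 1: 1}      # start from a single link between nodes 0 and 1
    for new in range(2, n):
        hub = random.choice(targets)   # rich-get-richer choice
        degree[new] = 1
        degree[hub] += 1
        targets += [new, hub]          # both ends become a little more attractive
    return degree

deg = preferential_attachment(500)
top = max(deg.values())                       # the best-connected hub
typical = sorted(deg.values())[len(deg) // 2] # the median node
print("hub degree:", top, "| typical degree:", typical)
```

Most nodes end up with a single link, while a few accumulate a large share of them: there is no shared scale.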

 

Now, why does this matter? Well, each type of network has its significance in specific situations. The scale-free network is the form that naturally occurs when it is the connections themselves that are the function of the system—as, for instance, in the case of air routes. If you want to fly from one small city to another distant city, it makes most sense to fly from hub to hub rather than travel via a random network, because then you may only have to change planes once. Direct links between every small city would require vast numbers of flights in small planes. For example, direct links between each pair of 1,000 towns would require half a million two-way flights; if all were routed via the hub, links between them all would require a mere 1,000 two-way flights.S152
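The arithmetic can be checked directly: a fully linked set of n towns needs n(n-1)/2 two-way routes, while routing everything through a single hub needs only one route per town.

```python
towns = 1_000

# Direct links between every pair of towns: n(n-1)/2 two-way routes.
direct = towns * (towns - 1) // 2

# One hub, with a single two-way route out to each town.
via_hub = towns

print(direct, via_hub)  # 499500 1000
```

Hence the “half a million” of the text: 499,500 direct routes against a mere 1,000 via the hub.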

Random networks, on the other hand, work better when it is the nodes themselves, rather than the connections between them, that are the focus of attention: the connections may still be important, but here the nodes have a degree of self-sufficiency which makes it less critical—not a matter of life and death—that they should maintain efficient links with all other nodes at all times. Here there are no hubs; just nodes joined together locally. As such, a random network cannot compete with a scale-free network in terms of long-distance travel, because to get from A to (a distant) B in a random network you have to pass through a lot of intermediate nodes (local towns, say, stopping off at a lot of inns on the way, which is what horse-drawn transport was about).

But the random network, with its local links, has advantages. For one thing, it is modular, and therefore has recovery-elastic resilience—its built-in redundancy enables the system to survive even if some or many of its parts don’t, making it much less vulnerable to catastrophic breakdown after a shock. If one of the hubs of a scale-free network is knocked out, the effects are traumatic: even if other less important hubs are brought into service, they risk being quickly overloaded and are likely to break down too. By contrast, the locally-linked network will have much slower connections, but its modular structure will survive all but extreme trauma, with links easily repaired or replaced with others. That rather homespun quality—the slower connections and responses throughout the system, the remoteness of distant parts of the system—is the cost of recovery-elastic resilience.S153

There are implications here for the internet. We cannot be sure whether the internet will survive the climacteric; indeed, it is hard to see how it can. It is a scale-free system, dependent on a relatively small number of giant hubs. If even a small number of hubs failed, the whole system would be at risk of failure. It also depends on an uninterrupted flow of energy (with carbon emissions to match), and it requires a large infrastructure of minerals-mining, manufacture, distribution and maintenance. The Lean Economy may have to get by without it. It is true that duplicate databanks and fail-safe sources of renewable energy are being built for the internet on an awesome scale, so its carbon emissions may be kept in some kind of check. But in an energy-scarce, localised world, the force will be with the technologies which do not depend on a continuously functioning grid and a hyperconvergence of material supplies, technologies, energy and financial flows. When resilience really matters, the advantage will lie decisively with technologies such as the book.S154

3. The Power Law

Any system that has survived so far has resilience to at least some degree, though it may be trivial, giving the system no protection—no ability to sustain its form—when the first significant shock comes along. So the question arises, how much shock ought a system be designed, or intended, to withstand? It is hard to answer this, for, in due course, a shock is likely on a scale beyond experience, imagination, or the ability to cope. As explained by the mathematician Benoit Mandelbrot, the behaviour of complex systems, natural or man-made, does not stay within the limits of a well-behaved distribution around an average, in the way that the differences between people’s heights do. Instead, the scale of shocks (as in, for example, earthquakes, the turbulence of ocean currents, the flooding cycles of the River Nile, or the behaviour of economies and stock prices) behaves according to the “Power Law”.S155

We met the Power Law briefly in the discussion of networks, but let’s take a closer look at it. The law applies where one variable changes in proportion to a power of another—e.g., the square or the cube of the other one. For instance, the scale of the maximum shock on record as having been endured by a system may be a power function of the number of years over which the record is taken. If you are measuring the height of the Nile over a period of time and you go back far enough, you will find catastrophic extremes. If people’s heights were distributed according to the Power Law, most of us would be very short, but some would be 100 feet tall, and in a population of 6 billion someone, somewhere, would be 8,000 feet tall. That is to say, in systems of this kind, what seems to be normality will from time to time deviate into extremes—and occasionally into spectacular extremes. In complex systems, extraordinary shocks are to be expected. An extended period of stability, such as the “Great Moderation” of the post-war period of economics, is a special case.S156
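Mandelbrot’s contrast can be illustrated by sampling. The sketch below (all parameters are invented) compares a well-behaved Gaussian distribution of “heights” with a power-law (Pareto) distribution, asking how far the largest of 100,000 draws sits above the median.

```python
import random

random.seed(0)
N = 100_000

# "Well-behaved": heights, say, with mean 170 cm and standard deviation 10 cm.
normal = [random.gauss(170, 10) for _ in range(N)]

# Power-law (Pareto, alpha = 1.5): most draws are small, but extremes are enormous.
pareto = [random.paretovariate(1.5) for _ in range(N)]

def spread(xs):
    """The maximum draw, relative to the median draw."""
    xs = sorted(xs)
    return xs[-1] / xs[len(xs) // 2]

print("gaussian spread:", round(spread(normal), 2))
print("power-law spread:", round(spread(pareto), 1))
```

For the Gaussian, the tallest draw is only modestly above the median; for the power law, it is far above it—the “8,000-foot person” of the text.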

4. Exactness

A system’s design is only as good as its detail. The Impressionist painters would not have been good system designers. It is the detail, leaving nothing to the imagination, that matters. An illustration of this comes from the early years of aircraft design. Reasonably affordable commercial passenger flights were not possible until, in 1935, the Douglas DC-3 brought together five specific features: the variable-pitch propeller, retractable landing gear, a lightweight “monocoque” body, radial air-cooled engines and wing flaps. Four were not enough. All five had to be in place.S157

Rough indications along the lines of “Oh, this is how it works” tell us nothing about a system except that here we have a set of good intentions which won’t fly. They sound as though they ought to; they are fair; they are cheap; they are just what we need. But they won’t. Systems are pedantic like that. They have an anorak personality. They need care and kindness, faithful observation, deference to the detail, time to understand, and a tolerance of surprise.

 

Feedback (see also Resilience > Feedback)

Feedback is a property of all functioning systems, and is rule 5 of lean thinking. The classic and elegant example of feedback is the mechanism called the “governor”, which keeps a steam engine running at constant speed. It consists of a spindle which spins at the speed of the engine, and it has two arms which, responding to centrifugal force, fly out as the speed rises. As they do so, they close the throttle, which slows the engine down.S158
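In control terms, the governor is a proportional negative-feedback loop. A toy simulation (the gain, target and starting speed are invented for illustration) shows the speed settling on the target:

```python
def run_engine(steps, throttle_gain=0.1, target=100.0):
    """Toy governor: the throttle is cut in proportion to how far the speed
    exceeds the target, and opened in proportion to how far it falls short."""
    speed = 60.0
    for _ in range(steps):
        correction = throttle_gain * (target - speed)  # negative feedback
        speed += correction
    return speed

print(round(run_engine(10), 1), round(run_engine(100), 3))
```

Each pass, the correction opposes the deviation, so the error shrinks geometrically and the speed settles on the target—the spinning arms and throttle of the governor, reduced to one line of arithmetic.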

But a related meaning of feedback, less elegant but no less important, is the information which we receive as participants in a system (such as a bed of carrots, a garden, a community, a nation or a planet). If that feedback is accurate and quick, and if we do not close our minds to it, there is a possibility of influencing the system in ways consistent with resilience and reason. If it is inaccurate or slow, allowing mistakes to be merrily persisted in for a good long time before revealing themselves as mistakes, or if we are so important and powerful that we do not need to notice the feedback when it happens, then we will be on course to destroy the system and ourselves.

It does sound rather mechanical. That steam-governor is so beautiful that it invites a desire to stay at this sweet level of technical detail for the whole of one’s life. But in reality, feedback is a big idea: as big as “rational”, “reason” and “resilience”. Lean Logic is all about feedback. Here we shall chew over some of what it means.S159

1. Feedback and forecast

There is a sad-looking plant on your desk. You want it to cheer up. You water it. It becomes green and healthy: useful feedback. So you water it a lot. It goes brown: useful feedback again. So you water it in moderation, and it gets healthy again. Clearly the feedback is working. Then the plant goes brown again. Feedback tells you that something is wrong, but it does not tell you what: you won’t know that it is zinc deficiency without more information. Moreover, even the feedback you have found so useful so far could have been misleading. Maybe it wasn’t your watering that cheered it up, but the green-fingered cleaner who took pity on it and fed it with liquid fertiliser. So the importance of feedback is not that it has all the answers you need, since it can be misleading, but that it is all the information you have. It is observation. You can be good or bad at reading it, and indeed there are ways of improving the quality of your observation and of the conclusions you draw (see “Causal Loops” sidebar).

The important thing is to be alert to the information and consequences which you don’t expect. For instance, if you come at a situation as an expert in a narrow specialism of the subject, you may exclude yourself from noticing the information that comes from another part of it, or from making the connections between the bits of information you do get. Instead of seeing a system, reductionism sees bits of it. Observing things is easy; joining them up is hard: it needs a long attention span, and the brain may need training.

CAUSAL LOOPS
Reductionism can have its uses

This is a case where reductionist methods can be part of building up a coherent understanding. Start with a particular system which you would like to understand better. Now, every intervention can be associated with more or less of particular outcomes.

For instance, for a particular class in a particular school, a smaller class size could lead to (amongst other things) less noise, more individual attention, more motivated teachers and better education—but lower school income, cost-cutting, fewer teachers, and larger class sizes, an increased workload on teachers, who therefore demand higher salaries . . .

In other words, there is a series of consequences—“causal loops” between what you do and what happens. The sum of all the consequences can be positive (a signal to reduce the size of the class) or negative (to keep it large, or make it larger), or neither.

In principle these complementary effects will settle at an equilibrium, where the class size is at its optimum; if you trace through all the causal loops—on paper, on a computer, and/or by experiment—you will get close to the class size that works best from all points of view.

The reason this approach is useful is that it provides a protocol for exploring—and to a significant degree forecasting—the implications of a proposal. You may be tempted to say, “larger class sizes save money”, and have done with it. The protocol says, “Calm down; let’s think this through”, and the way is open to trace the links one at a time. For each causal loop, there are sequels and feedbacks: e.g., more individual attention → more discovery of individual talent → more resistance to the standard curriculum → more demand on individual teachers → more absenteeism → larger class sizes . . .

Your list, or diagram, of causal links can never be finished, but there is encouragement here for tracing the sequence of cause-and-effect—an advance on the passionate incomprehension which has shaped our world (Five Whys).S160
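The settling-at-an-equilibrium claim can be sketched numerically. In the toy model below the coefficients are invented: 40/size stands for the pull of better teaching attracting pupils, and size/10 for the push of overcrowding driving them away; iterating the two loops finds where they balance.

```python
def settle(size, rounds=200):
    """Iterate two opposing causal loops until they balance.
    (Illustrative numbers only: 40/size stands for the pull of better
    teaching, size/10 for the push of overcrowding.)"""
    for _ in range(rounds):
        pull_up = 40.0 / size    # smaller classes -> better school -> more pupils
        pull_down = size / 10.0  # bigger classes -> worse school -> fewer pupils
        size = size + pull_up - pull_down
    return size

print(round(settle(30.0), 2))
```

The loops balance where 40/s equals s/10, i.e., at a class size of 20, from a wide range of starting points—the protocol of tracing loops one at a time, done by arithmetic.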

 

2. Feedback and flow

A system tends to maintain its form and character, changing only slowly if at all, or maintaining a cyclical constancy despite shocks which would change it beyond recognition if it were a cloud or a sand dune. And yet this sustained integrity as a system is not achieved by any of its members showing particular self-restraint (Responsibility Fallacy). Most populations—plant, animal or human—would quickly dominate their ecosystem with their numbers, given half a chance. The reason why amplifiers such as the propensity to breed without limit do not continually destroy systems is that there are compensating dampers which stop them doing so.S161
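The balance between the amplifier (breeding) and the damper (crowding) is, in its simplest form, the logistic model of population growth. A sketch, with invented numbers:

```python
def logistic(pop, growth=0.5, capacity=1000.0, steps=50):
    """Amplifier checked by a damper: the classic logistic model.
    (Growth rate and carrying capacity are invented for illustration.)"""
    for _ in range(steps):
        amplifier = growth * pop                  # positive feedback: breeding
        damper = growth * pop * (pop / capacity)  # negative feedback: crowding
        pop = pop + amplifier - damper
    return pop

print(round(logistic(10.0), 3))
```

Drop the damper term and the population explodes without limit; with it, the system settles at its carrying capacity.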

Amplifiers (positive feedback) tend to increase the effect that caused them; dampers (negative feedback) tend to reduce it. Practically all successful outcomes everywhere have aspects of positive feedback in them. Success goes from strength to strength. And it sometimes happens that, at first, there is no damper (negative feedback) on hand to suppress it. Confidence in stock markets and mortgage markets boils over. Plants genetically modified to resist predators can be expected to multiply out of control. Climate change, beyond a certain threshold (or ‘tipping point’), triggers more climate change. Unless a damper can kick in at a reasonably early stage, allowing a flow of corrections, then the reaction may mature from feedback to shock or collapse. The problem lies, not with the positive feedback, but with the absence of early and effective dampers. And, to be effective, the dampers need, as a general rule, to have two qualities:

First, they need to be present in some variety. As James Lovelock shows, a modelled ecology that contains just two relevant animal species—rabbits and foxes—is unstable: if there are too many foxes, they eat out the rabbits and their own population crashes; that is followed by there being too many rabbits, whereupon the population of foxes grows again. But as Lovelock points out, when we see . . .

. . . a bank where the wild thyme blows,
Where oxlips and the nodding violet grows,
Quite over-canopied with luscious woodbine,
With sweet musk-roses and with eglantine . . .S162

. . . there is a good chance that, with that variety of species on hand to stabilise it, this is an iterative network of positive and negative feedback which conserves its form and nature for some time. It is the interaction of the diverse species living there that gives the system its stability.S163 Eugene Odum summarises,

The ecosystem is the basic functional unit in ecology, since it includes both organisms and abiotic environment, each influencing the properties of the other and both necessary for maintenance of life as we have it on the Earth.S164

Secondly, the damping process needs to be a continuous (or, at least, high frequency) one. Some ecosystems—such as forests that have adapted to periodic wildfires—do depend on infrequent limiters, but the immediate results are rather drastic. “Stability” is more comfortably associated, as Lovelock puts it, with “close coupling” between action and response—that is, with constant, small-scale nudges and limitations (Kaizen).

And Lovelock adds an aside:

Perhaps it is a metaphor for our own experience that the family and society do better when firm, but justly applied, rules exist than they do with unrestricted freedom.S165

In other words, systems are able to benefit from their parts’ uninhibited potential for positive feedback because at the same time they maintain a steady flow of dampers. And yet, it is the positive feedback that in a sense moves the whole process along, and the idea of positive feedback in an ecology is so often used in a disturbing context—such as runaway climate change, overfishing, overgrazing and deforestation—that we should acknowledge, too, its significance in a positive sense; the sense of reinforcement, the realisation of what we have the potential to become: pianist, doctor, friend, woman, man.

Birmingham, at the centre of Britain’s largest industrial conurbation, is not often cited as an ecological example to follow. But it is in fact a story of reinforcement, of positive feedback in the constructive sense of its increasing, self-reinforcing confidence. It invented its own standards of engineering, technical ingenuity, teaching and apprenticeship, its radical politics—and Birmingham was a vital habitat for the Pre-Raphaelite movement. This city developed a brilliant critical mass of creative talent, a storming contribution to the Industrial Revolution, and a culture of responding to acute observation—aka both negative and positive feedback. It had flow.

We now need the magic of that energy and alertness, committed to a profoundly different aim.

3. Fast feedback

Guidance by feedback is everywhere we look, but we need to be aware of the occasions when it evolves to a level which is so fast-acting and sophisticated that it is not at first easy to recognise its existence. For example, a man on a tightrope is kept aloft by feedback loops. He responds to a slight tilt with a compensating action—a wobble. Tilts and wobbles present the man with a highly complex problem, in which he must take into account their speed, their direction in three dimensions and the resulting kinetic energy (depending on his weight, the length and weight of his balancing bar, and the movement of the rope). Each of these is strictly a causal loop, requiring evaluation in terms of positive and negative feedback. If he paused to work all this out on his calculator, he would fall off, so it is done at a subconscious level, using the principle of resonance (discussed below), which judges each changing state continuously. The brilliance of our subconscious minds—and those of (for instance) flocking birds, or hawks standing on the wind or diving onto their prey—is awesome. The subconscious of a dog chasing a stick carries out millions of interactions with balance, electro-chemical muscle coordination, enzyme messaging, finely-tuned cardiovascular responses and parallax: the best the conscious mind can manage is ‘dog fetch stick’.

The aim of skilled systems thinking—as in riding a bicycle, sustaining a permaculture system, or making a community—is to set things up in the first place so that the system runs itself. You then only have to think about it from time to time—interacting with, rather than intervening in, a healthy system, which responds faster than you can think.

4. Synergy and emergence

Synergy is the interaction between parts of a system, producing results greater than would have been available to them acting on their own. Emergence is synergy which leads to the evolution of a complex system.

Synergy and emergence are significant in the case of the collective action of a community. A community can achieve results—in, say, the transformation to energy-self-reliance—which are not simply the sum of what its individual members can do (added together), but the product of their interactions (multiplied together). Here is the power of lean thinking and common purpose. The big accomplishment may not be thought possible, so it may not be attempted, but—if the group possesses the relevant talent, if it persists, if it has the freedom to decide for itself and the time to implement what it has decided, if the accomplishment is seen as a set of small tasks in which there is local presence—if all these ifs and more are met, there will be emergence. The exhilaration that comes with it can be spectacular.S166

But synergy may turn negative, too, and this can be critically significant in the context of systems that have become stressed. For example, an economy that is in trouble owing to energy depletion, changing climate, declining soil fertility, a growing population and declining cultural confidence, may become sensitive to minor stresses. Just one thing could be the final straw that breaks the back of a system so weakened. A human society at that stage has reached the end of its ability to deal with additional trouble. Even a minor additional problem can nudge it across the threshold into an entirely different state, working to different rules (Intermediate Economy).S167

5. Time

The impact of time on feedback is decisive. In the case of our own species’ current large-scale systems, there is a long series of feedback lags between events and responses (see “The Lag Sequence” below).

 

THE LAG SEQUENCE
10 steps to being timed-out

1. Observation lag: from the time the problem begins to unfold, to its existence first being noticed by pioneers.

2. Comprehension lag: . . . to its significance being understood.

3. Communication lag: . . . to that understanding being first published.

4. Diffusion lag: . . . to wide awareness.

5. Denial lag: . . . to acceptance. But denial may persist.

6. Action lag: . . . to action.

7. Project lag: . . . to completion of a relevant project.

8. System lag: . . . to termination of the practice which is causing the damage.

9. Pipeline lag: . . . to termination of the added trauma suffered by the system.

10. Overshoot: between the time of termination and the earlier time it would have had to be terminated.

We can’t wait that long. A systems-feasible future depends on tight coupling between event, observation and response, fulfilling the lean thinking requirement of flow, with moderate early responses to small stimuli rather than late blockbuster responses to incipient catastrophe. Lean Logic is substantially defined by key principles which affirm and enable a flow of this kind, including practice (the hands-on culture which observes events quickly and accurately), appropriate scale (with its implications of elegance, judgment and presence), and the principles of resilience and lean thinking.

But there is menace lurking in the concept of time. Here are three of its darker aspects:

The decision-window. In most situations, and for most of the time, big strategic decisions are unnecessary; you can simply let things ride. However, within a long period of time, there are occasions—sometimes short-lived and not much more than ‘last moments’—at which crucial decisions can be made which will have radical consequences. And they need to be made, since the default position of making no decision will also have radical consequences. It is not always clear until later that the moment has passed.S168

The anthropologist Marvin Harris offers an example of this. As described in the entry for Unlean, the regimes of the Oriental civilisations of the past were organised around autocratic, giant-scale systems of water management. The problem is that, once such regimes are established, it is virtually impossible to reform them. Harris writes,

. . . despotic forms of government may arise which can neutralise human will and intelligence for thousands of years. This implies . . . that the effective moment for conscious choice may exist only during the transition from one mode of production to another. After society has made its commitment to a particular technological and ecological strategy for solving the problem of declining efficiency, it may not be possible to do anything about the consequences of an unintelligent choice for a long time to come.S169

Slow change. Slow changes are usually the big changes which transform systems, for better or worse. Examples:

a. the build-up of phosphorus from farmland surrounding the Everglades, with the result that the native sawgrass ecosystem and the species that depend on it are being replaced by an emerging monoculture of cattail bulrushes;

b. gradual decline in the functional effectiveness of the family or the community’s culture;

c. the decline in male fertility.

Slow changes tend to be missed, or dismissed on the grounds that there is nothing new there, or that there is nothing to be done about it, or that the evidence is weak, or that more immediate problems claim priority.S170

Overshoot is where, by the time action is taken, damage has already been caused, or it is too late to prevent it. It belongs to a set of problems which can be illustrated by lags (see “The Lag Sequence” sidebar above) and inventories. Inventories take many forms, the most obvious being the stock of finished goods in a company’s warehouse, ready and waiting for a sale. In good times, there is nothing wrong with this. As soon as a sales order comes in, it can be supplied out of what the firm has in stock; and, just in case an exceptionally large order comes in one day, it is handy to have an inventory, so that the customer is not kept waiting.

No problem so far. But suppose there is a downturn in the economy, or the company begins to feel less confident about the future, or is short of cash. The obvious first thing to do is to let the inventory run down: this leaves sales unaffected while reducing the costs of production: workers can be laid off and orders for parts can be cancelled, without affecting the company’s prospects in any obvious way. The existence of the inventory thus allows a small problem for the company to turn into a big problem for its suppliers: multiplied across the economy, reduced confidence can quickly deepen into recession. And even when demand starts to pick up again, inventory-rich companies may still be able to supply it from stock, so that the economy as a whole remains in trouble . . .

. . . until the warehouse is empty, whereupon firms must pile back into production quickly in order to keep up with demand, now without the buffer of the inventories they shed. That is, inventories can lead to large lags and shocks in a system. These do not simply reduce the accuracy of feedback; they make it profoundly misleading.

An engaging but hair-raising illustration of this was devised at the Sloan School of Management in the 1960s. It’s called the Beer Game: a pop band mentions a brand of beer (Lover’s Beer) in a song; demand for the beer doubles; the lags in the supply chain, along with alarmed and understandable overreactions by everyone involved, rapidly join up. Retailers, wholesalers and the brewery run out of stock. Then everyone does the obvious: the retailers and wholesalers massively increase their orders; the brewery goes into double time. The result is that the whole supply chain suddenly finds itself chock-full of over-ordered beer. Finally, the point is reached where the brewery has to shut down for a year to allow the inventories in the system to clear.

In a sense, of course, it was an accident waiting to happen. Communications were poor—this was before the days of email—and the system was able only to offer quantity responses to time problems. And yet flaws of just this kind are present in other time-related problems, such as climate change. No one actually does anything wrong. Lags are in the system, killing flow, and taking the system as a whole forward to overshoot. Then everyone looks for someone to blame.S171
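The mechanism can be loosely sketched in code. This is a simplification with made-up numbers, not the actual rules of the Beer Game: each tier of the chain ships whatever is demanded of it (negative inventory standing for unfilled backlog), and orders enough to cover the demand it has just seen plus any shortfall against its target stock, while ignoring the stock it has already ordered: the decisive flaw.

```python
def tier_orders(demand, target=12, lead=2):
    """One tier of a toy supply chain. The tier ships whatever is
    demanded (negative inventory = unfilled backlog) and orders enough
    to cover the demand just seen plus the gap to its target stock.
    It ignores deliveries already on the way: the bullwhip mistake."""
    inventory = target
    pipeline = [demand[0]] * lead       # deliveries already in transit
    orders = []
    for d in demand:
        inventory += pipeline.pop(0)    # this step's delivery arrives
        inventory -= d                  # ship to the tier below
        order = max(0, d + (target - inventory))
        pipeline.append(order)          # will arrive `lead` steps later
        orders.append(order)
    return orders

demand  = [4] * 5 + [8] * 35       # the song comes out in week 5
retail  = tier_orders(demand)      # retailer's orders to the wholesaler
brewery = tier_orders(retail)      # wholesaler's orders to the brewery

# A modest, permanent doubling of demand; orders upstream swing far wider.
print(max(demand), max(retail), max(brewery))
```

Each tier amplifies the swings it receives: demand merely doubles, but orders reaching the brewery lurch between gluts and famines several times larger, even though every tier behaves sensibly on the information it has.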

6. Closed-loop system

A closed-loop system, using all (or most of) its own material waste, is well placed to observe and respond to feedback. It is close to events, and the recognition that it cannot import what it cannot provide focuses the mind. In a closed-loop system, hyperbole convinces nobody—unless (as everybody knows, but doesn’t like to admit), it is too late.S172

7. Path dependence

The character of a system, along with the features of its local environment, is substantially determined by the path it embarked upon a long time ago (“the story so far”). The way a system responds to feedback is deeply shaped by this accumulated experience, leading to where it is now. Our lives congeal behind us, but watch us—like Tutankhamen’s army—governing our choices, uneasy witnesses to our flawed decisions (Emergence).S173

8. Homeostasis

Homeostasis is the ability of species and natural systems to persist unchanged over very long periods in spite of profound changes in the environment and climate. The flow of small-scale, rational adjustments to events, despite its mixed and largely failed record in human history, has a wonderful record in the case of natural systems. The attention of biology has been focused on its evolution rather than its stability, but its fundamental significance was recognised by Charles Darwin, who wrote in a letter to Charles Lyell, “If I had to commence de novo, I would have used ‘natural preservation’.”S174

 

Function

This third and final group of principles is about getting the results you want from a system, avoiding the results you don’t want, and (the ideal and leanest option) letting the system take its course to your mutual benefit.

1. Composition gains

These are benefits arising from the links between parts of the system, and/or from the extent of others’ participation in the system. A well-tempered system yields three such gains:

a. common purpose: shared intention to reach a shared goal. An alignment of individual and collective purposes, so that actions and aims which individuals recognise as in their own interests are also in the interests of the system as a whole, and vice versa.

b. common capability: an aim which becomes available if and only if it is a collective aim, developing the system’s potential for synergy; and

c. common resource: a resource which is intrinsic to everyone’s activities, so that events affecting its usefulness affect—and could ultimately transform—the system (e.g., the political economy) as a whole.

2. Leverage

Leverage is accurate, small-scale, systems-literate action which makes a system want to do what you want it to do. It utilises resonance—that is, the tendency of a system to oscillate at maximum amplitude at certain frequencies.

Systems are drenched in resonance: events amplify each other with startling effect—like a child’s rhythmically kicking legs on a swing—or they damp each other down, or change each other; there is rhythm and synergy, magnifying the effects of seemingly minor promptings and causes (small kicks while on the swing will sustain an oscillation containing a lot of energy). This is summarised by the science writer John Gribbin as a way of getting a large return for a relatively small effort, by making the effort at the right time and pushing the system in the way it wants to go.S175
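The swing can be simulated. The sketch below assumes a lightly damped oscillator with illustrative parameters (none of them from the text): identical small pushes sustain a large oscillation when they are timed to the system’s natural frequency, and almost none when they are not.

```python
import math

def swing_amplitude(push_freq, natural_freq=1.0, damping=0.05,
                    push=0.01, steps=20000, dt=0.01):
    """Semi-implicit Euler simulation of a lightly damped oscillator
    (a child on a swing) driven by small periodic pushes; returns the
    largest displacement reached."""
    x, v = 0.0, 0.0
    peak = 0.0
    for i in range(steps):
        accel = (-natural_freq**2 * x                     # restoring force
                 - 2 * damping * natural_freq * v         # friction
                 + push * math.cos(push_freq * i * dt))   # the small push
        v += accel * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

on_beat  = swing_amplitude(push_freq=1.0)  # pushing in time with the swing
off_beat = swing_amplitude(push_freq=3.0)  # pushing three times too fast
```

The same effort, applied at the resonant frequency, builds an oscillation dozens of times larger than off-beat pushing achieves: a large return for a small effort, made at the right time, pushing the system the way it wants to go.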

Leverage can sound at first like a case of scheming manipulation: find the weak point in the system and apply pressure to force it to do what you want. Well, perhaps sometimes it is just that, but more usefully, it is about working with the system. The example that Buckminster Fuller gives is the rudder on a ship:

Something hit me very hard once, thinking about what one little man could do. Think of the Queen Mary [ocean liner]. The whole ship goes by and then comes the rudder. And there’s a tiny thing at the edge of the rudder called a trim tab. It’s a miniature rudder. Just moving the little trim tab builds a low pressure that pulls the rudder around. Takes almost no effort at all. So I said that the little individual can be a trim tab. Society thinks it’s going right by you, that it’s left you altogether. But if you’re doing dynamic things mentally, the fact is that you can just put your foot out like that and the whole big ship of state is going to go. So I said, call me Trim Tab.S176

And leverage works the opposite way round from the way you expect: to make the rudder turn right, you turn the trim tab left.

Well-designed, healthy systems do not need heavyweight management to produce intended consequences. It may take no more than the “nudge” advocated by Richard Thaler and Cass Sunstein:

Exceedingly small features of social interactions can have massive effects on people’s behaviour.S177

The alternative—to throw massive resources into trying to force the system to do what it doesn’t want to do—does, however, have the advantages of keeping you very busy and setting you up with a job for life, so it tends to be preferred over looking for the point of leverage.

Nonetheless, the heroic, frantic effort is usually a sign that things have been left too late, that the opportunity of applying leverage has passed and that the outcome will be regrettable. Both the “flow” of lean thinking and the “nudge” of informal systems thinking are interventions with the time, knowledge and manners needed to be effective.S178

3. Efficiency

This is the ratio between the regrettable necessities which a system must maintain (e.g., the intermediate economy of a large-scale civic society) and the outcomes desired from it (e.g., the food, warmth and music delivered to its residents). A low ratio is elegant.

But note that this (like many other concepts in Lean Logic) has fuzzy edges. A snail, sliding modestly up the trunk of an apple tree, does not depend on any intermediate economy at all—no transport, no waste disposal contractors, no tax, no environmental consultants. And yet, it depends on the whole complex of global tectonic movements and ocean currents, the exchanges of gas and rock, the algae, the dimethyl sulphide which seeds clouds and makes it rain, and all the other interactions that comprise Gaia. On one analysis, that’s a terribly inefficient snail. On another, it might be best not to think of systems in such a reductionist, snail-centred way.

 

Systems thinking is the art of holding a subject in the mind for long enough at least to recognise the existence of connections which are intrinsic to it—and, better still, of having a tentative idea of what some of those connections might be. It is enabled by a sustained concentration span. It is at the heart of Lean Logic, and its security as a defining property of our species and culture is in trouble. This breaking up of vision is an almost unavoidable outcome of science—for it is easier to observe a thing than to trace through its connections with all other things—but there is a cost to this. As the pioneer of organic thinking, Eve Balfour, writes,

Whatever we study, our tendency is to break it up into little bits, thereby destroying the whole, and then to study the effect or behaviour of the separate pieces as though they were independent, instead of—as in fact they are—interdependent.S179

And now, connected thinking has to contend with the forces of the electronic media, distracting us from the attention span needed for reflection, inductive problem-solving, critical thinking and imagination. Systems are hard to think about by any standards. For the scattered and fragmented minds reflected on in Nicholas Carr’s The Shallows, it may be becoming even harder. Systems thinking is not about understanding. It is deeper than that. It is about encounter.S180

 

Related entries:

Resilience, Emergence, Gaia, Reductionism, Dirty Hands, Hippopotamus.

David Fleming
Dr David Fleming (2 January 1940 – 29 November 2010) was a cultural historian and economist, based in London, England. He was among the first to reveal the possibility of peak oil's approach and invented the influential TEQs scheme, designed to address this and climate change. He was also a pioneer of post-growth economics, and a significant figure in the development of the UK Green Party, the Transition Towns movement and the New Economics Foundation, as well as a Chairman of the Soil Association. His wide-ranging independent analysis culminated in two critically acclaimed books, 'Lean Logic' and 'Surviving the Future', published posthumously in 2016. These in turn inspired the 2020 launches of both BAFTA-winning director Peter Armstrong's feature film about Fleming's perspective and legacy - 'The Sequel: What Will Follow Our Troubled Civilisation?' - and Sterling College's unique 'Surviving the Future: Conversations for Our Time' online courses.
