Systems have a number of behavioural traits. Knowledge of these greatly informs the business design process, enabling us to understand and predict the behaviour of a system over time; in particular, how to effect change and the likely effects of any interventions – both desirable and undesirable.
- Emergent Properties
- Simple and Complex Systems
- Stability and Leverage
- Side Effects
- Feedback Loops
- Cause and Effect
- Open and Closed Systems
- Limiting Complexity
- System Paradoxes
Emergent Properties
Systems are more than the sum of their parts. Systems function as a whole, so they have properties above and beyond the properties of their parts, called ‘Emergent Properties’. If you take a system apart, you will not find its essential properties. These only arise when the system is operating.
Simple and Complex Systems
A system maintains itself through the interaction of its parts. It is the relationship between these parts that is important, rather than the number or size of the parts. These relationships, and therefore the systems, can be simple or dynamically complex. Dynamic complexity is where the elements can relate to each other in many different ways.
Stability and Leverage
Complex systems are bound together by many links, so they tend to be very stable. A potential downside is their resistance to change. In the case of our planet, this is no bad thing, and let’s hope the resistance is greater than current projections would indicate. But for organisations, this lack of flexibility can be a major problem.
New business practices are often resisted. People are comfortable with the old ways of doing things. This is not the people being difficult; it is the system that they are in, and are part of, resisting the change.
When systems do change, they tend to do so in a dramatic and rapid manner. There is a threshold beyond which a system will suddenly change or break down – referred to as a ‘Tipping Point’ – a major area of concern in relation to climate change.
Systems can also suddenly change if you find the right combination of actions. This is known as the ‘Principle of Leverage’.
Complex systems are not always continuous in their behaviour. System behaviour is described as continuous when it behaves predictably through its range of states. Discontinuous system behaviour is when something unexpected happens under the right set of circumstances. This is a very useful feature. If you know where and how to intervene, small interventions can deliver large benefits.
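As a minimal sketch of this threshold behaviour (the numbers and state labels below are invented for illustration), a system can absorb pressure with no visible response until a tipping point is crossed:

```python
# A minimal sketch of threshold (discontinuous) behaviour: the system
# absorbs pressure with no visible response until a tipping point is
# crossed, then changes state suddenly. All values are illustrative.

THRESHOLD = 10.0

def state(pressure):
    """Return the system's state for a given cumulative pressure."""
    return "no response" if pressure < THRESHOLD else "sudden change"

for pressure in [2.0, 6.0, 9.9, 10.1, 14.0]:
    print(f"pressure={pressure:>4} -> {state(pressure)}")
```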
To apply the concept of Leverage, ask the simple question “What stops the change?” Look at the connections that hold the part of the system you want to change in place. Cut or weaken these, and making the change becomes much easier.
Some parts of the system are more critical than others – they exercise a higher degree of control. As a general rule, the higher the degree of control a part of a system has, the more pervasive and wide-ranging the effects of any change in that part will be.
Side Effects
Expect side effects. When you change one part of a system, the change will ripple through the system, creating side effects. These may be desirable or undesirable. Fortunately, if you truly understand your system, you can begin to predict them and design your change to maximise the desirable side effects, whilst minimising the undesirable ones.
Feedback Loops
As all parts of a system are connected, directly or indirectly, a change in one part will affect all the others, and there will be a subsequent response in the original source of the change. This is referred to as feedback. Fortunately, all systems, however complex, involve just two types of feedback:
- Reinforcing feedback – when changes in the whole system feed back to amplify the original change.
- Balancing feedback – when changes in the whole system feed back to oppose the original change and dampen its effect.
A reinforcing feedback loop drives a system in the direction it is already going. It may lead to growth or decline, depending on the starting conditions. It may be favourable or unfavourable. It can create vicious circles as well as virtuous ones. Reinforcing feedback, if unchecked, can lead to exponential change, instability and destruction. The increase in greenhouse gases in our atmosphere is believed to be one such example, with potentially destructive consequences for humankind.
A balancing feedback loop restricts, limits or opposes the initial change. It keeps a system stable, counter-balancing the effects of reinforcing feedback. It achieves this by acting to reduce the difference between where a system is and where it ‘should’ be. Whilst there is a difference between the present state of the system and its desired state, balancing feedback will move the system in the direction of the desired state. The closer the system is to the desired state, the smaller the difference represented by the feedback, and so the less the system moves.
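The two loop types can be contrasted in a few lines of Python (the growth rate, gain and target below are invented for illustration): the reinforcing rule amplifies whatever state the system is in, while the balancing rule repeatedly closes a fraction of the gap to a desired state.

```python
# A minimal sketch of the two feedback types. All parameters illustrative.

def simulate(x, step, periods=10):
    """Apply a feedback rule repeatedly and record the state."""
    history = [round(x, 1)]
    for _ in range(periods):
        x += step(x)
        history.append(round(x, 1))
    return history

# Reinforcing: the change is proportional to the state itself, so growth
# feeds on growth (exponential change if unchecked).
print("reinforcing:", simulate(100, lambda x: 0.10 * x))

# Balancing: the change is proportional to the gap between the state and
# a desired state of 150, so the system homes in and settles.
print("balancing:  ", simulate(100, lambda x: 0.30 * (150 - x)))
```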
A system needs a way of measuring; otherwise it could not tell the difference between where it is and where it should be. First, it needs to measure accurately: if it does not, it can trigger feedback loops where none are needed. Secondly, it needs to measure at the right level of precision for the system to work.
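As a small sketch of this point (all numbers are made up): a controller reading an inaccurate sensor ‘corrects’ differences that are not really there, triggering balancing feedback where none is needed and jostling a system that was already on target.

```python
# A sketch of inaccurate measurement triggering needless feedback.
import random
random.seed(1)                        # reproducible noise

TARGET = 20.0
temperature = 20.0                    # the true state already matches the target

for step in range(5):
    reading = temperature + random.uniform(-0.5, 0.5)   # sensor error
    temperature += 0.8 * (TARGET - reading)             # 'correct' the phantom gap
    print(f"step {step}: reading={reading:.2f}, state is now {temperature:.2f}")
```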
Most of the time, feedback gives chains of cause and effect. Each action influences what follows. Cause and effect go in circles; what was the cause from one point of view becomes the effect from another.
Feed-forward describes a slightly different effect. It comes from our ability to anticipate the future. An anticipated future effect triggers a cause in the present which would otherwise not have happened. Feed-forward creates self-fulfilling or self-defeating prophecies.
There is frequently a time delay between cause and effect. We will often not see the result of a change immediately. The more dynamically complex the system, the longer the feedback takes to travel around the network of connections. The time the feedback takes to go through the system is referred to as its ‘memory’.
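The impact of such a delay can be sketched by extending the balancing example above (gain, target and delay are again illustrative): the same rule that converges smoothly when feedback is immediate overshoots and oscillates when it acts on stale information.

```python
# A sketch of balancing feedback acting on delayed information.
from collections import deque

def balance(delay, target=100.0, gain=0.6, periods=12):
    """Close a fraction of the observed gap each period; observations
    of the state arrive `delay` periods late."""
    x = 0.0
    observations = deque([x] * (delay + 1))   # stale readings of the state
    history = []
    for _ in range(periods):
        observed = observations.popleft()
        x += gain * (target - observed)       # correct what we *see*
        observations.append(x)
        history.append(round(x))
    return history

print("no delay:      ", balance(delay=0))   # smooth approach to 100
print("2-period delay:", balance(delay=2))   # overshoots, then oscillates
```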
Cause and Effect
When considering what causes a particular effect, we tend to generate a list of factors and weight these in terms of importance. This is sometimes referred to as ‘laundry list thinking’.
This approach assumes a linear relationship between cause and effect, with each factor having a fixed relative importance. If only life were that simple. Unfortunately, causes are more often dynamic than static: the relative importance of any one factor may change over time, depending on the feedback loops that exist. It is better to think in terms of influencing factors rather than causes. This is an important point to remember next time you find yourself using an Ishikawa or fishbone diagram as part of a quality improvement initiative.
Systems Thinking highlights three fallacies surrounding how we traditionally view questions of cause and effect:
- Cause and effect are separate, and the cause precedes the effect. Feedback tells us that the effect of a cause can be the cause of an effect. Does the shortage cause the hoarding, or the hoarding cause the shortage? Which one comes first depends on where you start.
- Effect follows cause closely in time and space. This is a common expectation, but in systems there is always a delay, and the effect may appear in a completely different part of the system. This means that when we examine cause-and-effect chains, we need to extend our time horizon and focus. If you see a repeated pattern, look for the cause in the pattern, not in the different circumstances or explanations for each event. Repetition is a clue that the cause lies within the underlying structure of the system.
- The effect is proportional to the cause. This may be true for physical objects, but it is not necessarily the case for systems. Some actions may lead to big responses, or to no response at all. Systems have thresholds; only if a threshold is reached do you get a response.
Open and Closed Systems
Conventional physics deals with closed systems, those that can be considered isolated from their environment. In a closed system, the final state is completely determined by the initial conditions – a thermostat is an example of a closed system.
Social and living systems are open systems – they maintain themselves from moment to moment, by taking in and giving out to the surrounding environment. They change constantly, just to stay the same.
Open systems are extremely sensitive to their starting conditions. This is what makes them so unpredictable. A very small difference in starting conditions can lead to a very different result, under exactly the same stimulus. This is the starting point for chaos theory and the study of highly complex systems. Fortunately, business organisations are complex, but not that complex!
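As an illustration, the logistic map – a standard toy model from chaos theory, not anything specific to organisations – shows how two trajectories that start a billionth apart soon bear no resemblance to one another:

```python
# Sensitivity to starting conditions in the logistic map x -> 4x(1 - x).

def logistic(x, steps, r=4.0):
    """Iterate the logistic map from x for the given number of steps."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a, b = 0.400000000, 0.400000001       # starting points a billionth apart
for steps in (10, 20, 30, 40):
    print(f"after {steps} steps: {logistic(a, steps):.6f} vs {logistic(b, steps):.6f}")
```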
Limiting Complexity
There are two kinds of complexity: inherent and apparent. Inherent complexity is the real thing. The system is so complex, the feedback loops form such a tangle, that their study is a task best left to chaos theorists and their supercomputers. Fortunately, business organisations do not fall into this category.
Apparent complexity looks complicated, but there is order to be found within. Aspects of an organisation, or of a problem, can seem very complex and unfathomable, but as systems thinkers we need to look for patterns in this apparent complexity.
There are two ideas that help to limit complexity in a system. The wider you cast the net, the more complexity there will be. So rule one – establish useful boundaries.
The other idea is that complex systems have a strong tendency to revert to a stable state. Order seems to arise spontaneously in complex systems. These ordered states, or ‘attractors’, happen because of the way feedback is organised within the system.
Business organisations settle into stable states. Organisational change therefore involves first destabilising the system in its present state and then creating another attractor state. This is in effect what happened at Toyota in the 1950s, starting their journey towards becoming the leading car manufacturer they are today, through the creation of the Toyota Production System (TPS).
System Paradoxes
Systems create some strange, seemingly illogical paradoxes. A good illustration of this is traffic congestion. The logical answer to traffic congestion is to add more roads. Unfortunately, very often it is not the road space that is the problem, but the number of junctions, or congestion points. More roads create more junctions. This is known as the ‘Braess paradox’. It has implications in many areas, including the flow of information in organisations – more is not necessarily better.
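The paradox can be reproduced with the standard textbook network (the figures below are the classic teaching example, not drawn from any real road): 4,000 drivers choose between two routes, each made of one link whose journey time grows with use and one fixed 45-minute link. Adding a ‘free’ shortcut leaves every driver worse off at equilibrium.

```python
# The classic Braess paradox network, sketched numerically.
# 4,000 drivers travel from Start to End. Two links take n/100 minutes
# when n drivers use them; two links take a fixed 45 minutes.

DRIVERS = 4000

def congestible(n):
    """Travel time (minutes) of a link carrying n drivers."""
    return n / 100

# Before the shortcut: traffic splits evenly between the two routes
# (Start->A->End and Start->B->End), 2,000 drivers on each.
before = congestible(DRIVERS / 2) + 45            # 20 + 45 = 65 minutes

# After a zero-minute shortcut A->B is added, each driver's best reply
# is Start->A, shortcut, B->End, so both congestible links carry everyone.
after = congestible(DRIVERS) + 0 + congestible(DRIVERS)   # 40 + 40 = 80 minutes

print(f"equilibrium journey time without the shortcut: {before:.0f} min")
print(f"equilibrium journey time with the shortcut:    {after:.0f} min")
```

At the new equilibrium no driver can do better by switching back (either old route would now take 85 minutes), yet everyone’s journey is 15 minutes longer than before the ‘improvement’.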
Adding new roads to a congested network can also create a second problem, exemplified by the M25 London orbital motorway. It was built to attract traffic away from London, providing a route around the city rather than through it. It worked well – too well. As well as attracting traffic from London, it generated more traffic, causing more wear and tear and more frequent roadworks, and setting up a reinforcing feedback loop of ever more congestion.
The M25 is an example of a basic system pattern known as the ‘tragedy of the commons’. When there is an attractive common resource, more and more people tend to use it. The more people who use it, the less attractive it becomes, until it loses its value to everyone. In the case of the M25, a balancing loop has set in: certain sections of the road have become so congested that people avoid them and use other routes or forms of transport. A balance has been reached. It is worth noting that the response has been to add more lanes – a good example of trying to solve a problem with the same thinking that created it.
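A rough sketch of the pattern (all parameters invented): users pile onto a shared resource while it beats the next-best alternative, congestion erodes its value, and a balancing loop eventually settles the system.

```python
# A sketch of the 'tragedy of the commons' with a balancing loop.

users = 0
ALTERNATIVE = 40                      # value of the next-best option

for step in range(10):
    value = 100 - users // 20         # the resource degrades as use grows
    if value > ALTERNATIVE:
        users += 200                  # still attractive: more people arrive
    elif value < ALTERNATIVE:
        users -= 200                  # congested: people switch away
    print(f"step {step}: value={value}, users={users}")
```

The resource’s value falls step by step until it merely matches the alternative, at which point numbers stabilise – the balance the M25 example describes.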
Dennis Sherwood’s “Seeing the Forest for the Trees: A Manager’s Guide to Applying Systems Thinking” is a good introduction to System Dynamics. It also covers system modelling techniques, specifically the use of Causal Loop Diagrams.
First Published: 07/10/2011 Last Updated: 27/06/2012