Systems & Complexity
Section 1
The Core Idea
In 1973, the Canadian ecologist C.S. Holling published a paper that redefined how scientists think about stability. The prevailing view — engineering resilience — treated stability as a system's speed of return to equilibrium after a disturbance. A bridge that deflects under load and snaps back is resilient. A spring compressed and released is resilient. The measure is recovery time: how fast does the system return to its prior state? Holling argued that this definition was catastrophically incomplete for ecological systems — and, by extension, for any complex adaptive system including organisations, economies, and careers. He introduced ecological resilience: the magnitude of disturbance a system can absorb before it shifts into a qualitatively different regime. A lake can absorb increasing nutrient loads while remaining clear — until a threshold is crossed and it flips to a turbid, algae-dominated state from which recovery is extraordinarily difficult. The lake did not fail to bounce back quickly. It shifted to an entirely different basin of attraction. Engineering resilience measures speed of return. Ecological resilience measures how much the system can take before it transforms into something fundamentally different.
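The lake example can be made concrete with a toy dynamical model. The sketch below uses a Scheffer-style shallow-lake equation, dP/dt = a − bP + rP²/(P² + h²), where P is the phosphorus level and a is the nutrient loading; the parameter values are illustrative assumptions chosen so the system has two basins of attraction, not a calibrated model.

```python
# Toy shallow-lake model (Scheffer-style): dP/dt = a - b*P + r*P^2/(P^2 + h^2).
# Parameter values (b=1, r=2, h=1) are illustrative assumptions chosen so the
# system has two basins of attraction: a clear state (low P) and a turbid one.

def dP(P, a, b=1.0, r=2.0, h=1.0):
    # a: external nutrient loading; -b*P: outflow and sedimentation;
    # the last term: internal recycling of nutrients from the sediment.
    return a - b * P + r * P**2 / (P**2 + h**2)

def equilibrium(P0, a, dt=0.01, steps=40000):
    """Integrate forward (simple Euler) until the state settles."""
    P = P0
    for _ in range(steps):
        P += dt * dP(P, a)
    return P

# Raise nutrient loading step by step: the lake absorbs it and stays clear...
P = 0.05
for a in (0.05, 0.10, 0.15):
    P = equilibrium(P, a)
    print(f"loading {a:.2f} -> phosphorus {P:.2f}")

# ...until a threshold is crossed and it flips to the turbid state. Cutting
# loading back to a level that used to keep the lake clear does not restore it:
P = equilibrium(P, 0.10)
print(f"loading 0.10 after the flip -> phosphorus {P:.2f}")
```

The same loading of 0.10 supports two different equilibria depending on the lake's history: that hysteresis is what makes the turbid state so hard to escape, and it is the signature of ecological rather than engineering resilience.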
The distinction matters because it changes what you optimise for. If you optimise for engineering resilience — fast recovery — you build rigid systems that snap back efficiently from small perturbations but shatter under large ones. A just-in-time supply chain recovers from a one-day shipping delay in hours. It collapses under a three-month pandemic shutdown because its entire architecture assumed perturbations would be small enough to snap back from. If you optimise for ecological resilience — absorption capacity — you build flexible systems with redundancy, diversity, and modularity that can absorb massive disturbances without crossing the threshold into catastrophic regime change. The cost is efficiency during normal operations. The payoff is survival during the events that determine whether normal operations continue to exist.
A third dimension — adaptive resilience — emerged from the work of Brian Walker, Lance Gunderson, and the Resilience Alliance in the early 2000s. Adaptive resilience describes a system's capacity not merely to absorb disturbance and return to its prior state, nor merely to absorb disturbance without regime change, but to reorganise in response to disturbance while retaining its essential identity, function, and feedback structures. The adaptive-resilience lens recognises that in a changing environment, returning to the prior state may itself be maladaptive. A company that bounces back to its pre-crisis strategy in a post-crisis market has recovered in the engineering sense but may have lost fitness in the ecological sense. Adaptive resilience holds that the highest-performing systems absorb the shock, extract information from it, and reorganise into a configuration better suited to the post-shock environment — preserving core identity while updating structure.
The concept applies with equal force to organisations, supply chains, and personal leadership. An organisation with deep resilience maintains sufficient cash reserves, diversified revenue streams, and cultural adaptability to absorb market shocks without losing its core capability or strategic identity. A supply chain with deep resilience has multiple suppliers, geographic diversification, and buffer inventory that prevent a single-point disruption from cascading into systemic failure. A leader with deep resilience has the psychological capacity to absorb setbacks — failed products, lost key hires, market downturns — without losing the judgment, relationships, and strategic clarity that constitute their essential leadership function. In each case, the resilient system is not the one that avoids disturbance. It is the one whose architecture ensures that disturbance does not destroy the capacity to function, adapt, and continue.
The deepest insight is that resilience is not a property that can be added to a finished system. It is an architectural choice made at the design stage — a choice that frequently conflicts with the metrics that organisations optimise for during calm periods. Efficiency, speed, cost minimisation, and lean operations all reduce resilience by eliminating the buffers, redundancies, and slack that absorb disturbance. Every dollar removed from cash reserves improves return on equity during a bull market and reduces the organisation's capacity to survive a bear market. Every redundant supplier eliminated from a supply chain reduces procurement costs and increases the probability that a single supplier failure cascades into production stoppage. Resilience is expensive when nothing goes wrong. It is the only thing that matters when something does. The asymmetry between the visible cost of resilience in calm periods and its invisible value in crisis periods is the fundamental reason that organisations systematically under-invest in it — and the fundamental reason that the organisations which do invest in it disproportionately survive.
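The supplier example lends itself to a back-of-envelope calculation. The figures below (per-supplier failure probability, number of suppliers) are invented for illustration; the point is the shape of the trade-off, not the specific numbers.

```python
# Back-of-envelope sketch: if any one qualified supplier can cover demand,
# production stops only when every supplier fails at once. Assumes independent
# failures with probability q each -- an idealisation, since correlated
# failures are exactly what pandemics and regional disasters produce.

def stoppage_probability(q: float, suppliers: int) -> float:
    """Probability that all suppliers fail simultaneously."""
    return q ** suppliers

q = 0.05  # assumed chance a given supplier fails in a given year
for n in (1, 2, 3):
    print(f"{n} supplier(s): annual stoppage risk {stoppage_probability(q, n):.4%}")
```

Under these assumed numbers, each added supplier buys a twenty-fold reduction in stoppage risk, paid for by a visible, recurring procurement cost, which is exactly the asymmetry the paragraph describes.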
The same logic applies to the individual leader. A leader's resilience is not their ability to avoid setbacks — no career of consequence avoids them — but their capacity to absorb them without losing the essential leadership function. The resilient leader has built psychological, financial, and social buffers that prevent any single professional disturbance from overwhelming their capacity to think clearly and act decisively. The leader without these buffers is a single point of failure in their own organisation.
The practical challenge is that resilience is invisible until it is tested. You cannot observe a system's resilience by watching it operate under normal conditions. A fragile system and a resilient system look identical during calm periods — and the fragile system often looks superior because it has not paid the cost of the buffers that the resilient system maintains. The only reliable test of resilience is the disturbance itself, and by the time the disturbance arrives, the architecture is fixed. This temporal asymmetry — the cost is paid before the test, and the benefit is realised only during the test — creates a chronic under-investment bias that rational analysis alone cannot overcome. The leaders who build resilient systems do so not because the expected-value calculation is obvious but because they have internalised, usually through painful experience, that the expected-value calculation is wrong: it systematically underweights the probability and consequence of the tail events that resilience is designed to survive.
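The under-investment bias can be made concrete with a toy Monte Carlo. In the sketch below, a firm holds some fraction of its capital as an idle buffer: the buffer drags on growth in every normal year, but a rare shock kills any firm whose buffer cannot cover it. All figures (growth rate, shock probability, shock size) are invented assumptions, chosen only to exhibit the asymmetry.

```python
# Toy Monte Carlo of the buffer asymmetry: a visible drag in every calm year
# versus survival in the rare crisis year. All parameters are illustrative.
import random

def simulate(buffer_frac, years=30, growth=0.12,
             shock_prob=0.08, shock_size=0.30, rng=random):
    """Return final capital for one simulated history; 0.0 means the firm failed."""
    capital = 1.0
    for _ in range(years):
        # Only the non-buffer fraction of capital earns the growth rate.
        capital *= 1 + growth * (1 - buffer_frac)
        if rng.random() < shock_prob:
            # The shock demands immediate liquidity worth shock_size of capital.
            if buffer_frac < shock_size:
                return 0.0             # illiquid: the firm does not survive
            capital *= 1 - shock_size  # buffer absorbs it; capital is dented
    return capital

rng = random.Random(42)
trials = 5000
for label, buf in (("lean (5% buffer)", 0.05), ("buffered (35% buffer)", 0.35)):
    outcomes = [simulate(buf, rng=rng) for _ in range(trials)]
    survival = sum(o > 0 for o in outcomes) / trials
    print(f"{label}: survival {survival:.1%}, mean final capital "
          f"{sum(outcomes) / trials:.2f}")
```

Under these assumed numbers the lean firm shows the higher final capital in the histories where it happens to survive, which is precisely what a calm-period comparison would measure; across all histories the buffered firm dominates on both survival and expected value.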