Systems & Complexity
Section 1
The Core Idea
In 1999, Donella Meadows — the systems dynamicist who had spent three decades modelling the behaviour of complex systems at MIT and Dartmouth — published an essay that distilled the most consequential strategic insight of the twentieth century into a single ranked list. The essay was called "Leverage Points: Places to Intervene in a System," and its argument was deceptively simple: within any complex system — an economy, a corporation, an ecosystem, a technology platform — there exist specific structural locations where a small shift in one thing can produce large changes in everything. These are leverage points. They are not random. They are not equally powerful. And most people, most of the time, intervene at the wrong ones.
Meadows identified twelve leverage points, ranked from least effective to most effective. The hierarchy is counterintuitive, and that is precisely why it matters. The interventions that feel most concrete and actionable — adjusting parameters like tax rates, quotas, subsidies, and budgets — sit at the bottom of the list. They are the interventions that politicians, managers, and consultants reach for instinctively, and they are the interventions that almost never produce lasting systemic change. The interventions that feel abstract and difficult — changing the goals of the system, shifting the paradigm from which the system arises, cultivating the power to transcend paradigms entirely — sit at the top. They are the interventions that transform civilisations, industries, and organisations, and they are the interventions that almost no one attempts, because they require operating at a level of abstraction that most decision-makers find uncomfortable.
The twelve points, from shallowest to deepest: (12) constants, parameters, and numbers — the knobs that everyone wants to turn; (11) the sizes of buffers and stabilising stocks; (10) the structure of material stocks and flows; (9) the lengths of delays relative to the rate of system change; (8) the strength of negative feedback loops relative to the impacts they are trying to correct; (7) the gain around positive feedback loops; (6) the structure of information flows — who has access to what information and when; (5) the rules of the system — incentives, punishments, and constraints; (4) the power to add, change, evolve, or self-organise system structure; (3) the goals of the system; (2) the mindset or paradigm out of which the system's goals, rules, and structures arise; and (1) the power to transcend paradigms.
The hierarchy maps an insight that experienced systems thinkers recognise immediately but that linear thinkers find baffling: the deeper you intervene in a system, the less effort is required and the greater the effect — but the deeper interventions are harder to see, harder to articulate, and harder to implement because they operate on the invisible architecture that generates visible behaviour. Adjusting a tax rate is visible, concrete, and politically legible. Changing the informational structure that determines how economic actors perceive their options is invisible, abstract, and requires understanding the system at a level that most participants never reach. Yet the informational intervention reshapes every subsequent decision in the system, while the tax adjustment reshapes only the narrow transaction it directly touches.
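The asymmetry between parameter-level and goal-level intervention can be sketched in a toy stock-and-flow simulation. This is an illustrative model of my own construction, not one from Meadows' essay: a single stock adjusts toward a goal through a negative feedback loop. Doubling the adjustment rate (a level-12 parameter tweak) changes how fast the system settles but not where it settles; changing the goal (a level-3 intervention) changes the equilibrium itself.

```python
def simulate(goal, gain, steps=200, stock=0.0):
    """Minimal negative-feedback loop: each step, the flow closes a
    fraction `gain` of the gap between the stock and the goal.
    (Illustrative sketch only; names are hypothetical.)"""
    for _ in range(steps):
        stock += gain * (goal - stock)  # corrective flow toward the goal
    return stock

# Level-12 intervention: turn a knob (the gain). The system still
# converges to the same equilibrium, just more quickly.
baseline    = simulate(goal=100, gain=0.1)
param_tweak = simulate(goal=100, gain=0.2)

# Level-3 intervention: change the goal. The equilibrium itself moves.
goal_change = simulate(goal=50, gain=0.1)

print(round(baseline), round(param_tweak), round(goal_change))
# → 100 100 50
```

The parameter tweak is absorbed — the stock ends up at 100 either way — while the goal change redirects every subsequent flow in the system, which is the pattern the hierarchy predicts.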
The hierarchy is not merely academic. It explains a pattern that recurs across every domain where humans attempt to change complex systems. Governments adjust tax rates (level 12) and wonder why economic behaviour barely shifts. Corporations restructure departments (level 10) and discover that the same dysfunctions reappear in the new structure. Schools revise curricula (level 12) without changing the incentive architecture that determines what teachers and students actually optimise for (level 5). Health systems add beds and staff (level 11) without redesigning the information flows that determine how patients move through the system (level 6). In every case, the intervention targets the tangible, visible layer of the system — the parameters — while the intangible, invisible layer — the goals, rules, information architecture, and paradigm — continues to generate the behaviour that the intervention was supposed to change. The system absorbs the shallow intervention and regenerates the original pattern, leaving the interveners frustrated and convinced that the problem is intractable when in fact it is merely mislocated.
This model is distinct from Leverage (Physics), which describes how amplification mechanisms convert small inputs into large outputs. Leverage (Systems) asks a different question: not "how do I amplify my effort?" but "where in this system should I intervene to produce the largest change?" The physics model is about the magnitude of force applied through a mechanism. The systems model is about the location of intervention within a structure. A founder who understands both knows not only how to multiply the impact of their actions but where to direct those actions for maximum systemic effect. The combination is what separates builders who optimise within existing structures from builders who reshape the structures themselves.