General Thinking & Meta-Models
Section 1
The Core Idea
Every decision you make is based on a model of reality, not on reality itself. Your financial projections, your customer personas, your market maps, your org charts, your mental image of what the competitor is doing — all of these are representations. Simplifications. Compressions of an infinitely complex world into something your brain or your spreadsheet can process. The model is useful. The model is also, by definition, wrong in ways you cannot fully specify in advance.
Alfred Korzybski, a Polish-American philosopher and engineer, formalised this in 1931 with six words: "the map is not the territory." He was working on general semantics — the study of how language and symbols shape thought — and his central argument was this: humans do not interact with reality directly. They interact with their abstractions of reality. And those abstractions always leave things out.
Consider a road map. It shows you highways, intersections, distances. It does not show you the pothole on the exit ramp, the construction crew that closed a lane Tuesday, the deer that will cross Route 9 at dusk. The map is accurate enough to get you from Philadelphia to Boston. It is not accurate enough to guarantee you'll arrive on time, undamaged, or by the optimal route. And critically, the map cannot tell you what it has omitted. You discover the omissions by driving.
This asymmetry — between what the model includes and what reality contains — is where the most consequential errors in business, investing, science, and statecraft originate. Not from using bad maps. From forgetting that you're using a map at all.
The 2008 financial crisis is the canonical example at scale. The credit rating agencies — Moody's, S&P, Fitch — used quantitative models to assess the risk of mortgage-backed securities. Those models assumed that housing prices across different geographic regions were largely uncorrelated: a downturn in Las Vegas wouldn't coincide with a downturn in Miami. The model was built on historical data that showed exactly this pattern. For decades, regional housing markets had moved independently. The model was not invented in bad faith. It reflected the available territory as of, say, 2004.
But the territory had changed. Subprime lending, securitisation, and nationwide speculation had created correlations that didn't exist in the historical data. When housing prices fell, they fell everywhere simultaneously — the one scenario the models said was vanishingly unlikely. The models rated tranches of mortgage-backed securities AAA. The territory rated them junk. The distance between those two ratings destroyed trillions of dollars in global wealth.
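The correlation assumption can be made concrete with a toy simulation. This is a sketch, not the rating agencies' actual model: the 10% downturn probability, the shared-factor structure, and the rho values are illustrative assumptions, chosen only to show how much the joint-loss estimate moves when the independence assumption breaks.

```python
import random

random.seed(0)

def joint_downturn_prob(rho, trials=100_000):
    """Estimate P(both regions fall) when each region falls with
    probability ~0.1 under a simple shared-factor model.

    rho controls how much of each region's shock comes from a shared
    national factor: 0 means fully independent regional markets,
    values near 1 mean a single nationwide housing market.
    All numbers are illustrative, not calibrated to real data.
    """
    threshold = -1.2816  # ~10th percentile of a standard normal
    hits = 0
    for _ in range(trials):
        national = random.gauss(0, 1)          # shared nationwide shock
        noise = (1 - rho**2) ** 0.5            # weight on local shocks
        region_a = rho * national + noise * random.gauss(0, 1)
        region_b = rho * national + noise * random.gauss(0, 1)
        if region_a < threshold and region_b < threshold:
            hits += 1
    return hits / trials

# Independent regions: joint downturn ≈ 0.1 * 0.1 = 0.01
print("rho = 0.0:", joint_downturn_prob(0.0))
# Heavily shared factor: the same event becomes several times more likely
print("rho = 0.9:", joint_downturn_prob(0.9))
```

The map built on pre-2004 data was, in effect, a rho-near-zero model. The territory had quietly moved to a high-rho regime, and nothing inside the model could reveal that.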
The people who profited — Michael Burry, Steve Eisman, the Scion Capital and FrontPoint Partners teams portrayed in The Big Short — didn't have a better model. They had a better relationship with the concept of modelling itself. They read the actual mortgage tapes. They visited housing developments in Florida and Arizona. They asked what the models assumed and then checked whether those assumptions still held. The answer was no. Everyone else was navigating by a map that hadn't been updated to reflect a territory that had fundamentally shifted.
Korzybski's insight isn't that maps are useless. Maps are essential — you cannot think without abstraction, any more than you can navigate a city without some spatial representation. The insight is that the relationship between map and territory requires constant, active maintenance. The map degrades the moment you stop checking it against the ground. And the more successful the map has been historically, the more dangerous it becomes — because success breeds confidence, and confidence stops you from looking out the window to verify that the road still goes where the map says it does.
The pattern isn't confined to finance. In October 2002, the United States intelligence community presented a map — a National Intelligence Estimate — concluding with "high confidence" that Iraq possessed weapons of mass destruction. The map was built from satellite imagery, signals intercepts, and the testimony of a single source codenamed "Curveball." The territory — the actual physical ground in Iraq — contained no such weapons. The map looked authoritative because it was internally consistent: each data point reinforced the others within the model's framework. But the framework itself was built on unverified assumptions about source reliability and imagery interpretation. Over 4,000 American service members and hundreds of thousands of Iraqi civilians died in a war launched, in significant part, because policymakers treated a confidence-weighted intelligence estimate as though it described a reality they could see.
Gregory Bateson, the anthropologist and systems theorist, extended the idea: "The map is not the territory, and the name is not the thing named." Every layer of abstraction — language, metrics, dashboards, KPIs, financial statements — introduces a gap between the symbol and the referent. The gap is sometimes trivial. It is occasionally fatal. And you cannot know which in advance.
The operational question isn't whether your map is wrong — it is. The question is whether the ways in which it's wrong are tolerable for the decision you're making. A map that omits one-way streets is fine for estimating drive time and dangerous for navigation. The same model can be adequate for one purpose and lethal for another. Korzybski's discipline is to always ask: adequate for what?