Business & Strategy
Section 1
The Core Idea
On February 12, 2002, Donald Rumsfeld stood at a Department of Defense press briefing and delivered a sentence that was mocked, parodied, and ultimately recognised as one of the most precise articulations of epistemological risk ever spoken in public:
"There are known knowns — things we know we know. There are known unknowns — things we know we don't know. But there are also unknown unknowns — things we don't know we don't know."
The press laughed. Rumsfeld won a "Foot in Mouth" award from the Plain English Campaign. The concept he described, however, predates him by decades. NASA engineers used the known/unknown matrix during the Apollo programme. The intelligence community had operated with the framework since at least the
Cold War. Psychologists had studied metacognitive blindness — the inability to recognise what you don't know — since the 1970s. Rumsfeld didn't invent the idea. He gave it a formulation so clean that it became permanent vocabulary.
The framework creates a 2x2 matrix that maps everything an organisation knows or could know. Known knowns are the facts in your spreadsheet — your revenue, your customer count, your burn rate. Known unknowns are the questions you've identified but can't yet answer — will the competitor launch in Q3? Will the regulation pass? Will the key hire accept the offer? Both categories are manageable. Known knowns require monitoring. Known unknowns require research, scenario planning, and hedging. Standard strategic planning handles both.
The third category breaks the system. Unknown unknowns are the questions you haven't thought to ask — the risks that don't appear on any risk register, the competitive threats that don't show up in any market map, the failure modes that no pre-mortem has surfaced. They are invisible not because they are hidden but because your mental model has no category for them.
COVID-19 was an unknown unknown for most businesses in January 2020. Not "a pandemic" in the abstract — epidemiologists had modelled pandemic scenarios for decades, making pandemics a known unknown for anyone paying attention. The unknown unknown was the specific cascading consequence: that a respiratory virus originating in Wuhan would shut down global commerce for months, shift consumer behaviour permanently toward digital, and destroy businesses optimised for physical presence while creating trillion-dollar opportunities for those positioned for remote everything. No strategic plan contained that scenario. No risk committee had stress-tested it. The businesses that survived weren't the ones that predicted it. They were the ones whose structures could absorb shocks they had never imagined.
The iPhone was an unknown unknown for Nokia in 2006. Nokia held 49.4% of global smartphone market share. Its strategic documents — revealed later in parliamentary testimony — contained no scenario for a touchscreen-only device with no physical keyboard and a software ecosystem that would make the hardware a commodity. Nokia's known unknowns included competitive pricing, carrier relationships, and component costs. The unknown unknown was a category of product that rendered Nokia's entire competitive framework irrelevant.
Nassim Taleb's Black Swan theory is the mathematical formalisation of unknown unknowns with outsized impact. A Black Swan is an unknown unknown that, upon arrival, produces consequences disproportionate to anything the prevailing model could accommodate. The two frameworks overlap but aren't identical. Unknown unknowns describe the epistemological category — things outside your awareness. Black Swans describe a specific subset — the unknown unknowns that carry extreme impact. Every Black Swan is an unknown unknown. Not every unknown unknown is a Black Swan. Your competitor quietly launching a feature you hadn't considered is an unknown unknown. It probably isn't a Black Swan.
There is also a fourth quadrant that Rumsfeld didn't mention but that the Johari Window captures: unknown knowns — things you know but don't realise you know, or things the organisation knows but refuses to acknowledge. These are the institutional blind spots, the tacit assumptions, the risks that engineering has flagged but management has buried. The Challenger disaster in 1986 was an unknown known: Morton Thiokol's engineers knew the O-rings failed in cold temperatures and had documented the risk. Management overruled the engineers. The knowledge existed inside the organisation. The decision-making process excluded it. Unknown knowns are often more dangerous than unknown unknowns because they're fixable — the information exists, but the organisation's structure prevents it from reaching the people who need it.
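The full matrix — Rumsfeld's three quadrants plus the Johari fourth — can be sketched as a simple lookup keyed on two questions: are you aware of the question, and is the answer accessible to the people deciding? This is a hypothetical illustration of the framework, not anything drawn from Rumsfeld or the Johari Window literature; the function name and axes are assumptions for the sketch.

```python
# Hypothetical sketch of the awareness/knowledge 2x2 matrix.
# Axis 1: whether the organisation is aware of the question.
# Axis 2: whether the answer is accessible to its decision-makers.

def classify(aware_of_question: bool, answer_accessible: bool) -> str:
    """Map a risk onto one of the four quadrants described above."""
    if aware_of_question and answer_accessible:
        return "known known"      # monitor: revenue, burn rate
    if aware_of_question:
        return "known unknown"    # research, scenario-plan, hedge
    if answer_accessible:
        return "unknown known"    # fix the information flow (Challenger)
    return "unknown unknown"     # build structural defences (COVID, iPhone)

print(classify(True, False))   # a known unknown: identified but unanswered
print(classify(False, True))   # an unknown known: buried inside the firm
```

The useful point the sketch makes is that only the first two quadrants respond to analysis; the bottom row responds only to changes in structure — information flow for unknown knowns, resilience for unknown unknowns.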
The defence against unknown unknowns cannot be prediction — you cannot predict what you cannot imagine. The defence is structural. Build optionality so that when the unimaginable arrives, you have room to pivot. Build margin of safety so that the shock doesn't kill you before you understand it. Build antifragility so that certain categories of surprise actually strengthen your position. The founders who survive unknown unknowns share a common trait: they designed their systems to function in worlds they couldn't describe.