Military & Conflict
Section 1
The Core Idea
In the summer of 2002, the United States Joint Forces Command conducted Millennium Challenge 2002 — the most expensive war game in American military history, designed to validate the Pentagon's new doctrine of network-centric warfare before the anticipated invasion of Iraq. The Blue Team, representing the United States, deployed the full suite of advanced intelligence, surveillance, and precision-strike capabilities that would define twenty-first-century American warfighting. The Red Team, representing an unnamed Middle Eastern adversary, was commanded by retired Marine Corps Lieutenant General Paul Van Riper. Within the first two days, Van Riper sank the Blue Team's carrier battle group — sixteen ships, including an aircraft carrier — using a combination of motorcycle couriers instead of radio communication (defeating signals intelligence), swarm attacks by small boats packed with explosives (overwhelming missile defense systems), and cruise missiles launched from shore positions hidden in civilian infrastructure. The Blue Team suffered casualties that, in a real conflict, would have exceeded 20,000 personnel.
The exercise was halted. The simulation was reset. Van Riper's tactics were restricted. The Blue Team was allowed to refloat its ships. The war game continued to its predetermined conclusion: a Blue Team victory that validated the Pentagon's doctrine. Van Riper resigned in protest, later telling reporters that the exercise had been rigged to confirm the answer the Pentagon wanted rather than reveal the answer it needed.
The story of Millennium Challenge 2002 is the definitive case study in why Red Teaming exists — and what happens when organizations refuse to accept its findings. Red Teaming is the deliberate practice of assigning a dedicated group to attack your own plans, strategies, and assumptions from the adversary's perspective. The Red Team's job is not to offer constructive feedback, suggest improvements, or propose alternatives. It is to break your plan. To find the failure modes you missed, the vulnerabilities you assumed away, the scenarios you didn't model because they contradicted your thesis. The Red Team succeeds when your plan fails — in the simulation rather than in reality.
The practice originated in the Catholic Church, not the military. During the canonization process, the Vatican appointed an Advocatus Diaboli — the Devil's Advocate — whose formal responsibility was to argue against sainthood by presenting every possible objection, every piece of negative evidence, every reason the candidate was unworthy. The role was not adversarial in spirit; it was adversarial in method. The Church understood that a process designed to confirm sainthood would confirm sainthood whether the candidate deserved it or not. Only a process that actively tried to disprove the conclusion could be trusted to validate it.
The military adopted the concept because warfare punishes confirmation bias with body bags. The Israel Defense Forces institutionalized Red Teaming after the 1973 Yom Kippur War, when a surprise Egyptian-Syrian attack nearly destroyed the country because Israeli intelligence had dismissed indicators of imminent attack that contradicted the prevailing assessment — called "the Concept" — that Egypt would not go to war without first achieving air superiority. The intelligence was available. The analysts had it. The institutional framework filtered it out because it contradicted the conclusion the system had already reached. After 1973, the IDF created the Ipcha Mistabra unit (a Talmudic phrase meaning "on the contrary") — a dedicated team whose sole mission was to construct the strongest possible case against the prevailing intelligence assessment. The unit existed not because Israeli intelligence was incompetent but because competent analysts operating within a consensus framework will systematically suppress disconfirming evidence unless the institution forces them not to.
The principle translates directly to business, investing, and any domain where decisions are made under uncertainty by groups of humans subject to cognitive bias. Every strategic plan, every investment thesis, every product roadmap is a hypothesis about how the future will unfold. The natural organizational process — the one that operates by default in every company, every fund, every government agency — is to gather evidence that supports the hypothesis, dismiss evidence that contradicts it, and move forward with increasing confidence toward a conclusion the organization reached before the analysis began. Red Teaming is the institutional mechanism that interrupts this process. It assigns someone the explicit job of attacking the plan, not because the plan is necessarily wrong, but because only a plan that survives a determined attack can be trusted.
The deepest insight of Red Teaming is not tactical but epistemological: you cannot validate your own thinking by thinking more about it in the same way. The biases that produced your plan are the same biases that evaluate your plan. Confirmation bias doesn't announce itself. Narrative construction feels like analysis. Groupthink feels like consensus. The only reliable corrective is an external perspective that is structurally incentivized to disagree — not a colleague who raises a concern in a meeting and backs down when the CEO pushes back, but a dedicated function with the organizational authority to present the worst case without career consequences. The Red Team is not an opinion. It is a process. It is the institutionalization of doubt in organizations that naturally select for certainty.