Use this when a decision branches into a sequence of choices and uncertain outcomes, each with different probabilities and payoffs. The decision tree maps every path forward, assigns values and likelihoods, then calculates backward to reveal which first move maximises expected value — turning a tangle of contingencies into arithmetic.
Section 1
What This Tool Does
Most consequential decisions aren't single choices. They're chains. You decide whether to launch a product, and that decision leads to a market response — strong or weak — which triggers another decision about whether to scale or pivot, which leads to another set of uncertain outcomes. The human brain handles this poorly. Not because people can't think sequentially, but because they can't hold the full tree of possibilities in working memory while simultaneously weighting each branch by its probability and value. By the third branching point, intuition has quietly dropped half the paths and overweighted the most vivid remaining scenario. You end up optimising for the future you can most easily imagine, not the future with the highest expected value.
The decision tree was formalised in the operations research tradition of the 1950s and 1960s, with Howard Raiffa and Robert Schlaifer at Harvard Business School providing the foundational framework in their 1961 work on applied statistical decision theory. The tool emerged from a simple observation: military and industrial planners were making sequential decisions under uncertainty using gut feel and scenario narratives, and they were systematically getting it wrong. Not because they lacked intelligence, but because narrative reasoning — "if we do X, then probably Y will happen, and then we'll do Z" — collapses the probability distribution into a single storyline. The decision tree forces you to keep all the storylines alive simultaneously.
The mechanism is straightforward. You draw the decision as a tree, branching left to right. Square nodes represent choices you control. Circular nodes represent uncertain outcomes you don't control. Each branch from a chance node gets a probability (probabilities across sibling branches must sum to 1.0). Each terminal branch — the end of a path — gets a payoff value. Then you work backward from the endpoints, calculating expected values at each chance node (probability × payoff, summed across branches) and selecting the highest-value option at each decision node. The result is a single recommended action at the root of the tree, backed by the full arithmetic of every downstream possibility.
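The rollback arithmetic can be sketched in a few lines of code. This is a minimal illustration, not a library: the node encoding (plain dictionaries) and the launch/hold scenario with its probabilities and payoffs are all hypothetical numbers chosen for the example.

```python
def rollback(node):
    """Return (expected value, chosen option) for a node, working backward."""
    kind = node["kind"]
    if kind == "terminal":
        # End of a path: its payoff is its value.
        return node["payoff"], None
    if kind == "chance":
        # Probabilities across sibling branches must sum to 1.0.
        assert abs(sum(p for p, _ in node["branches"]) - 1.0) < 1e-9
        # Expected value: probability × downstream value, summed across branches.
        ev = sum(p * rollback(child)[0] for p, child in node["branches"])
        return ev, None
    # Decision node: take the option with the highest expected value.
    name, child = max(node["options"], key=lambda opt: rollback(opt[1])[0])
    return rollback(child)[0], name

# Hypothetical one-stage tree: launch (60% strong market worth 500,
# 40% weak market worth -200) versus hold (worth 0).
tree = {
    "kind": "decision",
    "options": [
        ("launch", {"kind": "chance", "branches": [
            (0.6, {"kind": "terminal", "payoff": 500}),
            (0.4, {"kind": "terminal", "payoff": -200}),
        ]}),
        ("hold", {"kind": "terminal", "payoff": 0}),
    ],
}

ev, choice = rollback(tree)
print(choice, ev)  # launch, EV = 0.6 × 500 + 0.4 × (-200) = 220
```

With these illustrative numbers, launching has an expected value of 220 against 0 for holding, so the root decision resolves to "launch" — the recommendation falls out of the arithmetic rather than the narrative.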
The core cognitive shift: the decision tree forces you to price uncertainty rather than ignore it. Instead of asking "what will happen if we do this?" — a question that invites a single narrative answer — it asks "what are all the things that could happen, how likely is each, and what is each worth?" That reframing is the entire intervention. It converts a story into a calculation, and calculations don't suffer from availability bias or anchoring.
The tool's elegance lies in its backward induction logic. You don't decide what to do first and then figure out the consequences. You start at the consequences — every possible endpoint — and let the math pull you backward to the optimal first move. This reversal is unnatural for human cognition, which is precisely why it works. Your brain wants to reason forward from the present. The decision tree reasons backward from every possible future.