Economics & Markets
Section 1
The Core Idea
Every strategic decision you make depends on what someone else decides to do — and their decision depends on what they think you'll decide. This recursive loop, where outcomes are determined not by individual choices but by the interaction of multiple rational agents, is the territory game theory maps. It's the mathematics of interdependence, and it governs everything from nuclear deterrence to airline pricing to whether you should bluff in a poker hand.
John von Neumann and Oskar Morgenstern formalised the field in 1944 with Theory of Games and Economic Behavior, a 641-page treatise that reframed economics from the study of isolated agents maximising utility to the study of agents whose payoffs depend on one another's strategies. Von Neumann — a polymath who also contributed to quantum mechanics, computer architecture, and the Manhattan Project — had published the foundational minimax theorem in 1928, proving that in any two-person zero-sum game, there exists an optimal strategy for each player that minimises their maximum possible loss. The theorem was elegant but narrow: it applied only to situations where one player's gain was exactly another's loss. Real economic life was messier than that.
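Von Neumann's theorem guarantees a solution in mixed strategies, but the maximin logic is easiest to see when a game has a saddle point in pure strategies. A minimal sketch, using a made-up 2x2 payoff matrix (payoffs from the row player's perspective):

```python
# Pure-strategy maximin/minimax in a two-person zero-sum game.
# The matrix is hypothetical, chosen so that a saddle point exists.
payoffs = [
    [3, 1],  # row strategy 0
    [2, 0],  # row strategy 1
]

# Row player: for each strategy, assume the worst column response,
# then pick the strategy whose worst case is best (maximin).
maximin = max(min(row) for row in payoffs)

# Column player: for each column, assume the worst (largest) row payoff,
# then pick the column whose worst case is smallest (minimax).
cols = list(zip(*payoffs))
minimax = min(max(col) for col in cols)

print(maximin, minimax)  # → 1 1: the values coincide at the saddle point
```

When the two values differ, no pure-strategy saddle point exists, and the theorem's guarantee only holds once players are allowed to randomise over their strategies.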
Morgenstern, an Austrian economist who had fled the Anschluss for Princeton, provided the economic intuition. His insight was that classical economics had a blind spot: it assumed each agent could optimise independently, as if their competitors' behaviour were a fixed feature of the landscape rather than a strategic response to their own moves. Adam Smith's invisible hand, Alfred Marshall's supply and demand curves, the entire neoclassical apparatus — all of it treated other market participants as environmental constants, like the weather. Morgenstern saw this as a fundamental error. A firm setting its price doesn't face a fixed demand curve. It faces a demand curve that shifts based on what competitors charge, which in turn depends on what they expect the first firm to charge.
The two men's collaboration produced the first rigorous framework for analysing situations where your best move depends on someone else's best response to your best move — an infinite regress that the mathematics was designed to tame.
The field's most celebrated result came a decade later, from a 21-year-old mathematics doctoral student at Princeton named John Nash. In 1950, Nash proved that every finite game with any number of players has at least one equilibrium — possibly in mixed strategies — a set of strategies where no player can improve their outcome by unilaterally changing their own strategy, given what everyone else is doing. The Nash Equilibrium didn't require zero-sum conditions. It applied to cooperative and competitive settings alike. It was general enough to model arms races, price wars, auction design, evolutionary biology, and traffic congestion.
The proof was 27 pages long. Its implications reshaped half a dozen disciplines. Nash received the Nobel Prize in Economics in 1994, forty-four years after the proof, by which point game theory had colonised virtually every social science — from political science (voting systems, coalition formation) to evolutionary biology (John Maynard Smith's concept of the Evolutionarily Stable Strategy, published in 1973, was a direct application of Nash Equilibrium to natural selection).
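The equilibrium condition — no player gains by deviating unilaterally — can be checked directly by enumeration in small games. A sketch using the Prisoner's Dilemma with illustrative payoffs (strategy 0 = cooperate, 1 = defect):

```python
from itertools import product

# Pure-strategy Nash equilibria by best-response check.
# Each entry maps a strategy pair to (row payoff, col payoff); the
# numbers are the standard illustrative Prisoner's Dilemma values.
payoffs = {
    (0, 0): (3, 3),  # both cooperate
    (0, 1): (0, 5),  # row cooperates, column defects
    (1, 0): (5, 0),
    (1, 1): (1, 1),  # both defect
}

def is_nash(r, c):
    # A profile is an equilibrium if no unilateral deviation improves
    # the deviating player's own payoff.
    row_ok = all(payoffs[(r2, c)][0] <= payoffs[(r, c)][0] for r2 in (0, 1))
    col_ok = all(payoffs[(r, c2)][1] <= payoffs[(r, c)][1] for c2 in (0, 1))
    return row_ok and col_ok

equilibria = [(r, c) for r, c in product((0, 1), repeat=2) if is_nash(r, c)]
print(equilibria)  # → [(1, 1)]: mutual defection is the unique pure equilibrium
```

Note that mutual cooperation, though better for both players, is not an equilibrium: each player can improve their own payoff by defecting unilaterally — which is what makes the one-shot dilemma a dilemma.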
The power of the framework lies in its core reorientation: stop asking "what is my best move?" and start asking "what is my best move given that my opponent is also trying to find their best move?" That shift — from optimisation to strategic interaction — is what separates game theory from decision theory. Decision theory works when you're playing against nature. Game theory works when you're playing against other minds. And in business, markets, negotiation, and politics, you are always playing against other minds.
The taxonomy matters for practical use. In zero-sum games, one player's gain is exactly another's loss — poker, military engagements, market share battles in fixed-size markets. In positive-sum games, cooperation can expand the total payoff — trade agreements, technology standards, platform ecosystems. In repeated games, the same players interact multiple times, and reputation, trust, and punishment become strategic variables that don't exist in one-shot encounters. Robert Axelrod's famous 1980s computer tournaments showed that in repeated Prisoner's Dilemma games, the simplest cooperative strategy — Tit-for-Tat — outperformed every sophisticated exploitative approach. The lesson for business: if you're in a game you'll play repeatedly, cooperation often dominates exploitation because the long-term cost of destroyed trust exceeds the short-term gain from defection.
The distinction between simultaneous games (where players choose without knowing the other's move, like sealed-bid auctions) and sequential games (where players move in turn and observe previous moves, like chess or market entry decisions) determines which analytical tools apply. Simultaneous games require mixed strategies and probability; sequential games require backward induction — reasoning from the final move backward to determine the optimal first move.
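Backward induction can be sketched on a tiny market-entry game. The payoffs below are hypothetical: an entrant decides whether to enter, and the incumbent then decides whether to fight a price war or accommodate:

```python
# Backward induction on a two-move sequential game (hypothetical payoffs).
# Leaves are (entrant_payoff, incumbent_payoff); the incumbent moves last.
tree = {
    "stay_out": (0, 10),
    "enter": {
        "fight": (-1, -1),
        "accommodate": (2, 5),
    },
}

# Step 1: solve the last mover's choice. The incumbent picks the action
# that maximises its own payoff at the node it would actually face.
incumbent_choice = max(tree["enter"], key=lambda a: tree["enter"][a][1])

# Step 2: the entrant anticipates that response and compares payoffs.
enter_payoff = tree["enter"][incumbent_choice][0]
entrant_choice = "enter" if enter_payoff > tree["stay_out"][0] else "stay_out"

print(entrant_choice, incumbent_choice)  # → enter accommodate
```

The point of the exercise: the incumbent's threat to fight is not credible, because fighting is worse for the incumbent once entry has already happened — and the entrant, reasoning backward, enters. This is the structure behind "working backward from the endgame" in the next paragraph's Bezos example.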
Nearly all business strategy unfolds as a sequential game, which is why "working backward from the endgame" is one of the most valuable habits a strategist can develop. When Bezos considered entering the cloud computing market in 2005, the critical question wasn't "is this a good business?" — it was "if we enter, how will IBM, Microsoft, and Google respond, and does our strategy still win given those responses?" The answer depended on backward induction from the eventual competitive equilibrium, not on a static market analysis.
What makes the field genuinely useful for founders, investors, and strategists isn't the mathematics itself — most practical applications don't require solving systems of equations. It's the mental discipline of modelling the other side's incentives, constraints, and likely responses before committing to a course of action. The founder who launches a price war without considering the competitor's cost structure and willingness to absorb losses is making a decision-theoretic move in a game-theoretic world. The investor who buys a stock without considering what the seller knows is bringing a calculator to a chess match.