Psychology & Behavior
Section 1
The Core Idea
Herbert Simon introduced bounded rationality in 1955 and won the Nobel Prize in Economics for it in 1978. The core claim dismantled a century of economic orthodoxy: humans do not optimise. They cannot. They lack the time, the information, and the cognitive capacity to evaluate every option, compute every outcome, and select the mathematically best answer. So they do something else entirely — they satisfice. Simon fused "satisfy" and "suffice" into a single word that describes how decisions actually get made. A person scans options until they find one that clears a threshold of acceptability, and then they stop searching. Not the best option. A good enough option. The search ends not because the optimal solution was found but because the cost of continued searching exceeds the expected benefit of a marginally better answer.
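Simon's stopping rule is simple enough to state as code. This is an illustrative sketch, not anything Simon wrote: scan options in the order encountered and return the first one that clears an acceptability threshold, even if better options remain unexamined.

```python
def satisfice(options, acceptable):
    """Return the first option that clears the threshold, then stop searching."""
    for option in options:
        if acceptable(option):
            return option  # good enough: stop, even if a better option exists later
    return None  # nothing cleared the threshold

# Hypothetical example: take the first apartment under budget with >= 2 rooms.
apartments = [
    {"rent": 1500, "rooms": 1},
    {"rent": 1200, "rooms": 2},  # first acceptable option -- search ends here
    {"rent": 900,  "rooms": 3},  # cheaper and bigger, but never examined
]
choice = satisfice(apartments, lambda a: a["rent"] <= 1300 and a["rooms"] >= 2)
print(choice)  # {'rent': 1200, 'rooms': 2}
```

Note that the third apartment dominates the chosen one on every dimension; the satisficer never sees it, and by Simon's argument that is the rational outcome when further search costs more than the improvement is worth.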
Classical economics assumed a rational agent with perfect information, unlimited processing power, and infinite time. Simon looked at actual human behaviour and saw something different: a decision-maker operating under severe constraints. The information available is incomplete. The time available is finite. The brain's processing capacity tops out at roughly seven items in working memory. Under these constraints, optimising is not just difficult — it is mathematically impossible for any real-world decision of meaningful complexity. The number of possible chess games exceeds the number of atoms in the observable universe. The number of possible career paths, investment strategies, or product configurations is functionally infinite. No human — no computer — can evaluate them all. Everyone satisfices. The question is whether they do it consciously or pretend they are optimising while satisficing anyway.
Amazon's "disagree and commit" principle is institutionalised satisficing at the organisational level. Bezos articulated it in his 2016 letter to shareholders: when a decision is reversible, speed matters more than precision. A leader who disagrees with a proposed direction can say so, register the disagreement, and then commit fully to execution — rather than blocking the decision until consensus is reached or perfect information is available. The mechanism works because most decisions are what Bezos calls "Type 2" — reversible, recoverable, two-way doors. For Type 2 decisions, the cost of delay exceeds the cost of being wrong, because being wrong is correctable and delay is not. Disagree and commit operationalises Simon's insight: the optimal decision does not exist in the time available, so find a good enough decision and move.
Barry Schwartz's The Paradox of Choice (2004) exposed the dark side of bounded rationality in consumer environments. More options do not help bounded decision-makers. They paralyse them. Schwartz documented that supermarkets carrying 30,000 SKUs produced more decision fatigue and less purchase satisfaction than stores carrying 5,000. The jam study by Sheena Iyengar and Mark Lepper (2000) became the field's signature experiment: shoppers confronted with 24 jam varieties purchased at one-tenth the rate of shoppers offered 6 — roughly 3% of the large-assortment group bought jam, versus 30% of the small-assortment group. The bounded mind cannot process 24 options. It freezes, defers, or chooses randomly — all of which produce worse outcomes than a constrained choice set that the mind can actually evaluate. The paradox is that expanding options makes satisficing harder, not easier, because the threshold of "good enough" rises with the number of alternatives the person knows exist but cannot evaluate.
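That last claim — a "good enough" threshold that rises with the size of the known choice set while evaluation capacity stays fixed — can be made concrete with a toy simulation. Everything here is an assumption for illustration: option qualities are drawn uniformly, capacity is capped at Simon's seven items, and the linear threshold function is invented, not taken from Iyengar and Lepper's data.

```python
import random

def purchase_rate(n_options, trials=10_000, capacity=7, seed=42):
    """Toy choice-overload model: the agent examines at most `capacity`
    options, but the acceptability threshold rises with how many options
    the agent knows exist. Returns the fraction of trials with a purchase."""
    rng = random.Random(seed)
    threshold = 0.5 + 0.015 * n_options  # assumed functional form
    purchases = 0
    for _ in range(trials):
        examined = [rng.random() for _ in range(min(capacity, n_options))]
        if max(examined) >= threshold:  # something cleared the bar: buy it
            purchases += 1
    return purchases / trials

# A small assortment yields more purchases than a large one,
# even though the large assortment contains strictly more good options.
print(purchase_rate(6), purchase_rate(24))
```

The qualitative result — fewer purchases from the larger assortment — falls directly out of the two bounds interacting: more known alternatives raise the bar, while fixed capacity means the extra alternatives are never actually inspected.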
Simon's deepest insight — the one most people miss — is that rationality is bounded by the environment, not just the individual. A chess grandmaster and a novice have the same cognitive architecture. The grandmaster makes better decisions not because their brain is fundamentally different but because their environment — years of pattern recognition, memorised positions, trained intuitions — has restructured the decision landscape. The bounds on rationality are not fixed properties of the human brain. They are interactions between the brain and the structure of the problem.
Change the structure, and you change what "rational" looks like. This is why decision environments matter as much as decision-makers: the same person makes better decisions in a well-structured environment and worse decisions in a poorly structured one. The environment is not backdrop. It is architecture.