Psychology & Behavior
Section 1
The Core Idea
Show me the incentive and I will show you the outcome. Incentive-caused bias is the tendency for human cognition to warp — unconsciously, automatically, predictably — in the direction of whatever reward or punishment structure a person operates within. It is not corruption. It is not dishonesty. It is something more dangerous than both: sincere belief, shaped by self-interest, that the person holding it genuinely cannot distinguish from objective analysis.
Charlie Munger ranked it among the most powerful of the twenty-four causes of human misjudgment he catalogued. "Never, ever, think about something else when you should be thinking about the power of incentives," he told a Harvard audience in 1995. He wasn't being rhetorical. He was issuing a warning that most people hear, nod at, and then proceed to violate within the hour — because the bias operates beneath the threshold of awareness. The mortgage broker who originates a $500,000 loan for a borrower who cannot afford it does not think "I am destroying this family's financial future for my commission." The broker thinks "this borrower's situation is more manageable than it appears, and rates will probably come down." The incentive doesn't corrupt the reasoning. It becomes the reasoning.
The experimental evidence is robust. A 2016 study in JAMA Internal Medicine found that physicians who received meals from pharmaceutical companies — not cash payments, not luxury trips, simply meals averaging under $20 in value — were significantly more likely to prescribe the promoted brand-name drug over generics. The physicians studied would have vigorously denied that a $20 lunch influenced their clinical judgment. The prescribing data said otherwise. The gap between what people believe about their own objectivity and what incentive structures actually produce is one of the most documented findings in behavioral science.
Munger's favourite illustration was FedEx. In the early years, the company's central sorting hub in Memphis couldn't get packages transferred between planes fast enough. Teams worked shifts, and the packages moved slowly. Management tried every operational fix they could design — new procedures, better training, supervisory oversight. Nothing worked. Then someone changed the pay structure from hourly wages to per-shift completion: finish the sort and you go home with full pay. The problem vanished overnight. The same workers, the same packages, the same facility. The only variable that changed was the incentive. Munger used the story for decades because it demonstrated the point with zero ambiguity: the workers weren't lazy before. They were rational. The system was paying them to work slowly, and they obliged — not through conscious calculation, but through the invisible cognitive adjustment that incentive structures produce in every human being.
Upton Sinclair captured the mechanism in a single sentence in 1935: "It is difficult to get a man to understand something when his salary depends upon his not understanding it." The sentence survives ninety years later because it describes something everyone recognises in others but almost nobody detects in themselves. The surgeon who recommends surgery. The consultant who recommends more consulting. The auditor who approves the books of the client who pays their fees. None of these people are necessarily dishonest. Many are deeply principled. The incentive doesn't require dishonesty to distort judgment. It only requires a human brain.
The bias scales from individuals to institutions to entire economies. The 2008 financial crisis was not primarily a failure of financial engineering — it was a failure of incentive architecture. Mortgage originators earned fees per loan originated, while retaining none of the credit risk. Rating agencies earned fees from the banks whose securities they rated. Traders earned bonuses on annual mark-to-market gains that evaporated over longer time horizons. At every node in the system, the incentive pointed toward volume, complexity, and short-term profit. At no node did the incentive point toward the question that would have prevented the crisis — and the estimated $22 trillion in economic damage that followed: "What happens if housing prices fall?"
The Wells Fargo scandal of 2016 provided the domestic sequel. Retail branch employees, pressed by cross-selling quotas that determined their take-home pay and job security, opened some 3.5 million unauthorized customer accounts over a period of fourteen years. The executives who designed the quota system weren't trying to incentivize fraud. They were trying to incentivize "deep customer relationships." But the metric they chose — products per household, with a target of eight — was so disconnected from genuine customer need that the shortest path to the reward was fabrication. The incentive architecture produced the behavior it was designed to measure, while destroying the value it was designed to create.