Most people attack problems head-on. How do I build a great company? How do I make this investment work? How do I live a good life? Inversion says: stop. Turn the question around. Ask instead what would guarantee failure — then systematically avoid those things.
The German mathematician Carl Gustav Jacob Jacobi formalised the instinct in the 19th century with a maxim that became a principle: "man muss immer umkehren" — one must always invert. Jacobi wasn't giving life advice. He was describing how he solved intractable mathematical proofs by working backward from the desired result. But the technique reaches far beyond elliptic functions. It may be the single most transferable cognitive tool in the history of problem-solving.
Charlie Munger made Jacobi's principle the centrepiece of his decision-making framework, repeating "Invert, always invert" across decades of Berkshire Hathaway shareholder meetings. Munger's key insight: it is far easier to avoid stupidity than to achieve brilliance. "It is remarkable how much long-term advantage people like us have gotten by trying to be consistently not stupid, instead of trying to be very intelligent," he told a gathering at the Harvard-Westlake School. The statement sounds modest. It's actually a radical claim about the architecture of success.
The asymmetry is structural. There are a near-infinite number of ways to succeed, and most of them are unpredictable. But the causes of failure are well-documented, finite, and avoidable. A startup can fail for a thousand idiosyncratic reasons, but the big categories — running out of cash, co-founder conflict, building something nobody wants, ignoring unit economics — show up with statistical regularity in post-mortem after post-mortem. Y Combinator's data across 4,000+ funded startups confirms this: the failure modes cluster. Inversion says work the cluster.
The subtlety that most people miss: inversion is not pessimism. It's not "assume everything will go wrong." It's a cognitive technique for widening the aperture of analysis. When you think only forward — "what should I do?" — you anchor on your current assumptions and optimism bias fills the gaps. When you also think backward — "what would make this fail catastrophically?" — you surface risks and blind spots that forward thinking reliably misses.
Daniel Kahneman's research on the planning fallacy demonstrates the problem: people systematically underestimate completion times, costs, and risks when thinking forward, but produce far more accurate estimates when asked to explain how a project could fail. Inversion exploits that asymmetry. It's not a different way of feeling about the future. It's a different way of computing it — one that produces better-calibrated outputs because it forces the brain to search a different part of its information landscape.
Warren Buffett's "Rule No. 1: Never lose money. Rule No. 2: Never forget Rule No. 1" is inversion compressed to its essence. It sounds like a platitude until you watch Berkshire's actual capital allocation process. Every investment decision begins not with "how much can we make?" but with "how could we lose money here, and under what circumstances?" The approach looks passive — boring, even — during bull markets. It looks like genius during crashes.
The 2008 financial crisis offered a brutal demonstration. Banks that modelled forward — "how do we maximise returns on mortgage-backed securities?" — levered up 30-to-1 and collapsed. Lehman Brothers, Bear Stearns, Washington Mutual — each had sophisticated risk management functions. Each modelled the upside with precision and treated the downside as a statistical improbability. Buffett, who had been asking the inverted question — "what happens if housing prices decline 30%?" — was sitting on $44 billion in cash, ready to deploy capital when others were liquidating at distress prices. He invested $5 billion in Goldman Sachs on terms that eventually netted Berkshire over $3 billion in profit. He bought Burlington Northern Santa Fe railroad for $34 billion when infrastructure assets were available at generational discounts. The edge wasn't superior intelligence. It was a superior question.
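The fragility of that leverage is simple arithmetic. A back-of-envelope sketch (figures illustrative, not any bank's actual balance sheet):

```python
# Back-of-envelope: at 30:1 leverage, how small an asset decline
# wipes out equity entirely? Illustrative numbers only.

def decline_to_insolvency(leverage: float) -> float:
    """Fractional fall in asset value that erases all equity.

    With leverage L = assets / equity, equity is 1/L of assets,
    so a decline of 1/L in asset value leaves equity at zero.
    """
    return 1.0 / leverage

for lev in (2, 10, 30):
    pct = decline_to_insolvency(lev) * 100
    print(f"{lev:>2}:1 leverage -> insolvent after a {pct:.1f}% asset decline")
```

At 30:1, a 3.3% decline in asset values erases all equity; the inverted question Buffett was asking assumed declines ten times that size.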
Jeff Bezos's 1994 decision to leave D.E. Shaw and start Amazon was explicitly framed through inversion. He called it a "regret minimisation framework" — projecting himself to age 80 and asking which choice he'd regret more. Not "will this succeed?" but "what would I kick myself for not doing?" The inversion reframed a risky career move into a psychologically clear decision. At 80, he'd regret not trying. He wouldn't regret failing.
The framework didn't predict Amazon's success. It couldn't — nobody in 1994 could reliably forecast the trajectory of e-commerce. What it did was clarify the decision by revealing that the real risk wasn't failure — it was inaction. That distinction matters. Forward analysis asks "what's the expected value of this bet?" and often returns an indeterminate answer when uncertainty is high. Inverted analysis asks "which outcome would be permanently painful?" and the answer is frequently much clearer. Bezos's framework is a template for any high-stakes decision under genuine uncertainty: when you can't calculate the upside, calculate the regret.
The principle keeps surfacing independently across disciplines that rarely speak to each other. Stoic philosophers practised premeditatio malorum — the premeditation of evils — as early as the 1st century AD, systematically visualising worst-case scenarios not to induce fear but to build preparedness. Seneca wrote to Lucilius around 65 AD: "We should project our thoughts ahead of us at every turn and have in mind every possible eventuality instead of only the usual course of events." Marcus Aurelius embedded the practice into his daily routine, opening each morning by anticipating the worst that could happen — ingratitude, betrayal, obstruction — so that none of it would derail his judgment when it arrived.
In engineering, the entire discipline of reliability analysis works backward from failure. In medicine, differential diagnosis works by elimination — clinicians identify the conditions that would be fatal if missed, then rule those out first. In chess, grandmasters routinely calculate from the desired endgame position backward to the current board state, a technique called retrograde analysis. The technique is independently rediscovered every century because it addresses a universal cognitive limitation: the human brain's default mode is to seek confirmation of what it already believes, and inversion is the most direct antidote.
Section 2
How to See It
Inversion is everywhere once you know what to look for. The best operators use it instinctively — they ask the backwards question before anyone else in the room thinks to ask it.
Business
You're seeing Inversion when Amazon's leadership team begins every new product discussion with a "working backwards" press release — writing the hypothetical announcement of the finished product before a line of code exists. The press release forces the team to articulate what success looks like from the customer's perspective and, critically, to identify every assumption that could make the launch fail. When AWS was conceived in 2003, the internal press release exposed that the service would only work if developers trusted Amazon with their infrastructure — a non-obvious dependency that shaped the entire product and pricing strategy. Inversion disguised as a writing exercise.
Investing
You're seeing Inversion when Howard Marks at Oaktree Capital writes in his memos that "the road to long-term investment success runs through risk control more than through aggressiveness." Marks doesn't start with "what's the upside?" He starts with "what can go wrong, what's the downside, and can I survive it?" His entire fund structure — focused on distressed debt — is built on the inverted question: where are others failing, and how do we avoid failing the same way?
Engineering
You're seeing Inversion when NASA engineers conduct "failure mode and effects analysis" (FMEA) for every component of a spacecraft. They don't ask "how does this system work?" They ask "in how many ways can this system fail, and what happens when it does?" The Apollo 13 crew survived in 1970 precisely because engineers had pre-computed responses to hundreds of failure scenarios, including the one that actually occurred — an oxygen tank explosion 200,000 miles from Earth. The entire safety architecture was built through systematic inversion: cataloguing failure modes before designing solutions.
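The FMEA logic can be sketched in a few lines. The scoring below uses the industry-standard risk priority number (severity × occurrence × detection, each rated 1–10); the components and ratings are invented for illustration, not drawn from NASA's actual catalogue:

```python
# A toy failure-mode-and-effects analysis (FMEA) table using the
# standard RPN scheme. Components and scores are invented examples.

from dataclasses import dataclass

@dataclass
class FailureMode:
    component: str
    failure: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (near-certain)
    detection: int   # 1 (always caught) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        """Risk priority number: higher means fix it first."""
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("valve", "stuck closed", severity=9, occurrence=3, detection=4),
    FailureMode("seal", "loses elasticity when cold", severity=10, occurrence=4, detection=7),
    FailureMode("sensor", "intermittent dropout", severity=4, occurrence=6, detection=2),
]

# Work the list from the top: inversion as an explicit queue.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN {m.rpn:>3}  {m.component}: {m.failure}")
```

The ranking, not any single score, is the point: the catalogue of failure modes exists before the design is finalised, so mitigations are built in rather than bolted on.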
Personal life
You're seeing Inversion when you stop asking "how do I get promoted?" and start asking "what behaviours would guarantee I never get promoted?" The inverted list — showing up unprepared, failing to build relationships with decision-makers, avoiding visible projects, never advocating for yourself — is often more actionable than any forward-looking career advice. You can't control whether you get promoted. You can control whether you do the things that make promotion impossible.
Section 3
How to Use It
The practical application of inversion follows a consistent pattern: define the goal, then immediately flip it. Don't start with "what do I need to do?" Start with "what would guarantee the worst outcome?" Then work backwards from that failure state to identify the specific actions, decisions, and conditions you must avoid.
The technique applies across every domain, but the implementation varies by context. Here are three role-specific applications:
Decision filter
Before pursuing any goal, ask: what would guarantee failure here? List the top three to five failure modes, then design your approach to eliminate or mitigate each one. The residual — what's left after you've removed the obstacles — is your strategy.
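A minimal sketch of that filter, with a hypothetical goal and failure modes:

```python
# Decision filter sketch: list the top failure modes, attach a
# mitigation to each, and flag anything still exposed.
# The failure modes and mitigations are hypothetical placeholders.

failure_modes = {
    "run out of cash before product-market fit": "raise 18 months of runway; review burn monthly",
    "build something nobody wants": "pre-sell to 10 design partners before writing code",
    "co-founder conflict paralyses decisions": None,  # no mitigation yet
}

unmitigated = [mode for mode, fix in failure_modes.items() if fix is None]
if unmitigated:
    print("Exposed failure modes:", unmitigated)
else:
    print("All listed failure modes are mitigated; the residual is the strategy.")
```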
As a founder
Before building anything, write the post-mortem. Literally. Draft the memo explaining why your company failed. What killed it? Cash burn without product-market fit? A co-founder dispute that paralysed decision-making? Dependency on a single customer that churned? A regulatory change that made the business model illegal? The exercise forces you to confront risks that optimism bias actively suppresses.
Brian Chesky did a version of this at Airbnb in 2008: he listed every reason the concept was "insane" — strangers sleeping in your house, liability nightmares, regulatory risk, no supply in a chicken-and-egg marketplace — and then addressed each objection before launching. The objections didn't disappear. But identifying them early meant they could be designed around rather than discovered in crisis. The trust and verification systems that became Airbnb's competitive moat were built specifically to address the failure modes that inversion had surfaced.
As an investor
For every investment thesis, construct the bull case — then invert it. Write the detailed bear case. What has to go wrong for this to lose 50% or more? Is the downside scenario plausible or remote? What are you assuming about management quality, competitive dynamics, or macroeconomic conditions that might not hold?
Charlie Munger applies this at Berkshire by asking "what's the anti-case?" for every stock pitch. If you can't articulate a compelling bear case, you don't understand the investment well enough to hold a position. The best short sellers — Jim Chanos famously with Enron in 2001, when he identified the gap between reported earnings and actual cash flow — are essentially professional inverters. They don't predict success. They identify the specific, falsifiable claims embedded in the bull case and test whether those claims hold up under scrutiny.
As a decision-maker
When facing a complex decision, run the "kill this option" exercise. For each alternative, assign someone to argue the strongest possible case for why it will fail. This is structurally different from devil's advocacy — it's not about being contrarian for its own sake, it's about systematically identifying the specific mechanism by which each option collapses.
The U.S. intelligence community adopted this as "red team analysis" after the 9/11 Commission found that failures of imagination — the inability to invert — were more dangerous than failures of intelligence gathering.
The CIA now maintains a dedicated Red Cell unit whose sole job is to argue the inverted case: not "what does our intelligence suggest?" but "what if our core assumptions are wrong, and what would the world look like if they were?" The lesson extends to any organisation: the person arguing the failure case isn't the pessimist in the room. They're the most important person in the room.
Common misapplication: Inversion becomes toxic when it degenerates into pure risk aversion. The goal is not to avoid all failure — that's paralysis. The goal is to avoid preventable, catastrophic failure while still taking asymmetric bets.
A founder who inverts to the point of never launching has misunderstood the model. Inversion is a pre-launch diagnostic, not a permanent defensive crouch.
The tell: if inversion makes you more confident about what not to do, it's working. If it makes you too afraid to do anything at all, loss aversion has hijacked the process. Munger himself says it clearly: inversion is a complement to forward thinking, not a replacement for it. You need both lenses. The forward lens shows you where to aim. The inverted lens shows you what will kill you on the way there. Neither alone is sufficient. Used together, they produce decisions that are both ambitious and survivable.
Section 4
The Mechanism
Section 5
Founders & Leaders in Action
Inversion isn't a philosopher's toy. It's the operating system behind some of the most consequential decisions in business, investing, and statecraft — the invisible question that separates operators who survive from those who are remembered only in case studies about failure.
The pattern is consistent across eras and domains: the leader who inverts doesn't have better information or superior intelligence. They ask a different question — and the question, not the answer, is where the advantage lies. Five cases, spanning the 1860s to the 1990s, illustrate how inversion operates in practice when the stakes are highest.
Charlie Munger
Vice Chairman, Berkshire Hathaway, 1978–2023
Munger didn't just preach inversion — he built an entire investment philosophy around it. His approach to evaluating businesses begins not with upside potential but with a systematic search for ways to lose money permanently. In the late 1990s, when Berkshire considered investing in various technology companies, Munger's standard question wasn't "how big can this get?" but "what could make this business worth zero in ten years?" For most dot-com-era companies, the answer was uncomfortably easy to construct: no pricing power, no switching costs, commoditised technology, burn rates exceeding any plausible path to profitability.
The deeper Munger contribution is applying inversion to personal decision-making. His famous quip — "All I want to know is where I'm going to die, so I'll never go there" — sounds like a joke, but it's an operational principle. At the 2003 Berkshire shareholder meeting, he described how he and Buffett avoid the standard causes of business disaster: they don't use excessive leverage, they don't depend on the kindness of strangers for refinancing, they maintain enormous cash reserves, and they never let ego drive capital allocation.
None of these are strategies for maximising returns. They're strategies for avoiding ruin. The returns followed because survival compounds. Munger reportedly kept a "list of standard causes of human misjudgment" — twenty-five psychological biases that reliably produce bad outcomes. The list itself is an inversion: rather than cataloguing the traits of genius, it catalogues the traits of stupidity. Avoid all twenty-five, and you don't need to be brilliant. You just need to be less foolish than the competition.
Jeff Bezos
Founder, Amazon, 1994
In the spring of 1994, Bezos was a 30-year-old senior vice president at D.E. Shaw, one of the most sophisticated quantitative hedge funds on Wall Street. He was earning a substantial salary with a clear path to partner — the kind of position most finance professionals spend a career trying to reach. He'd identified an opportunity to sell books online after noticing that web usage was growing at 2,300% annually, but leaving to pursue it meant walking away from guaranteed compensation and career security for a concept that had no precedent.
Forward thinking offered no clarity. The internet was nascent, online retail was unproven, and reasonable people disagreed about whether consumers would ever trust a website with their credit card numbers.
Bezos resolved the decision through explicit inversion. He projected himself to age 80 and asked which choice he'd regret more — trying and failing, or never trying at all. The "regret minimisation framework," as he later called it, inverted the standard risk calculus. Instead of "what's the probability of success?" (unknowable), the question became "what's the cost of inaction?" (quantifiable: a lifetime of wondering). The inverted framing made the answer obvious. He wouldn't regret failing at an internet bookstore. He would absolutely regret not attempting it when the window was open.
The deeper lesson: Bezos didn't use inversion to predict Amazon's success. He used it to convert an impossible expected-value calculation into a clear asymmetry. Forward analysis asks "what's the probability-weighted upside?" — and when the domain is genuinely novel, that question has no good answer. Inverted analysis asks "which version of the future would be permanently painful?" — and that question almost always has a clear answer. The technique is especially powerful for career and entrepreneurial decisions, where the psychological cost of regret often exceeds the financial cost of failure.
Andy Grove
President and later CEO, Intel, 1985
In 1985, Intel's memory chip business was haemorrhaging money (the company would post a $173 million loss the following year), battered by Japanese competitors who were selling equivalent chips below Intel's manufacturing cost. The company had been founded as a memory company — the 1103 DRAM chip was Intel's first major commercial success in 1970. Its identity, culture, and engineering talent were all oriented around memory. Forward thinking — "how do we win the memory war?" — produced only costly and increasingly desperate strategies: price cuts, manufacturing investments, partnerships. None worked.
Grove broke the deadlock with a famous act of inversion. He asked Gordon Moore, Intel's co-founder and then-CEO: "If we got kicked out and the board brought in a new CEO, what would he do?" Moore's answer was immediate: "He'd get out of memories." Grove replied: "Why shouldn't you and I walk out the door, come back in, and do it ourselves?"
The question inverted the frame from "how do we save our identity?" to "what would a rational outsider, unburdened by our history, see as obviously stupid?" The answer was immediate and uncomfortable: staying in memories was the guaranteed path to failure.
Intel exited the memory business, laid off over 7,000 employees, closed manufacturing facilities, and redirected all resources to microprocessors. The transition was wrenching — Grove later described it as a "valley of death" that nearly destroyed the company's culture. But within a decade, Intel became the most valuable semiconductor company in the world. By 1992, the 486 processor had made Intel synonymous with personal computing. Revenue grew from roughly $1.3 billion in 1986 to more than $29 billion by 1999. Grove's inversion wasn't about finding the right strategy — it was about removing the emotional attachment that made the wrong strategy feel inevitable. Sometimes the hardest part of inversion is accepting what the answer tells you.
Abraham Lincoln
President of the United States, 1861–1865
By late 1862, the Union war effort was stalling. Generals who thought forward — "how do we capture Richmond?" — kept producing frontal assaults that resulted in catastrophic casualties without decisive gains. At Antietam in September, the bloodiest single day in American history, the two armies had suffered more than 23,000 combined casualties, and the Army of the Potomac had still failed to destroy Lee's army. Lincoln, who had no formal military training but possessed an unusual strategic instinct, began reframing the problem through inversion. His key insight, expressed in correspondence with his generals, was to stop asking "how do we win?" and start asking "what is the one thing the Confederacy cannot survive?"
The inverted answer was attrition. The Confederacy had a smaller population (9 million versus 22 million in the Union), limited industrial capacity, little domestic arms manufacturing, and an agricultural economy that couldn't sustain prolonged conflict. The South had brilliant tactical generals. It did not have the structural capacity to fight a long war.
Lincoln didn't need a brilliant strategy to win. He needed to avoid the specific failure mode that could lose — a political collapse of Northern will driven by high-profile defeats and war-weariness. The 1864 election was the fulcrum: if Lincoln lost to McClellan, the war would likely end in negotiated peace. Everything depended on avoiding that single failure mode.
His selection of Ulysses S. Grant as general-in-chief in March 1864 reflected this inverted logic. Grant's approach — relentless pressure across all fronts simultaneously — was not elegant. It was expensive in human terms. But it was designed to exploit exactly the vulnerability that inversion had identified: the Confederacy couldn't replace its losses. Lincoln inverted the problem from "capture the enemy's capital" to "prevent the enemy from surviving a long war," and the strategy that emerged was structurally unbeatable.
Richard Feynman
Physicist, Rogers Commission, 1986
When the Space Shuttle Challenger disintegrated 73 seconds after launch on January 28, 1986, President Reagan convened the Rogers Commission to investigate. Most commissioners approached the inquiry forward: reviewing launch procedures, interviewing officials, examining the flight data sequence step by step. The process was methodical, comprehensive, and slow — exactly the kind of thorough analysis that institutional investigations are designed to produce.
Feynman inverted the question entirely. Rather than asking "what went right and where did the process break down?", he asked "what specific physical failure could destroy a shuttle, and was there a known vulnerability that was ignored?"
The inverted question led him directly to the O-ring seals in the solid rocket boosters. NASA engineers had documented concerns about O-ring resilience in cold temperatures as early as 1977. Morton Thiokol engineers had explicitly recommended against launching below 53°F the night before. The temperature at launch was 36°F. Feynman's famous televised demonstration — dropping a section of O-ring rubber into a glass of ice water and showing it lost elasticity — was the outcome of inverted reasoning. He didn't need to understand the full complexity of shuttle systems. He needed to identify the single point of failure whose physics were unambiguous.
The inversion cut through institutional complexity, political pressure, and organisational defensiveness to isolate the mechanical truth: rubber gets brittle when it's cold, and NASA launched anyway. Forward analysis would have taken months of committee work. Inverted analysis took a glass of ice water and a C-clamp. Feynman's appendix to the Rogers Commission report contained a line that captures the lesson: "For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled." Inversion is the cognitive tool that forces you to consult Nature before consulting your optimism.
Section 6
Visual Explanation
The core mechanic of inversion can be visualised as a two-lane process. Forward thinking and inverted thinking operate on the same problem from opposite directions. Neither alone is sufficient — forward thinking without inversion is naive, and inversion without forward thinking is paralytic. Together, they produce a strategy that is both ambitious and survivable.
Inversion — The two-direction thinking process: forward reasoning identifies what to do, inverted reasoning identifies what to avoid. The overlap produces robust strategy.
Section 7
Connected Models
Inversion doesn't operate in isolation. It connects to a constellation of models that together form the architecture of rigorous thinking. Some reinforce its logic, some pull against it in productive ways, and some represent the natural next step once inversion has done its work.
Reinforces
Pre-Mortem
A pre-mortem is inversion applied to project planning. Gary Klein's technique — imagine the project has already failed, now explain why — is structurally identical to Jacobi's principle. The pre-mortem gives inversion a specific format and social permission: it makes backwards thinking a team exercise rather than an individual habit, legitimising the surfacing of risks that optimism bias would otherwise suppress.
Research cited by Klein found that prospective hindsight — treating a future event as if it had already happened — increases the ability to correctly identify reasons for outcomes by 30%. The mechanism: treating failure as a certainty rather than a possibility changes how the brain searches for explanations. Organisations that run pre-mortems are practising inversion whether they use the term or not.
Reinforces
[Via Negativa](/mental-models/via-negativa)
Nassim Taleb's concept of Via Negativa — improvement through subtraction rather than addition — is inversion's philosophical cousin. Both models argue that knowing what to remove is more valuable than knowing what to add. A doctor who avoids prescribing harmful treatments (Via Negativa) and an investor who avoids catastrophic losses (Inversion) are running the same cognitive algorithm.
The compounding effect of avoiding big mistakes systematically outperforms the additive effect of seeking big wins. Taleb's own example: the most reliable way to improve your diet isn't adding superfoods — it's removing processed junk. The inversion logic is the same: subtract the known bad before adding the speculative good.
Tension
[Loss Aversion](/mental-models/loss-aversion)
Loss Aversion explains why inversion feels psychologically natural — humans already overweight potential losses relative to equivalent gains. Kahneman and Tversky's research suggests losses are felt roughly 2x as intensely as equivalent gains, which means the brain is already primed to think about what can go wrong. The tension: inversion harnesses loss aversion productively (focus on avoiding failure), but unchecked loss aversion distorts inversion into pure risk avoidance. A founder who inverts to identify avoidable failures is applying the model correctly. A founder who inverts to justify never launching because something might go wrong has let loss aversion hijack the technique. The line between productive caution and destructive fear is thinner than most people realise.
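The "roughly 2x" figure corresponds to the loss-aversion coefficient in Kahneman and Tversky's value function. A sketch using the median parameter estimates from their 1992 paper (alpha = 0.88, lambda = 2.25):

```python
# Prospect-theory value function (Kahneman & Tversky, 1992 median
# parameter estimates: alpha = 0.88, lambda = 2.25).

def subjective_value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Felt value of a gain or loss of size x, relative to a reference point."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha  # losses are amplified by lambda

gain = subjective_value(100)   # winning $100
loss = subjective_value(-100)  # losing $100
print(f"gain felt as {gain:.1f}, loss felt as {loss:.1f}")
print(f"loss/gain intensity ratio: {abs(loss) / gain:.2f}")
```

For equal-sized gains and losses the intensity ratio is exactly lambda, which is the asymmetry the paragraph describes.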
Section 8
One Key Quote
"All I want to know is where I'm going to die, so I'll never go there."
— Charlie Munger, Berkshire Hathaway Shareholder Meeting
Section 9
Analyst's Take
Faster Than Normal — Editorial View
Inversion is one of those rare mental models that is genuinely underrated despite being widely cited. Everyone knows the Munger quote. Almost nobody applies the technique systematically.
Here's what I see most often: founders and executives nod along when they hear "invert, always invert," then walk into their next strategy session and think exclusively forward. They build pro forma models with optimistic assumptions, construct plans that depend on everything going right, and treat risk analysis as a compliance exercise rather than a creative one. I've sat in board meetings where the risk register was literally a checklist that nobody read. The problem isn't that people don't know about inversion. It's that forward thinking is emotionally satisfying and inversion is psychologically uncomfortable.
Asking "what would make this fail?" forces you to confront scenarios you'd rather not think about. Co-founder disagreements. Market timing being wrong by two years. The core technology not working at scale. Revenue projections being fiction.
Most people don't want to sit in that discomfort, so they do inversion superficially — a quick "what could go wrong?" brainstorm that produces obvious answers and changes nothing. The real power of inversion requires dwelling in the failure scenario long enough to feel its plausibility. Not as a catastrophising exercise, but as an analytical one. Can you construct a coherent, specific, step-by-step story about how this fails? If you can — if the failure narrative is more convincing than the success narrative — that story contains information you desperately need. The quality of the inversion is directly proportional to the specificity of the failure scenario. "Things might not work out" is useless. "We run out of cash in Q3 because our sales cycle is 9 months, not 3, and our burn rate assumes the shorter cycle" — that's actionable.
The founders who use inversion most effectively don't treat it as a one-time exercise. They build it into recurring processes. Amazon's "working backwards" methodology. The OKR discipline Andy Grove built at Intel, which forces every objective to be paired with measurable results that reveal when it is failing. Ray Dalio's "pain + reflection = progress" framework at Bridgewater, where every mistake is catalogued and studied for systemic patterns. These are all institutionalised inversion — the technique woven into standard operating procedures so it doesn't depend on any individual's discipline.
The model is most powerful when it stops being a clever technique and becomes an organisational habit — when the inverted question gets asked automatically, by multiple people, at every major decision point. The mediocre version: a founder reads about inversion and asks "what could go wrong?" once during a planning retreat. The excellent version: the inversion question is embedded in the launch checklist, the investment memo template, the quarterly strategy review. It becomes infrastructure rather than inspiration.
Section 10
Test Yourself
Sharpen your ability to spot inversion in the wild — and, just as importantly, to distinguish it from simple pessimism, risk aversion, or analysis paralysis. The model is powerful when applied correctly and counterproductive when misapplied.
Is this mental model at work here?
Scenario 1
A venture capital firm develops a checklist of 'automatic no' criteria for startup investments — including 'solo founder over 50,' 'no paying customers after 18 months,' and 'cap table with more than 40% allocated to non-operating investors.' Any deal matching two or more criteria is passed on without further analysis.
Scenario 2
A product team delays launch by six months because the engineering lead keeps identifying new edge cases that could cause problems. After three rounds of additional testing, the CEO steps in and forces a launch with known minor bugs.
Scenario 3
Before accepting a new job offer, a senior executive writes a detailed memo titled 'Why I Will Regret This Decision in Two Years.' She lists specific scenarios: the company's funding runway is only 14 months, the CEO has fired three VPs of her function in the past two years, and the product has declining Net Promoter Scores. She ultimately declines the offer.
Section 11
Top Resources
The best material on inversion spans mathematics, cognitive science, and applied investing. Start with Munger for the practical framework, then build the scientific foundation with Kahneman and Bevelin.
01
Poor Charlie's Almanack: The Wit and Wisdom of Charles T. Munger — Peter Kaufman, ed. (2005)
Book
The single most important source on inversion as a decision-making tool. Munger's collected speeches — particularly "The Psychology of Human Misjudgment" and the 1994 USC Business School talk — lay out both the Jacobi connection and the practical applications across investing, business, and personal life. Every serious student of mental models should own this book.
02
Seeking Wisdom: From Darwin to Munger — Peter Bevelin (2007)
Book
Bevelin's synthesis of Munger's thinking with evolutionary psychology and behavioural science. The chapter on backward thinking is the best standalone treatment of inversion as a cognitive technique, complete with examples from mathematics, military history, and business strategy. Dense, practical, and directly influenced by conversations with Munger himself. The section on "causes of human misjudgment" is essentially a master checklist of what to avoid — inversion applied to the entire domain of human cognition.
03
Thinking, Fast and Slow — Daniel Kahneman (2011)
Book
The scientific foundation for why inversion works. Kahneman's research on the planning fallacy, anchoring, and the distinction between inside and outside views explains the cognitive mechanisms that make forward thinking systematically overoptimistic — and why inverting the question produces more accurate assessments. Chapters 23–24 on the outside view are directly relevant.
04
The Most Important Thing — Howard Marks (2011)
Book
Marks's investment philosophy is essentially inversion institutionalised. His chapters on risk, defensive investing, and the role of luck demonstrate how Oaktree Capital built a multi-decade track record by asking "what can go wrong?" before "what can go right?" The memos included in the book — particularly "The Route to Performance" and "Risk" — are masterclasses in inverted analysis applied to real market conditions. Marks's distinction between "first-level thinking" (what's the expected return?) and "second-level thinking" (what does the market expect, and where is it wrong?) is itself an inversion.
05
Berkshire Hathaway Shareholder Letters — Warren Buffett (1977–present)
Letters
Buffett's annual letters are the longest-running record of inversion applied to capital allocation. The letters from 1999–2002 show Buffett explaining his avoidance of technology stocks not through bearish conviction but through honest admission of what he didn't understand — a rare example of a public figure demonstrating intellectual humility in real time. The 2008–2009 letters show inversion during crisis: while others asked "how bad can it get?", Buffett asked "what is now mispriced because others are panicking?" The 1989 letter contains one of the clearest articulations of his "three boxes" system — In, Out, and Too Hard — which is inversion applied to portfolio construction. Free, online, and indispensable.
The tension: inversion harnesses loss aversion productively (focus on avoiding failure), but unchecked loss aversion distorts inversion into pure risk avoidance. A founder who inverts to identify avoidable failures is applying the model correctly. A founder who inverts to justify never launching because something might go wrong has let loss aversion hijack the technique. The line between productive caution and destructive fear is thinner than most people realise.
Tension
First Principles Thinking
First Principles builds forward from fundamental truths. Inversion works backward from failure states. In theory, they're complementary — use first principles to design the plan, inversion to stress-test it. In practice, they pull in opposite directions.
First-principles thinkers can become so enamoured with their elegant logic that they resist the backwards question. Elon Musk's first-principles approach to Tesla manufacturing produced revolutionary insights about battery costs — reducing pack costs from $600/kWh to under $150/kWh — but also contributed to the 2018 "production hell," where the inverted question ("what could go wrong on the assembly line?") might have flagged problems with over-automation earlier. The productive tension: first principles for design, inversion for stress testing.
Leads-to
Second-Order Thinking
Inversion naturally leads to second-order analysis. Once you've identified the primary failure modes, the next question is "what are the downstream consequences of avoiding those failures?" and "what new risks does the avoidance strategy itself create?"
Munger's decision to maintain enormous cash reserves at Berkshire — an inversion against liquidity crises — created a second-order advantage: the ability to deploy capital opportunistically during market panics when others were forced sellers. The 2008 Goldman Sachs deal was a second-order consequence of a first-order inversion. Avoiding one failure mode (liquidity risk) created an offensive capability (opportunistic deployment) that inversion alone wouldn't have predicted.
Leads-to
Margin of Safety
Inversion identifies the failure modes. Margin of Safety determines how much buffer you build against them. Benjamin Graham's principle — buy at a price sufficiently below intrinsic value that even if your analysis is wrong, you're protected — is the quantitative expression of inversion applied to investing.
First invert to find what kills you. Then build a margin wide enough that those failure modes can't reach you. The two models form a natural sequence: diagnose, then protect. An engineer who identifies that a bridge could fail under 50-ton loads designs it to hold 100 tons. The inversion found the failure mode; the margin of safety determined the buffer. Without inversion, you don't know what to protect against. Without margin of safety, knowing the risk doesn't help if you've built right at the edge.
One caveat worth taking seriously. Inversion has a ceiling. It's exceptionally good at preventing failure but mediocre at generating breakthrough insight. The Wright brothers didn't invent the airplane by asking "what would prevent us from flying?" Picasso didn't create Cubism by asking "what would guarantee bad art?" Creative leaps require forward thinking, imagination, and a willingness to be wrong in novel ways that inversion can't produce. The best operators pair the two: inversion to clear the minefield, forward thinking to chart the path through it. All inversion and no imagination produces a company that survives but never surprises.
The thing that gives me the most confidence in inversion as a durable tool: it scales with complexity. Simple decisions don't need it — you don't invert to decide what to eat for lunch. But as the stakes increase, as the number of variables multiplies, as uncertainty compounds, inversion becomes more valuable, not less. A Series A founder deciding whether to pivot. A military commander planning an amphibious landing. A portfolio manager sizing a position in a volatile market. In each case, the forward question produces paralysing complexity, while the inverted question produces a manageable shortlist of things to avoid. That's not a parlour trick. It's a genuine cognitive advantage — and one that, unlike most mental models, gets stronger as the decision gets harder.
Scenario 4
A hedge fund manager, after losing 30% of her fund's value in a downturn, decides to move entirely to Treasury bills 'because the market is too unpredictable.' She stays in T-bills for four years, missing a 120% rally.