Think and Save the World

How To Identify The Load-Bearing Assumptions In Any Plan

8 min read

The Architecture of Plans

Plans exist to coordinate action toward an outcome. They specify what you'll do, in what order, with what resources, toward what goal. But they also contain, implicitly or explicitly, a theory of the world — a set of claims about how things work, what's true, what others will do. Those claims are assumptions.

The challenge is that assumptions are heterogeneous. Some are background noise — so obvious and stable that noting them is pedantic. (Your consulting plan assumes the internet still works. Yes, technically. Write it down? No.) Some are central but robustly supported by evidence. Some are plausible but untested. Some are wishful thinking dressed as analysis.

The load-bearing/decorative distinction cuts across this heterogeneity. The question isn't just "how confident are you?" but "what happens if you're wrong?" An assumption can be uncertain but not load-bearing: if it fails, you adapt and continue. An assumption can be one you're fairly confident in and still load-bearing: it probably holds, but if it doesn't, the plan is gone.

The intersection to watch most carefully: uncertain AND load-bearing. These are the assumptions that most deserve explicit scrutiny and early testing.

The Dependency Map

A structured approach to finding load-bearing assumptions is mapping the dependency structure of the plan.

Start with the outcome: what needs to happen for the plan to succeed? Call this O.

Now ask: what has to be true or happen for O to occur? List those as O's parents — call them A, B, C. The plan succeeds only if A, B, and C are all true/achieved.

For each of A, B, C, ask the same question. What has to be true for A? What has to be true for B? Build the tree down.

At every node in this tree, you have an assumption or a commitment. The load-bearing assumptions are the ones high in the dependency hierarchy — if they fail, entire branches of the tree collapse, which means O is unreachable.

This sounds more formal than it needs to be in practice. The informal version is: trace the chain of "but for" reasoning. "We get outcome O, but only if we achieve A. We achieve A, but only if B is true. B is true, but only if C holds." Follow that chain down to bedrock. The further up the chain an assumption is, the more load-bearing it is.
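To make the structure concrete, here is a minimal sketch in Python. It assumes the usual case: most requirements are conjunctive (everything must hold), but some have fallback alternatives, which is exactly the "adapt and continue" case that makes an assumption decorative rather than load-bearing. All node names are invented for illustration.

```python
# A sketch of a dependency map. AND nodes need all of their children to hold;
# OR nodes (the "adapt and continue" case) need only one. Names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    mode: str = "and"             # "and": all children must hold; "or": any one suffices
    children: list = field(default_factory=list)

def holds(node, failed):
    """Does this node still hold if the assumptions in `failed` are false?"""
    if node.name in failed:
        return False
    if not node.children:
        return True
    results = [holds(c, failed) for c in node.children]
    return all(results) if node.mode == "and" else any(results)

def load_bearing(root):
    """Leaf assumptions whose individual failure makes the outcome unreachable."""
    leaves = []
    def collect(n):
        if not n.children:
            leaves.append(n.name)
        for c in n.children:
            collect(c)
    collect(root)
    return [leaf for leaf in leaves if not holds(root, {leaf})]

# Hypothetical example: two sales channels, so neither channel alone is load-bearing.
outcome = Node("O: launch succeeds", "and", [
    Node("A: product ready by Q3", "and", [Node("C: key hire starts in January")]),
    Node("B: customers can buy it", "or", [Node("D: direct sales channel works"),
                                           Node("E: reseller deal closes")]),
])

print(load_bearing(outcome))   # -> ['C: key hire starts in January']
```

In this toy example, the key hire has no fallback, so its failure cuts off the outcome entirely; either sales channel can fail on its own without killing the plan.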

The Pre-Mortem: Making Failure Thinkable

Gary Klein, the cognitive psychologist who studies naturalistic decision-making, developed the pre-mortem technique. Before committing to a plan, the team imagines that it's X months in the future and the plan has completely failed. The task is to write an explanation of what went wrong — not brainstorm possibilities, but write a specific, narrative explanation.

The pre-mortem produces better results than standard risk analysis for several reasons.

It bypasses motivated reasoning. When you're in planning mode, you want the plan to work, so you unconsciously generate reasons it will. The pre-mortem forces you into a different mental state: explaining failure that has already occurred (in imagination). This framing activates different cognitive processes — the same ones that help us explain past events, which we do fluently.

It legitimizes dissent. In groups, the pre-mortem gives permission to raise concerns. Saying "I'm worried this might fail because X" in a planning meeting feels like obstructionism. Saying "in the pre-mortem, I wrote that it failed because of X" is following the exercise. The format creates psychological safety for skepticism.

It surfaces diverse failure modes. Individual pre-mortems, before group discussion, let each person's genuine concerns emerge before social pressure shapes the conversation. Aggregating these gives a richer map of the failure landscape.

It clusters around load-bearing assumptions. When you look at what killed the plan in multiple people's pre-mortems, the same things tend to appear repeatedly. The assumptions that show up in most failure narratives are probably load-bearing — because they're the assumptions whose failure leads to collapse.

Practical protocol: before committing major resources to a plan, ask everyone involved to independently write a 200-word explanation of why it failed, set one year out. Share these, aggregate the failure mechanisms, and identify which assumptions are carrying the most structural weight.
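A minimal sketch of the aggregation step, assuming each narrative has already been reduced by hand to the assumptions it implicates (the names and tags below are hypothetical): count how often each failure mechanism appears across the team and rank.

```python
# A sketch of aggregating pre-mortems: each narrative is reduced to the
# assumptions it implicates, then counted across the team. Tags are hypothetical.

from collections import Counter

premortems = {
    "alice": ["key hire falls through", "pricing too high"],
    "bob":   ["key hire falls through", "integration takes longer than planned"],
    "chen":  ["pricing too high", "key hire falls through"],
    "dana":  ["regulator blocks launch"],
}

counts = Counter(tag for tags in premortems.values() for tag in tags)

print("Assumptions by how often their failure appears in pre-mortems:")
for assumption, n in counts.most_common():
    print(f"  {n}/{len(premortems)}  {assumption}")
```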

The Red Team

Red teaming originates in military and intelligence planning, where it was used to identify vulnerabilities by having a team deliberately try to defeat the plan. A genuine red team doesn't look for reasons the plan might fail; it actively tries to construct the plan's defeat. This is a different psychological posture.

For organizational planning, a red team is a group charged with attacking the plan rather than defending it. Their job is to find the cheapest path to the plan's failure — the assumption whose falsification most easily kills the entire thing.

Effective red teams share several characteristics:

Independence from the plan's creators. People who worked on the plan are invested in it and will (unconsciously) soft-pedal their critiques. The red team should be people who weren't involved in planning and have no stake in the outcome.

Explicit adversarial mandate. The red team needs to know their job is to break the plan, not to "provide feedback." The framing matters. "What do you think of our plan?" gets diplomatic suggestions. "Your job is to explain why this fails" gets genuine attacks.

Access to the plan's assumptions. The red team needs to see not just what you're going to do but what you're assuming to be true. Hidden assumptions can't be attacked.

Psychological safety for findings. If the organization's culture treats red team findings as threats to the people who made the plan, red teams will soften their findings to preserve relationships. The red team process only works if the organization treats findings as valuable intelligence rather than as criticism.

What comes out of red teaming, ideally, is a ranked list of the plan's most vulnerable assumptions — the places where a determined adversary (or a merely hostile reality) can most easily break it.

The Outside View vs. The Inside View

Daniel Kahneman and Amos Tversky identified a systematic bias in planning: the planning fallacy. When making plans, people tend to take what they call the "inside view" — focusing on the specific case at hand, its unique features, and the intended scenario. This produces unrealistically optimistic predictions because it ignores base rates and doesn't account for the full distribution of ways things can go wrong.

The "outside view" asks: what is the base rate for projects like this? What fraction of similar projects come in on time and on budget? What fraction succeed at all? This information is usually available and usually ignored.

Flyvbjerg's research on infrastructure megaprojects found that cost overruns are systematic and predictable: not random errors but a consistent bias toward underestimation. Most projects come in substantially over budget and take significantly longer than planned. This isn't bad luck; it's the inside view at work.

The outside view doesn't replace detailed planning. But it calibrates it. Before finalizing your plan and its assumptions, ask: what would a base rate analysis tell me about plans like this? What actually happens to ventures in this category? Where do they typically fail?
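One way to make that calibration concrete is a simple reference-class adjustment: collect the actual-to-estimated ratios of similar past projects and scale your own estimate by a chosen percentile of that distribution. This is a hedged sketch, and the ratios are invented placeholders, not real data.

```python
# A sketch of base-rate calibration: adjust an inside-view estimate using the
# distribution of outcomes from a reference class of similar past projects.
# The reference-class ratios below are invented placeholders, not real data.

def percentile(sorted_values, p):
    """Nearest-rank percentile of an already-sorted list (p in 0..100)."""
    k = max(0, min(len(sorted_values) - 1, round(p / 100 * (len(sorted_values) - 1))))
    return sorted_values[k]

# Actual cost divided by estimated cost for past projects in the same category.
reference_ratios = sorted([0.9, 1.1, 1.2, 1.3, 1.4, 1.6, 1.8, 2.1, 2.4])

inside_view_estimate = 1_000_000          # your plan's own cost estimate
median_uplift = percentile(reference_ratios, 50)
p80_uplift = percentile(reference_ratios, 80)

print(f"Inside view:        {inside_view_estimate:,.0f}")
print(f"Base-rate median:   {inside_view_estimate * median_uplift:,.0f}")
print(f"Base-rate 80th pct: {inside_view_estimate * p80_uplift:,.0f}")
```

The choice of percentile is a risk decision: the median tells you what typically happens to plans like yours, while a higher percentile builds in a buffer against the worse-than-typical cases.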

This often reveals that the most common failure modes for similar plans are not the ones you've been worrying about. Your worries tend to be specific to your situation; the base-rate failure modes are more generic, and less visible precisely because they're not novel.

Surfacing Hidden Assumptions

The hardest assumptions to examine are the ones you don't know you're making. The implicit assumptions embedded in the plan's structure, in the problem framing, in the category of solution you're pursuing.

Techniques for surfacing hidden assumptions:

The "What Would Have to Be True" protocol. For each major element of the plan, ask: what would have to be true about the world for this to be the right move? This often reveals implicit assumptions about market structure, customer behavior, competitor response, regulatory environment, or technological capability that were never made explicit.

Reversing the conclusion. Assume the opposite of your plan's central thesis and try to construct a plausible story in which that's true. If you can't construct one, your assumption is probably robust. If you can easily construct one, the assumption deserves scrutiny.

The naive outsider. Explain your plan to someone smart but entirely outside your domain. Their questions will often surface assumptions so embedded in domain convention that domain insiders can't see them. The question "but why do you assume X?" from someone who doesn't know that X is standard practice can be exactly the right question.

Assumption inversion. List your assumptions. For each one, invent a scenario in which it's false. Some scenarios are clearly ridiculous — this is fine. Some are plausible. The plausible inversions point to real risks.

When Plans Fail: Diagnostic Retrospective

When a plan fails, the most valuable question is: which assumptions were false? Not "what did we do wrong in execution?" — because usually the plan failed at the strategic level before execution could address it. Mapping the actual failure onto the original assumption structure tells you what your assumption-identification process missed, and improves the next iteration.

The root cause analysis question: was the failed assumption one you identified and misjudged? Or one you didn't know you were making? The first is a calibration failure — you need to adjust your confidence levels in that type of assumption. The second is a structural failure — you need a better process for surfacing hidden assumptions.

Building this retrospective habit transforms failures from demoralizing events into genuine learning about your planning process. The goal is not to have no failed assumptions — in an uncertain world, some of your assumptions will be wrong. The goal is to fail on decorative assumptions, not load-bearing ones, and to fail early and cheaply enough that you can recover.

Checklist for Load-Bearing Assumption Analysis

Before committing major resources to any plan:

1. List every assumption the plan depends on, including implicit ones.
2. For each assumption, ask: "If this is false, does the plan collapse or just change?"
3. Mark load-bearing assumptions explicitly.
4. For each load-bearing assumption, ask: how confident are you? What's your evidence?
5. Rank by (low confidence) × (high load-bearing weight). These are your priority tests (a minimal scoring sketch follows this list).
6. Design the cheapest possible test for each high-priority assumption.
7. Execute those tests before committing full resources.
8. Set explicit tripwires: what would you observe if a key assumption were failing? How would you know?
9. Monitor those tripwires during execution.
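A minimal sketch of the ranking and tripwire bookkeeping in steps 4, 5, 8, and 9, with hypothetical entries: score each assumption by (1 - confidence) × load-bearing weight and sort, so the riskiest load-bearing bets surface first.

```python
# A sketch of the checklist's ranking step: priority = (1 - confidence) * weight,
# so low-confidence, high-load-bearing assumptions come out on top.
# The entries are hypothetical examples, not a real plan.

from dataclasses import dataclass

@dataclass
class Assumption:
    claim: str
    confidence: float     # 0..1, how likely you think the claim is to hold
    load_bearing: float   # 0..1, how much of the plan collapses if it's false
    tripwire: str         # what you'd observe early if the assumption were failing

    @property
    def priority(self) -> float:
        return (1 - self.confidence) * self.load_bearing

assumptions = [
    Assumption("Key hire starts in January", 0.7, 1.0, "No signed offer by Nov 15"),
    Assumption("Customers will pay the listed price", 0.5, 0.9, "Pilot conversion < 10%"),
    Assumption("Cloud costs stay under budget", 0.9, 0.3, "Monthly spend > forecast"),
]

for a in sorted(assumptions, key=lambda a: a.priority, reverse=True):
    print(f"{a.priority:.2f}  {a.claim}  [tripwire: {a.tripwire}]")
```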

The discipline isn't about being pessimistic. It's about being clear-eyed about what your plan is actually betting on, and making those bets explicitly rather than accidentally.
