Think and Save the World

Cognitive Biases Catalog

9 min read

Why the Catalog Approach Fails

The internet is full of lists of cognitive biases. The Wikipedia article lists over 180. Psychology Today runs features on them. Corporate training programs cover them. Almost none of this makes anyone think better.

The research on bias education is sobering. A 2019 meta-analysis by Morewedge and colleagues found that while training can reduce specific biases in lab settings, the effects are small and poorly transferable — teaching you about the anchoring effect doesn't reliably reduce your susceptibility to anchoring in a different context. Studies that have measured actual decision quality before and after bias training generally find minimal improvement.

Why? Because knowing about a bias and recognizing it in real-time are different cognitive operations. When you're reading an article about confirmation bias, you're in a reflective, deliberate mode of thought. When you're actually making a decision, you're in an intuitive, fast mode that doesn't wait for your conscious knowledge to catch up. The bias happens before the conscious review. By the time you're analyzing the decision, the bias has already shaped it.

This suggests a different approach: rather than trying to think without biases, design processes that work even with biased components. This is what Kahneman, Thaler, Sunstein, and others have argued for. The question is not "how do I remove the bias from my thinking?" but "how do I design my thinking process so the bias is caught before it causes damage?"

That said, a working understanding of the major biases — not 180 of them, but the 7-8 that account for the majority of serious errors — is still valuable as a starting point. You need to know what you're designing against.

The Core Biases: A Working Catalog

1. Confirmation Bias

The most pervasive and probably the most consequential cognitive bias. Once a belief is formed, the mind acts as a confirmation machine: you seek information consistent with the belief, interpret ambiguous information in favor of the belief, remember confirming evidence better than disconfirming evidence, and find disconfirming evidence less credible and less compelling.

The mechanism is not primarily conscious. It operates in attention (you notice confirming evidence more readily), interpretation (you give it more weight), and memory (you retain it longer). The combined effect is that beliefs tend to become more certain over time even when the evidence landscape is stable or even shifting against them.

Peter Wason's famous 2-4-6 task demonstrates this cleanly. Participants are told "2-4-6 follows my rule" and asked to discover the rule by proposing other sequences. The experimenter tells them whether each sequence follows the rule. Most people immediately hypothesize "even numbers increasing by 2" and then test only confirming examples (8-10-12, 20-22-24). They never test 1-2-3 (which also follows the rule — it was simply "ascending numbers"). They confirm a wrong hypothesis and never consider testing against it.
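
To see why confirming tests cannot do the work, here is a minimal sketch of the task's logic in Python. The two rule functions are my own illustrative encodings, not Wason's materials: the hypothesized rule and the actual rule agree on every confirming triple, so only a test the hypothesis predicts should fail can tell them apart.

```python
def actual_rule(a, b, c):
    # Wason's real rule: any ascending sequence of numbers
    return a < b < c

def hypothesized_rule(a, b, c):
    # The typical participant's hypothesis: even numbers increasing by 2
    return a % 2 == 0 and b == a + 2 and c == b + 2

confirming_tests = [(8, 10, 12), (20, 22, 24), (100, 102, 104)]

for triple in confirming_tests:
    # Both rules say "yes", so the wrong hypothesis survives every confirming test.
    print(triple, actual_rule(*triple), hypothesized_rule(*triple))

# The hypothesis predicts "no", but the experimenter says "yes": only this
# kind of disconfirming test exposes the error.
print((1, 2, 3), actual_rule(1, 2, 3), hypothesized_rule(1, 2, 3))
```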

The practical implication: confirmation bias cannot be overcome by trying harder to be objective. It requires actively seeking disconfirmation. This means deliberately looking for the strongest available argument against your position, not the weakest straw man you can knock down. It means seeking out people who disagree with you for reasons worth understanding, not people who disagree badly. It means asking "what would I have to see to change my mind?" and then actually looking for that evidence.

2. The Availability Heuristic

Amos Tversky and Daniel Kahneman identified the availability heuristic in 1973: people estimate the likelihood of an event based on how easily examples come to mind. What makes examples come to mind easily? Recency, emotional intensity, personal experience, and media coverage — none of which track statistical frequency.

Plane crashes are covered extensively because they're dramatic and unusual. Car accidents kill 40,000 Americans per year and are barely newsworthy. The media coverage ratio is inverse to the actual mortality ratio. The result is that most people are more afraid of flying than driving, a fear that is backwards relative to the actual risk.

The heuristic extends beyond risk estimation to frequency estimation generally. If you can think of many examples of a phenomenon, you assume it's common; if you can think of few, you assume it's rare. This is distorted systematically by what information you've been exposed to — which is increasingly a function of algorithmic media selection, not representative sampling.

The practical correction: when estimating frequency or probability, explicitly consult base rates. Don't trust your intuitive sense of how common something is. Look it up. The outside view — what's the actual statistical frequency of this class of event? — almost always outperforms the inside view — how many examples can I think of?

3. Anchoring

The anchoring effect is one of the most robust and consistent findings in behavioral economics. An initial piece of numeric information — even a transparently random one — has a predictable and measurable effect on subsequent estimates.

In Tversky and Kahneman's original studies, participants who spun a wheel that landed on 65 before estimating the percentage of African nations in the UN guessed 45%; those whose wheel landed on 10 guessed 25%. The wheel was obviously random. It didn't matter. The anchor moved the estimate.

In higher-stakes versions: the first salary offer in a negotiation functions as an anchor that predictably biases both parties' sense of what's reasonable. The first figure on a price tag shapes willingness to pay. In medicine, an initial diagnosis acts as an anchor that makes subsequent diagnostic information less likely to overturn it.

Knowing you're susceptible to anchoring is not sufficient protection. Studies that warned participants about anchoring before exposing them to anchors found that the warning reduces, but does not eliminate, the effect. The practical corrections: generate your own estimate before receiving any anchor from an external source; consider explicitly whether the anchor is relevant; in negotiations, be willing to reject an extreme anchor rather than compromising from it.

4. Base Rate Neglect

Kahneman and Tversky demonstrated base rate neglect through what became known as the "lawyer-engineer problem." Participants were told that a sample consisted of 70 lawyers and 30 engineers (or vice versa). They were then given a personality description designed to match the stereotype of one group, and asked to estimate the probability that the described person was a lawyer or engineer. Participants almost entirely ignored the base rate (70/30 vs 30/70) and judged primarily by how well the description matched the stereotype.

The Bayesian correct answer weights both: the prior probability (the base rate) and the likelihood of the evidence given each hypothesis. In practice, people use the evidence and largely ignore the prior. This produces systematic errors in diagnosis, risk assessment, and decision-making under uncertainty.

The canonical example: if a disease has a 1% prevalence in a population, and a test has a 99% sensitivity (true positive rate) and 99% specificity (true negative rate), a positive test result means approximately a 50% chance of having the disease, not 99%. The test is accurate, but the base rate is low enough that most positives are false positives. This is not intuitive, and the failure to reason this way correctly costs lives in medical diagnosis.
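
To make the arithmetic explicit, here is the same calculation as a short Python snippet. The numbers are the ones from the example above; the variable names are just illustrative.

```python
prevalence = 0.01     # P(disease): the base rate
sensitivity = 0.99    # P(positive | disease)
specificity = 0.99    # P(negative | no disease)

# Total probability of a positive test: true positives plus false positives
p_positive = (sensitivity * prevalence
              + (1 - specificity) * (1 - prevalence))

# Bayes' rule: weight the evidence by the prior
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.2f}")  # 0.50
```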

5. The Planning Fallacy

Kahneman and Tversky introduced the planning fallacy in 1979 to describe the consistent tendency to underestimate the time, cost, and risks of future actions while overestimating their benefits. The bias appears to be specific to personal planning; people are much better at estimating how long other people's projects will take.

The mechanism is the inside view versus the outside view distinction. When planning your own project, you naturally think about the specific project — its components, your capacity, the ideal scenario. This inside view doesn't adequately weight the relevant reference class of similar projects, which almost universally take longer and cost more than planned.

Bent Flyvbjerg's research on large infrastructure projects is striking: across thousands of projects worldwide, the average cost overrun is 45%, and nine out of ten projects exceed the initial time estimate. This is not random variation — it's a systematic bias. The inside view consistently produces underestimates.

The correction is the outside view: before committing to a time or cost estimate, find the relevant reference class (projects similar to this, in this domain, by people with similar resources) and use their actual outcomes to anchor your estimate. This is uncomfortable, because outside view estimates are typically less optimistic than inside view estimates and more depressing to contemplate. They're also more accurate.

6. Dunning-Kruger: The Actual Findings

The 1999 Kruger and Dunning paper "Unskilled and Unaware of It" had a specific finding that has been mischaracterized in popular communication. The study measured participants' performance on tests of logical reasoning, grammar, and humor and compared their actual scores to their self-assessed scores and their estimates of how they ranked relative to peers.

Finding: participants who scored in the bottom quartile on actual performance significantly overestimated their performance and their percentile rank. Participants who scored in the top quartile slightly underestimated their performance relative to peers (though correctly estimated high performance).

The explanation Kruger and Dunning offered: the metacognitive skills required to evaluate performance in a domain are substantially the same skills required to perform in that domain. If you don't have the skill, you don't have the ability to evaluate the absence of the skill. This is not the same as "incompetent people think they're smart." It's a more specific claim: in domains where skill is required to recognize quality, poor performers cannot see how poorly they are performing.

The misconception matters because the popular version implies that low performers are arrogant about their skills in general. The actual finding is domain-specific and mechanistic. The implication for self-evaluation: seek external calibration. In any domain where you can't easily verify your own quality, you need feedback from outside your own head.

7. Attribution Errors

The fundamental attribution error (Lee Ross, 1977) is the tendency to over-weight dispositional explanations for other people's behavior ("he's a bad person") and under-weight situational explanations ("he's under terrible stress"). When watching someone behave badly, we attribute it to character. We're much better at recognizing situational factors when the person is ourselves.

The self-serving bias is the motivated version of the same asymmetry: your successes tend to be attributed to your qualities; your failures tend to be attributed to circumstances. This is not consciously calculated — it's automatic. It serves psychological protection. It also makes accurate self-assessment very hard.

These biases interact with confirmation bias: once you've attributed someone's behavior to their character, subsequent behavior that confirms the attribution is much more salient than behavior that would revise it. Initial judgments of people harden into stereotypes because confirmation bias feeds off attribution errors.

The Meta-Bias: Bias Blind Spot

Emily Pronin and colleagues identified the bias blind spot — the tendency to see one's own biases as smaller than other people's biases. Participants rated themselves as less susceptible to a list of cognitive biases than the average person. They were also more likely to recognize bias in people who disagreed with them than in people who agreed.

This is not humility. It's a specific cognitive error: because we don't have direct introspective access to our own biased processing, we don't see it as bias. We experience our own cognition as straightforward perception of reality. Other people's divergent views, by contrast, are visible as biased or motivated. The conclusion: your felt sense of your own objectivity is not reliable evidence of objectivity.

What Actually Helps

Given that bias education is insufficient, here are the practices for which there is evidence of improved decision quality:

Pre-mortems (Gary Klein): Before committing to a plan, imagine it is twelve months in the future and the plan has failed badly. Ask: what happened? Generate as many plausible failure modes as possible. This counteracts optimism bias and the planning fallacy by forcing consideration of negative scenarios before commitment.

Reference class forecasting (Kahneman/Lovallo): Identify the relevant reference class for any prediction and use the base rates from that class to anchor estimates. Override the inside view with the outside view, especially for time and cost estimates.
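
A minimal sketch of what this looks like in practice, in Python. The overrun ratios below are illustrative placeholders, not real data; the point is that the estimate comes from the distribution of actual past outcomes, scaled against your inside-view number.

```python
import statistics

# Actual duration of similar past projects, as a multiple of their original
# estimate: 1.4 means the project took 40% longer than planned (illustrative).
overrun_ratios = [1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 2.0, 2.5]

inside_view_estimate_weeks = 10  # your intuitive, project-specific estimate

median_overrun = statistics.median(overrun_ratios)
# A crude 80th-percentile pick from the sorted reference class
p80_overrun = sorted(overrun_ratios)[int(0.8 * len(overrun_ratios))]

print(f"Outside-view median estimate: {inside_view_estimate_weeks * median_overrun:.0f} weeks")
print(f"Outside-view 80th-percentile estimate: {inside_view_estimate_weeks * p80_overrun:.0f} weeks")
```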

Structured devil's advocacy: Assign someone (or your own thinking time) the specific role of arguing the strongest possible case against your position. Not weak counterarguments — the strongest available case. This counteracts confirmation bias by forcing genuine engagement with opposing evidence.

Red team/blue team thinking: Mentally or organizationally separate the construction of an argument from its critique. The same mind that builds an argument is not well-positioned to critique it. Give the critique function to a separate person or a separate deliberate effort.

Calibration practice: Regularly make explicit predictions with confidence levels and track them over time. "I'm 80% confident that X will happen in the next six months." After six months, check. Calibrated thinkers — people whose stated confidence levels match their actual accuracy rates — make better decisions. Calibration improves with practice, especially with feedback.
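
A minimal sketch of what tracking this might look like, assuming you log each prediction as a stated confidence and an eventual outcome. The entries below are illustrative, not real data; grouping by confidence level shows whether your "80% confident" really comes true about 80% of the time.

```python
from collections import defaultdict

# (stated confidence, did the event happen?) - illustrative entries
predictions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True),
    (0.8, True), (0.8, False), (0.8, True),
    (0.6, False), (0.6, True), (0.6, False),
]

buckets = defaultdict(list)
for confidence, happened in predictions:
    buckets[confidence].append(happened)

# Compare stated confidence against actual hit rate per bucket
for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"stated {confidence:.0%}  ->  actual {hit_rate:.0%}  (n={len(outcomes)})")
```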

The goal is not a mind without bias. That's not available. The goal is a decision process that catches bias before it causes irreversible damage — a process with enough structure, enough external input, and enough deliberate counterweight to the systematic errors that all minds commit.

This is not comfortable. The best thinking is effortful. But the cost of unreflective thinking — personally, institutionally, politically — is compounded failure. The work is worth it.
