Think and Save the World

Bayesian Thinking For Updating Beliefs With New Evidence

The Problem With Believing Things

There's a game being played every time you state a belief publicly. It's not about truth. It's about position. Once you've said something — especially to a group, especially with confidence — backing off feels like defeat. And humans really don't like defeat, even when the defeat is just admitting you were working with incomplete information.

This is how intelligent people end up defending positions they privately suspect are wrong. The social cost of updating is too high, so they stay put. Or they do the opposite: they see the social wind shift, and they abandon their position completely, not because evidence convinced them, but because the room changed. Neither of these is thinking. Both are social performance.

Bayesian thinking is what replaces this. It's not just a technique — it's a fundamentally different relationship with your own beliefs.

Bayes' Theorem Without the Math

Thomas Bayes was a Presbyterian minister in 18th-century England who worked on an interesting problem: how should you reason under uncertainty? His answer, published posthumously in 1763, is the foundation of an entire field of probability theory.

The theorem itself describes how to update the probability of a hypothesis given new evidence. In symbols it gets elaborate. In English:

Your posterior odds (how strongly you believe it after the evidence) = your prior odds (how strongly you believed it before) × how much more likely this evidence would appear if your belief is true than if it's false.

That last part — the ratio of how likely the evidence is under different hypotheses — is called the likelihood ratio. And it's where Bayesian thinking becomes genuinely useful as a thinking tool.
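For anyone who wants the symbols after all, the odds form keeps them compact. This is a standard restatement of Bayes' theorem, with H the hypothesis and E the evidence:

```latex
\underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
\;\times\;
\underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}}
```

Multiply your prior odds by the likelihood ratio and you get your posterior odds. That's the whole update rule.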

Here's a concrete example. Say you wake up with a sore throat. Before you factor the sore throat in, you have some prior belief about whether you're sick versus just dry from sleeping with your mouth open. Say that prior puts you at 30% convinced you're actually sick. The evidence (the sore throat) should update that probability upward, but by how much depends on how much more often sore throats appear with illness than without. If sore throats are twice as common when sick, you update upward modestly. If you also have a fever, that's much stronger evidence with a higher likelihood ratio, and a bigger update.
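To make the arithmetic concrete, here's a minimal sketch of that update in Python. The 30% prior and the likelihood ratios (2 for the sore throat, 5 for the fever) are illustrative numbers, not medical claims:

```python
def update(prior: float, likelihood_ratio: float) -> float:
    """Apply one piece of evidence using the odds form of Bayes' theorem."""
    prior_odds = prior / (1 - prior)               # probability -> odds
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)   # odds -> probability

p = 0.30                  # prior: 30% convinced you're sick
p = update(p, 2.0)        # sore throat: assumed twice as likely when sick
print(f"after sore throat: {p:.0%}")  # ~46%
p = update(p, 5.0)        # fever: assumed five times as likely when sick
print(f"after fever too:   {p:.0%}")  # ~81%
```

Note that stacking updates like this quietly assumes the two symptoms are independent pieces of evidence once you know whether you're sick; evidence that travels together should count for less.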

Most people don't reason this way. They see a sore throat and immediately think "I'm sick" (complete update, prior ignored) or they think "I'm fine, must be the heat" (no update, prior defended). Neither accounts for the actual information content of the evidence.

The Two Failure Modes

Stubbornness: You have a belief, someone challenges it, and you double down regardless of the quality of their challenge. Social dynamics actively reinforce this: being seen to cave to pressure looks weak, even when the pressure comes backed by good evidence. Over time, stubborn people's beliefs drift further and further from reality, because reality keeps producing evidence they're ignoring.

Flip-flopping: You have a belief, someone challenges it, and you immediately reverse. This isn't open-mindedness — it's the absence of a stable prior. People who flip-flop aren't updating based on evidence; they're updating based on whoever spoke last, or most confidently, or with the most social authority. They're as likely to be moving away from truth as toward it.

The Bayesian path threads between these. Strong evidence = meaningful update. Weak evidence = small update. Strong evidence from a biased source = discount the source, consider the evidence independently. The question is always: what does this actually tell me about what's true?
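To see that middle path in numbers, run the same prior through likelihood ratios of different strengths. A small self-contained sketch, with arbitrary ratios chosen for illustration:

```python
def update(prior: float, lr: float) -> float:
    """Posterior probability after evidence with likelihood ratio lr."""
    odds = prior / (1 - prior)
    return odds * lr / (1 + odds * lr)

prior = 0.50  # genuinely unsure to start
for label, lr in [("weak evidence", 1.2),
                  ("decent evidence", 3.0),
                  ("strong evidence", 20.0)]:
    print(f"{label:16s} LR={lr:5.1f} -> posterior {update(prior, lr):.0%}")
# weak evidence    LR=  1.2 -> posterior 55%
# decent evidence  LR=  3.0 -> posterior 75%
# strong evidence  LR= 20.0 -> posterior 95%
```

Weak evidence nudges you; strong evidence moves you. Neither zero movement nor total reversal falls out of the math.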

Philip Tetlock's Superforecasters

In 2011, IARPA, the research arm of the US intelligence community, launched a forecasting tournament. Philip Tetlock's team entered it as the Good Judgment Project, recruiting ordinary volunteers and training them in probabilistic thinking. The question: could they outperform professional intelligence analysts with access to classified information?

The answer was yes. Significantly.

Tetlock and his team studied the people who consistently performed best: the superforecasters. What distinguished them wasn't IQ, domain expertise, or inside access. It was a set of thinking habits that are essentially Bayesian:

They expressed beliefs as probabilities, not facts. Not "this will happen" but "I'd put this at 65%."

They updated frequently and incrementally. When new information arrived, they'd revise — sometimes by a lot, sometimes by a point or two. The key was that they took the evidence seriously rather than treating updating as weakness.

They actively sought disconfirming information. Instead of looking for evidence their view was right, they looked for evidence it was wrong. This is counterintuitive but powerful — you learn much more from what challenges you than from what confirms you.

They tracked their accuracy. Tetlock calls this being "calibrated" — your 70% predictions should be right about 70% of the time, not 90% and not 50%. If you're right 90% of the time on things you call 70% likely, you're underconfident. If you're right 50%, you're overconfident. Most people, when tested, are significantly overconfident.
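Checking your own calibration takes nothing more than a prediction log. Here's a minimal sketch, assuming each record is a (stated probability, did-it-happen) pair; the sample data is invented:

```python
from collections import defaultdict

# (stated probability, outcome) pairs, invented for illustration
log = [(0.7, True), (0.7, True), (0.7, False), (0.9, True),
       (0.9, True), (0.5, False), (0.5, True), (0.7, True)]

buckets = defaultdict(list)
for prob, happened in log:
    buckets[prob].append(happened)

for prob in sorted(buckets):
    outcomes = buckets[prob]
    rate = sum(outcomes) / len(outcomes)
    print(f"said {prob:.0%}: happened {rate:.0%} ({len(outcomes)} predictions)")
# Calibration means each row's actual rate roughly matches its stated probability.
```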

The superforecasters weren't uniformly humble about everything. They had strong opinions and made strong predictions. But they held those predictions as probabilities that could change, kept records, and got feedback. That feedback loop is what made them calibrated over time.

The Practice: Thinking Bayesian Without a Calculator

You don't need to compute actual numbers. What you need is the habit structure.

Assign a probability when you state a belief. Even rough buckets work: am I 90%+ confident, 70%, 50%? The act of quantifying forces you to confront how certain you actually are. Most people discover they've been stating 50% beliefs with 90% confidence.

When challenged, separate two questions. First: is this challenge coming from a credible source with actual evidence, or from someone with an agenda? Second: even if the source is compromised, is the underlying evidence real? Sometimes a bad-faith actor is still pointing at something true. Don't dismiss evidence because of who's holding it.

Ask what would change your mind. Before you're challenged, while you're still calm: what would it take to move you significantly away from your current position? If the answer is "nothing," that's a flag. It means you're no longer treating it as a belief — you've made it an identity. Identities can't be updated, which is exactly the problem.

Do post-mortems when you're wrong. When a prediction fails, the natural move is to explain it away — bad luck, someone else's fault, couldn't have known. Superforecasters instead ask: where was my prior miscalibrated? What evidence did I underweight? What would I do differently? This is uncomfortable. It's also how you get better.
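If you want a single number to track across those post-mortems, the standard one is the Brier score, the same metric the tournament used to rank forecasters: the mean squared gap between your stated probabilities and what actually happened, lower being better. A minimal sketch, with an invented prediction log:

```python
def brier_score(predictions: list[tuple[float, bool]]) -> float:
    """Mean squared error between stated probabilities and outcomes.
    0.0 is perfect; always saying 50% earns 0.25."""
    return sum((p - float(happened)) ** 2
               for p, happened in predictions) / len(predictions)

# (stated probability, did it happen), invented for illustration
log = [(0.9, True), (0.7, True), (0.6, False), (0.8, True), (0.3, False)]
print(f"Brier score: {brier_score(log):.3f}")  # 0.118; lower is better
```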

Resist updating on social pressure alone. The question is never "is this person confident?" or "will I look bad if I agree?" The question is "does this evidence actually change what's likely to be true?" If yes, update. If no, hold — and say why you're holding.

Why This Matters at Scale

Here's the world-stakes angle: bad collective beliefs are catastrophic. Policies based on wrong beliefs cause suffering. Wars start over misread evidence. Economic crises happen when enough people hold the same miscalibrated belief about risk.

A population that thinks Bayesian is a population that can be reached by evidence. It's a population where truth has a fighting chance in the marketplace of ideas. When everyone instead holds beliefs as identities — tribal, irrevocable, righteous — you get a world where evidence is irrelevant, where argument is just warfare, and where the only way opinions change is through catastrophe.

The discipline of probabilistic belief is, at its base, a civic skill. It's the difference between a person who can be reasoned with and one who can only be defeated. And a world full of people who can be reasoned with is a world where problems actually get solved.

The Internal Shift

The last thing to name, because it's the most important: Bayesian thinking requires you to stop treating your beliefs as part of your identity.

If what you believe is who you are, then evidence against your belief is an attack on you. That's the trap. Most people live there — which is why most arguments generate heat instead of light.

If what you believe is your current best estimate of what's true, then evidence against it is a gift. It's data that helps you aim better. The person who convinces you that you were slightly wrong has done you a service, not inflicted a wound.

That internal shift — from belief as identity to belief as estimate — is what Bayesian thinking actually produces when you practice it long enough. Not just better predictions. A different relationship with truth itself.
