Belief Perseverance: Why Evidence Against A View Can Strengthen It
The Original Research and Its Correction
Nyhan and Reifler's 2010 paper introduced the backfire effect through experiments in political psychology. When subjects who believed a political claim (e.g., that weapons of mass destruction had been found in Iraq) were shown a correction article clearly contradicting it, some doubled down — reporting higher confidence in the false claim after seeing the correction.
The finding was striking, widely cited, and fit intuitively with lived experience in an era of political polarization. It spread quickly through popular science coverage and became a core piece of conventional wisdom about why it's futile to correct misinformation.
Then the replications came back. In 2019, a larger and more methodologically robust study by Wood and Porter ("The Elusive Backfire Effect") tested corrections across fifty-two political claims with more than 10,000 participants. They found essentially no evidence of the backfire effect. Corrections worked — beliefs moved in the direction of the evidence, even for politically charged claims, even for participants who were initially resistant.
Where does this leave us? Not with the conclusion that beliefs are easily corrected. The Wood and Porter finding is that corrections move beliefs — but often by small amounts, and often without people being aware of the movement. Explicit doubling-down (the original backfire effect) may be less common than initially claimed. But belief perseverance — the general tendency for beliefs to resist change, and for resistance to increase under social threat — is robust and well-documented across decades of research.
The nuanced picture: you're unlikely to make someone's false belief measurably stronger by presenting accurate corrections. But correction alone is often insufficient, beliefs are stickier than they should be given the evidence, and the conditions under which belief change is most difficult are specific and identifiable.
The Mechanism: Identity-Protective Cognition
The most robust framework for understanding belief perseverance comes from Dan Kahan's work on identity-protective cognition. Kahan's research found that beliefs about empirical questions — including factual questions with clear scientific consensus — are often organized around cultural identity rather than evidence evaluation.
In a series of studies, Kahan found that more numerate and analytically capable individuals were actually more polarized on identity-linked empirical questions (like climate change or gun control) than less analytically capable individuals. Not less polarized — more. Their analytical capacity was being deployed in service of identity protection, not truth-seeking.
The mechanism: when a belief is part of your identity — tied to your group membership, your self-narrative, your social relationships — evaluating evidence about that belief triggers a different cognitive mode than evaluating ordinary empirical claims. It shifts from accuracy motivation ("I want to believe what's true") to directional motivation ("I want to maintain this belief").
Under directional motivation, you don't simply ignore evidence. You process it actively — but you process it as a lawyer, not as a scientist. The lawyer's job is to find the problems with the opposing case and build the strongest case for your client. The scientist's job is to figure out what the evidence actually supports. These produce very different cognitive outputs. And they feel similar from the inside — both feel like reasoning.
The Role of Social Context
The threat that triggers belief perseverance is not only to the belief itself but to the social group the belief belongs to. We don't hold beliefs in isolation — we hold them within communities where those beliefs signal membership, loyalty, and shared values.
When a challenge to a belief comes in a context where it reads as tribal attack — where the person presenting the correction is from the opposing group, where the correction was made publicly in a way that seems designed to humiliate — the social stakes make the threat worse. The person isn't just defending a claim about reality; they're defending their identity and their belonging.
This is why corrections delivered with condescension almost never work, and sometimes produce exactly the entrenchment they're trying to break. The condescension activates a threat response. The threat response shuts down the reasoning system. And the reasoning system is exactly what's needed to evaluate the correction.
Conversely, corrections delivered in a context that doesn't threaten identity — by someone the person trusts and sees as part of their community, in private rather than publicly, framed as "here's something that might be useful" rather than "here's why you're wrong" — have a better chance.
The Distinction Between Three Things
Belief perseverance is often conflated with two related but different phenomena. They need to be separated:
Belief perseverance is the general tendency for beliefs to resist change in proportion to their prior establishment and identity-linkage. This is robust and well-documented.
The backfire effect (as originally defined) is the specific claim that corrections can increase belief in false claims. This appears to be rarer and more context-specific than initially claimed.
Motivated skepticism is applying high scrutiny to evidence that challenges your beliefs and low scrutiny to evidence that confirms them. This is extremely well-documented and probably the most practically important of the three. People don't ignore disconfirming evidence — they examine it more carefully and find more reasons to doubt it.
Motivated skepticism is where most of the action is in everyday cognitive life. You don't need to believe a correction actually backfired to explain why it had little effect. It's enough that you evaluated the source more harshly, found methodological problems you wouldn't have looked for in a confirming study, or decided this one study doesn't overturn a body of evidence that supports your view.
The Corrective: De-Linking Belief from Identity
The intervention that works is not better evidence. It's changing the context in which the evidence is evaluated.
Several approaches have some empirical support:
Affirmation of unrelated values. Research on self-affirmation by Claude Steele and colleagues showed that a threat to a person's sense of self in one domain is less damaging if they are first asked to reflect on values they hold in an unrelated domain. The threat to identity is reduced because the identity has been reinforced in another direction. People become more able to acknowledge evidence against their belief when they don't feel their entire self-concept is at stake.
Curiosity framing. Inducing an exploratory mindset before presenting challenging evidence changes the evaluation mode from defensive to curious. Asking "what would I want to know if I were trying to figure this out?" before evaluating evidence shifts the orientation. This requires some distance from the heat of the challenge.
Separating belief from self-narrative. This is the long-game intervention. Building the habit of holding your beliefs as provisional models rather than identity commitments — through practices like steel-manning objections, deliberately revisiting positions you're confident about, and tracking your own prediction record — reduces the general intensity of identity-protective cognition over time.
Practical Self-Application
You cannot identify your own identity-linked beliefs in real time, in the moment someone is challenging them. The threat response happens before the reasoning, and it's faster than your ability to catch it. What you can do is identify them in retrospect.
Questions that help:
- Which positions do I defend automatically, before I've evaluated whether the challenge has merit?
- Which topics produce a physical change in my response (elevated heart rate, increased intensity) before I've thought about the argument?
- Which beliefs, if I changed them, would change my relationship to my social group, my self-narrative, or my sense of who I am?
The answers to those questions are not necessarily wrong beliefs. Some of them may be correct. But they're the ones to examine most carefully, precisely because the identity-linkage means the standard quality-control process (evaluate evidence, update accordingly) is being bypassed.
The goal is not to hold no strong views. It's to hold your views the way a scientist holds a hypothesis — provisionally, subject to revision, separating your self-worth from the truth of the claim. That separation is not natural. It requires deliberate cultivation.
One useful practice: preemptively articulate the conditions under which you'd change your mind. "I would update this belief if I encountered evidence of X, or if someone showed me Y." Doing this before the challenge comes means you've defined what counts as a good reason to update, which makes it harder to dismiss any specific reason post hoc.
The Deeper Issue: Confidence as a False Signal
The final thing to understand about belief perseverance is that high confidence is not evidence of correctness. After an identity challenge, your confidence in your belief may increase regardless of whether the evidence supported that increase.
We use confidence as a proxy for accuracy because usually they correlate. But when identity is involved, the correlation breaks. You can become more certain and less accurate at the same time. The certainty was generated by the threat response, not by the evidence.
This is one of the most important facts about human cognition in the context of clear thinking: the strength of a conviction is not reliable evidence of the quality of the reasoning that produced it. Especially for the beliefs you hold most strongly, in the domains where your identity is most invested.
Examine those first. Gently. Persistently. Without threatening your sense of self — just with honest curiosity about whether they hold up.