How to Kill Your Own Bad Ideas
The psychology of idea attachment is one of the most consequential and underexamined aspects of human cognition. We tend to frame the problem of bad decisions as a matter of initial judgment — people made the wrong choice. But much of the damage from bad ideas comes not from the initial decision but from maintaining it long past the point where evidence of its failure was available. The neuropsychological literature calls this perseveration; the organizational behavior literature calls it escalation of commitment. Both describe the same phenomenon: continued investment in a failing course of action in the face of mounting evidence that it is failing.
The canonical laboratory demonstration is the Iowa Gambling Task, a card-selection experiment in which subjects must learn through experience which decks of cards produce net gains and which produce net losses. Subjects typically begin by favoring the disadvantageous decks, whose larger immediate payoffs mask their net losses, and shift away only gradually — well after accumulated experience should have updated their approach. The explanation involves multiple cognitive systems: the somatic marker system (gut feeling), the explicit reasoning system (which can identify patterns but is then subject to motivated reasoning that defends existing choices), and the identity system (which treats changing course as a threat to coherent self-conception).
The identity dimension of bad ideas deserves its own analysis. When a belief or approach becomes incorporated into identity — not just "I do X" but "I am the kind of person who does X" — the cost of abandoning it is no longer just the loss of the idea's benefits. It is also a kind of self-dissolution, a small death of the person who held the idea. This is why challenges to deeply held beliefs feel threatening rather than informative. The threat is real, even if what is threatened is a constructed identity rather than a physical self. The mind defends against this threat with the same tools it uses to defend against external threats: aggression, dismissal, rationalization, and selective attention.
The underlying economic error is the sunk cost fallacy: continuing a project because of resources already spent rather than because of the expected value of future spending. Personal sunk costs are psychological as well as financial — you have also spent identity, social capital (you told people this was a good idea), and cognitive effort constructing the justification for the idea in the first place. These non-financial sunk costs may be harder to write off than financial ones, because they involve admitting more than a bad investment: a bad self-presentation and a failed act of reasoning.
The techniques for overcoming this are structural rather than merely motivational. Willpower is not sufficient because the biases that protect bad ideas are not matters of will — they are automatic cognitive processes that operate below the level of conscious choice. Structural interventions work by changing the decision environment rather than demanding more willpower within the same environment.
The pre-mortem technique, developed by psychologist Gary Klein, is one of the most effective structural interventions for killing bad ideas early. Before committing to an approach, you run an imaginary post-mortem: assume the project has failed completely. What caused the failure? This reframing — asking not "will this work?" but "how will this fail?" — activates different cognitive processes than optimistic forward projection. It surfaces concerns that are suppressed during normal planning, because the "assume failure" framing eliminates the social awkwardness of pessimism. In a pre-mortem, identifying failure modes is the task, not a disloyalty. Running a pre-mortem before adopting a significant idea generates the list of kill criteria that the later review will check against.
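The last step, turning pre-mortem failure modes into kill criteria that a later review checks, can be sketched in code. Everything here (the criterion names, the thresholds, and the review-data keys) is an illustrative assumption, not something specified in the text:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class KillCriterion:
    failure_mode: str                  # a failure mode surfaced by the pre-mortem
    triggered: Callable[[dict], bool]  # check against data gathered at review time

# Hypothetical kill criteria written before committing to the idea.
criteria = [
    KillCriterion("no one uses it", lambda d: d["weekly_users"] < 10),
    KillCriterion("costs outrun value", lambda d: d["cost"] > d["value"]),
]

def review(data: dict) -> list[str]:
    """Return the pre-mortem failure modes whose kill criteria have triggered."""
    return [c.failure_mode for c in criteria if c.triggered(data)]

# At the scheduled review, feed in what actually happened.
verdict = review({"weekly_users": 4, "cost": 120, "value": 80})
```

The point of writing the criteria down in advance is that the review becomes a mechanical check, run after motivated reasoning has had months to accumulate, rather than a fresh judgment made by it.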
The concept of asymmetric payoffs is useful for understanding which ideas to invest most in killing quickly versus which to allow more time. An idea with low downside and high upside can be maintained with looser kill criteria — if it fails, the failure is survivable, and early killing might abort something that needed more time to mature. An idea with high downside — one where failure is catastrophic or irreversible — requires stricter kill criteria and faster action on evidence of failure. The asymmetry of outcomes should drive the asymmetry of rigor in review.
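One way to see why the asymmetry matters is a quick expected-value calculation. This is a sketch with made-up probabilities and payoffs, not figures from the text:

```python
def expected_value(p_success: float, upside: float, downside: float) -> float:
    """Expected payoff of continuing: chance of success times the win,
    minus chance of failure times the loss."""
    return p_success * upside - (1 - p_success) * downside

# Same 30% odds of success, same upside; only the downside differs.
low_stakes = expected_value(0.3, upside=10, downside=1)    # roughly +2.3
high_stakes = expected_value(0.3, upside=10, downside=20)  # roughly -11
```

At identical odds, the low-downside idea still carries positive expected value and can tolerate loose kill criteria, while the high-downside idea is already deeply negative. That difference, driven entirely by the size of the loss, is the asymmetry of rigor the paragraph describes.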
The social commitment dimension of bad ideas creates a specific killing problem: the idea has been publicly endorsed. You told people it was a good idea. You recruited others to it. Reversing it is not just a private cognitive update — it is a public admission that you were wrong, in front of the people you convinced. The social cost is real, and it explains a significant portion of why bad ideas persist at both organizational and personal levels. The corrective requires separating the act of public reversal from the failure narrative. The framing matters: "I was wrong" produces defensiveness and costs social capital. "The conditions have changed, and so has the best approach" is accurate and preserves relationships. Both are honest; only one is strategically useful.
The question of which bad ideas to prioritize killing is a resource allocation problem. Not all bad ideas deserve equal urgency of elimination. The prioritization framework considers three factors: how resource-intensive is the bad idea (how much time, money, attention, or identity is it consuming?), how much is it blocking better alternatives (does keeping it prevent you from pursuing the better approach?), and how embedded is it (how much harder will it be to kill in six months versus now?). The ideas that are most expensive, most blocking, and most likely to become more embedded score highest. Kill those first.
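As a minimal sketch, the three-factor prioritization can be written as a scoring rule. The 1-to-5 scales, the equal weighting of factors, and the example ideas are all assumptions for illustration; the text does not prescribe a scale:

```python
def kill_priority(cost: int, blocking: int, embedding: int) -> int:
    """Urgency score for killing an idea, from three factors rated 1-5:
    cost      -- time, money, attention, or identity it is consuming
    blocking  -- how much it prevents better alternatives
    embedding -- how much harder it will be to kill later than now
    """
    return cost + blocking + embedding

# Hypothetical ideas under review.
ideas = {
    "stalled flagship project": kill_priority(cost=5, blocking=5, embedding=4),
    "harmless pet theory": kill_priority(cost=1, blocking=1, embedding=2),
}

# Kill the highest-scoring ideas first.
ranked = sorted(ideas, key=ideas.get, reverse=True)
```

An equal-weight sum is the simplest choice consistent with the paragraph; if one factor dominates in practice (embedding often does, since it compounds), weighting it more heavily is a reasonable variation.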
The emotional experience of killing a well-embedded bad idea deserves honest acknowledgment rather than instrumental minimization. There is usually grief, even when the decision is rationally clear. You are not just ending an approach — you are ending the person you were when you held it, the social identity it supported, the future you imagined it would produce. This grief is legitimate. The error is allowing grief to function as a reversal signal — to conclude, from the pain of abandonment, that the abandonment is wrong. Grief is evidence that you were invested, not that the investment should continue. The practical discipline is to feel the grief while executing the change — not to wait for the grief to pass before acting, because it may not pass quickly, and delay is expensive.
The long-run skill this builds is a lower attachment threshold — the ability to hold ideas provisionally rather than definitionally, to try on beliefs rather than becoming them, to pursue approaches until the evidence says otherwise rather than until the identity demands otherwise. This is epistemically virtuous and also practically efficient. The person who can update quickly never spends years on a wrong approach; their revision cycle is faster than the damage cycle. The person who cannot update eventually builds their entire life around ideas that stopped working years ago and cannot understand why nothing feels right.
Killing your own bad ideas is not a negation. It is the active operation of the system of revision. Every killed bad idea creates space for a better one, restores resources to productive use, and proves that your beliefs are responsive to evidence rather than armored against it. The willingness to do it, reliably, on a schedule, before the embedding goes too deep — that is one of the most important personal capacities Law 5 requires.