Why Team Retrospectives Work Better When Blame Is Removed

The System Failed. Now What?

When the Challenger space shuttle broke apart 73 seconds after launch in 1986, NASA's initial institutional response was to look for who violated protocol. The Rogers Commission investigation — the real one, pushed by Richard Feynman — found something more disturbing: the system had been normalizing anomalies for years. Engineers knew about the O-ring issues. They'd raised concerns. Those concerns were processed through a bureaucratic structure that discounted inconvenient information and rewarded schedule adherence over safety. Nobody was purely "to blame." The system had failure modes built into its culture.

This is almost always how large failures work. But the human mind wants a face.

Why Blame Feels Right and Ruins Everything

Blame serves a psychological function. It is meaning-making under distress. When something fails, we experience a gap between what should have happened and what did — and our brains are uncomfortable with that gap. Identifying a cause, especially a human cause, closes the gap quickly. The story becomes: "X did Y, and that's why Z happened." Done. We can move on.

The problem is that this closure is false. It substitutes a satisfying narrative for an accurate one.

Research on organizational learning consistently shows that blame-oriented post-mortems produce several predictable dysfunctions:

Survivorship reporting: People report what they believe will be received well, not what is true. Studies of medical error reporting — a domain where this has been extensively researched — show that adverse event reporting drops dramatically in high-blame cultures. The events still happen; they just stop being reported. You can't learn from what you don't know.

The scapegoat equilibrium: Once blame lands on a person or a team, pressure releases and investigation stops. The system never gets examined. The same failure conditions remain, waiting for the next person to step into them.

Talent optimization for self-protection: High performers in blame cultures develop sophisticated impression management. They become skilled at positioning themselves before problems happen, creating paper trails, and managing optics. This uses cognitive resources that would otherwise go toward actual work.

Risk aversion that kills innovation: If failure = punishment, people stop taking risks. This is lethal in any organization that depends on experimentation, iteration, or adaptation.

The Blameless Post-Mortem: Where It Came From

The blameless post-mortem was formalized in DevOps culture, particularly by Site Reliability Engineering (SRE) teams at Google, and documented in the SRE Book (2016). The core premise is what Google called the "Just Culture" principle, originally developed by Sidney Dekker in the field of aviation safety: people are almost always doing the most reasonable thing they could with the information, tools, and pressures they had at the time.

This isn't naive. It doesn't pretend bad actors don't exist. But it recognizes that most failures in complex systems aren't caused by malice or stupidity — they're caused by systems with inadequate safeguards, unclear interfaces, misaligned incentives, or information gaps.

The blameless post-mortem investigates the system, not the individual. Key questions:

- What was the person or team trying to accomplish?
- What information did they have? What information were they missing?
- What conditions made the failure path easier than the safe path?
- Where did the system fail to catch or prevent the issue?
- What would need to be true for someone to make this same mistake again?

That last question is the most important one. If the answer is "nothing would need to change — this environment still produces the conditions for this failure," then you haven't learned anything actionable yet.

Psychological Safety: The Prerequisite

You cannot have a blameless retrospective without psychological safety. These aren't two separate practices — one enables the other.

Amy Edmondson's research at Harvard Business School across medical, manufacturing, and technology teams consistently finds that psychological safety — the belief that you will not be punished for raising concerns, admitting mistakes, or offering dissenting views — is the single strongest predictor of team learning and performance in complex work environments.

Psychological safety is not about being nice. Teams with high psychological safety still have hard conversations; they have them more effectively because people bring real information.

Building it requires consistent leadership behavior over time:

- Leaders model fallibility (admit when they were wrong, made a bad call, or didn't know something)
- Failure is treated as information, not as evidence of character
- The first questions after a failure are curious, not accusatory
- People who surface problems are visibly valued, not quietly penalized

Without this foundation, the format of your retrospective doesn't matter. You can follow every blameless post-mortem template and still get theater if the underlying culture punishes honesty.

What a Real Retrospective Looks Like

A functional retrospective has a few non-negotiable elements:

A neutral facilitator. Ideally someone not directly implicated in the failure. Their job is to ask questions and keep the inquiry honest, not to reach a verdict.

A shared timeline. Build a factual, chronological account of what happened — including the reasoning at each decision point. This often reveals that what looked like a bad decision was actually a reasonable decision given what was known at the time.

Contributing factors, not root causes. Most failures don't have a single root cause. They have contributing factors: technical, organizational, human, environmental. List them all.

Specific, owned action items. Not "we need to communicate better." Specific: "By [date], [person] will create a pre-deployment checklist for Friday releases and get it reviewed by [person]." Retrospectives that produce only vague commitments produce nothing.
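The difference between a vague commitment and a usable action item can be made almost mechanical: does it name an owner, a deadline, and a concrete change? A minimal Python sketch (the field names and the specificity check are illustrative assumptions, not part of any real retrospective tool):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ActionItem:
    """One retrospective action item. Fields are illustrative."""
    description: str
    owner: Optional[str] = None   # a named person, not "the team"
    due: Optional[date] = None    # a concrete deadline

    def is_specific(self) -> bool:
        """A usable action item has an owner, a deadline, and a
        concrete change (crudely approximated here by length)."""
        return (
            self.owner is not None
            and self.due is not None
            and len(self.description.split()) >= 5
        )

# "We need to communicate better" fails the test...
vague = ActionItem("Communicate better")

# ...while a dated, owned, concrete commitment passes.
specific = ActionItem(
    "Create a pre-deployment checklist for Friday releases and get it reviewed",
    owner="dana",
    due=date(2024, 7, 1),
)

assert not vague.is_specific()
assert specific.is_specific()
```

The point of the check isn't the word count; it's that ownership and a date are non-optional, which is exactly what vague commitments lack.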

Written artifacts. Write the post-mortem down and circulate it; Google shares many internal post-mortems broadly across the company. Transparency about failures communicates culture more powerfully than any stated value.
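A written artifact can start from a simple template. A minimal sketch assembled from the questions and elements above (the section names are illustrative, not Google's actual template):

```markdown
# Post-Mortem: <incident title>

## Summary
One paragraph: what happened, the impact, the duration.

## Timeline
Chronological facts, including the reasoning at each decision point.

## Contributing factors
Technical, organizational, human, environmental. List them all.

## System questions
- What was the person or team trying to accomplish?
- What information did they have? What were they missing?
- What made the failure path easier than the safe path?
- Where did the system fail to catch or prevent the issue?
- What would need to be true for this mistake to happen again?

## Action items
| Action | Owner | Due date |
|--------|-------|----------|
```

Keeping the "system questions" in the template makes it harder for the meeting to drift back toward a verdict on a person.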

Why This Threatens Conventional Management

Traditional management is built on accountability as a control mechanism. If I hold you responsible, you'll be more careful next time. This is the theory.

The problem is that accountability-as-punishment conflates two different things: accountability (owning outcomes, being responsible for fixing problems) and blame (being punished for failure). You can have robust accountability without blame. In fact, blameless cultures often have more accountability — people step forward, name problems early, and take ownership of fixes — because it's safe to do so.

Managers who rely on blame as a control mechanism often feel threatened by blameless retrospectives because they remove a lever of power. If blame isn't the consequence of failure, what keeps people in line? The answer is: clarity of expectation, genuine feedback, and the intrinsic motivation of working in a system where your honest contributions are valued. That requires more sophisticated management, not less.

The Community and Civilizational Scale

This principle doesn't stop at team retrospectives. The same dynamic plays out in schools (zero-tolerance policies that punish students without investigating systemic causes), in healthcare (malpractice cultures that drive error underground), in governments (political blame games that prevent genuine policy learning), and in international relations (punitive post-war settlements that breed resentment and future conflict).

Communities that practice blameless inquiry — that ask "how did our system produce this?" rather than "who can we punish for this?" — build the capacity to actually improve. They learn. They adapt. They generate fewer repeat failures.

If every community operated retrospectives this way — school boards after educational failures, city councils after infrastructure breakdowns, neighborhood organizations after community conflicts — the aggregate improvement in human organization would be enormous. Not because we'd become soft on accountability. Because we'd become precise about what actually causes things to go wrong.

The opposite of blame isn't amnesty. It's accuracy. And accuracy is how you get better.
