Think and Save the World

How To Build A Decision Journal And Learn From Past Choices

8 min read

Why Experience Doesn't Automatically Teach

There's a common assumption that experienced people make better decisions because they've seen more. Sometimes this is true. Often it isn't.

The research is sobering. Philip Tetlock's multi-decade study of expert political forecasters found that most experts performed barely better than chance on long-range predictions, and that experience often made things worse rather than better: experienced experts were more confident in their wrong predictions. The problem wasn't that they were unintelligent or uninformed; it was that they weren't getting clean feedback on their predictions.

Feedback is the mechanism by which experience teaches. And feedback gets corrupted in several well-documented ways:

Outcome bias. We evaluate the quality of past decisions by how they turned out. A doctor who prescribes the right treatment for a patient who then dies feels they made a bad call; a doctor who prescribes the wrong treatment for a patient who recovers feels fine. The patient's death or survival depended on many things besides the treatment decision, but the doctor's emotional feedback — and their memory of the decision — is contaminated by the outcome.

Hindsight bias. After an event occurs, we systematically overestimate how predictable it was beforehand. "I knew this would happen" is almost always a retroactive construction. This means that when we review past decisions without documentation, we're not seeing what we actually believed — we're seeing a revised version that has been updated to match what we now know.

Credit misattribution. When a decision works out, we attribute it to skill. When it fails, we attribute it to circumstances. Without a record of what we actually expected and why, there's no way to verify this attribution. Both good outcomes and bad ones teach us whatever we want them to teach us.

Selection bias in what we remember. We tend to remember our successes more vividly than our failures, and we tend to remember our failures as less foreseeable. Memory is reconstructive, not archival.

The decision journal works precisely because it creates an archival record that can be compared to the reconstructive memory. It forces confrontation between what you believed and what actually happened, in a way that your unaided memory never does.

The Farnam Street Framework in Detail

Shane Parrish's decision journal approach, developed at Farnam Street and detailed in his writing over many years, is probably the most widely circulated version of this tool. The core insight is that most people evaluate decisions based on outcomes, when what you actually want to evaluate is process.

The framework Parrish recommends records:

The situation and context. What is going on in your life, your work, your environment that makes this decision necessary? What are the relevant constraints? This gives future-you the context to understand why you were facing this choice at all.

The decision. State it precisely. Not "I'm going to try harder with my business" but "I'm going to cut project X, hire a designer, and pivot to customer segment Y by end of Q3." Vague decisions can't be evaluated later.

The mental models and reasoning you're using. This is where you explain your actual thought process — not what sounds smart, but what you actually think. Which facts are you relying on? What's your theory of how this will play out? What causal chain do you believe will operate?

The outcome you expect. Be specific and time-bound. Not "I think this will work out" but "I expect revenue to grow by 20% in six months" or "I expect my stress level to be lower in three months." Falsifiable predictions only.

The range of outcomes you consider plausible. What's the best case? What's the worst case? What's the most likely case? What probability do you assign to each?

Your emotional state. Are you excited? Anxious? Pressured by someone else's timeline? Tired? This is the hardest part to write honestly but often the most informative at review time.

What would change your mind. What evidence, if it appeared, would tell you that you were wrong? This forces you to define falsifiability in advance rather than shifting goalposts after the fact.

The review process is where the learning happens. Parrish recommends reviewing decisions at a defined lag — six months is common for career and business decisions; shorter for faster-cycling domains. At review:

- What actually happened?
- Was my prediction correct?
- If the outcome was good, was it because my reasoning was good or because I got lucky?
- If the outcome was bad, was it because my reasoning was bad or because I got unlucky?
- What was my emotional state at decision time, and did it distort my judgment?
- What would I do differently?

The answer to the last question is almost never "try harder." It's usually something structural: the kind of information you relied on, or a recurring bias in how you reason about a specific domain.
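The review questions can likewise be captured as a record, so each decision ends up paired with an honest resolution. A sketch under the same assumptions as above; the luck-versus-skill distinction is recorded explicitly, which is the move that keeps outcome bias from smuggling itself back in.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Review:
    reviewed_on: date
    what_happened: str
    prediction_correct: bool
    good_outcome: bool
    good_process: bool        # judged on the reasoning, not the result
    emotional_distortion: str # did your state at decision time warp the call?
    do_differently: str

    def verdict(self) -> str:
        """Separate process quality from outcome quality (the anti-outcome-bias move)."""
        if self.good_process and self.good_outcome:
            return "skill"
        if self.good_process and not self.good_outcome:
            return "bad luck"
        if not self.good_process and self.good_outcome:
            return "good luck"  # the most dangerous quadrant to learn from
        return "bad process"
```

The "good luck" quadrant is the one most worth flagging: a bad process that happened to work will, left unexamined, get repeated with confidence.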

Hindsight Bias: Why You Need the Record

Hindsight bias — the "I knew it all along" effect — is one of the most robust findings in judgment and decision-making research. Baruch Fischhoff demonstrated it in 1975 in a now-classic study: people told the outcome of a historical event consistently overestimated how likely they would have rated that outcome before knowing it.

The mechanism is that knowledge of the outcome changes the way you reconstruct the path to it. Once you know that company X went bankrupt, you reinterpret the signs that were "obvious in retrospect" — the management churn, the accounting irregularities, the market headwinds — as things you would have noticed and weighed heavily. You probably wouldn't have. But you can't check, because you don't have the record of what you actually thought.

This is damaging in two ways. First, it makes you think you understand the past better than you do, which inflates your confidence in predicting the future. Second, it prevents real learning because you're learning from a falsified version of your past beliefs.

The decision journal defeats hindsight bias by locking in your pre-outcome beliefs before the outcome occurs. The record can't be revised. When you go back and read what you actually wrote, you see what you actually thought — sometimes with embarrassment, sometimes with pride, always with more information than you'd have without the record.

What Patterns Emerge Over a Year

This is the payoff. A single decision journal entry is useful. A year of them is a map of your cognitive tendencies.

Overconfidence domains. Most people are systematically overconfident in some domains and appropriately calibrated or even underconfident in others. The journal shows you where your confidence consistently outruns your accuracy. For most people this shows up in predictions about other people's behavior and about timelines.

Emotion signatures. You start to see which emotional states correlate with bad decisions. Some people make worse decisions when excited — they underweight downside. Some make worse decisions when anxious — they overweight risk and choose suboptimally conservative options. Some make worse decisions when under social pressure — they defer to the room rather than their own analysis. You can't catch this without data.

Model failures. Every significant decision is based on a theory of how the world works. The journal lets you test your theories. If you consistently believed that a certain type of marketing would work, or that a certain type of person would behave in a certain way, or that certain market conditions would hold — and consistently found otherwise — you now have evidence to update the model.

Decision types where you're actually good. This part is underappreciated. The journal also shows where you have genuine skill. If your predictions in a certain domain are consistently accurate, that's useful information. You're probably drawing on real pattern recognition in that domain. Trust that more.

Recurring mistakes. Some errors are not domain-specific — they're structural. Maybe you consistently underestimate how long projects take. Maybe you consistently overestimate how much other people care about a problem you care about. Maybe you consistently make bad decisions in the two weeks after major stressful events. The journal reveals the structure.
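After a year of paired entries and reviews, the overconfidence patterns described above stop being impressions and become measurable: bucket your probability estimates by domain and compare stated confidence to the actual hit rate. A minimal calibration sketch, assuming each resolved prediction carries a domain tag of your own choosing, the probability you stated, and whether it came true:

```python
from collections import defaultdict

def calibration_by_domain(predictions):
    """predictions: list of (domain, stated_probability, came_true) tuples.

    Returns {domain: (mean stated confidence, actual hit rate, gap)}.
    A large positive gap means your confidence outran your accuracy there.
    """
    buckets = defaultdict(list)
    for domain, prob, came_true in predictions:
        buckets[domain].append((prob, came_true))
    report = {}
    for domain, items in buckets.items():
        confidence = sum(p for p, _ in items) / len(items)
        hit_rate = sum(1 for _, t in items if t) / len(items)
        report[domain] = (confidence, hit_rate, confidence - hit_rate)
    return report

# Hypothetical year of resolved predictions: confident about other people,
# roughly calibrated about your own work.
history = [
    ("people", 0.90, False), ("people", 0.80, True), ("people", 0.85, False),
    ("own_work", 0.70, True), ("own_work", 0.60, True), ("own_work", 0.75, False),
]
```

Even a handful of resolved predictions per domain is enough to see the direction of the gap, which is all the journal needs to tell you where to distrust your instincts.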

The Honest Problem with Most People's Version of This

The failure mode of the decision journal is the same as any journaling: people write what sounds good rather than what's true.

The emotional state entry is the one most people falsify. They write "I felt calm and deliberate" when they were actually panicked and running out of runway. They write "I carefully considered the alternatives" when they actually decided in the first five minutes and spent the rest of the time finding reasons to confirm it.

The fix is to treat the emotional state entry as the most important entry, not the least. The point is not to document that you were a good rational agent. The point is to document your actual state so that when you review, you can identify the conditions under which your thinking degrades.

The second failure mode is never reviewing. You write entries faithfully and then never go back. This is worse than useless — you've done the work without getting the feedback. Set a recurring calendar reminder for your review sessions. Make it a ritual, not an optional task.

The third failure mode is only journaling winners. If you only record the decisions that turn out well, you're running a selected sample. Record the ones you're uncertain about, the ones that feel uncomfortable, the ones that you later regret. Those are exactly where the learning is.

Scale: This Principle Applied Beyond the Individual

The decision journal idea scales. Organizations that run systematic pre-mortems (imagine the project has failed; work backward to explain why) and post-mortems (what actually happened, separated from blame) are doing the institutional version of this. Intelligence agencies with good practice run structured analytic techniques that force explicit documentation of pre-decision reasoning precisely to defeat hindsight bias in evaluation.

The absence of decision records is why most organizations don't actually learn from their history. They have narrative histories ("we tried X and it failed") but not causal records ("we believed Y, which led us to try X, which failed because our belief about Y was wrong for reason Z"). The narrative collapses the decision process into the outcome and makes institutional learning nearly impossible.

A decision journal is the individual version of a culture that takes learning seriously — which is different from a culture that takes performance seriously. Performance cultures learn from success and hide failure. Learning cultures document both and look at the process, not just the outcome.

The individual who builds this habit for a decade has something rare: genuine calibration. They know where to trust their instincts and where to be skeptical of them. They know what emotional states compromise their reasoning. They know which of their models of the world have earned confidence through repeated accuracy.

That's not wisdom in the vague sense. It's an empirically grounded self-knowledge that most people never achieve, because they never built the feedback loop to get there.
