Think and Save the World

Humility in Medicine — the Doctor Who Says "I Don't Know"


The Confidence Problem

In 1999, the Institute of Medicine published "To Err is Human," estimating that medical errors killed between 44,000 and 98,000 Americans annually. The report was a shock to public consciousness. It was also, by most subsequent analyses, an undercount.

A 2016 study in the BMJ, led by Martin Makary and Michael Daniel at Johns Hopkins, recalculated the figure using a different methodology and arrived at 250,000 preventable deaths per year from medical error in the United States alone. If that estimate is approximately right — and it is disputed, but it is in the same order of magnitude as other estimates — medical error would be the third leading cause of death in America, behind heart disease and cancer.

Diagnostic error is the largest single contributor. Not treatment error, not procedural error — the act of deciding what is wrong with a patient.

The Society to Improve Diagnosis in Medicine estimates 12 million diagnostic errors per year in US outpatient settings. Of those, roughly half have the potential to cause harm, and an estimated 40,000 to 80,000 contribute to a death. Independent analyses using different methods land in the same range.

The mechanism behind most diagnostic error is not ignorance. Doctors know an enormous amount. The mechanism is premature closure — the tendency to stop considering alternative diagnoses once a plausible one has been identified. It's an anchoring problem, a cognitive trap that affects everyone but that medical culture specifically exacerbates by treating clinical confidence as a professional virtue.

The doctor who walks into a room already thinking "this looks like X" filters the patient's presentation through that lens. Symptoms that fit X get noted. Symptoms that don't fit X get minimized, attributed to anxiety, or simply overlooked. The patient who says "but it also feels like Y" gets a reassuring nod and a prescription for X.

This is not malpractice. It is the normal functioning of an overconfident system.

What Medical Training Does to Uncertainty

Medical training is a long exercise in learning to perform certainty. Not to achieve it — that is not possible in a field where the body frequently does not read the textbook. To perform it. To walk into a room, receive fragmentary data, and emerge with a diagnosis and plan that projects authority.

This begins in medical school with the Socratic method of rounds, where a student who says "I don't know" risks public humiliation in front of peers and superiors. The lesson is not subtle: ignorance is a deficit to conceal. It continues through residency, where the expectation is that you function at the edge of your competence without admitting that is what you're doing. By the time a physician is in practice, the reflex is deep: uncertainty is managed, not communicated.

The reflex is reinforced by structural pressures. A physician who says "I'm not sure what this is, I'd like to run more tests" is exposed to two simultaneous risks. One is financial: payer systems often push back on additional testing, and the path of least resistance is a diagnosis that justifies a standard treatment. The other is legal: documentation of diagnostic uncertainty can be used in malpractice proceedings. The safest thing to write, from a liability standpoint, is a confident diagnosis. The safest thing to communicate to a patient is a confident diagnosis. The truth sometimes isn't one.

This is not a small distortion. It shapes clinical practice at scale, systematically in the direction of false certainty.

What the Alternatives Look Like

There are pockets of medical practice that have developed functional cultures around uncertainty. They're worth understanding because they show this is not an impossible standard — it's a design choice.

Palliative care is the most consistent example. Palliative specialists talk about uncertainty constantly, by necessity. Their patients have diagnoses but unclear trajectories. Prognosis is genuinely uncertain in ways that can't be papered over. Palliative medicine has developed language, practices, and training for communicating uncertainty to patients and families. Research on patient satisfaction in palliative settings consistently shows that patients prefer honest uncertainty to false confidence, including when the uncertainty is about whether they will survive.

Infectious disease went through a reckoning during COVID-19 that was partly forced. Publicly visible experts had to say, repeatedly, "the evidence is evolving" and "we don't know yet." This was distressing to a public that had been trained to expect medical certainty. But the physicians who communicated uncertainty clearly and updated their positions as evidence changed — Francis Collins, Anthony Fauci despite the political noise — generally maintained credibility with people tracking the science. The ones who overclaimed early and had to walk back quietly lost it.

Aviation medicine and emergency medicine have borrowed heavily from aviation's systems-thinking approach to error, which treats uncertainty as a signal to be communicated rather than a weakness to be concealed. Crew Resource Management — the training that teaches airplane crews to speak up when they're uncertain, including challenging a senior pilot — has a direct analog in medical team dynamics, and institutions that implement it show measurable reductions in adverse events.

Diagnostic excellence programs, championed by groups like the Society to Improve Diagnosis in Medicine, teach clinicians to explicitly generate differential diagnoses, flag their own anchoring, and treat closure as a hypothesis rather than a conclusion. These programs are not yet standard — they're pockets of intentional practice against the dominant culture.

The Patient Side of the Equation

When a physician admits uncertainty, patients have a choice to make about their own epistemic position. Most of them are not trained for this either.

The cultural contract in medicine, until recently, was paternalistic: you come to the doctor, the doctor knows, you follow the advice. That contract is breaking down in complicated ways — some patients are now arriving with WebMD printouts or Reddit diagnoses, which creates a different set of problems. But the underlying issue is that neither doctors nor patients have been well-equipped for collaborative uncertainty.

Shared decision-making — a clinical framework that explicitly incorporates patient values, preferences, and tolerances for risk into medical decisions — is the best available alternative. It requires the physician to say things like: "There are two options here. Option A has a 70% chance of resolving this and a 15% chance of these side effects. Option B has a 40% chance but no side effects. Given what you've told me about your life, I want to understand what matters more to you." That conversation can only happen if the physician is willing to hold uncertainty openly rather than resolve it unilaterally.
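The trade-off in that conversation can be made concrete. The sketch below scores the two hypothetical options from the example above under different patient-stated weights on avoiding side effects; the `expected_value` function and the weights are illustrative assumptions, not a clinical tool — the point is only that the "right" option flips depending on what the patient values.

```python
# Illustrative sketch of the shared decision-making trade-off.
# All numbers come from the hypothetical example in the text; the
# side-effect weight stands in for the patient's stated preferences.

def expected_value(p_resolve, p_side_effects, side_effect_weight):
    """Score an option: chance of resolution minus weighted side-effect risk."""
    return p_resolve - side_effect_weight * p_side_effects

# Option A: 70% chance of resolution, 15% chance of side effects.
# Option B: 40% chance of resolution, no side effects.
for weight in (0.5, 3.0):  # how strongly this patient wants to avoid side effects
    a = expected_value(0.70, 0.15, weight)
    b = expected_value(0.40, 0.00, weight)
    better = "A" if a > b else "B"
    print(f"side-effect weight {weight}: A={a:.2f}, B={b:.2f} -> option {better}")
```

A patient who tolerates side effects (weight 0.5) lands on option A; a patient who dreads them (weight 3.0) lands on option B. Neither answer is derivable from the medicine alone, which is exactly why the physician has to ask.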

Studies on shared decision-making consistently show that patients who participate in decisions have better outcomes — partly because they're more likely to adhere to treatments they chose, and partly because their values sometimes correctly override physician assumptions about what they want.

The Global Stakes

Scale this problem to a global civilization and the numbers become almost impossible to hold in mind.

The World Health Organization estimates that 4 in 10 patients are harmed in primary and outpatient care globally. Most of this harm is preventable. Diagnostic error is a leading cause. The problem is most acute in low- and middle-income countries where specialist consultation is rare, diagnostic technology is limited, and a single clinician may be the only point of contact for thousands of patients.

In high-income countries, medicine is moving toward precision and personalization — genomics, biomarkers, AI-assisted diagnostics. These tools can reduce certain kinds of error. They can also amplify overconfidence if the results of an algorithm are treated as certainty when they represent probability.

The AI diagnostic systems now being deployed in radiology, pathology, and primary care achieve impressive accuracy on test datasets and sometimes outperform human specialists on specific tasks. What they do not do is naturally communicate uncertainty. A model that internally assigns a 73% probability to one diagnosis but surfaces only the top label trains its users to treat a probability as a conclusion. This is the same problem in a new form.
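The failure mode above is a design choice, not an inevitability. A minimal sketch, with hypothetical labels and probabilities (no real diagnostic system's API is being described), contrasting an interface that discards uncertainty with one that surfaces it:

```python
# Hypothetical illustration of confident vs. calibrated reporting.
# The labels, probabilities, and threshold are invented for this sketch.

def confident_report(probs):
    """Return only the top label -- the model's uncertainty is silently discarded."""
    return max(probs, key=probs.get)

def calibrated_report(probs, threshold=0.90):
    """Surface the probability, and flag the result when confidence is low."""
    label = max(probs, key=probs.get)
    p = probs[label]
    flag = "" if p >= threshold else " (LOW CONFIDENCE -- review differential)"
    return f"{label}: {p:.0%}{flag}"

probs = {"pneumonia": 0.73, "pulmonary embolism": 0.19, "other": 0.08}
print(confident_report(probs))   # reads as a conclusion
print(calibrated_report(probs))  # reads as a probability plus a prompt to keep thinking
```

The second interface costs nothing in accuracy. What it changes is the clinician's behavior downstream: a flagged 73% invites a differential; a bare label closes one.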

A civilization-scale culture of medical humility requires embedding uncertainty communication into every layer of the system: training, licensing, workflow design, patient education, regulatory expectation, and the AI tools being woven into all of those.

Why This Is Civilizational

Here is the hard version of this argument.

Epistemic humility in medicine is not just about preventing individual deaths, as important as that is. It is about whether the field of medicine can learn faster than the diseases it is trying to treat.

Medicine advances through accumulating knowledge, refining practice, and updating beliefs when evidence contradicts them. A culture that is overconfident resists updating. This is the story behind the slow adoption of handwashing in the 19th century after Semmelweis demonstrated its importance — the established practitioners were too confident in existing theory to accept an anomalous finding. It is the story behind the decades-long delay in recognizing the harms of certain widely-used treatments: hormone replacement therapy at high doses, routine episiotomy, prolonged bed rest after cardiac events. In each case, the system was too certain to notice it was wrong.

The same dynamic plays out with every emerging pathogen, every new class of drug, every treatment that gets adopted based on plausible mechanism before controlled trial evidence exists. Overconfidence costs time. In medicine, time costs lives.

A global medical culture that treats uncertainty as information rather than failure would, over a generation, produce a faster-learning system. It would catch errors sooner. It would update protocols based on evidence rather than authority. It would generate better data, because clinicians who are tracking their own uncertainty are more likely to document equivocal findings, which are exactly the data that could reveal the next pattern.

Across eight billion people, over a generation, the compounding effect of that is not marginal. It is the difference between a civilization that is systematically more alive to what it doesn't know, and one that keeps killing people with confidence.

The Human Requirement

This ultimately returns to something Law 0 says directly: you are human. So is the doctor.

The doctor who cannot say "I don't know" is not protecting their patient. They are protecting themselves from the exposure of being fallible. That protection is understandable — the exposure is real, and the culture around it is punishing. But it transfers the cost of their discomfort onto the patient.

A civilization where doctors can say "I don't know" is a civilization that has decided doctors are allowed to be human. Where uncertainty is not a failure of competence but an honest representation of a complex reality. Where the patient's body is taken seriously as a source of evidence, not overridden by the clinician's working theory.

That civilization produces better outcomes. Not because the doctors know more. Because they're honest about how much they don't.

Exercise: The Last Time You Deferred

Think about the last time you received a medical opinion and accepted it without question. Not because it was obviously right — but because questioning the authority in the room felt impossible.

Now reverse it: think about a time when you had a strong sense something was wrong with your body that the system dismissed, explained away, or attributed to anxiety.

What would a different kind of system have looked like in that moment? What would you have needed the doctor to say? And what would you have needed to say back?

That conversation, multiplied across every patient-physician interaction on earth, is what changes medicine.
