What Happens When Every Military Engagement Requires a Public After-Action Review
The after-action review has a precise genealogy. The U.S. Army developed it formally in the 1970s at the National Training Center, where units rotating through structured combat simulations were subjected to systematic debriefs — honest, rank-blind assessments of what happened during the exercise versus what was intended. The AAR was not a critique session. It had a specific architecture: What was the mission? What actually happened? Why did the gaps occur? What should we sustain or improve? That architecture made it generative rather than punitive, and the results were measurable. Units that went through rigorous AARs improved; units that skipped them repeated the same errors.
The military AAR spread. It became a staple of management consulting, healthcare improvement, emergency response, and organizational learning. Patrick Lagadec's work on crisis management, Weick and Sutcliffe's research on high-reliability organizations, and Amy Edmondson's studies of psychological safety all converged on the same conclusion: structured post-event review is one of the most powerful organizational learning mechanisms available. The AAR is Law 5 in its most operational form.
But these are internal processes. The question of making them public — mandatory, structured, accessible — is a different kind of intervention. It is not just about improving military performance. It is about repositioning war within democratic governance.
What "Public" Actually Means
A public AAR does not mean a press conference or a political statement. It means a structured, documented analysis produced by the engaged institution, reviewed by independent parties, and released in sufficiently complete form that external observers can evaluate its integrity.
The British have the closest working model. Public inquiries — the Chilcot Inquiry into the Iraq War being the most extensive recent example — produce vast, detailed, primary-source-rich documents that name decisions, name decision-makers, trace chains of reasoning, and reach specific findings. The Chilcot Report ran to 2.6 million words. It found that the case for war was presented with "unwarranted certainty," that post-war planning was "wholly inadequate," and that the decision-making process bypassed proper Cabinet procedure. These are not vague criticisms. They are precise findings about specific acts and actors.
The effects of the Chilcot Report were imperfect but real. It shifted public understanding of the Iraq War from contested political territory to documented institutional failure. It created a historical record that subsequent policy debates could not ignore. The Iraq record it later documented had already shaped the debate about Syria in 2013, when Parliament voted against military action, the first defeat of a government motion on military action in modern British history, partly because members explicitly invoked Iraq and demanded a higher evidence standard. (The Report itself did not appear until 2016; what operated in 2013 was the still-running inquiry and the experience it was examining.)
That is the transmission mechanism. Public AARs do not prevent wars. They raise the epistemic and political cost of starting them without adequate justification and ending them without accounting for the results.
The Accountability Inversion Problem
The standard argument against public military AARs is operational security. Some of that argument is legitimate. Information about intelligence sources, ongoing operations, technical vulnerabilities, and diplomatic relationships can cause genuine harm if disclosed prematurely or completely. No serious framework for public AARs dismisses this concern.
But the operational security argument has been systematically abused. In the United States, classification authority is vast, loosely defined, and frequently used to shield embarrassment rather than to protect capability. The Pentagon Papers revealed systematic deception of the public and Congress about the Vietnam War's trajectory. Additional Abu Ghraib photographs were withheld for years on security grounds. SIGAR's Afghanistan reports documented years of deliberate optimism in military briefings that contradicted ground reality — a phenomenon SIGAR's quarterly reports described as "data driven decision making" that excluded inconvenient data.
The problem is not that classification exists. The problem is that the default posture is opacity, with disclosure as the exception requiring special justification. A mandatory public AAR framework would invert this default: transparency is the expectation, and classification of specific elements requires specific justification applied element by element, not category by category.
This is not theoretical. The Freedom of Information Act, oversight bodies like the Government Accountability Office, congressional intelligence committees, and independent inspectors general all operate on partial versions of this principle. The question is whether those partial mechanisms are sufficient, or whether something more systematic and structured is needed.
What the Data From Partial Transparency Shows
The SIGAR reports on Afghanistan are the most instructive data point available. The Special Inspector General for Afghanistan Reconstruction produced quarterly reports from 2008 onward that systematically documented the gap between official claims and documented reality. These reports were partially public — available to Congress, to journalists, to researchers. They revealed that:
- Metrics used to demonstrate progress were defined to exclude contrary evidence.
- Contracts were awarded and renewed based on political considerations rather than performance.
- Afghan security force capacity was consistently overstated.
- Drug production expanded dramatically despite billions in counter-narcotics spending.
These findings were available to decision-makers throughout the period. They did not stop the errors from continuing. But they did create a documentary record that, when the Washington Post obtained the underlying interview transcripts in 2019 and published what became known as the "Afghanistan Papers," produced a significant public reckoning. The lesson is not that transparency failed. It is that incomplete, partial, difficult-to-access transparency is insufficient. The data existed. The mechanism for it to matter was inadequate.
A structured mandatory public AAR would have different properties. It would be produced on a defined timeline. It would follow a defined framework. It would be released in a form designed for public consumption rather than buried in quarterly reports that require specialists to parse. It would be subject to independent review before release, not just classification review.
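The properties listed above — a defined framework, a defined timeline, and release gated on independent review rather than classification review alone — can be made concrete with a small sketch. This is purely illustrative: the field names, the one-year deadline, and the release gate are assumptions for the example, not any real statutory design.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class PublicAAR:
    """Illustrative model of a mandatory public AAR (all fields assumed)."""
    mission: str                    # what was the mission?
    actual_outcome: str             # what actually happened?
    gap_analysis: str               # why did the gaps occur?
    recommendations: list           # what should be sustained or improved?
    engagement_end: date
    deadline_days: int = 365        # defined timeline: assumed one-year deadline
    independent_review_done: bool = False

    def due_date(self) -> date:
        # The report is due on a fixed schedule, not at institutional discretion.
        return self.engagement_end + timedelta(days=self.deadline_days)

    def overdue(self, today: date) -> bool:
        return today > self.due_date()

    def releasable(self) -> bool:
        # Release is gated on independent review, not just classification review.
        return self.independent_review_done
```

The point of the structure is that the two failure modes the essay identifies — indefinite delay and self-certified release — are both excluded by construction: the deadline is mechanical, and the release condition is external to the reporting institution.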
The Chilling Effect Argument
Critics raise a legitimate second concern: mandatory public accountability will change military behavior in ways that are not uniformly beneficial. Commanders will become risk-averse if they know their decisions will be publicly scrutinized. Honest internal deliberation will be suppressed if participants know their words will eventually be disclosed. Allies will be less willing to share intelligence or cooperate in sensitive operations if they cannot trust information confidentiality.
These are real effects. They deserve serious engagement rather than dismissal.
On risk aversion: the empirical record does not show that accountability produces uniformly risk-averse behavior. Surgeons who know their outcomes are tracked do not become more conservative — they become more careful. Police departments with robust civilian oversight do not become less effective at law enforcement — those with effective oversight tend to be more trusted, and public trust enables more effective policing. The mechanism is not that accountability paralyzes action. It is that accountability changes the incentive structure around which actions are taken. Reckless, politically-motivated military actions become more costly. Careful, well-planned operations become relatively more attractive. That is the desired effect, not an unintended consequence.
On internal deliberation: the model of public AARs most consistent with good deliberation is time-delayed disclosure. Medical incident reviews are often protected from immediate disclosure but eventually become part of institutional learning databases. Legal deliberations in appellate courts are eventually public. Financial regulators often impose disclosure delays to prevent market disruption. A similar framework for military AARs — full internal process, external independent review, disclosure after a defined period (five or ten years for sensitive material, immediately for general findings) — would protect deliberation without enabling indefinite opacity.
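The time-delayed disclosure model can likewise be expressed as a simple schedule. The tier names and delay lengths below are illustrative assumptions mirroring the text (general findings immediately, sensitive material after five or ten years), not a proposed standard.

```python
from datetime import date

# Assumed tiers and delays, for illustration only.
DISCLOSURE_DELAY_YEARS = {
    "general_findings": 0,      # released immediately
    "sensitive_material": 5,    # delayed disclosure
    "sources_and_methods": 10,  # longest delay
}

def disclosure_date(review_complete: date, tier: str) -> date:
    """Earliest public release date for an AAR element of the given tier."""
    years = DISCLOSURE_DELAY_YEARS[tier]
    try:
        return review_complete.replace(year=review_complete.year + years)
    except ValueError:
        # Feb 29 shifted into a non-leap year: fall back to Feb 28.
        return review_complete.replace(year=review_complete.year + years, day=28)
```

The design choice worth noting is that the delay attaches to the element, not the document: a single AAR releases in stages, so general findings never wait on the most sensitive material.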
On allied intelligence: this concern is most serious for technical intelligence — SIGINT, HUMINT sources, technical collection methods. A well-designed framework would create specific carve-outs for this material, subject to independent oversight to ensure the carve-outs are not being abused. The Afghanistan experience is instructive: the broader failures of strategy, governance, contracting, and assessment were not intelligence secrets. They were known to multiple agencies, documented in internal reports, and kept from the public by bureaucratic inertia and political incentive, not by legitimate security classification.
The Civilizational Stakes
The deeper question is what kind of institution military force is, and what role it plays in democratic civilization.
The dominant framing treats war as a domain of technical expertise where civilian judgment is inherently inferior. Generals know things civilians don't. Operational complexity exceeds public comprehension. Democratic deliberation about military matters is therefore at best uninformed and at worst dangerous.
This framing has some validity at the tactical level. The decision about which route a convoy takes is not a democratic decision. But at the strategic and political level, this framing is an accountability evasion mechanism. The decision to invade Iraq, to continue the Afghan mission for twenty years, to define success in ways that prevented learning — these are not technical decisions beyond civilian comprehension. They are political decisions dressed in technical language to insulate them from democratic scrutiny.
A mandatory public AAR framework treats military engagement as what it actually is: a consequential public act taken by agents of democratic governance, using resources provided by citizens, producing consequences borne by those citizens and by others. On that framing, public accounting is not a courtesy. It is an obligation.
The civilizational effect of normalizing this obligation would be gradual but profound. Military cultures that know their decisions will be reviewed develop different decision-making norms. Governments that know wars will be publicly examined make different choices about which wars to start and how to conduct them. Publics that have access to honest post-engagement analysis develop more sophisticated frameworks for evaluating future decisions. Allied coalitions built on shared accountability norms are more cohesive and more credible than those built on mutual opacity.
None of this is utopian. Mandatory public AARs would not end war. They would not eliminate strategic error. They would not perfectly deter political manipulation of military decisions. What they would do is shift the operating environment of military decision-making toward one in which learning is structurally incentivized and deception is structurally costly. That is not a small thing. Across the scale of civilizational time, it may be one of the most important institutional changes democratic societies could make.
The AAR started as a training tool for infantry units in a California desert. The question is whether it can scale to become the epistemic infrastructure of democratic military governance. The answer depends not on whether the mechanism is sufficient, but on whether the political will to demand it can be generated and sustained.