How Community Response Teams Iterate on Emergency Protocols
The Problem With Static Emergency Protocols
Emergency protocols are almost always written before the emergency they're meant to address. They are authored by people imagining a scenario based on historical data, professional training, and educated projection. This is unavoidable — you cannot write a protocol during a crisis. But it creates a structural problem: the protocol encodes assumptions about conditions, resources, communication channels, and human behavior that may not survive contact with the actual event.
A protocol written for a Category 2 hurricane does not automatically cover a Category 4. A plan that assumes working cellular communication fails when the cell towers go down. A protocol built for a daytime disaster doesn't account for the half of the population that is asleep or dispersed at 3 AM. A response plan created for a single-point emergency — one structure fire, one car accident — encounters its limits when multiple simultaneous crises compete for the same finite pool of responders.
The gap between the written protocol and the real emergency is not a failure of planners. It is a structural feature of planning under uncertainty. The only mechanism that closes this gap over time is rigorous after-action review, systematically translated into protocol revision, tested before it matters again.
The Architecture of the Review Cycle
High-functioning community emergency response teams structure their iteration cycle in four phases that cannot be skipped or compressed without losing the value of the process.
Phase 1: Real-Time Field Notes
During an emergency, designated team members capture what is happening as it happens — not a narrative, but sparse observational notes. Who arrived when. What resources were available versus what was expected. Which decision was made at what point and on whose authority. What communications actually occurred versus what the protocol specified. This real-time capture is imperfect under stress, but it provides a factual substrate that prevents the debrief from becoming purely reconstructed memory, which is unreliable and subject to hindsight bias.
Some teams assign a specific role — the "observer" — whose only job during a response is documentation, not action. This person does not deploy, does not treat, does not fight the fire. They watch and record. This looks expensive in terms of person-power. It is cheap in terms of learning yield.
Phase 2: The Hot Wash
Within hours of a response ending — while memory is fresh but emotion has stabilized somewhat — the team convenes a brief, structured conversation called a hot wash. The agenda is deliberately narrow: What happened? What was supposed to happen? Where did those two diverge? The hot wash is not a debate about what should have been done. It is a rapid reconstruction of what actually was done. Twenty to thirty minutes is usually sufficient.
The output of the hot wash is a list of divergence points: moments where the protocol and the actual response differed. These divergences are not yet labeled as errors or successes — they are simply noted as deviations that require further examination.
Phase 3: The Full After-Action Review
Within days of the event, before normal routines fully reestablish and memory degrades further, the team conducts a full after-action review. This is a longer meeting — ninety minutes to three hours depending on the scale of the event — that systematically examines each divergence point identified in the hot wash.
For each divergence, the team asks: Was the protocol correct, and the responder deviated incorrectly? Was the protocol incorrect, and the responder improvised correctly? Was the protocol correct in most circumstances but wrong for this specific scenario? Was the protocol silent — did the situation present conditions the protocol never anticipated?
These four categories have different implications. The first suggests a training gap. The second suggests a protocol revision. The third suggests conditional protocol modification. The fourth suggests a protocol addition. Getting these categories right matters because the intervention differs. Sending people back to training when the protocol needs rewriting is expensive and useless. Rewriting the protocol when responders simply need reinforcement training wastes both the protocol's credibility and the team's time.
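For teams that track their reviews digitally, the four divergence categories and their differing interventions can be sketched as a simple lookup. This is a minimal illustration, not a standard taxonomy — the names below are invented for the example:

```python
from enum import Enum

class Divergence(Enum):
    """The four ways a divergence point can be categorized after review."""
    RESPONDER_ERROR = "protocol correct, responder deviated incorrectly"
    PROTOCOL_ERROR = "protocol incorrect, responder improvised correctly"
    CONDITIONAL_GAP = "protocol right in general, wrong for this scenario"
    PROTOCOL_SILENT = "conditions the protocol never anticipated"

# Each category calls for a different intervention -- misrouting one wastes
# either training time or the protocol's credibility.
INTERVENTION = {
    Divergence.RESPONDER_ERROR: "training reinforcement",
    Divergence.PROTOCOL_ERROR: "protocol revision",
    Divergence.CONDITIONAL_GAP: "conditional protocol modification",
    Divergence.PROTOCOL_SILENT: "protocol addition",
}
```

Making the category an explicit, recorded field — rather than an implicit judgment — is what lets a steward later audit whether divergences were routed to the right intervention.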
The full after-action review should be documented in a standard format: the event description, the protocol section under review, the observed deviation, the category of divergence, and the proposed revision. This document becomes the input to the next phase.
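The standard format above maps naturally onto a small record type. The sketch below is one possible shape, with illustrative field names and example values — it is not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AfterActionRecord:
    """One entry in the after-action review document.
    Field names mirror the standard format described in the text."""
    event_description: str
    protocol_section: str
    observed_deviation: str
    divergence_category: str      # e.g. "conditional gap", "protocol silent"
    proposed_revision: str
    review_date: date = field(default_factory=date.today)

# A hypothetical example entry.
record = AfterActionRecord(
    event_description="Multi-day flood response, shelter intake",
    protocol_section="4.2 Volunteer check-in",
    observed_deviation="Intake skipped medical screening under time pressure",
    divergence_category="conditional gap",
    proposed_revision="Add abbreviated screening checklist for surge conditions",
)
```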
Phase 4: Protocol Revision and Redistribution
The proposed revisions from the after-action review go to whoever holds authority over the protocol — typically a small team of experienced responders and one or two subject-matter experts who were not in the field during the event. Fresh eyes matter here. People who were in the field are sometimes too close to their experience to evaluate whether their improvised decision reflects a generalizable improvement or an individual judgment that worked for idiosyncratic reasons.
The revised protocol is then distributed to all team members with a clear change log. This is not bureaucratic formalism. People need to know what changed and why, not merely that it changed. A change log with rationale teaches the principles behind the protocol, not just the rules. Responders who understand why a rule exists can better improvise correctly when the next novel situation forces deviation.
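A change log entry that carries its rationale alongside the change might look like the following sketch — the structure and field names are assumptions for illustration, not a standard:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ChangeLogEntry:
    """What changed and, crucially, why it changed."""
    section: str
    summary: str        # what changed
    rationale: str      # why -- the part that teaches the principle
    effective: date
    source_review: str  # which after-action review produced this revision

def render(entry: ChangeLogEntry) -> str:
    """Format one entry for distribution to team members."""
    return (f"[{entry.effective.isoformat()}] {entry.section}: {entry.summary}\n"
            f"  Why: {entry.rationale}\n"
            f"  Source: {entry.source_review}")
```

Tying each entry back to its source review is what makes the learning auditable: anyone can trace a rule from the field event that motivated it.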
The revised protocol requires a training cycle before the team can be considered ready to deploy it. Tabletop exercises, scenario walkthroughs, or field drills — scaled to the significance of the change — complete the revision loop. Until the team has practiced the new protocol, it exists only as text.
Near-Miss Culture as the Core Competency
The single most powerful predictor of a community response team's learning capability is not the quality of their after-action review forms. It is the culture around near-misses.
A near-miss is an event where serious harm was possible but did not occur. A responder who almost tripped a gas line while searching a structure. A team that nearly sent a diabetic volunteer into a heat-stress environment without checking their medical conditions. A communication failure that nearly sent two teams to the same location while leaving a third location unaddressed.
In teams with weak safety culture, near-misses are invisible. No one reports them because reporting feels like admitting fault. The team is unaware that the protocol allowed for the near-miss possibility, and when the same scenario recurs with different luck, the outcome is harm.
In teams with strong safety culture, near-misses generate the same structured review process as actual incidents. The reasoning is straightforward: a near-miss contains the full information of an actual failure — the protocol gap, the environmental condition, the decision point — without the cost. Treating near-misses seriously is the highest-leverage form of protocol revision because it closes gaps before they produce harm.
Building near-miss culture requires explicit institutional commitments, not just cultural norms. Many high-performing response organizations use anonymous near-miss reporting systems modeled on the Aviation Safety Reporting System used in civil aviation. Anonymity removes the personal risk of self-reporting. The organization still gets the information. The learning still happens.
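The core mechanic of an anonymous reporting system is simple: strip the identifying fields before the report enters the shared log, so the learning content survives while the reporter's exposure does not. A minimal sketch, assuming a flat dictionary report and an invented list of identifying fields:

```python
# Which fields count as identifying is an assumption for this example;
# a real system would define this list deliberately.
IDENTIFYING_FIELDS = {"reporter_name", "reporter_contact", "reporter_role"}

def anonymize(report: dict) -> dict:
    """Drop identifying fields before the report enters the shared
    near-miss log. The learning content -- conditions, protocol gap,
    decision point -- passes through untouched."""
    return {k: v for k, v in report.items() if k not in IDENTIFYING_FIELDS}
```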
Who Holds the Protocol
Community response teams face a specific governance challenge that distinguishes them from professional emergency services: they are often volunteer-based, geographically dispersed, and loosely organized. The protocol exists in some combination of a binder, a shared digital folder, and the heads of the most experienced members. When experienced members leave — age out, move, burn out — institutional knowledge can leave with them if it was never properly encoded.
This argues for a formal role: the protocol steward. A protocol steward is not necessarily the team leader. They are the person responsible for the written protocol's accuracy, currency, and accessibility. They maintain the change log. They track which reviews generated which revisions. They ensure that when a member joins the team, they receive the current protocol, not a two-year-old version.
Protocol stewardship is an unglamorous job. It requires more administrative discipline than field heroism. It is the job most likely to go unfilled in a volunteer organization. It is also the job that makes all the other learning visible and durable. Teams that have no protocol steward tend to repeat the same errors across different personnel generations, because each generation encounters the same gap, patches it informally, and never encodes the fix in a way the next generation inherits.
Revision Across Different Emergency Types
One underappreciated complexity of community response protocol revision is that different emergency types produce different types of learning. The patterns are worth naming.
Sudden-onset events — earthquakes, structural collapses, acute weather events — reveal gaps in initial response capacity: communication, triage, resource mobilization. After-action reviews of sudden-onset events tend to generate revisions around speed, sequencing, and the first sixty minutes of response.
Slow-onset events — prolonged heat waves, multi-day flooding, extended power outages — reveal gaps in sustained response capacity: responder rotation, supply chain, coordination with external agencies, mental health support for both victims and responders. Reviews of slow-onset events tend to generate revisions around endurance, coordination, and resource management over days rather than hours.
Cascading events — where one emergency triggers another — reveal gaps in protocol interaction. The flood that breaks the power grid that disables the communication system that prevents mutual aid coordination is not addressed by three separate single-hazard protocols. It requires a meta-protocol for multi-system failures. Most community response teams have no such document until they've lived through a cascading event and done the review.
The Relationship Between Protocol and Improvisation
There is a tension in emergency response between protocol adherence and adaptive improvisation that after-action reviews cannot fully resolve — but can productively manage.
Protocols exist because improvisation under extreme stress is unreliable. When cognitive load is high, options are unclear, and time is short, people make systematic errors. Protocols offload the cognitive work of decision-making onto a pre-established decision tree, freeing attention for execution. This is the case for protocol adherence.
But protocols are finite. They cannot anticipate every combination of conditions. Responders who follow a protocol robotically into a situation the protocol wasn't designed for can cause harm by the protocol's blind application. This is the case for adaptive improvisation.
The resolution the best teams arrive at: protocols define the default. Improvisation is permitted and expected when conditions clearly differ from the protocol's assumptions. But every improvisation is logged, reviewed in the after-action process, and either endorsed (generating a protocol addition) or analyzed (generating a training intervention). Improvisation without review is just luck. Improvisation that feeds back into protocol revision is the engine of adaptive capability.
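The "logged, then either endorsed or analyzed" loop can be sketched as a tiny record-and-route structure. The names and outcomes below are illustrative assumptions, not an established scheme:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImprovisationLog:
    """A field improvisation awaiting after-action review."""
    what_was_done: str
    assumption_that_differed: str          # why the default did not apply
    review_outcome: Optional[str] = None   # set only by the review

def close_out(entry: ImprovisationLog, outcome: str) -> str:
    """Route a reviewed improvisation: endorsed -> protocol addition,
    analyzed -> training intervention. Unreviewed improvisation is just luck."""
    if outcome not in ("endorsed", "analyzed"):
        raise ValueError("review must either endorse or analyze the improvisation")
    entry.review_outcome = outcome
    return "protocol addition" if outcome == "endorsed" else "training intervention"
```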
A community response team that runs this cycle well — deploying, reviewing, revising, training, deploying again — becomes over time an organization with layered, field-tested competence that no initial training program can substitute for. The emergency itself, processed through disciplined reflection, becomes the teacher. The protocol is the record of what was learned.