How experienced leaders actually make decisions under extreme time pressure — and why classical rationality gets it wrong
It is January 15, 2009, and Captain Chesley "Sully" Sullenberger has just lost both engines on US Airways Flight 1549 at an altitude of 2,818 feet over one of the most densely populated places on Earth. He has 155 souls aboard and roughly 208 seconds before the aircraft hits something. Air traffic control offers him two airports — Teterboro to the west, LaGuardia behind him. The rational thing to do, according to classical decision theory, would be to evaluate each option against weighted criteria: distance, glide ratio, wind conditions, obstacle clearance, passenger survival probability. Sullenberger does none of this. Within seconds, he makes a decision that no simulator had ever trained him for. "We're gonna be in the Hudson," he tells the controller. His voice is flat, certain. It is not the voice of a man running probability calculations. It is the voice of a man who has seen the answer.
How did he see it? And what, precisely, was happening inside his mind that made that seeing possible — or that might, under slightly different conditions, have made it catastrophically wrong?
For most of the twentieth century, the dominant model of good decision-making was the rational choice model: identify the problem, generate a comprehensive set of options, evaluate each option against defined criteria, select the optimal one. This model, rooted in economic theory and refined in operations research, works beautifully in boardrooms, planning committees, and academic examinations. It is how we teach MBAs to think. It is how we structure strategic plans. And it is almost entirely useless when the building is on fire.
Herbert Simon saw the problem decades before anyone studied fire commanders. In Administrative Behavior, Simon (1947) introduced the concept of bounded rationality — the recognition that human decision-makers cannot evaluate all alternatives because they lack the time, the information, and the cognitive capacity. Instead, they satisfice: they set an aspiration level for what constitutes an acceptable outcome and choose the first option that meets it. Good enough, fast enough. This was heresy in an era that worshipped optimization, but Simon won a Nobel Prize for it, because he was describing how humans actually behave rather than how economists wished they would.
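The mechanics of satisficing are simple enough to sketch in a few lines of code. The toy below is purely illustrative and not anything Simon wrote: the options, their scores, and the aspiration level are all invented. What it shows is the structural difference between the two models: the optimiser cannot act until it has evaluated everything, while the satisficer acts the moment an option clears the bar.

```python
# A minimal sketch of Simon's satisficing versus classical optimisation.
# The options, scores, and aspiration level are invented for illustration.

def optimise(options, score):
    """Classical rational choice: evaluate every option, return the best."""
    return max(options, key=score)

def satisfice(options, score, aspiration):
    """Simon's model: return the first option that is good enough."""
    for option in options:
        if score(option) >= aspiration:
            return option
    return None  # no option meets the aspiration level

# Options in the order they come to mind; the scores are hypothetical.
options = ["ventilate the roof", "attack from the east stairwell",
           "evacuate and contain", "call a second alarm"]
score = {"ventilate the roof": 0.55,
         "attack from the east stairwell": 0.70,
         "evacuate and contain": 0.90,
         "call a second alarm": 0.60}.get

print(optimise(options, score))         # scans all four options
print(satisfice(options, score, 0.65))  # stops at the second: good enough, fast enough
```

Note what the satisficer never learns: that a better option existed further down the list. That is the price of speed, and Simon's point was that under real constraints it is usually a price worth paying.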
Yet even Simon's model implies a degree of deliberation — setting criteria, scanning options, evaluating whether each meets the threshold. What happens when you have neither the time to scan nor the cognitive bandwidth to evaluate? What happens when the decision must be made in seconds, the information is ambiguous or contradictory, and the consequences of getting it wrong are measured in human lives?
That question drove a young cognitive psychologist named Gary Klein into fire stations, military command posts, and neonatal intensive care units in the mid-1980s. What he found there overturned nearly everything the decision sciences thought they knew.
Klein's original study was elegantly simple in design and revolutionary in findings. He and his colleagues interviewed 26 experienced fireground commanders — professionals with an average of 23 years of service — about 156 critical decisions made during actual fires (Klein et al., 2010). The research question was straightforward: when you face a life-or-death decision under extreme time pressure, how do you choose what to do?
Classical decision theory predicted that commanders would generate several courses of action and compare them. They did not. In 80 to 90 percent of the decisions Klein studied, commanders reported considering only a single option. They did not compare. They recognised (Klein, 1998).
The model Klein developed from these findings — the Recognition-Primed Decision (RPD) model — describes a three-stage process that experienced professionals use under pressure:

1. Recognition. The incoming situation is matched against patterns stored from past experience. A successful match brings with it the relevant cues to watch, plausible goals, expectancies about what should happen next, and a typical course of action.

2. Mental simulation. Rather than comparing that course of action against alternatives, the decision-maker imagines carrying it out, running it forward mentally to see whether it will work in this specific situation.

3. Implementation or modification. If the simulation holds up, act. If it reveals a flaw, adjust the action or reject it and simulate the next most typical option: serially, one at a time, never side by side.
This is not guessing. It is not recklessness. It is what Klein (2008) calls naturalistic decision making — the study of how experienced people make decisions in real-world settings characterised by time pressure, high stakes, ambiguous information, and dynamic conditions. The NDM framework emerged precisely because laboratory studies of decision-making — which typically gave subjects unlimited time, clear options, and defined probabilities — were describing a world that crisis leaders never inhabit.
Recall a time you made a rapid, high-stakes decision — it need not be life-or-death. Did you compare multiple options systematically, or did you "just know" what to do? If the latter, can you trace that knowing back to a pattern you had seen before? What you experienced may well have been recognition-primed decision making in action.
The critical insight of RPD is that expert intuition is not mystical. It is compressed experience. A fireground commander with 23 years of service has seen thousands of fires. Those experiences are stored not as explicit rules but as patterns — perceptual configurations that bundle together cues, expectations, and actions. When a new situation arrives, the commander's brain does what brains do extraordinarily well: it matches the incoming information to a stored pattern, often before the commander can articulate why. Klein calls this "seeing the invisible" — experienced professionals literally perceive features of a situation that novices cannot, because their perceptual systems have been trained by years of pattern exposure.
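One way to demystify "compressed experience" is to caricature it computationally. The sketch below is a thin metaphor under invented assumptions: the cues, the stored patterns, and the actions are all made up, and a brain is of course not a lookup table. It renders experience as a store of cue configurations paired with typical actions, and recognition as a nearest match. What matters is what is absent: no criteria, no weights, no comparison of alternatives, just a match that surfaces an action.

```python
# A thin computational metaphor for pattern recognition, not a model of
# the brain. All cues, patterns, and actions are invented for illustration.

PATTERN_LIBRARY = {
    # (smoke colour, heat, sound)      -> typical action
    ("black", "high", "roaring"):      "defensive attack, protect exposures",
    ("grey", "moderate", "crackling"): "interior attack, first floor",
    ("black", "moderate", "quiet"):    "suspect basement fire, hold crews out",
}

def similarity(stored, observed):
    """Count matching cues: a crude stand-in for perceptual matching."""
    return sum(s == o for s, o in zip(stored, observed))

def recognise(observed):
    """Return the action of the stored pattern that best matches the cues."""
    best = max(PATTERN_LIBRARY, key=lambda stored: similarity(stored, observed))
    return PATTERN_LIBRARY[best]

print(recognise(("black", "high", "quiet")))  # the commander "just knows"
```

Twenty-three years of service amounts to a very large pattern library and a very well-trained similarity function, which is why the commander perceives what the novice, holding an almost empty library, cannot.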
Recognition-primed decision making is powerful, but it is not invulnerable. It requires cognitive resources — specifically, it requires that the decision-maker's working memory be available for pattern matching and mental simulation. When that working memory is overwhelmed, the entire system degrades. And it does not degrade gently.
Research on cognitive load consistently demonstrates that decision quality does not decline in a smooth, linear fashion as demands increase. Instead, performance holds relatively steady across a range of increasing load, and then drops sharply — a threshold collapse. Allen and colleagues (2014) found that when participants were placed under cognitive load by memorising an eight-digit number while making decisions, their ability to extract basic information remained intact, but their capacity to optimise choices — to think strategically — was severely suppressed. The implication is stark: under high cognitive load, you can still read the gauges, but you can no longer figure out what they mean together.
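What "threshold collapse" means is easier to see than to describe. The toy model below is a sketch under invented assumptions; the logistic curve and every parameter are ours, not Allen and colleagues' data. It contrasts the naive expectation, quality falling in proportion to load, with the shape the research reports: quality near its ceiling until demands approach capacity, then a precipitous fall.

```python
import math

# Two hypothetical degradation curves. The logistic form and all
# parameters are invented for illustration; nothing here is fitted to data.

def linear_decline(load, capacity=10.0):
    """Naive expectation: decision quality falls in proportion to load."""
    return max(0.0, 1.0 - load / capacity)

def threshold_collapse(load, threshold=7.0, steepness=3.0):
    """What the research describes: quality holds, then drops sharply
    once demands approach working-memory capacity."""
    return 1.0 / (1.0 + math.exp(steepness * (load - threshold)))

for load in range(11):
    print(f"load={load:2d}  linear={linear_decline(load):.2f}  "
          f"threshold={threshold_collapse(load):.2f}")
```

Run it and the difference is immediate: the linear curve loses quality steadily from the first unit of load, while the threshold curve still reads 0.95 at a load of six and has fallen to 0.05 by a load of eight. A leader on the linear curve gets warning. A leader on the threshold curve feels fine right up until the collapse.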
Deck and Jahedi (2015) extended these findings, showing that cognitive load increases risk aversion, reduces numerical reasoning, and makes decision-makers more susceptible to anchoring — fixating on the first piece of information they encounter, even when it is irrelevant. Under load, the analytical system that might catch errors or question initial impressions is effectively taken offline, leaving the faster, more automatic system to operate without supervision.
For crisis leaders, this threshold effect has two critical implications. First, you must manage your own cognitive load. Every additional demand — a ringing phone, a side conversation, an unresolved question held in working memory — pushes you closer to the threshold. Second, and perhaps more importantly, you must manage the cognitive load of your team. A leader who piles simultaneous demands on a subordinate during a crisis is not merely being inconsiderate; she is systematically degrading the quality of decisions across her entire operation.
How do you recognise cognitive overload in yourself or others? The research identifies several warning signs that experienced leaders learn to watch for: attentional tunnelling, in which the decision-maker fixates on a single cue or display while the wider situation drifts; dropped routine tasks, the small standard steps that begin to be skipped without anyone noticing; perseveration, the repetition of an action that is visibly not working; communication that turns terse, fragmented, or stops altogether; and a distorted sense of time, in which minutes vanish or stretch unaccountably.
Return to the cockpit of Flight 1549. The National Transportation Safety Board's investigation (2010) revealed something remarkable about Sullenberger's decision-making. In post-accident simulations, pilots who began an immediate return to LaGuardia, turning back the instant the birds struck, were sometimes able to land safely. But when the simulations added a realistic 35-second delay for the pilots to assess the situation and decide on a course of action, no attempt to reach the airport succeeded: the aircraft came down in the buildings short of the runway.
Sullenberger's decision to ditch in the Hudson was, in Klein's framework, a textbook recognition-primed decision. He did not compare LaGuardia versus Teterboro versus the Hudson using weighted criteria. He recognised the situation — dual engine failure at low altitude over an urban area — and mentally simulated the return to LaGuardia. In his mind, the simulation broke. He could see they would not make it. The Hudson appeared not as an optimised choice but as the only course of action that survived mental simulation. Critically, he also made several expert decisions that overrode standard procedure: he activated the auxiliary power unit immediately (ahead of checklist sequence) and selected flaps 2 instead of the standard flaps 3, a choice that reduced drag and extended glide distance. These were not analytical calculations. They were the products of decades of accumulated flight experience expressed as pattern recognition.
In Chapter 1, we examined the Mann Gulch disaster as a case of organisational collapse. Now we return to it through a different lens: the decision-making of foreman Wag Dodge in the final minutes before the fire overtook his crew.
As Maclean (1992) reconstructs it, Dodge realised that the fire had jumped the gulch and was racing uphill toward his crew at a speed that made escape to the ridge impossible. He had perhaps ninety seconds. What Dodge did next was, in Klein's analysis, one of the purest examples of creative decision-making under lethal time pressure ever documented. He stopped running, bent down, and lit a match. He set fire to the grass in front of him, creating a small burned-out area. Then he lay down in the ashes of his own fire and let the main fire burn over him.
No one had ever done this before. There was no training for it, no protocol, no prior pattern to recognise. Dodge invented the escape fire in the moment. Klein has analysed this decision as a case where RPD's normal pattern-matching mechanism was unavailable — the situation was genuinely novel — and the decision-maker's deep understanding of the domain (how fire behaves, what it needs, what it leaves behind) enabled a creative leap. Dodge's crew, lacking this depth of understanding, saw only a madman lighting a fire in the path of a fire. They ran past him. Thirteen of them died.
Dodge's escape fire was creative genius. But his crew's refusal to follow him was not stupidity — it was also a form of pattern recognition. They had a pattern for what you do when fire chases you: you run. How should a leader communicate a genuinely novel course of action when their team's experience patterns are actively working against comprehension? What could Dodge have done differently in those ninety seconds?
The Fukushima Daiichi nuclear disaster of March 2011 offers a contrasting lesson. When the earthquake and tsunami struck, operators at the plant faced an unprecedented situation: the complete loss of electrical power, instrumentation, and cooling systems across multiple reactor units simultaneously. Emergency management plans had never contemplated this scenario (National Research Council Committee, 2014).
In the critical early hours, decision-makers at Fukushima attempted to use analytical processes — verifying plant status, consulting procedures, seeking authorisation through the chain of command — under conditions that demanded faster, more adaptive action. With no reliable instrument readings, operators could not confirm reactor status, which meant they could not match the situation to their trained procedures, which meant they could not act. The procedural framework that was designed to ensure careful decision-making became a paralysis mechanism. Hours were lost in attempts to gather information that simply was not available, while conditions inside the reactors deteriorated beyond recovery.
The lesson is not that analytical thinking is bad. It is that analytical thinking deployed in conditions that require recognition-based or creative action is a form of mode mismatch — and mode mismatch kills.
The central practical question of this chapter is not "which decision mode is best?" It is "which decision mode is appropriate right now?" The answer depends on a dynamic assessment of four factors (caricatured in the sketch after this list):

1. Time available. Seconds demand recognition; hours permit analysis.

2. Familiarity. Does the situation genuinely match a pattern in your experience, or does it merely resemble one?

3. Information quality. Analysis needs reliable data; when the instruments are dark, as they were at Fukushima, analytical procedures stall.

4. Stakes and reversibility. The less recoverable a wrong choice, the more of whatever time exists should be spent testing the candidate action, through mental simulation if nothing else.
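To make the logic of that assessment inspectable, here is a deliberately crude caricature in code. Every rule and threshold is invented, and real mode awareness is continuous and metacognitive rather than a lookup table; the sketch exists only to show the factors interacting rather than applying one at a time.

```python
# A deliberately crude caricature of the four-factor assessment.
# Every rule and threshold is invented for illustration.

def choose_mode(seconds_available: float, pattern_matches: bool,
                information_reliable: bool, stakes_high: bool) -> str:
    if seconds_available < 120:
        # No time for analysis: recognise if you can, invent if you must.
        return "recognition-primed" if pattern_matches else "creative"
    if not information_reliable:
        # Analysis stalls without data (the Fukushima trap): act on the
        # best-matching pattern and probe to generate information.
        return "recognition-primed"
    if stakes_high:
        # Time and information both available, and the cost of being
        # wrong justifies the cost of deliberate comparison.
        return "analytical"
    # Modest stakes: take the first good-enough option and keep moving.
    return "satisficing"

print(choose_mode(30, True, False, True))   # dual engine failure: recognition-primed
print(choose_mode(90, False, True, True))   # fire has jumped the gulch: creative
```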
The most dangerous moments in crisis leadership occur at the boundaries between these modes — when a situation looks familiar but is actually novel (leading to confident pattern matching onto the wrong template), or when novelty is present but time pressure triggers an automatic reversion to known patterns. Expert leaders develop what we might call mode awareness: the metacognitive ability to monitor not just the crisis but their own cognitive process, asking "Am I in the right mode for this situation?"
Simon's concept of satisficing gains new urgency in crisis conditions. In a crisis, the search for the optimal decision is not merely impractical — it is actively dangerous, because the time consumed searching for "best" allows the situation to deteriorate past the point where any good option remains. The satisficing leader sets a clear threshold: this action will be adequate to prevent the worst outcome and preserve options for future adjustment. Then she acts.
This requires a psychological discipline that many high-achievers find deeply uncomfortable. Leaders who have been rewarded throughout their careers for finding the best answer, the elegant solution, the thoroughly analysed recommendation, must learn to override that training in crisis. The perfect is genuinely the enemy of the good when the building is burning. A good-enough decision made now is almost always superior to an optimal decision made too late.
But satisficing has a cost. It requires cognitive resources to set appropriate aspiration levels, to monitor whether the "good enough" decision is actually performing adequately, and to adjust when it is not. In prolonged crises — those lasting days, weeks, or months — the cognitive resources that make satisficing possible are themselves eroded by fatigue, stress, and the cumulative weight of decision after decision. When those resources are finally exhausted, decision-makers do not satisfice; they default. They fall back on the most automatic, most habitual response available, regardless of whether it fits the situation. This dynamic — the erosion of satisficing capacity over time — is a critical vulnerability in extended crises, one we will examine in detail in Chapter 6.
Consider the difference between satisficing and giving up. A satisficing leader accepts a good-enough outcome intentionally, with continued monitoring. A depleted leader accepts whatever happens because they no longer have the capacity to evaluate. Can you think of a crisis example — from the news, from your own experience — where you suspect a leader crossed from satisficing to defaulting? What were the warning signs?
Understanding the cognitive science of crisis decision-making is valuable only if it translates into action. Drawing from the research reviewed in this chapter, several strategies emerge for leaders who must maintain decision quality under conditions designed to destroy it:

1. Build the pattern library before the crisis. Recognition-primed decisions are only as good as the patterns behind them; deliberate exposure to varied, realistic scenarios, in simulation if not in life, is what puts the patterns there.

2. Protect working memory. Shed, delegate, or write down everything that does not need to be held in mind; every retained demand moves you closer to the threshold.

3. Manage your team's load, not just your own. Sequence demands rather than stacking them, and watch subordinates for the warning signs of overload described above.

4. Practise mode awareness. Ask explicitly, and have others ask you: is this situation actually familiar, or merely similar? Am I analysing when I should be acting, or acting when I have time to analyse?

5. Satisfice deliberately. Set the good-enough threshold in advance where possible, act when an option clears it, and keep monitoring so that good enough stays good enough.
In Chapter 4, we move from the individual decision-maker to the communication systems that connect crisis leaders to their teams and their publics. Even the best decision is worthless if it cannot be communicated clearly, quickly, and credibly. We will examine how crisis communication operates under the same constraints we explored here — time pressure, information degradation, and cognitive overload — and why the message received is almost never the message sent.
Allen, P. M., Edwards, J. A., Snyder, F. J., Makinson, K. A., & Schneider, D. M. (2014). The effect of cognitive load on decision making with graphically displayed uncertainty information. Risk Analysis, 34(8), 1459–1474. https://doi.org/10.1111/risa.12161
Deck, C., & Jahedi, S. (2015). The effect of cognitive load on economic decision making: A survey and new experiments. European Economic Review, 78, 97–119. https://doi.org/10.1016/j.euroecorev.2015.05.004
Klein, G. (1998). Sources of power: How people make decisions. MIT Press. https://mitpress.mit.edu/books/sources-power
Klein, G. (2008). Naturalistic decision making. Human Factors, 50(3), 456–460. https://doi.org/10.1518/001872008X288385
Klein, G., Calderwood, R., & Clinton-Cirocco, A. (2010). Rapid decision making on the fire ground: The original study plus a postscript. Journal of Cognitive Engineering and Decision Making, 4(3), 186–209. https://doi.org/10.1518/155534310X12844000801203
Maclean, N. (1992). Young men and fire. University of Chicago Press. https://press.uchicago.edu/ucp/books/book/chicago/Y/bo3630528.html
National Research Council Committee. (2014). Lessons learned from the Fukushima nuclear accident for improving safety of U.S. nuclear plants. National Academies Press. https://doi.org/10.17226/18294
National Transportation Safety Board. (2010). Loss of thrust in both engines after encountering a flock of birds and subsequent ditching on the Hudson River, US Airways Flight 1549, Airbus A320-214, N106US, Weehawken, New Jersey, January 15, 2009 (Aircraft Accident Report NTSB/AAR-10/03). https://www.ntsb.gov/investigations/AccidentReports/Pages/AAR1003.aspx
Simon, H. A. (1947). Administrative behavior: A study of decision-making processes in administrative organizations. Macmillan. https://archive.org/details/administrativebe00simo