Why the opening phase of a crisis determines the trajectory of everything that follows
At 3:27 p.m. on March 11, 2011, the first waves of a massive tsunami, triggered some forty minutes earlier by the largest earthquake in Japan's recorded history, reached the Fukushima Daiichi Nuclear Power Station. Inside the control rooms, operators had watched external power fail when the earthquake struck. Emergency diesel generators kicked in — the system was performing as designed. Fourteen minutes later, a wall of water roughly fourteen metres high overwhelmed the plant's seawall. The generators drowned. Batteries began to fail. In the critical minutes that followed, operators faced a cascade of contradictory instrument readings, severed communication lines, and a situation for which no procedure manual had been written. The question confronting the shift supervisors and plant management was not whether something was wrong — it was staggeringly, obviously wrong — but rather: how wrong is this, and what mode of response does it demand?
Four thousand kilometres away and seven years later, twelve boys from a Thai youth football team and their coach wandered into the Tham Luang cave complex after practice. When monsoon rains flooded the entrance, the local governor activated emergency protocols within hours, and an international rescue coordination effort was underway before most of the world even knew the boys were missing. The contrast between these two cases — one defined by catastrophic delay, the other by remarkably swift mobilisation — illuminates something fundamental about crisis leadership: the first thirty minutes don't just shape the response. They are the response.
Every crisis has an inflection point — a moment when the situation crosses from manageable abnormality into something that demands a fundamentally different mode of operating. The distance between recognising that threshold and actually crossing it in organisational terms is what we might call the activation gap: the period between "something is wrong" and "we are now in crisis response mode." Research consistently shows that this gap is where the most consequential decisions are made, almost always with the least available information (Boin et al., 2016).
The activation gap is not primarily a problem of speed, though speed matters. It is a problem of recognition. Barry Turner's foundational work on man-made disasters identified what he called the incubation period — a phase in which warning signals accumulate but are systematically overlooked, misinterpreted, or rationalised away (Turner & Pidgeon, 1997). Turner identified four categories of information failure that characterise this period: information that is completely unknown; information that exists but is not fully appreciated; information that exists but is not correctly assembled; and information that does not fit existing mental models. All four of these failures are dramatically compressed during the first thirty minutes of an acute crisis, when the incubation period collapses and the event itself arrives.
What makes this opening phase so treacherous is not the absence of information — there is usually quite a lot of it — but rather the abysmal signal-to-noise ratio. Reports flood in from multiple sources, many of them contradictory, incomplete, or distorted by the stress and confusion of the moment. Leaders must make sense of this stream while simultaneously deciding whether to activate emergency protocols that, once triggered, carry significant organisational and sometimes political costs. This dual burden — sensemaking and deciding simultaneously — is what Weick (1988) identified as the central paradox of crisis: the actions you take to understand the situation often change the situation itself.
Consider an organisation you have been part of — a workplace, a sports team, a university department. If something went seriously wrong at 2 a.m. on a Sunday, who would be the first person to know? Who has the authority to activate an emergency response? How long would it take for those two people to connect? If you are unsure of the answers, you have just identified your organisation's activation gap.
An activation threshold is the point at which an organisation shifts from routine operations to crisis response mode. It sounds simple in theory — you set criteria, and when those criteria are met, you activate. In practice, activation thresholds are among the most poorly designed elements of organisational preparedness. The Institute of Medicine's toolkit on crisis standards distinguishes between indicators (data points that suggest a situation is developing) and triggers (specific thresholds that demand action), noting that the gap between the two is where most organisations struggle (Institute of Medicine, 2013). An indicator might be a sensor reading that is slightly above normal; a trigger is the determination that the reading represents a genuine emergency requiring immediate resource mobilisation.
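To make the distinction concrete, here is a minimal sketch in Python of how an indicator/trigger boundary might be encoded. The sensor name, threshold values, and response messages are purely illustrative assumptions, not taken from the Institute of Medicine toolkit; the point is only that an indicator prompts heightened monitoring while a trigger commits the organisation to activation.

```python
# A minimal sketch of the indicator/trigger distinction.
# Names and thresholds are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Reading:
    source: str
    value: float

INDICATOR_LEVEL = 1.2   # "worth watching" -- slightly above normal
TRIGGER_LEVEL = 2.0     # pre-agreed threshold that demands action

def classify(reading: Reading) -> str:
    """Map a raw data point onto the indicator/trigger distinction."""
    if reading.value >= TRIGGER_LEVEL:
        return "TRIGGER: activate crisis response now"
    if reading.value >= INDICATOR_LEVEL:
        return "INDICATOR: heighten monitoring, notify duty officer"
    return "NORMAL: routine operations"

print(classify(Reading("coolant_pressure", 1.4)))  # indicator only
print(classify(Reading("coolant_pressure", 2.3)))  # trigger
```

The design choice that matters here is that the trigger level is agreed in advance, so crossing it is a determination, not a debate.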
The challenge is that activation carries costs. Declaring a crisis when one does not materialise — a false positive — wastes resources, disrupts operations, erodes credibility, and can create a "cry wolf" dynamic that makes future activation harder. Boin and colleagues (2020) have documented how this calculation produces a systematic bias toward under-activation. Organisations develop what might be called an institutional immune response that resists the disruption of emergency mobilisation. The more bureaucratic the organisation, the stronger this resistance tends to be.
The Fukushima Daiichi disaster illustrates this failure with devastating clarity. After the tsunami struck and backup power was lost, plant operators and management faced a cascading series of escalation decisions. The National Research Council's subsequent investigation revealed that decision-making was paralysed by the lack of reliable, real-time information on plant status (National Research Council, 2014). Instrument readings were unreliable or absent. Communication between the control room, plant management, and TEPCO's Tokyo headquarters was fragmentary. At each escalation point — from declaring an emergency at the plant level, to requesting external assistance, to ordering civilian evacuation — there was delay. Not because individuals were incompetent, but because the system was designed to process information through layers of confirmation and approval that were wholly inadequate for the speed of the unfolding event.
The report documented how emergency management plans were "inadequate to deal with the magnitude of the accident, requiring emergency responders to improvise" (National Research Council, 2014). This is a critical insight: when formal activation protocols fail, the quality of the response depends entirely on the ability of individuals to improvise — and improvisation without clear role authority creates its own cascading failures.
The Tham Luang cave rescue presents a striking counter-example. When the boys failed to return from practice, the alarm was raised quickly. Chiang Rai's provincial governor, Narongsak Osatanakorn, assumed incident command and activated emergency coordination protocols that rapidly scaled from local to national to international scope. The Australian Government's post-rescue analysis identified several factors that enabled this swift activation: pre-existing relationships between Thai emergency agencies, a cultural willingness to escalate without excessive procedural gatekeeping, and crucially, a single decision-maker with clear authority to activate and expand the response (Australian Government Department of Home Affairs, 2018).
What distinguished Tham Luang was not that the situation was simpler — a flooded cave system with thirteen lives at stake and no proven rescue methodology was extraordinarily complex — but that the activation architecture was clear. The governor did not need to convene a committee to decide whether this constituted a crisis. The threshold was unambiguous, the authority to activate was vested in a single role, and the escalation pathway from local to national to international resources was well-defined.
If timely activation saves lives and resources, why do organisations so often fail at it? Boin et al. (2016) identify several interacting barriers that operate at the institutional level, distinct from the cognitive biases we explored in Chapter 1. First, there is the problem of distributed information: the person who first detects an anomaly is rarely the person with the authority to activate a crisis response. Information must travel upward through organisational layers, and at each layer it is subject to filtering, reinterpretation, and delay. Second, organisations develop normalcy routines — deeply embedded patterns of behaviour that assume events fall within normal operating parameters. These routines are efficient under ordinary conditions but become actively dangerous when conditions are extraordinary.
Boin and colleagues' later work on "creeping crises" extends this analysis, documenting how threats that develop gradually can pass entirely through organisational detection filters (Boin et al., 2020). The psychological factors are formidable: the inconceivability of certain events, communication failures across organisational boundaries, and the challenge of recognising threats that do not conform to existing mental models. These are not failures of intelligence or diligence; they are structural features of how organisations process information.
Maitlis and Sonenshein (2010) build on Weick's framework to argue that crisis and change contexts are "especially likely to impede sensemaking processes" because they disrupt the shared meanings and emotional equilibrium that sensemaking depends upon. When the emotional temperature in an organisation spikes — when people are frightened, confused, or overwhelmed — the very cognitive processes needed to interpret the situation are degraded. This creates a vicious cycle: the more severe the crisis, the harder it is to recognise it as such, because the cognitive resources needed for recognition are consumed by the emotional demands of the situation.
"Organisations are not designed to look for crises. They are designed for efficiency, for routine, for the smooth processing of predictable inputs. The detection of crisis requires precisely the opposite orientation — a vigilance toward the anomalous, the unexpected, the signals that do not fit." — Boin et al., The Politics of Crisis Management (2016)
Turner and Pidgeon (1997) identified four types of information failure in the pre-crisis incubation period. Which type do you think is most dangerous during the first thirty minutes of an acute crisis: information that is completely unknown, information that exists but is not appreciated, information that is not correctly assembled, or information that does not fit existing models? Why might your answer change depending on the type of crisis?
Even when the activation threshold is crossed, a second challenge immediately emerges: role clarity. Who is in charge? Who does what? In routine operations, role assignments are well-understood and largely automatic. In the opening minutes of a crisis, formal organisational structures are often suddenly insufficient. The normal chain of command may be disrupted — key personnel may be unreachable, the nature of the crisis may fall outside any single department's jurisdiction, or the scale of the event may overwhelm the resources assigned to normal emergency roles.
The Tham Luang rescue symposium report emphasised that one of the most critical success factors was the early establishment of clear command, control, and coordination structures that spanned strategic, operational, and tactical levels (Australian Government Department of Home Affairs, 2018). This was not accidental. Thai disaster management frameworks vest clear authority in provincial governors for events within their jurisdiction, which meant that the question "who is in charge?" had an immediate, unambiguous answer.
Contrast this with Fukushima, where command authority was fragmented between the plant operator (TEPCO), the nuclear regulator, the Prime Minister's office, and local government officials responsible for evacuation. The National Research Council report documented how this fragmentation produced conflicting directives, duplicated efforts, and critical gaps where no one believed they held responsibility (National Research Council, 2014).
Role clarity is not just about knowing who is in charge — it is about understanding the mobilisation sequence: the order in which roles are activated and the dependencies between them. A common failure mode is activating resources before the situation assessment that determines what resources are needed. Another is failing to activate communications capacity early enough, which means that subsequent mobilisation decisions cannot be effectively transmitted. The sequence matters because crisis response is not a parallel process where everything happens simultaneously — it is a cascading series of dependent actions where early decisions constrain later options.
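The point about sequencing can be made concrete with a small sketch: if each role declares which roles must already be active before it can function, the mobilisation order falls out of the dependency graph rather than being improvised under pressure. The role names and dependencies below are hypothetical illustrations, not a prescribed command structure.

```python
# A sketch of a mobilisation sequence as a dependency graph.
# Roles and dependencies are illustrative assumptions only.
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Each role maps to the set of roles that must be active first.
dependencies = {
    "incident_commander": set(),
    "communications": {"incident_commander"},
    "situation_assessment": {"incident_commander", "communications"},
    "resource_mobilisation": {"situation_assessment"},
    "external_liaison": {"communications"},
}

# static_order() yields roles with all prerequisites activated first,
# making the cascading, non-parallel nature of mobilisation explicit.
for role in TopologicalSorter(dependencies).static_order():
    print(f"activate: {role}")
```

Note how the sketch encodes both failure modes from the paragraph above: resource mobilisation cannot precede situation assessment, and nothing downstream of communications can be activated before communications itself.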
Once activation has occurred and roles are being mobilised, the next critical challenge is information triage — the process of sorting, prioritising, and routing the flood of incoming data. In the opening phase of a crisis, information arrives from multiple sources simultaneously: automated sensor systems, eyewitness reports, social media posts, media inquiries, peer agency notifications, and internal status updates. Much of this information is incomplete, contradictory, or simply wrong. The leader's task is not to process all of it — that is impossible — but to identify the signals that matter most and route them to the people who can act on them.
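One way to picture this sorting task is as a priority queue in which each incoming report is scored on both urgency and source reliability. The weights and reliability ratings below are illustrative assumptions, not an operational standard; a real protocol would calibrate them to the organisation's own sources.

```python
# A sketch of information triage as a scored priority queue.
# Weights and reliability ratings are hypothetical assumptions.
import heapq
from dataclasses import dataclass, field

RELIABILITY = {"sensor": 0.9, "peer_agency": 0.7,
               "eyewitness": 0.5, "social_media": 0.3}

@dataclass(order=True)
class Report:
    priority: float                    # negated score: min-heap -> max-first
    text: str = field(compare=False)

def triage(raw_reports: list[tuple[str, str, float]]) -> list[Report]:
    """raw_reports: (source, text, urgency in [0, 1]). Highest score first."""
    queue: list[Report] = []
    for source, text, urgency in raw_reports:
        score = 0.6 * urgency + 0.4 * RELIABILITY.get(source, 0.1)
        heapq.heappush(queue, Report(-score, text))
    return [heapq.heappop(queue) for _ in range(len(queue))]

for r in triage([("social_media", "Flooding reported at gate 3", 0.9),
                 ("sensor", "Water level 1.8 m and rising", 0.8)]):
    print(f"{-r.priority:.2f}", r.text)
```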
Weick's (1988) concept of enacted sensemaking is particularly relevant here. Weick argued that sensemaking in crisis is not a passive process of receiving and interpreting information — it is an active process in which the actions you take to understand the situation shape what information becomes available and relevant. When a leader decides to focus attention on one data stream, they necessarily de-prioritise others. When they commit resources to investigating one hypothesis, they constrain their capacity to investigate alternatives. "Action precedes cognition and focuses it," Weick wrote, "emphasising that specific action renders many cues irrelevant and consolidates an otherwise unorganised set of environmental elements" (Weick, 1988, p. 307).
This creates a fundamental tension in information triage. Acting on early information is necessary to shape the response, but acting too quickly on unreliable information can commit the organisation to a course of action that becomes difficult to reverse. Maitlis and Sonenshein (2010) describe this as the challenge of maintaining "sensemaking fluidity" — the capacity to hold multiple interpretations simultaneously and revise them as new information arrives, rather than prematurely locking onto a single narrative.
Effective information triage depends on establishing what we will explore in Chapter 4 as an information hierarchy: a structured understanding of who needs to know what, and in what order. During the first thirty minutes, not everyone needs all the information. The incident commander needs situation awareness — a broad picture of what is happening and what resources are available. Operational leads need specific, actionable intelligence relevant to their function. External communications personnel need verified facts they can release without creating additional confusion. Political and senior leadership need enough context to make strategic decisions without being overwhelmed by operational detail.
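A minimal sketch of such a hierarchy: each category of information is routed only to the roles that need it, with a safe default so that nothing is silently dropped. The categories and role names are hypothetical, chosen to mirror the examples in the paragraph above.

```python
# A sketch of role-based routing in an information hierarchy.
# Categories and role names are illustrative assumptions.
ROUTING = {
    "situation_overview":   ["incident_commander", "senior_leadership"],
    "tactical_detail":      ["operational_leads"],
    "verified_public_fact": ["external_communications"],
    "strategic_context":    ["senior_leadership"],
}

def route(category: str, message: str) -> list[tuple[str, str]]:
    """Return (recipient, message) pairs. Unknown categories default to
    the incident commander rather than being silently discarded."""
    recipients = ROUTING.get(category, ["incident_commander"])
    return [(recipient, message) for recipient in recipients]

for recipient, msg in route("tactical_detail", "Pump 2 offline"):
    print(recipient, "<-", msg)
```

The deliberate choice here is the fallback: in the first thirty minutes, an unclassifiable report is exactly the kind of anomaly that should reach the person holding the overall picture.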
Turner and Pidgeon's (1997) framework suggests that the most dangerous information failures are not missing data but misassembled data — information that exists within the system but is not correctly combined to reveal the true picture. In the first thirty minutes, this assembly function is perhaps the most critical and most difficult leadership task. It requires someone — usually the incident commander or a dedicated intelligence function — to hold the threads together and continually ask: "What picture does this information paint, and what are the most important things we still don't know?"
During the early hours of a crisis, social media often produces the fastest information — but also the least reliable. Automated sensors produce the most reliable data — but may be offline or misread. How would you design an information triage protocol that balances speed against reliability? What source would you trust most in the first ten minutes, and would your answer change by minute thirty?
The first thirty minutes of a crisis set a trajectory that is extraordinarily difficult to alter. Early activation decisions determine which resources are available and which are not. Early role assignments create command structures that persist even when they prove suboptimal. Early information triage decisions establish narratives that shape subsequent interpretation. Weick (1988) described this as the commitment dimension of sensemaking: once an organisation commits to a particular interpretation and course of action, the psychological and structural investments in that commitment make reversal costly and unlikely.
This does not mean that early decisions must be perfect — perfection is impossible with fragmentary information. It means that early decisions must be designed for revision. The most effective crisis leaders make initial decisions that preserve optionality: activating broadly rather than narrowly, establishing communication channels before they are needed, and explicitly flagging assumptions that need to be tested as more information arrives. They treat the first thirty minutes not as the period in which the right answer must be found, but as the period in which the capacity to find the right answer must be built.
The contrast between Fukushima and Tham Luang is ultimately a story about trajectory. At Fukushima, delayed activation, fragmented authority, and overwhelmed information systems set a trajectory toward cascading failure that brave individual actions could slow but not reverse. At Tham Luang, swift activation, clear authority, and effective coordination set a trajectory toward successful resolution despite enormous technical challenges. The boys were trapped for eighteen days — but the response architecture that would eventually save them was established in the first hours.
As Boin et al. (2016) remind us, crisis management is fundamentally a political activity, not merely a technical one. The decisions made in the first thirty minutes are shaped by institutional cultures, power structures, legal frameworks, and the individual courage of the people who happen to be on duty when the call comes in. Understanding these dynamics — and designing systems that account for them — is the difference between organisations that survive crises and organisations that are consumed by them.
In Chapter 3, we move from the opening phase into the sustained crisis environment and examine decision-making under deep uncertainty — what happens when the initial mobilisation is complete but the situation continues to evolve in unpredictable ways. We will explore how leaders make high-stakes choices when they cannot wait for complete information, including the use of decision frameworks, the role of intuition versus analysis, and the dangers of both paralysis and premature commitment. The information hierarchy concept introduced here will become the central focus of Chapter 4, where we examine how communication architectures determine what leaders know, when they know it, and what they can do with it.
Australian Government Department of Home Affairs, Emergency Management Australia. (2018). Thai cave rescue symposium 2018: Symposium report. https://www.homeaffairs.gov.au/emergency/files/thai-cave-rescue-report-web.pdf
Boin, A., Ekengren, M., & Rhinard, M. (2020). Hiding in plain sight: Conceptualizing the creeping crisis. Risk, Hazards & Crisis in Public Policy, 11(2), 116–138. https://doi.org/10.1002/rhc3.12193
Boin, A., 't Hart, P., Stern, E., & Sundelius, B. (2016). The politics of crisis management: Public leadership under pressure (2nd ed.). Cambridge University Press. https://www.cambridge.org/core/books/politics-of-crisis-management/CA51C2B81E41D80B40CA451299975BF6
Institute of Medicine Committee on Guidance for Establishing Standards of Care for Use in Disaster Situations. (2013). Crisis standards of care: A toolkit for indicators and triggers. National Academies Press. https://doi.org/10.17226/18338
Maitlis, S., & Sonenshein, S. (2010). Sensemaking in crisis and change: Inspiration and insights from Weick (1988). Journal of Management Studies, 47(3), 551–580. https://doi.org/10.1111/j.1467-6486.2010.00908.x
National Research Council Committee on Lessons Learned from the Fukushima Nuclear Accident. (2014). Lessons learned from the Fukushima Nuclear Accident for improving safety of U.S. nuclear plants. National Academies Press. https://doi.org/10.17226/18358
Turner, B. A., & Pidgeon, N. F. (1997). Man-made disasters (2nd ed.). Butterworth-Heinemann.
Weick, K. E. (1988). Enacted sensemaking in crisis situations. Journal of Management Studies, 25(4), 305–317. https://doi.org/10.1111/j.1467-6486.1988.tb00039.x