Chris Todd, Emergency Manager, Town of Los Gatos

When an emergency operations center activates, the first problem is rarely a lack of technology. More often, the problem is that the right people cannot use the right technology fast enough. In emergencies, capability is not defined only by what a platform can do; it is defined by what personnel and partners can do with it under stress. That is why technology decisions in emergency management should be guided less by novelty than by operational fit. The central question is not simply which system is most advanced, but whether an organization should spend scarce steady-state time adapting people to the system, or adapt the system to the people who will actually staff the incident.
This is fundamentally a question of how organizations use limited training time. One approach is to devote substantial effort to teaching a broad range of personnel how to operate specialized systems that may be used only during activations. Another is to design emergency workflows so they align more closely with the tools, access patterns and communication habits people already use in routine operations. This does not diminish the importance of training and exercises. It clarifies where they produce the greatest return.
The real objective is not software familiarity for its own sake, but operational confidence: the ability to coordinate, make decisions and recover from friction when pressure is high.
A recurring challenge I have seen is the integration of occasional users into non-native systems. During incidents, the response structure often includes private-sector partners, department representatives and personnel from different levels of government who are highly capable in their own roles but do not work every day inside emergency management platforms. Asking that broader network to step into unfamiliar systems during an activation usually demands more training time than organizations can realistically sustain. By contrast, adoption and performance improve markedly when emergency processes are built around tools and workflows that are already familiar to the staff and the organization.
“In emergencies, capability is not defined only by what a platform can do; it is defined by what personnel and partners can do with it under stress.”
This is why broadly familiar tools deserve more serious consideration than they sometimes receive. Collaboration suites, shared document environments, established GIS workflows and routine communication channels may appear less specialized than purpose-built emergency platforms, but they reduce hesitation. People know how to access them, where to find information, how to share updates and how to recover when something goes wrong. They are also more likely to work across mutual aid, partner and surge environments. At the same time, some specialized systems effectively become familiar because they are used often enough to be part of routine operations. Mass-notification tools are a clear example. The point is not that general-purpose tools are always better. It is that familiarity, routine use and organizational fit are themselves operational capabilities.
None of this is an argument against specialized technology. Advanced modeling, evacuation analysis, continuity platforms, damage assessment tools and predictive analytics can add real value when they solve problems that routine enterprise tools cannot. But their value is not established by sophistication alone. It depends on whether their outputs can be interpreted, communicated and acted upon across organizations with different roles, authorities and operating rhythms. More information does not automatically create better coordination. A platform may be analytically powerful for a small technical group and still be of limited value to the wider response network if it introduces friction at the point where coordination actually happens.
That leads to a practical test. Before adopting a new platform, leaders should ask: Does it solve a problem current systems cannot solve at an acceptable level? Who will need to use it and how often will those users encounter it before an incident? Can partner agencies and surge personnel work with it without significant friction? What steady-state training time will it consume and what other readiness priorities will that displace? What happens if the system, data feed, or trained specialist is unavailable at the wrong moment? And does the improvement in decision quality justify the full burden of governance, maintenance, access management and organizational attention?
These are not anti-technology questions. They are governance questions. The most resilient systems are not the ones optimized only for specialists. They are the ones the broader response network can access quickly, understand intuitively and use reliably together when the stakes are highest.