Over the past week, following the revelations in Bob Woodward’s book that Trump was well aware of the dangers of the coronavirus even as he was publicly downplaying it, several Republicans have defended avoiding panic as a justification for lying to the public in a crisis. The problem, as the disaster science expert Samantha Montano noted on Twitter, is that mass panic should be the least of our worries in a disaster, because it doesn’t happen.
That may seem counterintuitive. After all, we’ve all seen screaming hordes of people fleeing disaster areas: It’s a staple trope of movies and TV shows. But there are more than 50 years of disaster studies demonstrating that people don’t do that in real life. As early as 1954, E.L. Quarantelli, who later founded the Disaster Research Center, had enough data to suggest that panic after a disaster was “uncommon.” Studies of disasters—from hurricanes to snowstorms isolating people at highway rest stops to the collapse of the World Trade Center on 9/11—show that the public does not panic, does not run screaming, and typically reacts in a reasonably rational way. In fact, studies show that people tend to react in highly social ways after a catastrophe; the first assistance to victims almost invariably comes from nonprofessionals, and affected people tend to come together and organize to improve their situation.
My own anecdotal experience as a disaster responder supports this conclusion. I have a clear memory of riding a motorcycle in from the airport late one night in Padang, West Sumatra, some 24 hours after a strong earthquake had collapsed buildings in the city, and seeing snaky lines of people on foot waiting to buy gas. Uh oh, I thought, but I needn’t have worried: People waited calmly in line, and there were no major incidents.
Or take another late night, in northeastern Japan, when a strong aftershock to the 2011 quake that triggered a tsunami rousted us from our hotel rooms and knocked out grid electricity to the whole region. I expected chaos; instead, I found people in reflective gear directing traffic in place of nonfunctioning traffic lights and polite hotel staff warning us to be quick as we cleaned out our tumbled rooms, just in case.
So why does this myth about public panic persist? Part of it, of course, is the aforementioned movies and TV shows, which in their uniformity create a kind of vicious cycle of what I call narrative disorder. When we see the behavior often enough, we start to believe it, even knowing that movies aren’t real, to the point that we come to expect it in real disasters and may even see it where it isn’t happening. Because we accept that this is what happens, more scriptwriters and filmmakers, who typically know no more about disasters than the general public, put it into their movies, and we become ever more convinced that it is true.
While this is understandable for people who aren’t in charge of setting policy for disasters or leading responses, it’s a sorry excuse for elected officials who should have access to emergency management experts. Disaster studies, however, have found a more insidious reason that leaders cling to such ideas. The conclusion that the masses do not panic in disasters is well evidenced, but the literature does point to another pattern: Elites, who have the most to lose from disruption, do.
Lee Clarke and Caron Chess, two researchers at Rutgers University who are usually credited with coining the phrase “elite panic,” argue with persuasive examples that elites fear panic, cause panic, and often panic themselves, reacting disproportionately to threats. (Clarke has created a set of presentation slides about social behavior in disasters; the final slide, titled “Modeling official behavior,” gives us the evocative bullet points: “Ignorance,” “Arrogance,” “Hubris,” and “Officials can cause ‘panic.’”)
Kathleen Tierney of the University of Colorado examined elite panic in the aftermath of Hurricane Katrina, including when then-Louisiana Gov. Kathleen Blanco first called off the emergency response and then threatened shoot-to-kill orders in response to looting, which later reporting showed had been highly exaggerated. Similarly, police from a neighboring parish blocked evacuees from New Orleans from entering, firing shotguns over their heads, out of fear of the “criminal element” conjured by those same exaggerated reports of looting and mayhem. The impacts of this, as Tierney notes, include the direct harm to people who were too frightened, or simply unable, to scavenge for food. Tierney asks: “How much resident-to-resident helping behavior was prevented or suppressed because people were afraid to venture out to help their neighbors out of fear of being killed or arrested?”
Clarke and Chess write: “Planners and policy makers sometimes act as if the human response to threatening conditions is more dangerous than the threatening conditions themselves.” That false claim becomes a way for elites to maintain control and protect their own interests. Typically shielded from the immediate impacts of a disaster, elites tend to be much more worried about disruption to the status quo and the loss of political capital and power. That they use a long-discredited fallacy about the public behaving irrationally to do this adds insult to injury.
It’s also a dereliction of duty, because while lies and exaggerations about the potential for panic worsen a crisis response, working with the public to deal with the disaster tends to improve it. For example, rational behaviors like evacuating in the face of a hurricane or stocking up on essential goods can cause problems when a lot of people try to do them at the same time. But planning and communication by leaders can mitigate that. One of the largely unsung successes of Hurricane Katrina was the contraflow approach to evacuation, which made highways unidirectional and asked the public to respect staggered time frames so that municipalities closer to landfall could escape before the larger cities clogged the roads.
Disaster studies also show that, given enough time and the ability to communicate, communities will come together even without official leadership (as in this wonderful study of a group of strangers stranded at a highway rest stop in 1958). Faced with a dearth of official information and planning during the COVID-19 pandemic, communities in many places in the United States set up mutual aid systems to shop for older people or check in on people living alone. Apartment buildings and shops took it upon themselves to adopt policies based on the information they could access, taking steps like mask requirements and curbside pickup to mitigate the danger, rather than panicking.
In a 2002 article about managing panic in the case of bioterrorism, Thomas Glass and Monica Schoch-Spana of Johns Hopkins University write that a primary guideline should be to “treat the public as a capable ally in the response to an epidemic,” because everything we know about disasters tells us that this is exactly what the public can be. When officials lie to the public about an epidemic, they are not only putting people in danger—they are also giving up one of their best strategies for an effective response.