Panic, Warning, and National Security – Art Botterell Interview

Art Botterell is one of the main subjects of Reinventing 911. Botterell is an extremely interesting person, whom I met at a panel discussion honoring the publication of Safe: The Race to Protect Ourselves in a Newly Dangerous World. The conversation that evening had been relentlessly discouraging. The panelists, representing a wide range of technologists and analysts, agreed that federal security policy bordered on hopeless. This was before Hurricane Katrina, but the Department of Homeland Security was already being derided as a bureaucratic morass of the worst type.

Although he makes a cameo appearance in the book, Botterell was not on the panel, but near the end of the discussion he was invited to stand and introduce himself. When Botterell spoke up, I lifted my head from the desk on which I’d been banging it and started to feel better even before I completely understood his point. For one thing, the man seemed cheerful. Moreover, he had that rational, cold-eyed, somewhat sarcastic version of cheerfulness that I deeply appreciate. Botterell suggested that we begin, rather than end, with the notion that the federal government can’t protect us. The federal failure should be the starting point, and, he suggested, it could be a liberating starting point. Once free of the illusion that Mommy and Daddy are going to make it all better, we can ask smarter questions about what it will take to protect ourselves.

Botterell is not a naïve libertarian, and he was not talking about buying assault weapons and hunkering down in the basement. Instead, he was talking about identifying the native strengths of our communities, and reinforcing these strengths with technology. In the Wired story, I describe the important inventions Botterell has made or helped to make, including California’s Emergency Digital Information Service (EDIS) and the Common Alerting Protocol (CAP). I quote him a bit, but there was a lot of story to tell, and I left out many of his most interesting comments. Below is more from my interviews with Botterell.

You created a very efficient system to get warnings to the public over the Internet. Why hasn’t this been adopted nationally?

How does technology get transferred into government? Almost entirely through the efforts of vendors and contractors. I struggled for ten years to get emergency management folks to use the Internet protocols. But the problem was that there was no Internet Incorporated to take people to lunch! There was no mechanism to transfer open technology into the government. The government has great difficulty accepting anything for free. They are used to contractors selling them things. There is suspicion of free technologies on the part of the government, and there is opposition to them from vendors. Free is the hardest price for the government to pay.

Do you run EDIS yourself?

EDIS by email is my own thing, but the EDIS system is part of the state. I built it when I worked for the state Office of Emergency Services, and I had to overcome a lot of opposition, because there was a big initiative at that time to do everything in Lotus Notes. I built my own gopher server, and of course that annoyed everybody with a stake in the Lotus Notes project.

(The following question is for geeks only.) The Common Alerting Protocol – what kind of standard is it?

W3C provides the technology platform. The ontology people provide the metaphysical context. In between are the application standards. OASIS has a lightweight standards process, like the IETF’s, in that you have to have an implementation. After 9-11 the industry players suddenly got religion, and created a series of organizational structures to define the standards for warnings. The outcome was a technical committee within OASIS. However, it isn’t the emergency managers who are in OASIS. It’s the people selling software to the emergency managers.
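(Another aside for the geeks: CAP messages are XML. Here is a rough sketch, in Python, of what a minimal alert looks like, using element names from the published CAP specification, version 1.2. The identifier, sender, and event details are invented, and this is an illustration of the message shape, not a conforming implementation.)

```python
# A minimal sketch of a CAP-style alert, built with the Python standard
# library. Element names follow the published CAP 1.2 schema; the
# identifier, sender, and event details are invented for illustration.
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"

def build_alert():
    alert = ET.Element("alert", xmlns=CAP_NS)
    ET.SubElement(alert, "identifier").text = "EXAMPLE-0001"     # hypothetical
    ET.SubElement(alert, "sender").text = "alerts@example.gov"   # hypothetical
    ET.SubElement(alert, "sent").text = datetime.now(timezone.utc).isoformat()
    ET.SubElement(alert, "status").text = "Exercise"  # not a real warning
    ET.SubElement(alert, "msgType").text = "Alert"
    ET.SubElement(alert, "scope").text = "Public"

    # Each <info> block carries the substance of the warning, including
    # the three priority axes discussed below.
    info = ET.SubElement(alert, "info")
    ET.SubElement(info, "category").text = "Geo"
    ET.SubElement(info, "event").text = "Earthquake"
    ET.SubElement(info, "urgency").text = "Immediate"
    ET.SubElement(info, "severity").text = "Severe"
    ET.SubElement(info, "certainty").text = "Observed"
    ET.SubElement(info, "headline").text = "Example earthquake advisory"
    return alert

print(ET.tostring(build_alert(), encoding="unicode"))
```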

What were the difficult issues that came up in defining an open standard for warnings?

One of the questions is, how do you define a successful warning? Is it an information transfer question: if people got the warning, then the warning was successful? Or is it more of a social policy question: did the warning produce the behavior it was meant to produce? This is a philosophical question. Is it the government’s responsibility to warn people, pure and simple, and then what they do is up to them? Interestingly, both libertarians and bureaucrats hold this view. The libertarians, because they say ‘just give me the info and leave me alone.’ The bureaucrats, because they can more easily be successful. If their job is simply to transmit the information, then it is easy to show that they are doing their job!

Aren’t they?

If you really want people to act on the warnings, then you want to create a corroborative environment, meaning the warning has to go out through several channels. One of the goals of CAP is to allow an agency to have one initiation of the warning lead to distribution through multiple channels.
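(A sketch of that ‘one initiation, many channels’ idea: the originating agency issues the alert once, and independent channel adapters each deliver it their own way. The channel classes below are invented placeholders; CAP itself says nothing about how an agency wires up its channels.)

```python
# Sketch of "one initiation, many channels": the sender calls publish()
# exactly once, and each channel adapter carries the same message, so
# the channels corroborate one another. All channels are hypothetical.
class EmailChannel:
    def deliver(self, alert):
        print(f"[email] {alert['headline']}")

class SirenChannel:
    def deliver(self, alert):
        print(f"[siren] sounding for: {alert['event']}")

class WebFeedChannel:
    def deliver(self, alert):
        print(f"[feed] posting: {alert['headline']}")

def publish(alert, channels):
    for channel in channels:
        channel.deliver(alert)

alert = {"event": "Flood", "headline": "Example flood warning (test only)"}
publish(alert, [EmailChannel(), SirenChannel(), WebFeedChannel()])
```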

Where did the notion of multiple channels and a corroborative environment come from?

A lot of work has been done in Boulder at the Natural Hazards Research Center, and by Dennis Mileti. The social science shows that people almost never act on the first warning. In order to get people to act, you have to create a corroborative environment.

Isn’t it rational to seek corroboration? By multiplying the channels of distribution for a single warning, don’t you subvert this rational skepticism and increase the risk of panic?

That’s a fair question, but it is somewhat misguided. No matter how much research is done disproving their assumptions, people insist on believing in panic. Panic actually occurs only in specific circumstances – this is all pretty well understood. Some of the research goes back to the Second World War, when there was attention paid to the behavior of sailors trapped in submarines. The research shows that where there is a dreaded hazard shared equally, panic almost never occurs. Reasoned flight is not panic. When people were running away from the collapse of the World Trade Towers, they stopped to pick up other people who had fallen. That was not panic. Only perceived competition for the means to escape creates panic.

If panic is a myth, why is it mentioned so often in discussions of warning?

There’s a tendency to believe in the myth of panic because it reinforces a sense of bureaucratic elitism: we can’t trust the citizens with warning information, because they might panic.

You don’t think that more powerful warning systems have any risks?

Yes, more effective warnings mean that incorrect warnings become more risky, because they are more likely to be believed, but on the other hand public access to warnings also allows better mechanisms of repudiating or canceling warnings. The truth is that the public is not that brittle.
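(CAP anticipates this: every message carries a msgType, which can be ‘Alert’, ‘Update’, or ‘Cancel’, and can reference the earlier message it supersedes. A toy receiver that honors cancellations might look like the sketch below; the data structures and receiver logic are invented.)

```python
# Toy sketch of how a receiver might retract a bad warning. The msgType
# values ("Alert", "Update", "Cancel") come from CAP; the dictionary
# format and receiver logic here are invented for illustration.
active_alerts = {}

def receive(msg):
    if msg["msgType"] == "Alert":
        active_alerts[msg["identifier"]] = msg
    elif msg["msgType"] == "Cancel":
        # Drop every alert the cancellation references.
        for ref in msg.get("references", []):
            active_alerts.pop(ref, None)

receive({"msgType": "Alert", "identifier": "X-1", "headline": "Test warning"})
receive({"msgType": "Cancel", "identifier": "X-2", "references": ["X-1"]})
print(active_alerts)  # {} -- the bad warning has been repudiated
```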

I was intrigued by your idea that machines, as well as people, need to get warnings. Can you give me an example?

For instance, fire alarms that receive news of nearby fire alarms going off can reset their sensitivity, or perhaps notify different fire stations. One of the influences on me and on CAP is Neal Stephenson’s Snow Crash, and the image of the cybernetic doggies barking to each other in the night. A public, distributed database of warning messages allows machines to receive them.
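(A toy version of the barking doggies: a smoke detector that listens for its neighbors’ warnings and lowers its own alarm threshold. The detector model, the threshold numbers, and the message format are all invented.)

```python
# Toy sketch of a machine as warning recipient: a smoke detector that
# becomes more sensitive when nearby detectors report fire. The model,
# thresholds, and message format are invented for illustration.
class SmokeDetector:
    def __init__(self, detector_id, threshold=0.8):
        self.detector_id = detector_id
        self.threshold = threshold  # smoke level that triggers the alarm

    def on_warning(self, warning):
        # A neighbor's alarm is corroborating evidence: lower the bar
        # rather than waiting for full local confirmation.
        if warning["event"] == "Fire" and warning["sender"] != self.detector_id:
            self.threshold = min(self.threshold, 0.5)

    def read_sensor(self, smoke_level):
        return smoke_level >= self.threshold

d = SmokeDetector("hall-1")
print(d.read_sensor(0.6))   # False: below the quiet-time threshold
d.on_warning({"event": "Fire", "sender": "kitchen-2"})
print(d.read_sensor(0.6))   # True: the neighbors are barking
```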

Can users filter CAP warnings by priority?

Well, we divide priority up into urgency, severity, and certainty. Which has higher priority? It depends who you are. In CAP, we broke these out to improve the ability of smart receivers to filter for themselves. We don’t want to make a priori assumptions about what the values of the receivers are.
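(The enumerated values for urgency, severity, and certainty come from the CAP specification; how a receiver ranks them and where it sets its thresholds is, as Botterell says, up to the receiver. The policy below is one hypothetical example, not anything CAP mandates.)

```python
# Sketch of receiver-side filtering on CAP's three priority axes.
# The enumerated values come from the CAP spec; the thresholds are one
# hypothetical receiver's policy. CAP deliberately makes no a priori
# assumption about which axis matters most to a given receiver.
URGENCY   = ["Immediate", "Expected", "Future", "Past", "Unknown"]
SEVERITY  = ["Extreme", "Severe", "Moderate", "Minor", "Unknown"]
CERTAINTY = ["Observed", "Likely", "Possible", "Unlikely", "Unknown"]

def rank(scale, value):
    # Lower index = higher priority on that axis.
    return scale.index(value)

def matters_to_me(info, max_urgency="Expected", max_severity="Severe",
                  max_certainty="Possible"):
    # A hospital might key on severity, a school on urgency; these
    # defaults are just one receiver's choice.
    return (rank(URGENCY, info["urgency"]) <= rank(URGENCY, max_urgency)
            and rank(SEVERITY, info["severity"]) <= rank(SEVERITY, max_severity)
            and rank(CERTAINTY, info["certainty"]) <= rank(CERTAINTY, max_certainty))

print(matters_to_me({"urgency": "Immediate", "severity": "Extreme",
                     "certainty": "Likely"}))    # True
print(matters_to_me({"urgency": "Past", "severity": "Minor",
                     "certainty": "Unlikely"}))  # False
```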

Some of this is familiar networking theory. Why hasn’t it already been done?

Public warning has an element of psychological transference that has to do with people’s reliance on the government. People have a parental rescue fantasy. To some extent they aren’t asking for the tools to help themselves, they’re asking for Daddy to make it all better. More importantly, there is a strong political and commercial incentive to play into that psychology.

I understand you were briefly part of a lobbying effort to get the federal government to fund this work.

In June every year there is a conference in Boulder at the Natural Hazards Research Center. At the June 2001 conference some people decided, let’s get an organization together. Then 9-11 occurred, and that really motivated people to do it. In November 2001 we had everybody involved in warning there – about 135 people. The model was a group called ITS America – that’s the intelligent transportation people. We were called the Partnership for Public Warning.

What went wrong?

Well, the first thing we did was piss people off. First, we went to the DHS and said the rainbow of doom sucked. Then, we pointed out that in fact there was no national alert system and we needed one. Third, we said that the Emergency Alert System (formerly the Emergency Broadcast System) had a lot of problems. The EAS is the duck fart system – that’s Stan Harter’s term. The problem, really, was that the Partnership for Public Warning had a Beltway model. We wanted a big federal sponsor to cough up the money for a project, and that was never going to happen. We hired an executive director at $100,000 plus, and the overhead ate us alive.

Would more money have made you more successful?

There was a difference in philosophy also. Homeland Security people are focused on reinforcing the beliefs that you don’t really believe, but you want to believe. Most of the effort goes into prevention, law enforcement, and the military. People want to see America as an invincible fortress. Those of us in emergency management tend to think, well, stuff happens, so what are you going to do about it? Most of us have been doing natural disasters – not all, but most – where there’s no political price to pay for admitting that something bad is eventually going to take place. But DHS is different. So mainly they’ve been doing deferred maintenance – boots and suits, we call it. Getting fire and police departments up to speed with equipment, etc. That’s good to do, it’s necessary. But it doesn’t address a lot of things that could be happening and that should happen. The problem with emergency management, from a politician’s perspective, is that in the whole arc of emergency management there’s only one moment of good news. That’s where you get the press together and announce you’ve just purchased and installed a new gee-whiz system that solves some big problem and so now we can all applaud and change the subject. That’s the only positive thing you can do. The rest of it is all bad news.

But would a hundred million dollars from the government solve the problem?

Effective warning systems will not be built by corporate or government bureaucracies because they can’t redefine the problem, they can’t restate the problem in interesting ways. That is not their job. For instance, what if I turn the idea of warning on its head? Instead of a warning system, maybe it’s possible to build a reassurance system. A ‘five bells and all is well’ system. Something based not on fear, but confidence. This is something you can only think about on your off hours!

If this isn’t mainly a government job, who should send and receive public safety warnings?

Ultimately, I’d like to see a warning Internet. You could take a lot of gatekeepers out of the package. We need to solve the problems of identity management and further engage the problem of reputation and credibility of the source. But now credibility is simply attributed to authority – and then doubted! The warning system of the future should be one half EAS and one half blog. Now, to what extent will communities use this type of system to mitigate risk? I suspect this will be highly variable. In the end, the warning system mirrors the society. This is a good thing, in the sense that warning should be understood as a general part of the social fabric, not as a function of specialized roles.

What about using an old-fashioned phone tree?

Nobody knows what the total telecom capacity is. The system is not linear. If you have more calls pouring in than the infrastructure was built to carry, you are going to overload the system. Government agencies have been trying to get network information out of the telcos, without success. There is a system called the National Communications System, the NCS; it was originally staffed by the Army, and now it is nested somewhere inside DHS. They have been looking at this for a long time. They have had a telephone contact system for responders, called GETS – the Government Emergency Telecommunications Service. There is a special area code, and if you’ve got the wallet card of power, you can call a secret number and get access to a special set of long distance trunk lines. Of course, that assumes that you can get a local dial tone. Lately they have been working on a cell phone system. If you have ‘the cell phone of power,’ your calls go through.
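(Botterell’s point about nonlinearity can be made concrete with the classic Erlang B formula that telephone engineers use to estimate call blocking. The trunk counts and loads below are invented, and this is an illustration of the principle, not a model of any real network: the system runs almost clean right up to capacity and then falls apart, which is why nobody can answer the phone-tree question with a simple capacity number.)

```python
# The nonlinearity of telephone overload, illustrated with the classic
# Erlang B blocking formula from traffic engineering. The trunk count
# and offered loads are invented for illustration.
def erlang_b(offered_load, trunks):
    # Iterative form of Erlang B: the probability that an arriving
    # call finds all trunks busy.
    b = 1.0
    for k in range(1, trunks + 1):
        b = (offered_load * b) / (k + offered_load * b)
    return b

for load in [50, 100, 150, 200]:  # offered load in erlangs, 100 trunks
    print(f"load {load:>3}: {erlang_b(load, 100):.0%} of calls blocked")
```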

Somehow that sounds both inefficient and scary.

When I left the advertising agency I sent out my goodbye spam saying: I am joining the battle between sinister conspiracy and bureaucratic incompetence.

You really think that a good warning system would not only be open to all recipients, but also open to all senders?

Look at the USGS (United States Geological Survey) ‘Did You Feel It?’ link on their Web site. They built it to corroborate their earthquake data. But everybody loves it. People want to participate, to punch their ticket and become a member of the group. This is primal, primate stuff. I want warning to be a participatory activity. If people are doing that, then they aren’t running out into the street, and sending off packages of their old clothes, and generally doing more harm than good.

Right now, it seems that emergency communications are going the other way, and some first responders are even encrypting their radio communications so that scanners can’t listen in.

Well, what are networks made out of? Networks are made out of trust. Once you have the trust, you can use any technology you want for the network, you can use string and tin cans. But we are going through a phase in our culture of distrust. The people currently responsible for emergency response don’t want you to monitor their communications because they don’t trust you to use them appropriately. There are people who have found ways to capitalize on mistrust. You can argue that the goal of terrorism is to corrupt trust, and everybody’s capacity for trust, in this country, took a hit on 9-11. But with no trust, ultimately you have no community. And no community means – no warning!

Botterell has a Web site and blog at incident.com.
