It’s another dangerous day in America. Bird flu is spreading, the North Koreans have a nuclear bomb, and Osama bin Laden is still at large. The federal security threat-warning system points to “elevated.” Citizens nationwide have been told to be extra vigilant against new terror attacks.
Meanwhile, in the midsize city of Portland, Oregon, the authorities have other things on their minds. A little before 6 pm on this ordinary Saturday evening, there is a hit-and-run in the city’s western suburbs. A moment later, a silent alarm goes off in a building near downtown. At 6:03, there’s trouble with a drunk on the north side, and at almost the same time there’s a report of a disturbance at a Home Depot. Three quiet minutes go by, and then at 6:07 comes news of another hit-and-run.
From a room on the 10th floor of the old Heathman Hotel downtown, I follow the action as it scrolls across the screen of my laptop, little exclamation points popping up on a detailed satellite photo of the town. Each alert is attached to a short bit of text. I can zoom out, watching multiple traumas light up across the whole metropolitan area of 1.7 million people, or zoom in, finding nearly silent places where nothing that requires attention from the police happens for a long time. The resolution is so good, I can pick out individual buildings.
At 7:38 pm there’s news of a robbery downtown. At 7:53 pm another robbery occurs, across the Willamette River. Between these incidents, there’s a motorist in distress, an audible burglar alarm, and a problem with an “unwanted person” serious enough for the police to dispatch three units.
I stay in front of my map for hours, watching a swift, unceasing flow of local problems. While there is an undeniable voyeuristic appeal to a real-time data feed of break-ins, auto thefts, fisticuffs, and public drunkenness, the true value of this experimental system lies elsewhere. For several months, I’ve been talking with security experts about one of the thorniest problems they face: How can we protect our complex society from massive but unpredictable catastrophes? The homeland security establishment has spent an immeasurable fortune vainly seeking an answer, distributing useless, highly specialized equipment, and toggling its multicolored Homeland Security Advisory System back and forth between yellow, for elevated, and orange, for high. Now I’ve come to take a look at a different set of tools, constructed outside the control of the federal government and based on the notion that the easier it is for me to find out about a loose dog tying up traffic, the safer I am from a terrorist attack.
Art Botterell is a 51-year-old former bureaucrat whose outwardly earnest, well-formulated sentences have just the degree of excessive precision that functions among technical people as sarcasm. At one time, Botterell worked for the State of California, in the Governor’s Office of Emergency Services. But he quit that job in 1995. Today, Botterell is supported by his wife, a teacher, leaving him time to save America.
I first met Botterell earlier this year at a discussion of the book Safe: The Race to Protect Ourselves in a Newly Dangerous World. (Safe has four authors, including Katrina Heron, the former editor in chief of this magazine, and Evan Ratliff, whose story, “Fear, Inc.,” appears in this issue.) He caught my attention because, in an evening of discouraging commentary on the security establishment, he alone expressed optimism. There are enormous public safety resources that remain untapped, Botterell argued. “The focus in homeland security is on the idea of America as an invincible fortress,” he told me later. “Most of the effort goes into prevention, law enforcement, and the military. But those of us in emergency management tend to think, ‘Well, stuff happens. So, what are you going to do about it?’”
In the world of disaster management, here is some of the stuff that happens: Levees burst, power grids go dark, oil tankers run aground, railcars full of toxic chemicals tumble off their tracks, tornadoes sweep houses into the sky. In dealing with such catastrophes, emergency managers have experience in the cascade of consequences: Phone service vanishes, hospitals are jammed, highways slow to a crawl, shelters overflow. No matter how much advance planning may have been done, disaster response becomes an improvisation, and society eventually rights itself through the cumulative effect of many separate acts of intelligence.
Obviously, if you want citizens to improvise intelligently, it is wise to let them know as soon as possible when something goes wrong. Back in 1989, when he was working for the state of California, Botterell started creating an innovative warning system called the Emergency Digital Information Service. Botterell’s system – still in use – aggregates weather alerts, natural disaster information, and other official warnings into a common database, then makes them available through multiple media: pager, email, the Web, and digital radio broadcast. Because EDIS warnings are picked up by television newsrooms, local police, school principals, building management firms – anybody who wants them – the system injects massive redundancy into the public warning system and ensures that any serious news will immediately be bouncing around multiple communication channels.
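The mechanics behind this are simple enough to sketch in a few lines of code. Below is a toy version of the EDIS pattern in Python – one shared alert store, many independent delivery channels – where the channel names and message fields are my own illustrative inventions, not Botterell’s actual interfaces.

```python
# A toy version of the EDIS pattern: aggregate alerts into one
# common store, then fan them out over many channels at once.
# Channel names and message fields are illustrative, not EDIS's own.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Alert:
    source: str      # e.g., "National Weather Service"
    headline: str
    body: str

class AlertBus:
    """One shared inbox for warnings, many independent outlets."""

    def __init__(self) -> None:
        self.channels: list[Callable[[Alert], None]] = []

    def subscribe(self, deliver: Callable[[Alert], None]) -> None:
        self.channels.append(deliver)

    def publish(self, alert: Alert) -> None:
        # Redundancy by design: every channel carries every alert,
        # so a single dead pipeline can't silence the warning.
        for deliver in self.channels:
            deliver(alert)

bus = AlertBus()
bus.subscribe(lambda a: print(f"[pager] {a.headline}"))
bus.subscribe(lambda a: print(f"[email] {a.headline}: {a.body}"))
bus.subscribe(lambda a: print(f"[web]   {a.source}: {a.headline}"))

bus.publish(Alert("National Weather Service",
                  "Flash flood warning",
                  "Low-lying roads near the river may become impassable."))
```

The redundancy is the point: a pager network can go down, a newsroom can miss a fax, but a warning pushed through every channel at once is very hard to lose.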
EDIS was designed to fix two flaws in traditional warnings like tsunami sirens, telephone trees, and old-fashioned broadcast alerts. The first problem is that specialized warning systems are infrequently used and usually fail under stress. But the second problem is more serious: Humans are wired to pause. When we receive new information that requires urgent action, we hesitate, testing the reality of the news and thinking about what to do. Emergency managers are all too familiar with this feature of human nature. They call it milling.
Milling is rational – and dangerous. Even when a warning is successfully delivered, there are deadly delays before people respond. What are they doing in these minutes, hours, and even days? They are talking to friends and family, watching the news, listening to the radio, calling the police, counting their money, and trying to balance the costs of leaving against the risks of staying. When alerts are given through rarely used pipelines, milling increases. And when the information distributed by hard-pressed government officials is confusing or contradictory, milling increases even more.
During a large disaster, like Hurricane Katrina, warnings get hopelessly jumbled. The truth is that, for warnings to work, it’s not enough for them to be delivered. They must also overcome that human tendency to pause; they must trigger a series of effective actions, mobilizing the informal networks that we depend on in a crisis.
To understand the true nature of warnings, it helps to see them not as single events, like an air-raid siren, but rather as swarms of messages racing through overlapping social networks, like the buzz of gossip. Residents of New Orleans didn’t just need to know a hurricane was coming. They also needed to be informed that floodwaters were threatening to breach the levees, that not all neighborhoods would be inundated, that certain roads would become impassable while alternative evacuation routes would remain open, that buses were available for transport, and that the Superdome was full.
No central authority possessed this information. Knowledge was fragmentary, parceled out among tens of thousands of people on the ground. There was no way to gather all these observations and deliver them to where they were needed. During Hurricane Katrina, public officials from top to bottom found themselves locked within conventional channels, unable to receive, analyze, or redistribute news from outside. In the most egregious example, Homeland Security secretary Michael Chertoff said in a radio interview that he had not heard that people at the New Orleans convention center were without food or water. At that point they’d been stranded two days.
By contrast, in the system Botterell created for California, warnings are sucked up from an array of sources and sent automatically to users throughout the state. Messages are squeezed into a standard format called the Common Alerting Protocol, designed by Botterell in discussion with scores of other disaster experts. CAP gives precise definitions to concepts like proximity, urgency, and certainty. Using CAP, anyone who might respond to an emergency can choose to get warnings for their own neighborhood, for instance, or only the most urgent messages. Alerts can be received by machines, filtered, and passed along. The model is simple and elegant, and because warnings can be tagged with geographical coordinates, users can customize their cell phones, pagers, BlackBerries, or other devices to get only those relevant to their precise locale. The EDIS system proved itself in the 1994 Northridge earthquake, carrying more than 2,000 news releases and media advisories, and it has only grown more robust in the decade since.
Anyone who has paid close attention to the evolution of the Internet will recognize the underlying power of the Common Alerting Protocol. Good standards and widespread access, not hardware or software, bring social networks to life. CAP provided the first proven warning standard, but when it comes to participation, California’s EDIS remains strikingly primitive. To this day only certain agencies – like the US Geological Survey, law enforcement and fire departments, and the National Weather Service – are permitted to send out information. This increases trust, but at the expense of scope.
Until recently, CAP was like the markup languages that existed before the invention of the Web – a useful set of technical rules whose potential to change society looked like nothing more than the exaggerated enthusiasm of a few geeks. Open data standards aren’t sexy. You can’t sell them to the government for a pile of cash. And it’s hard to pose in front of them for celebratory photographs.
On May 11, 2005, a small plane took off from an airfield in Pennsylvania, wandered around for a bit, then aimed straight for the Capitol. This was the type of incident the homeland security establishment had been preparing for ever since 9/11. An evacuation began, and reporters caught sight of members of Congress rushing down the steps of the Capitol. Just over half an hour later, the plane was on the ground. As the pilot explained that he was merely lost, rounds of congratulations began to circulate; the government’s quick reaction had proven that new investments in public safety were paying off. Then the DC mayor, Anthony Williams, told reporters that nobody had alerted his administration to the threat until after the all-clear was sounded. There are more than half a million civilians in the District of Columbia. Wasn’t anybody thinking about them?
Washington’s emergency protocols, it turned out, were a jumble after all. And the same is true across the nation. Thousands of vulnerable targets have been identified, but there is no credible plan for protecting them. The reason is simple: Any plan would be inherently incomplete. The possibilities for disruption are too numerous. You could plan forever and still not account for all of them.
The word that security experts use to describe simple threats to complicated systems is asymmetry. As Stephen Jay Gould pointed out in his essay “The Great Asymmetry,” catastrophe is favored by nature. Species diversity increases for millennia, and then an asteroid extinguishes many forms of life; a skyscraper that takes years to build can be destroyed in an hour. The wreck of a city by a hurricane is an example of asymmetry. So is terrorism – the relative ease of destruction is the edge terrorists use to compensate for their small numbers.
On the other hand, software designers have gotten pretty good at increasing resistance to asymmetrical threats. The principles are well known: Use uncomplicated parts, encourage redundancy, and open the system to public examination so flaws can be discovered and fixed before they become catastrophic. The key is not to anticipate every problem, but to create flexible networks that can route around failure. Yet ever since 9/11, the security establishment has gone in the opposite direction, building highly specialized tools, centralizing control, and increasing secrecy.
After the debacle of the errant Cessna, federal officials pointed out that a system to coordinate response to aerial attacks had already been installed. The system, called the Domestic Events Network, involves an always-open conference call. A dedicated speakerphone sits in the DC police headquarters. In this case, a human error had occurred – some idiot hung up the line.
But of course the problem goes deeper than that. Such rarely used systems actually produce idiocy. Who could remain ready to act on a signal that seldom, if ever, comes through? Eventually, people zone out. They stop paying attention. They become idiots.
Real reactions to real threats take an entirely different form. In the case of the Cessna flyover, plenty of citizens knew that there was an evacuation, even those with no special access to government communications. Why? Because as soon as the evacuation of the Capitol began, it was noted by reporters and bystanders. Within minutes, it was on the Internet. Wherever they occur, major threats nearly always trigger instant ripples through electronic networks. Bursts of communication are unleashed as witnesses spread the word.
This is the raw material of warning. The good thing is that the signal is immediate. The bad thing is that it comes with a lot of noise. A formal structure for warnings, like Art Botterell’s Common Alerting Protocol, eases transmission but doesn’t make the information more reliable. We still need a way to analyze the warnings, to sort the raw cries of amazement and confusion, the requests for aid, and the coolly professional descriptions of experts, and assemble these records into a real-time portrait of a bad event. We need a system to boost intelligence everywhere, providing the kind of distributed, networked resistance crucial for surviving asymmetrical attacks. Such work could hardly be performed by machines. Operators would have to take calls from people on the ground, separate out the cranks, dampen the hysteria, and keep a precise record. In theory, all that information could then easily be pushed back out to the public. Such a system would be expensive, difficult to build, and extremely valuable. Fortunately, in most cities, it already exists.
“A 911 call center is a resource of awesome power,” says Carl Simpson, the director of emergency communications for the city of Portland, “because when something goes wrong, everybody dials 911.”
I was talking with Simpson at the entrance to the metropolitan area’s hypermodern Bureau of Emergency Communications. He led me up to the call center, a large, theatrical, open space where dozens of operators were taking incoming emergency calls and dispatching police, fire, or medical response teams.
Being a 911 operator means balancing seemingly contradictory skills. On one hand, operators have to be fanatically precise and well-organized. On the other, they must be able to establish rapport with panicky callers. Operators need excellent spatial memories so that they can keep a map of an ongoing crisis clear in their minds. But they cannot be wedded to an old picture of reality, because the city is constantly changing. It takes more hours to become a fully trusted operator in Simpson’s center than it does to become a licensed helicopter pilot. The washout rate during training is 40 percent.
I spent most of the day listening to calls, hearing how the narratives of people in distress are taken in, rearranged, stripped of irrelevancies, compared to known data (“There’s a parking lot on the north side there, ma’am, is that where you are?”), and coded for urgency. Simpson pointed out that most people think of a 911 call center only in terms of the data coming in. Very few people have considered what would happen if, after collecting all those public cries of alarm, you extracted the essentials, tagged them for easy distribution, then reversed the flow and pumped that information back out.
In 2002, Simpson went to lunch with a Portland businessman named Charles Jennings. A serial entrepreneur, Jennings made his first product 28 years ago; it was a little booklet called Drought Gardening that included a back-cover photo of the author in a full hippie beard. Later, he helped run his wife’s company, which sold pastries at a street market under one of Portland’s downtown bridges. After stints as a newspaper columnist, a comic strip writer, and a film and television producer, Jennings got into the software business, creating three companies in 10 years.
After September 11, Jennings pulled together several large public meetings in Portland to discuss how the local tech community could help out. Counterterrorism expert Richard Clarke appeared at one of them and spoke about one of the biggest but least-glamorous public safety problems: Emergency personnel – police, firefighters, paramedics – can’t share information easily in a crisis. A handful of projects emerged around that time, including a nonprofit founded by Jennings called the Regional Alliances for Infrastructure and Network Security, and a private software firm called Swan Island Networks (Jennings is CEO).
Their goal was to create a system linking public safety agencies. Jennings’ engineers discovered Art Botterell’s CAP standard, in which they saw a lingua franca of emergency communication. They added mapping, messaging, and security features and set out to license the package to public safety agencies for a fee under the name Connect & Protect. But this plan, seemingly straightforward, included a twist that turned out to be a radical breakthrough. The twist was in the very definition of a public safety agency.
What is a public safety agency? Obviously, the police count, but what about, say, hospitals? If hospitals are included, then why not clinics? If clinics, why not schools, senior housing, and neighborhood groups? Connect & Protect was designed to link people who need to share information in a crisis. But this turned out to be a lot of people.
During that lunch in 2002, Jennings pitched Carl Simpson the idea of capturing all the cries of distress pouring into the 911 center and using them to warn the public. He wanted to use Connect & Protect to give his swarm of public agencies a real-time picture of the region’s emergency activity. At first, Simpson was dubious, but a few weeks later, after a visit to a local school, he changed his mind. Simpson had been standing out on the grounds with the principal when a teacher walked up and asked where the kids would be having lunch that day. The principal squinted up at the clouds and said, “Outside.” Simpson, whose job puts him in the middle of a complex, highly effective sensor network, found this style of information gathering unimpressive. “His sole basis for deciding whether to put his kids inside or outside is a glance at the sky?” he told me later. “What if there was a chemical fire nearby? What if the police were combing the neighborhood for a criminal?” Such emergencies are rare, but when they happen the principal ought to know. Simpson called Jennings back and offered access to the 911 data on two conditions. First, there had to be no additional effort on the part of the dispatchers. And second, it had to be offered to the public schools for free.
Jennings’ company automated the process of reformatting the 911 call records into the CAP standard, and he and Simpson started inviting people to sign on. The schools got access, of course. They invited the security officers at the Oregon Zoo to join the network – it gets 1.3 million visitors a year. The county parole officers got access so they could keep an eye on incidents that might lead them to violators. Then they went further. They provided the 911 data to a private property manager responsible for three high-rises on the east side of the Willamette River, and they also gave access to the management of Lloyd Center, Portland’s biggest shopping complex. The public libraries and the county transportation officials and even the dogcatchers got the warnings.
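What might that translation step look like? Here is a sketch of wrapping a single dispatch record in a minimal CAP alert; the dispatch-record fields, the priority mapping, and the sender ID are hypothetical stand-ins, since neither the 911 feed’s schema nor Swan Island’s code is public.

```python
# A sketch of the 911-to-CAP translation step. The dispatch-record
# fields (call_id, call_type, priority, lat, lon...) are hypothetical
# stand-ins; the real feed's schema and Swan Island's code aren't public.
import xml.etree.ElementTree as ET
from xml.dom import minidom

# Hypothetical urgency mapping from a dispatch priority code.
PRIORITY_TO_URGENCY = {1: "Immediate", 2: "Expected", 3: "Future"}

def cad_to_cap(record: dict) -> str:
    """Wrap one dispatch record in a minimal CAP 1.1 alert."""
    ns = "urn:oasis:names:tc:emergency:cap:1.1"
    alert = ET.Element("alert", xmlns=ns)
    for tag, value in [
        ("identifier", record["call_id"]),
        ("sender", "boec.portland.example"),  # placeholder sender ID
        ("sent", record["received_at"]),
        ("status", "Actual"),
        ("msgType", "Alert"),
        ("scope", "Restricted"),              # not every call goes public
    ]:
        ET.SubElement(alert, tag).text = value
    info = ET.SubElement(alert, "info")
    ET.SubElement(info, "category").text = "Safety"
    ET.SubElement(info, "event").text = record["call_type"]
    ET.SubElement(info, "urgency").text = PRIORITY_TO_URGENCY[record["priority"]]
    ET.SubElement(info, "severity").text = "Unknown"
    ET.SubElement(info, "certainty").text = "Observed"
    area = ET.SubElement(info, "area")
    ET.SubElement(area, "areaDesc").text = record["location_desc"]
    # CAP circles are "lat,lon radius-in-km"
    ET.SubElement(area, "circle").text = f'{record["lat"]},{record["lon"]} 0.5'
    return minidom.parseString(ET.tostring(alert)).toprettyxml(indent="  ")

print(cad_to_cap({
    "call_id": "2005-340187", "received_at": "2005-12-01T18:07:00-08:00",
    "call_type": "Hit and run", "priority": 1,
    "location_desc": "Western suburbs", "lat": 45.52, "lon": -122.68,
}))
```

Once the records are in a standard envelope like this, honoring Simpson’s first condition is easy: the dispatchers keep working exactly as before, and the translation happens downstream, automatically.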
Meanwhile, the evangelists at the nonprofit that Jennings had founded were out peddling the idea that Connect & Protect wasn’t just for receiving alerts; it was for sending them, too. The raw material of warning didn’t have to come from 911 alone. Almost everyone receiving information could contribute information.
Network effects began to take hold, and by late 2005 recipients of the 911 alerts were sending warnings directly to one another every day. Messages about auto break-ins at the mall went to high-rises across the street, where the security office had 32 guards on staff. Parole officers sent alerts to the schools. On the Oregon coast, hotel managers used Connect & Protect to pass along news of storm threats. During a recent tsunami warning for the West Coast, Connect & Protect beat the beach siren in one coastal town by 24 minutes.
Connect & Protect is now a large conglomeration of overlapping alerts stretching across nine Oregon counties. Each stream of warnings is controlled by the agency that issues it. Fairly strict security features attempt to limit abuse of the warnings – certain categories of calls, such as reports of sexual crimes, are not transmitted publicly; the alerts can’t easily be copied or pasted; anonymity is forbidden.
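Rules like these amount to filters sitting between the dispatch feed and the subscribers. The sketch below shows the flavor of such a gate; the suppressed category labels and the anonymity check are my own assumptions about how it might be written, not a description of Connect & Protect’s internals.

```python
# A sketch of the kind of suppression rule the article describes:
# some call categories never leave the building, and anonymous
# senders are rejected. The category strings are assumed labels.
SUPPRESSED_CATEGORIES = {"sex offense", "child abuse"}

def publishable(alert: dict) -> bool:
    """Drop alerts in suppressed categories; require a named sender."""
    if alert["call_type"].lower() in SUPPRESSED_CATEGORIES:
        return False
    if not alert.get("sender"):  # anonymity is forbidden
        return False
    return True

feed = [
    {"call_type": "Hit and run", "sender": "boec.portland.example"},
    {"call_type": "Sex offense", "sender": "boec.portland.example"},
]
print([a["call_type"] for a in feed if publishable(a)])  # ['Hit and run']
```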
Despite these controls, Connect & Protect blatantly undermines privacy. Pick up the phone and call 911, and your address flashes across screens around the city – maybe even your neighbor’s. Then again, if you have a real need for help, your neighbor might be just the person you want to know about it.
Like a charcoal rubbing that reveals the pattern of a relief, the spread of Connect & Protect exposes the region’s real security network, a ubiquitous but previously hidden tangle of private and public groups. The lines of authority through which the alerts travel on Connect & Protect do not form a simple pyramid, but extend in a mycelial net that grows thicker in some places, thinner in others. The network copies – but also broadens and blurs – the existing web of governance. Eventually, most people may be touched by such a network, but the origin and route of any message is unpredictable and constantly changing.
Many of the important nodes of this network are run by people like Derek Bliss. The tall, skinny 36-year-old is the regional manager of First Response, the largest private security firm in the Northwest. “Let’s say there’s a high school football game that doesn’t go so well,” Bliss says, noting that he has security contracts with 10 percent of the Portland schools. “Remarks are made, and our guys have to keep people apart. We send out an alert to all the other schools.” Bliss plays no official role in his region’s crisis management bureaucracy. Yet his office takes about 16,000 calls per year. The 15 cars he has on duty, his secure dispatch center equipped with a generator, his contacts with property owners around the city – none of these count as public resources, even though his team would almost certainly be active in any emergency. Nationally, the employees of private security firms like First Response outnumber public law enforcement officers four to one.
The traditional way to tap into such private security firms – and the rest of the unseen resources that might help in a disaster – is by staging elaborate drills. But you can’t drill for every type of threat, and you can’t drill all the time. Everybody has better things to do. Laborious training sessions are forgotten during the long stretches when everything’s fine. That is the true nature of citizens. Even with constant propaganda, it’s impossible to keep us safe by keeping us scared. Weeks, months, and years pass, and we insist on living normally again.
If national safety – the ability to respond to hurricanes, terrorist attacks, earthquakes – depends on the execution of explicit plans, on soldierly obedience, and on showy security drills, then a decentralized security scheme is useless. But if it depends on improvised reactions to unknown threats, that’s a different story. A deeply textured, unmapped system is hard to bring down. A system that encourages improvisation is quick to recover. Ubiquitous networks of warning may constitute our own asymmetrical advantage, and, like the terrorist networks that occasionally carry out spectacular attacks, their power remains obscure until they’re called into action.
Wired, Issue 13.12, December 2005