Ray Kurzweil – Becoming more human

Ray Kurzweil, the famous inventor, is trim, balding, and not very tall. With his perfect posture and narrow black glasses, he would look at home in an old documentary about Cape Canaveral, but his mission is bolder than any mere voyage into space. He is attempting to travel across a frontier in time, to pass through the border between our era and a future so different as to be unrecognizable. He calls this border the singularity. Kurzweil is 60, but he intends to be no more than 40 when the singularity arrives.

Kurzweil’s notion of a singularity is taken from cosmology, in which it signifies a border in spacetime beyond which normal rules of measurement do not apply (the edge of a black hole, for example). The word was first used to describe a crucial moment in the evolution of humanity by the great mathematician John von Neumann. One day in the 1950s, while talking with his colleague Stanislaw Ulam, von Neumann began discussing the ever-accelerating pace of technological change, which, he said, “gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs as we know them could not continue.”

Many years later, this idea was picked up by another mathematician, the professor and science fiction writer Vernor Vinge, who added an additional twist. Vinge linked the singularity directly with improvements in computer hardware. This put the future on a schedule. He could look at how quickly computers were improving and make an educated guess about when the singularity would arrive. “Within 30 years, we will have the technological means to create superhuman intelligence,” Vinge wrote at the beginning of his 1993 essay The Coming Technological Singularity: How to Survive in the Post-Human Era. “Shortly after, the human era will be ended.” According to Vinge, superintelligent machines will take charge of their own evolution, creating ever smarter successors. Humans will become bystanders in history, too dull in comparison with their devices to make any decisions that matter.

Kurzweil transformed the singularity from an interesting speculation into a social movement. His best-selling books The Age of Spiritual Machines and The Singularity Is Near cover everything from unsolved problems in neuroscience to the question of whether intelligent machines should have legal rights. But the crucial thing that Kurzweil did was to make the end of the human era seem actionable: He argues that while artificial intelligence will render biological humans obsolete, it will not make human consciousness irrelevant. The first AIs will be created, he says, as add-ons to human intelligence, modeled on our actual brains and used to extend our human reach. AIs will help us see and hear better. They will give us better memories and help us fight disease. Eventually, AIs will allow us to conquer death itself. The singularity won’t destroy us, Kurzweil says. Instead, it will immortalize us.

There are singularity conferences now, and singularity journals. There has been a congressional report about confronting the challenges of the singularity, and late last year there was a meeting at the NASA Ames Research Center to explore the establishment of a singularity university. The meeting was called by Peter Diamandis, who established the X Prize. Attendees included senior government researchers from NASA, a noted Silicon Valley venture capitalist, a pioneer of private space exploration, and two computer scientists from Google.

At this meeting, there was some discussion about whether this university should avoid the provocative term singularity, with its cosmic connotations, and use a more ordinary phrase, like accelerating change. Kurzweil argued strongly against backing off. He is confident that the word will take hold as more and more of his astounding predictions come true.

Kurzweil does not believe in half measures. He takes 180 to 210 vitamin and mineral supplements a day, so many that he doesn’t have time to organize them all himself. So he’s hired a pill wrangler, who takes them out of their bottles and sorts them into daily doses, which he carries everywhere in plastic bags. Kurzweil also spends one day a week at a medical clinic, receiving intravenous longevity treatments. The reason for his focus on optimal health should be obvious: If the singularity is going to render humans immortal by the middle of this century, it would be a shame to die in the interim. To perish of a heart attack just before the singularity occurred would not only be sad for all the ordinary reasons, it would also be tragically bad luck, like being the last soldier shot down on the Western Front moments before the armistice was proclaimed.

Photo: Garry McLeod

In his childhood, Kurzweil was a technical prodigy. Before he turned 13, he’d fashioned telephone relays into a calculating device that could find square roots. At 14, he wrote software that analyzed statistical deviance; the program was distributed as standard equipment with the new IBM 1620. As a teenager, he cofounded a business that matched high school students with colleges based on computer evaluation of a mail-in questionnaire. He sold the company to Harcourt, Brace & World in 1968 for $100,000 plus royalties and had his first small fortune while still an undergraduate at MIT.

Though Kurzweil was young, it would have been a poor bet to issue him life insurance using standard actuarial tables. He has unlucky genes: His father died of heart disease at 58, his grandfather in his early forties. He himself was diagnosed with high cholesterol and incipient type 2 diabetes — both considered to be significant risk factors for early death — when only 35. He felt his bad luck as a cloud hanging over his life.

Still, the inventor squeezed a lot of achievement out of these early years. In his twenties, he tackled a science fiction type of problem: teaching computers to decipher words on a page and then read them back aloud. At the time, common wisdom held that computers were too slow and too expensive to master printed text in all its forms, at least in a way that was commercially viable.

But Kurzweil had a special confidence that grew from a habit of mind he’d been cultivating for years: He thought exponentially. To illustrate what this means, consider the following quiz: 2, 4, ?, ?.

What are the missing numbers? Many people will say 6 and 8. This suggests a linear function. But some will say the missing numbers are 8 and 16. This suggests an exponential function. (Of course, both answers are correct. This is a test of thinking style, not math skills.)

Human minds have a lot of practice with linear patterns. If we set out on a walk, the time it takes will vary linearly with the distance we’re going. If we bill by the hour, our income increases linearly with the number of hours we work. Exponential change is also common, but it’s harder to see. Financial advisers like to tantalize us by explaining how a tiny investment can grow into a startling sum through the exponential magic of compound interest. But it’s psychologically difficult to heed their advice. For years, an interest-bearing account increases by depressingly tiny amounts. Then, in the last moment, it seems to jump. Exponential growth is unintuitive, because it can be imperceptible for a long time and then move shockingly fast. It takes training and experience, and perhaps a certain analytical coolness, to trust in exponential curves whose effects cannot be easily perceived.
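The slow-then-sudden feel of compound interest is easy to see in a toy calculation (the principal and rate here are purely illustrative):

```python
# Compound interest: $1,000 at a hypothetical 7% annual return.
# The balance crawls for decades, then appears to leap.
principal = 1000.0
rate = 0.07

balance = principal
for year in range(1, 51):
    balance *= 1 + rate
    if year % 10 == 0:
        print(f"year {year:2d}: ${balance:,.0f}")
```

The first decade adds less than the original stake; the final decade adds more than all the earlier ones combined, even though the growth rule never changed.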

Moore’s law — the observation by Intel cofounder Gordon Moore that the number of transistors on an integrated circuit doubles roughly every 18 months — is another example of exponential change. For people like Kurzweil, it is the key example, because Moore’s law and its many derivatives suggest that just about any limit on computing power today will be overcome in short order. While Kurzweil was working on his reading machine, computers were improving, and they were indeed improving exponentially. The payoff came on January 13, 1976, when Walter Cronkite’s famous sign-off — “and that’s the way it is” — was read not by the anchorman but by the synthetic voice of a Kurzweil Reading Machine. Stevie Wonder was the first customer.

The original reader was the size of a washing machine. It read slowly and cost $50,000. One day late last year, as a winter storm broke across New England, I stood in Kurzweil’s small office suite in suburban Boston, playing with the latest version. I hefted it in my hand, stuck it in my pocket, pulled it out again, then raised it above a book flopped open on the table. A bright light flashed, and a voice began reading aloud. The angle of the book, the curve of its pages, the uneven shadows — none of that was a problem. The mechanical voice picked up from the numerals on the upper left corner — … four hundred ten. The singularity is near. The continued opportunity to alleviate human distress is one key motivation for continuing technological advancement — and continued down the page in an artificial monotone. Even after three decades of improvement, Kurzweil’s reader is a dull companion. It expresses no emotion. However, it is functionally brilliant to the point of magic. It can handle hundreds of fonts and any size book. It doesn’t mind being held at an angle by an unsteady hand. Not only that, it also makes calls: Computers have become so fast and small they’ve nearly disappeared, and the Kurzweil reader is now just software running on a Nokia phone.

In the late ’70s, Kurzweil’s character-recognition algorithms were used to scan legal documents and articles from newspapers and magazines. The result was the Lexis and Nexis databases. And a few years later, Kurzweil released speech recognition software that is the direct ancestor of today’s robot customer service agents. Their irritating mistakes taking orders and answering questions would seem to offer convincing evidence that real AI is still many years away. But Kurzweil draws the opposite conclusion. He admits that not everything he has invented works exactly as we might wish. But if you will grant him exponential progress, the fact that we already have virtual robots standing in for retail clerks, and cell phones that read books out loud, is evidence that the world is about to change in even more fantastical ways.

Look at it this way: If the series of numbers in the quiz mentioned earlier is linear and progresses for 100 steps, the final entry is 200. But if progress is exponential, then the final entry is upwards of a nonillion, computers will soon be smarter than humans, and nobody has to die.
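The gap between those two readings of the quiz can be checked directly; a few lines of Python extend both series the full 100 steps:

```python
# The quiz series (2, 4, ...) extended 100 steps two ways.
linear = [2 * n for n in range(1, 101)]        # 2, 4, 6, 8, ...
exponential = [2 ** n for n in range(1, 101)]  # 2, 4, 8, 16, ...

print(linear[-1])       # 200
print(exponential[-1])  # 1267650600228229401496703205376, ~1.27 nonillion
```

Same starting numbers, same number of steps; the only difference is the rule, and the final entries differ by 28 orders of magnitude.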

In a small medical office on the outskirts of Denver, with windows overlooking the dirty snow and the golden arches of a fast-food mini-mall, one of the world’s leading longevity physicians, Terry Grossman, works on keeping Ray Kurzweil alive. Kurzweil is not Grossman’s only client. The doctor charges $6,000 per appointment, and wealthy singularitarians from all over the world visit him to plan their leap into the future.

Grossman’s patient today is Matt Philips, 32, who became independently wealthy when Yahoo bought the Internet advertising company where he worked for four years. A young medical technician is snipping locks of his hair, and another is extracting small vials of blood. Philips is in good shape at the moment, but he is aware that time marches on. “I’m dying slowly. I can’t feel it, but I know it’s happening, little by little, cell by cell,” he wrote on his intake questionnaire. Philips has read Kurzweil’s books. He is a smart, skeptical person and accepts that the future is not entirely predictable, but he also knows the meaning of upside. At worst, his money buys him new information about his health. At best, it makes him immortal.


“The normal human lifespan is about 125 years,” Grossman tells him. But Philips wasn’t born until 1975, so he starts with an advantage. “I think somebody your age, and in your condition, has a reasonable chance of making it across the first bridge,” Grossman says.

According to Grossman and other singularitarians, immortality will arrive in stages. First, lifestyle and aggressive antiaging therapies will allow more people to approach the 125-year limit of the natural human lifespan. This is bridge one. Meanwhile, advanced medical technology will begin to fix some of the underlying biological causes of aging, allowing this natural limit to be surpassed. This is bridge two. Finally, computers become so powerful that they can model human consciousness. This will permit us to download our personalities into nonbiological substrates. When we cross this third bridge, we become information. And then, as long as we maintain multiple copies of ourselves to protect against a system crash, we won’t die.

Kurzweil himself started across the first bridge in 1988. That year, he confronted the risk that had been haunting him and began to treat his body as a machine. He read up on the latest nutritional research, adopted the Pritikin diet, cut his fat intake to 10 percent of his calories, lost 40 pounds, and cured both his high cholesterol and his incipient diabetes. Kurzweil wrote a book about his experience, The 10% Solution for a Healthy Life. But this was only the beginning.

Kurzweil met Grossman at a Foresight Nanotech Institute meeting in 1999, and they became research partners. Their object of investigation was Kurzweil’s body. Having cured himself of his most pressing health problems, Kurzweil was interested in adopting the most advanced medical and nutritional technologies, but it wasn’t easy to find a doctor willing to tolerate his persistent questions. Grossman was building a new type of practice, focused not on illness but on the pursuit of optimal health and extreme longevity. The two men exchanged thousands of emails, sharing speculations about which cutting-edge discoveries could be safely tried.

Though both Grossman and Kurzweil respect science, their approach is necessarily improvisational. If a therapy has some scientific promise and little risk, they’ll try it. Kurzweil gets phosphatidylcholine intravenously, on the theory that this will rejuvenate all his body’s tissues. He takes DHEA and testosterone. Both men use special filters to produce alkaline water, which they drink between meals in the hope that negatively charged ions in the water will scavenge free radicals and produce a variety of health benefits. This kind of thing may seem like quackery, especially when promoted by various New Age outfits touting the “pH miracle of living.” Kurzweil and Grossman justify it not so much with scientific citations — though they have a few — but with a tinkerer’s shrug. “Life is not a randomized, double-blind, placebo-controlled study,” Grossman explains. “We don’t have that luxury. We are operating with incomplete information. The best we can do is experiment with ourselves.”

Obviously, Kurzweil has no plan for retirement. He intends to sustain himself indefinitely through his intelligence, which he hopes will only grow. A few years ago he deployed an automated system for making money on the stock market, called FatKat, which he uses to direct his own hedge fund. He also earns about $1 million a year in speaking fees.

Meanwhile, he tries to safeguard his well-being. As a driver he is cautious. He frequently bicycles through the Boston suburbs, which is good for physical conditioning but also puts his immortality on the line. For most people, such risks blend into the background of life, concealed by a cheerful fatalism that under ordinary conditions we take as a sign of mental health. But of course Kurzweil objects to this fatalism. He wants us to try harder to survive.

His plea is often ignored. Kurzweil has written about the loneliness of being a singularitarian. This may seem an odd complaint, given his large following, but there is something to it. A dozen of his fans may show up in Denver every month to initiate longevity treatments, but many of them, like Matt Philips, are simply hedging their bets. Most health fanatics remain agnostic, at best, on the question of immortality.

Kurzweil predicts that by the early 2030s, most of our fallible internal organs will have been replaced by tiny robots. We’ll have “eliminated the heart, lungs, red and white blood cells, platelets, pancreas, thyroid and all the hormone-producing organs, kidneys, bladder, liver, lower esophagus, stomach, small intestines, large intestines, and bowel. What we have left at this point is the skeleton, skin, sex organs, sensory organs, mouth and upper esophagus, and brain.”

In outlining these developments, Kurzweil’s tone is so calm and confident that he seems to be describing the world as it is today, rather than some distant, barely imaginable future. This is because his prediction falls out cleanly from the equations he’s proposed. Knowledge doubles every year, Kurzweil says. He has estimated the number of computations necessary to simulate a human brain. The rest is simple math.

But wait. There may be something wrong. Kurzweil’s theory of accelerating change is meant to be a universal law, applicable wherever intelligence is found. It’s fine to say that knowledge doubles every year. But then again, what is a year? A year is an astronomical artifact. It is the length of time required by Earth to make one orbit around our unexceptional star. A year is important to our nature, to our biology, to our fantasies and dreams. But it is a strange unit to discover in a general law.

“Doubling every year,” I say to Kurzweil, “makes your theory sound like a wish.”

He’s not thrown off. A year, he replies, is just shorthand. The real equation for accelerating world knowledge is much more complicated than that; his book spells out the full formula.

He has examined the evidence, and welcomes debate on the minor details. If you accept his basic premise of accelerating growth, he’ll yield a little on the date he predicts the singularity will occur. After all, concede accelerating growth and the exponential fuse is lit. At the end you get that big bang: an explosion in intelligence that yields immortal life.

Despite all this, people continue to disbelieve. There is a lively discussion among experts about the validity of Moore’s law. Kurzweil pushes Moore’s law back to the dawn of time, and forward to the end of the universe. But many computer scientists and historians of technology wonder if it will last another decade. Some suspect that the acceleration of computing power has already slowed.

There are also philosophical objections. Kurzweil’s theory is that super-intelligent computers will necessarily be human, because they will be modeled on the human brain. But there are other types of intelligence in the world — for instance, the intelligence of ant colonies — that are alien to humanity. Grant that a computer, or a network of computers, might awaken. The consciousness of this fabulous AI might remain as incomprehensible to us as we are to the protozoa.

Other pessimists point out that the brain is more than raw processing power. It also has a certain architecture, a certain design. It is attached to a specific type of nervous system, and it accepts only particular kinds of inputs. Even with better computational speed driving our thoughts, we might still be stuck in a kind of evolutionary dead end, incapable of radical self-improvement.

And these are the merely intellectual protests Kurzweil receives. The fundamental cause for loneliness, if you are a prophet of the singularity, is probably more profound. It stems from the simple fact that the idea is so strange. “Death has been a ubiquitous, ever-present facet of human society,” says Kurzweil’s friend Martine Rothblatt, founder of Sirius radio and chair of United Therapeutics, a biotech firm on whose board Kurzweil sits. “To tell people you are going to defeat death is like telling people you are going to travel back in time. It has never been done. I would be surprised if people had a positive reaction.”

To press his case, Kurzweil is writing and producing an autobiographical movie, with walk-ons by Alan Dershowitz and Tony Robbins. Kurzweil appears in two guises, as himself and as an intelligent computer named Ramona, played by an actress. Ramona has long been the inventor’s virtual alter ego and the expression of his most personal goals. “Women are more interesting than men,” he says, “and if it’s more interesting to be with a woman, it is probably more interesting to be a woman.” He hopes one day to bring Ramona to life, and to have genuine human experiences, both with her and as her. Kurzweil has been married for 32 years to his wife, Sonya Kurzweil. They have two children — one at Stanford University, one at Harvard Business School. “I don’t necessarily only want to be Ramona,” he says. “It’s not necessarily about gender confusion, it’s just about freedom to express yourself.”

Kurzweil’s movie offers a taste of the drama such a future will bring. Ramona is on a quest to attain full legal rights as a person. She agrees to take a Turing test, the classic proof of artificial intelligence, but although Ramona does her best to masquerade as human, she falls victim to one of the test’s subtle flaws: Humans have limited intelligence. A computer that appears too smart will fail just as definitively as one that seems too dumb. “She loses because she is too clever!” Kurzweil says.

The inventor’s sympathy with his robot heroine is heartfelt. “If you’re just very good at doing mathematical theorems and making stock market investments, you’re not going to pass the Turing test,” Kurzweil acknowledged in 2006 during a public debate with noted computer scientist David Gelernter. Kurzweil himself is brilliant at math, and pretty good at stock market investments. The great benefits of the singularity, for him, do not lie here. “Human emotion is really the cutting edge of human intelligence,” he says. “Being funny, expressing a loving sentiment — these are very complex behaviors.”

One day, sitting in his office overlooking the suburban parking lot, I ask Kurzweil if being a singularitarian makes him happy. “If you took a poll of primitive man, happiness would be getting a fire to light more easily,” he says. “But we’ve expanded our horizon, and that kind of happiness is now the wrong thing to focus on. Extending our knowledge and casting a wider net of consciousness is the purpose of life.” Kurzweil expects that the world will soon be entirely saturated by thought. Even the stones may compute, he says, within 200 years.

Every day he stays alive brings him closer to this climax in intelligence, and to the time when Ramona will be real. Kurzweil is a technical person, but his goal is not technical in this respect. Yes, he wants to become a robot. But the robots of his dreams are complex, funny, loving machines. They are as human as he hopes to be.

Gary Wolf (gary@antephase.com) wrote about productivity guru David Allen in issue 15.10.

Supermemo – Piotr Wozniak and Spaced Repetition

The winter sun sets in mid-afternoon in Kolobrzeg, Poland, but the early twilight does not deter people from taking their regular outdoor promenade. Bundled up in parkas with fur-trimmed hoods, strolling hand in mittened hand along the edge of the Baltic Sea, off-season tourists from Germany stop openmouthed when they see a tall, well-built, nearly naked man running up and down the sand.

“Kalt? Kalt?” one of them calls out. The man gives a polite but vague answer, then turns and dives into the waves. After swimming back and forth in the 40-degree water for a few minutes, he emerges from the surf and jogs briefly along the shore. The wind is strong, but the man makes no move to get dressed. Passersby continue to comment and stare. “This is one of the reasons I prefer anonymity,” he tells me in English. “You do something even slightly out of the ordinary and it causes a sensation.”

Piotr Wozniak’s quest for anonymity has been successful. Nobody along this string of little beach resorts recognizes him as the inventor of a technique to turn people into geniuses. A portion of this technique, embodied in a software program called SuperMemo, has enthusiastic users around the world. They apply it mainly to learning languages, and it’s popular among people for whom fluency is a necessity — students from Poland or other poor countries aiming to score well enough on English-language exams to study abroad. A substantial number of them do not pay for it, and pirated copies are ubiquitous on software bulletin boards in China, where it competes with knockoffs like SugarMemo.

SuperMemo is based on the insight that there is an ideal moment to practice what you’ve learned. Practice too soon and you waste your time. Practice too late and you’ve forgotten the material and have to relearn it. The right time to practice is just at the moment you’re about to forget. Unfortunately, this moment is different for every person and each bit of information. Imagine a pile of thousands of flash cards. Somewhere in this pile are the ones you should be practicing right now. Which are they?

Fortunately, human forgetting follows a pattern. We forget exponentially. A graph of our likelihood of getting the correct answer on a quiz sweeps quickly downward over time and then levels off. This pattern has long been known to cognitive psychology, but it has been difficult to put to practical use. It’s too complex for us to employ with our naked brains.

Twenty years ago, Wozniak realized that computers could easily calculate the moment of forgetting if he could discover the right algorithm. SuperMemo is the result of his research. It predicts the future state of a person’s memory and schedules information reviews at the optimal time. The effect is striking. Users can seal huge quantities of vocabulary into their brains. But for Wozniak, 46, helping people learn a foreign language fast is just the tiniest part of his goal. As we plan the days, weeks, even years of our lives, he would have us rely not merely on our traditional sources of self-knowledge — introspection, intuition, and conscious thought — but also on something new: predictions about ourselves encoded in machines.
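The core calculation can be sketched with a textbook exponential-decay model of memory (the model and numbers here are illustrative, not SuperMemo’s actual internals): if the probability of recall decays as R(t) = e^(−t/S) for some stability S, then recall falls to a chosen threshold at t = −S · ln(threshold).

```python
import math

def review_time(stability_days, threshold=0.9):
    """Days until recall probability R(t) = exp(-t/S) falls to threshold."""
    return -stability_days * math.log(threshold)

# A fragile memory (S = 2 days) needs review within hours;
# a well-consolidated one (S = 60 days) can wait nearly a week.
print(round(review_time(2), 2))   # 0.21 days
print(round(review_time(60), 1))  # 6.3 days
```

The hard part, and the substance of Wozniak’s research, is estimating each item’s stability from its repetition history; once that number is known, the scheduling itself is a one-line formula.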

Given the chance to observe our behaviors, computers can run simulations, modeling different versions of our path through the world. By tuning these models for top performance, computers will give us rules to live by. They will be able to tell us when to wake, sleep, learn, and exercise; they will cue us to remember what we’ve read, help us track whom we’ve met, and remind us of our goals. Computers, in Wozniak’s scheme, will increase our intellectual capacity and enhance our rational self-control.

The reason the inventor of SuperMemo pursues extreme anonymity, asking me to conceal his exact location and shunning even casual recognition by users of his software, is not because he’s paranoid or a misanthrope but because he wants to avoid random interruptions to a long-running experiment he’s conducting on himself. Wozniak is a kind of algorithmic man. He’s exploring what it’s like to live in strict obedience to reason. On first encounter, he appears to be one of the happiest people I’ve ever met.

In the late 1800s, a German scientist named Hermann Ebbinghaus made up lists of nonsense syllables and measured how long it took to forget and then relearn them. (Here is an example of the type of list he used: bes dek fel gup huf jeik mek meun pon daus dor gim hes.) In experiments of breathtaking rigor and tedium, Ebbinghaus practiced and recited from memory 2.5 nonsense syllables a second, then rested for a bit and started again. Maintaining a pace of rote mental athleticism that all students of foreign verb conjugation will regard with awe, Ebbinghaus trained this way for more than a year. Then, to show that the results he was getting weren’t an accident, he repeated the entire set of experiments three years later. Finally, in 1885, he published a monograph called Memory: A Contribution to Experimental Psychology. The book became the founding classic of a new discipline.

Ebbinghaus discovered many lawlike regularities of mental life. He was the first to draw a learning curve. Among his original observations was an account of a strange phenomenon that would drive his successors half batty for the next century: the spacing effect.

Ebbinghaus showed that it’s possible to dramatically improve learning by correctly spacing practice sessions. On one level, this finding is trivial; all students have been warned not to cram. But the efficiencies created by precise spacing are so large, and the improvement in performance so predictable, that from nearly the moment Ebbinghaus described the spacing effect, psychologists have been urging educators to use it to accelerate human progress. After all, there is a tremendous amount of material we might want to know. Time is short.

How Supermemo Works
SuperMemo is a program that keeps track of discrete bits of information you’ve learned and want to retain. For example, say you’re studying Spanish. Your chance of recalling a given word when you need it declines over time according to a predictable pattern. SuperMemo tracks this so-called forgetting curve and reminds you to rehearse your knowledge when your chance of recalling it has dropped to, say, 90 percent. When you first learn a new vocabulary word, your chance of recalling it will drop quickly. But after SuperMemo reminds you of the word, the rate of forgetting levels out. The program tracks this new decline and waits longer to quiz you the next time.
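The sidebar’s lengthening-interval idea can be sketched in the spirit of SM-2, the early SuperMemo scheduling rule Wozniak published; the real program has evolved far beyond this, so treat the sketch as a historical simplification rather than current behavior:

```python
def sm2_update(interval, repetition, ef, quality):
    """One review step in the style of the published SM-2 rule.

    quality: 0-5 self-grade; below 3 means the item was forgotten.
    Returns (next_interval_days, repetition_count, ease_factor).
    """
    if quality < 3:
        return 1, 0, ef  # forgotten: the item starts over
    if repetition == 0:
        interval = 1
    elif repetition == 1:
        interval = 6
    else:
        interval = round(interval * ef)  # easy items spread out fastest
    # Good grades nudge the ease factor up, poor ones push it down (floor 1.3).
    ef = max(1.3, ef + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return interval, repetition + 1, ef

# Four perfect recalls in a row: the gaps stretch to 1, 6, 16, 45 days.
interval, rep, ef = 0, 0, 2.5
for _ in range(4):
    interval, rep, ef = sm2_update(interval, rep, ef, quality=5)
    print(interval, round(ef, 2))
```

The key property is visible in the output: each successful recall multiplies the waiting time, so a mature item might be touched only once or twice a year, which is what makes large collections manageable.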

However, this technique never caught on. The spacing effect is “one of the most remarkable phenomena to emerge from laboratory research on learning,” the psychologist Frank Dempster wrote in 1988, at the beginning of a typically sad encomium published in American Psychologist under the title “The Spacing Effect: A Case Study in the Failure to Apply the Results of Psychological Research.” The sorrowful tone is not hard to understand. How would computer scientists feel if people continued to use slide rules for engineering calculations? What if, centuries after the invention of spectacles, people still dealt with nearsightedness by holding things closer to their eyes? Psychologists who studied the spacing effect thought they possessed a solution to a problem that had frustrated humankind since before written language: how to remember what’s been learned. But instead, the spacing effect became a reminder of the impotence of laboratory psychology.

As a student at the Poznan University of Technology in western Poland in the 1980s, Wozniak was overwhelmed by the sheer number of things he was expected to learn. But that wasn’t his most troubling problem. He wasn’t just trying to pass his exams; he was trying to learn. He couldn’t help noticing that within a few months of completing a class, only a fraction of the knowledge he had so painfully acquired remained in his mind. Wozniak knew nothing of the spacing effect, but he knew that the methods at hand didn’t work.

The most important challenge was English. Wozniak refused to be satisfied with the broken, half-learned English that so many otherwise smart students were stuck with. So he created an analog database, with each entry consisting of a question and answer on a piece of paper. Every time he reviewed a word, phrase, or fact, he meticulously noted the date and marked whether he had forgotten it. At the end of the session, he tallied the number of remembered and forgotten items. By 1984, a century after Ebbinghaus finished his second series of experiments on nonsense syllables, Wozniak’s database contained 3,000 English words and phrases and 1,400 facts culled from biology, each with a complete repetition history. He was now prepared to ask himself an important question: How long would it take him to master the things he wanted to know?

The answer: too long. In fact, the answer was worse than too long. According to Wozniak’s first calculations, success was impossible. The problem wasn’t learning the material; it was retaining it. He found that 40 percent of his English vocabulary vanished over time. Sixty percent of his biology answers evaporated. Using some simple calculations, he figured out that with his normal method of study, it would require two hours of practice every day to learn and retain a modest English vocabulary of 15,000 words. For 30,000 words, Wozniak would need twice that time. This was impractical.
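Wozniak’s estimate is easy to reproduce in a few lines. The parameters in the sketch below (reviews per word per year, seconds per review) are illustrative assumptions, not Wozniak’s measured figures; the point is only that under brute-force review, study time scales linearly with the size of the vocabulary:

```python
# Rough workload model for maintaining a vocabulary by brute-force review.
# All parameters are illustrative assumptions, not Wozniak's measured figures.
def daily_study_minutes(vocab_size, reviews_per_word_per_year=18,
                        seconds_per_review=10):
    reviews_per_year = vocab_size * reviews_per_word_per_year
    seconds_per_day = reviews_per_year * seconds_per_review / 365
    return seconds_per_day / 60

# About two hours a day for 15,000 words at these assumptions;
# exactly twice that for 30,000.
print(round(daily_study_minutes(15000)))
print(round(daily_study_minutes(30000)))
```

Doubling the vocabulary doubles the daily burden, which is exactly the wall Wozniak hit.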

Wozniak’s discouraging numbers were roughly consistent with the results that Ebbinghaus had recorded in his own experiments and that have been confirmed by other psychologists in the decades since. If students nonetheless manage to become expert in a few of the things they study, it’s not because they retain the material from their lessons but because they specialize in a relatively narrow subfield where intense practice keeps their memory fresh. When it comes to language, the received wisdom is that immersion — usually amounting to actual immigration — is necessary to achieve fluency. On one hand, this is helpful advice. On the other hand, it’s an awful commentary on the value of countless classroom hours. Learning things is easy. But remembering them — this is where a certain hopelessness sets in.

As Wozniak later wrote in describing the failure of his early learning system: “The process of increasing the size of my databases gradually progressed at the cost of knowledge retention.” In other words, as his list grew, so did his forgetting. He was climbing a mountain of loose gravel and making less and less progress at each step.

The problem of forgetting might not torment us so much if we could only convince ourselves that remembering isn’t important. Perhaps the things we learn — words, dates, formulas, historical and biographical details — don’t really matter. Facts can be looked up. That’s what the Internet is for. When it comes to learning, what really matters is how things fit together. We master the stories, the schemas, the frameworks, the paradigms; we rehearse the lingo; we swim in the episteme.

The disadvantage of this comforting notion is that it’s false. “The people who criticize memorization — how happy would they be to spell out every letter of every word they read?” asks Robert Bjork, chair of UCLA’s psychology department and one of the most eminent memory researchers. After all, Bjork notes, children learn to read whole words through intense practice, and every time we enter a new field we become children again. “You can’t escape memorization,” he says. “There is an initial process of learning the names of things. That’s a stage we all go through. It’s all the more important to go through it rapidly.” The human brain is a marvel of associative processing, but in order to make associations, data must be loaded into memory.

Once we drop the excuse that memorization is pointless, we’re left with an interesting mystery. Much of the information does remain in our memory, though we cannot recall it. “To this day,” Bjork says, “most people think about forgetting as decay, that memories are like footprints in the sand that gradually fade away. But that has been disproved by a lot of research. The memory appears to be gone because you can’t recall it, but we can prove that it’s still there. For instance, you can still recognize a ‘forgotten’ item in a group. Yes, without continued use, things become inaccessible. But they are not gone.”

After an ichthyologist named David Starr Jordan became the first president of Stanford University in the 1890s, he bequeathed to memory researchers one of their favorite truisms: Every time he learned the name of a student, Jordan is said to have complained, he forgot the name of a fish. But the fish to which Jordan had devoted his research life were still there, somewhere beneath the surface of consciousness. The difficulty was in catching them.

During the years that Wozniak struggled to master English, Bjork and his collaborator, Elizabeth Bjork (she is also a professor of psychology; the two have been married since 1969), were at work on a new theory of forgetting. Both were steeped in the history of laboratory research on memory, and one of their goals was to get to the bottom of the spacing effect. They were also curious about the paradoxical tendency of older memories to become stronger with the passage of time, while more recent memories faded. Their explanation involved an elegant model with deeply counterintuitive implications.

Long-term memory, the Bjorks said, can be characterized by two components, which they named retrieval strength and storage strength. Retrieval strength measures how likely you are to recall something right now, how close it is to the surface of your mind. Storage strength measures how deeply the memory is rooted. Some memories may have high storage strength but low retrieval strength. Take an old address or phone number. Try to think of it; you may feel that it’s gone. But a single reminder could be enough to restore it for months or years. Conversely, some memories have high retrieval strength but low storage strength. Perhaps you’ve recently been told the names of the children of a new acquaintance. At this moment they may be easily accessible, but they are likely to be utterly forgotten in a few days, and a single repetition a month from now won’t do much to strengthen them at all.

The Bjorks were not the first psychologists to make this distinction, but they and a series of collaborators used a broad range of experimental data to show how these laws of memory wreak havoc on students and teachers. One of the problems is that the amount of storage strength you gain from practice is inversely correlated with the current retrieval strength. In other words, the harder you have to work to get the right answer, the more the answer is sealed in memory. Precisely those things that seem to signal we’re learning well — easy performance on drills, fluency during a lesson, even the subjective feeling that we know something — are misleading when it comes to predicting whether we will remember it in the future. “The most motivated and innovative teachers, to the extent they take current performance as their guide, are going to do the wrong things,” Robert Bjork says. “It’s almost sinister.”
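The two-component model lends itself to a toy simulation. The decay and update rules below are my own illustrative guesses at functional forms, not the Bjorks’ published equations; what matters is the inverse relationship they describe: the lower the retrieval strength at the moment of review, the larger the gain in storage strength:

```python
import math

# Toy version of the Bjorks' two-component model; the functional forms here
# are illustrative guesses, not the Bjorks' published equations.
class Memory:
    def __init__(self):
        self.storage = 1.0    # how deeply rooted; never decreases
        self.retrieval = 1.0  # how accessible right now

    def decay(self, days):
        # Retrieval strength fades with time; deeper storage slows the fade.
        self.retrieval *= math.exp(-days / self.storage)

    def review(self):
        # The counterintuitive law: the harder the retrieval (the lower the
        # current retrieval strength), the larger the gain in storage strength.
        self.storage += 1.0 - self.retrieval
        self.retrieval = 1.0  # a successful review restores accessibility

hard = Memory()
hard.decay(5)   # nearly forgotten...
hard.review()   # ...so this review deepens storage substantially
easy = Memory()
easy.review()   # reviewing while still fresh adds almost nothing
```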

The most popular learning systems sold today — for instance, foreign language software like Rosetta Stone — cheerfully defy every one of the psychologists’ warnings. With its constant feedback and easily accessible clues, Rosetta Stone brilliantly creates a sensation of progress. “Go to Amazon and look at the reviews,” says Greg Keim, Rosetta Stone’s CTO, when I ask him what evidence he has that people are really remembering what they learn. “That is as objective as you can get in terms of a user’s sense of achievement.” The sole problem here, from the psychologists’ perspective, is that the user’s sense of achievement is exactly what we should most distrust.

The battle between lab-tested techniques and conventional pedagogy went on for decades, and it’s fair to say that the psychologists lost. All those studies of human memory in the lab — using nonsense syllables, random numbers, pictures, maps, foreign vocabulary, scattered dots — had so little influence on actual practice that eventually their irrelevance provoked a revolt. In the late ’70s, Ulric Neisser, the pioneering researcher who coined the term cognitive psychology, launched a broad attack on the approach of Ebbinghaus and his scientific kin.

“We have established firm empirical generalizations, but most of them are so obvious that every 10-year-old knows them anyway,” Neisser complained. “We have an intellectually impressive group of theories, but history offers little confidence that they will provide any meaningful insight into natural behavior.” Neisser encouraged psychologists to leave their labs and study memory in its natural environment, in the style of ecologists. He didn’t doubt that the laboratory theories were correct in their limited way, but he wanted results that had power to change the world.

Many psychologists followed Neisser. But others stuck to their laboratory methods. The spacing effect was one of the proudest lab-derived discoveries, and it was interesting precisely because it was not obvious, even to professional teachers. The same year that Neisser revolted, Robert Bjork, working with Thomas Landauer of Bell Labs, published the results of two experiments involving nearly 700 undergraduate students. Landauer and Bjork were looking for the optimal moment to rehearse something so that it would later be remembered. Their results were impressive: The best time to study something is at the moment you are about to forget it. And yet — as Neisser might have predicted — that insight was useless in the real world. Determining the precise moment of forgetting is essentially impossible in day-to-day life.
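Under a simple exponential forgetting curve (an illustrative model, not Landauer and Bjork’s actual experimental procedure), the “about to forget” moment can be computed directly: schedule each review for the moment predicted recall falls to a chosen threshold. Because each successful review makes the memory more stable, the intervals expand:

```python
import math

def next_review_day(stability, threshold=0.9):
    # If predicted recall follows R(t) = exp(-t / stability), review at the
    # moment R(t) falls to the chosen threshold.
    return -stability * math.log(threshold)

# Each successful review raises stability, so review intervals expand.
stability = 2.0
schedule = []
for rep in range(4):
    schedule.append(round(next_review_day(stability), 1))
    stability *= 2.5  # illustrative growth factor, not a measured one
print(schedule)
```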

Obviously, computers were the answer, and the idea of using them was occasionally suggested, starting in the 1960s. But except for experimental software, nothing was built. The psychologists were interested mainly in theories and models. The teachers were interested in immediate signs of success. The students were cramming to pass their exams. The payoff for genuine progress was somehow too abstract, too delayed, to feed back into the system in a useful way. What was needed was not an academic psychologist but a tinkerer, somebody with a lot of time on his hands, a talent for mathematics, and a strangely literal temperament that made him think he should actually recall the things he learned.

The day I first meet Wozniak, we go for a 7-mile walk down a windy beach. I’m in my business clothes and half comatose from jet lag; he’s wearing a track suit and comes toward me with a gait so buoyant he seems about to take to the air. He asks me to walk on the side away from the water. “People say that when I get excited I tend to drift in their direction, so it is better that I stand closer to the sea so I don’t push you in,” he says.

Wozniak takes an almost physical pleasure in reason. He loves to discuss things with people, to get insight into their personalities, and to give them advice — especially in English. One of his most heartfelt wishes is that the world have one language and one currency so this could all be handled more efficiently. He’s appalled that Poland is still not in the Eurozone. He’s baffled that Americans do not use the metric system. For two years he kept a diary in Esperanto.

Although Esperanto was the ideal expression of his universalist dreams, English is the leading real-world implementation. Though he has never set foot in an English-speaking country, he speaks the language fluently. “Two words that used to give me trouble are perspicuous and perspicacious,” he confessed as we drank beer with raspberry syrup at a tiny beachside restaurant where we were the only customers. “Then I found a mnemonic to enter in SuperMemo: clear/clever. Now I never misuse them.”

Wozniak’s command of English is the result of a series of heroic experiments, in the tradition of Ebbinghaus. They involved relentless sessions of careful self-analysis, tracked over years. He began with the basic conundrum of too much to study in too little time. His first solution was based on folk wisdom. “It is a common intuition,” Wozniak later wrote, “that with successive repetitions, knowledge should gradually become more durable and require less frequent review.”

This insight had already been proven by Landauer and Bjork, but Wozniak was unaware of their theory of forgetting or of any of the landmark studies in laboratory research on memory. This ignorance was probably a blessing, because it forced him to rely on pragmatic engineering. In 1985, he divided his database into three equal sets and created schedules for studying each of them. One of the sets he studied every five days, another every 18 days, and the third at expanding intervals, increasing the period between study sessions each time he got the answers right.

This experiment proved that Wozniak’s first hunch was too simple. On none of the tests did his recall show significant improvement over the naive methods of study he normally used. But he was not discouraged and continued making ever more elaborate investigations of study intervals, changing the second interval to two days, then four days, then six days, and so on. Then he changed the third interval, then the fourth, and continued to test and measure, measure and test, for nearly a decade. His conviction that forgetting could be tamed by following rules gave him the intellectual fortitude to continue searching for those rules. He doggedly traced a matrix of paths, like a man pacing off steps in a forest where he is lost.

All of his early work was done on paper. In the computer science department at the Poznan University of Technology, “we had a single mainframe of Polish-Russian design, with punch cards,” Wozniak recalls. “If you could stand in line long enough to get your cards punched, you could wait a couple of days more for the machine to run your cards, and then at last you got a printout, which was your output.”

The personal computer revolution was already pretty far along in the US by the time Wozniak managed to get his hands on an Amstrad PC 1512, imported through quasi-legal means from Hamburg, Germany. With this he was able to make another major advance in SuperMemo — computing the difficulty of any fact or study item and adjusting the unique shape of the predicted forgetting curve for every item and user. A friend of Wozniak’s adapted his software to run on Atari machines, and as access to personal computers finally spread among students, so did SuperMemo.
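The scheduling rule Wozniak published from this era, known as SM-2, is compact enough to sketch in full. Each item carries an “easiness factor” encoding its difficulty, and intervals grow geometrically for well-remembered items. The version below follows the commonly circulated description of SM-2; later SuperMemo algorithms, including the per-item forgetting curves mentioned above, are considerably more elaborate:

```python
# Sketch of the classic SM-2 scheduling rule, the early published SuperMemo
# algorithm; later SuperMemo versions are considerably more elaborate.
def sm2_update(interval, repetition, easiness, quality):
    """quality is a 0-5 self-grade of recall; returns the new item state."""
    if quality < 3:                      # forgotten: restart the item
        return 1, 1, easiness
    easiness = max(1.3, easiness + 0.1
                   - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    repetition += 1
    if repetition == 1:
        interval = 1
    elif repetition == 2:
        interval = 6
    else:
        interval = round(interval * easiness)
    return interval, repetition, easiness

# A consistently well-remembered item: intervals expand geometrically.
state = (0, 0, 2.5)
intervals = []
for _ in range(5):
    state = sm2_update(*state, quality=5)
    intervals.append(state[0])
# intervals grows as 1, 6, 17, 49, 147 days
```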

After the collapse of Polish communism, Wozniak and some fellow students formed a company, SuperMemo World. By 1995, their program was one of the most successful applications developed by the country’s fledgling software industry, and they were searching for funding that would allow them to relocate to Silicon Valley. That year, at Comdex in Las Vegas, 200,000 people got a look at Sony’s new DVD technology, prototypes of flatscreens, and Wozniak’s SuperMemo, which became the first Polish product shown at the great geek carnival, then at the height of its influence. In Europe, the old communist experiment in human optimization had run its course. Wozniak believed that in a world of open competition, where individuals are rewarded on merit, a scientific tool that accelerated learning would find customers everywhere.

Wozniak’s chief partner in the campaign to reprogram the world’s approach to learning through SuperMemo was Krzysztof Biedalak, who had been his classmate at the University of Technology. The two men used to run 6 miles to a nearby lake for an icy swim. Biedalak agrees with Wozniak that winter swimming is good for mental health. Biedalak also agrees with Wozniak that SuperMemo produces extreme learning. But Biedalak does not agree with Wozniak about everything. “I don’t apply his whole technique,” he says. “In my context, his technique is inapplicable.”

What Biedalak means by Wozniak’s technique is the extension of algorithmic optimization to all dimensions of life. Biedalak is CEO of SuperMemo World, which sells and licenses Wozniak’s invention. Today, SuperMemo World employs just 25 people. The venture capital never came through, and the company never moved to California. About 50,000 copies of SuperMemo were sold in 2006, most for less than $30. Many more are thought to have been pirated.

Biedalak and I meet and talk in a restaurant in downtown Warsaw where the shelves are covered in gingham and the walls are lined with jars of pickled vegetables. He has an intelligent, somewhat hangdog expression, like a young Walter Matthau, and his tone is as measured as Wozniak’s is impulsive. Until I let the information slip, he doesn’t even know the exact location of his partner and friend.

“Piotr would never go out to promote the product, wouldn’t talk to journalists, very rarely agreed to meet with somebody,” Biedalak says. “He was the driving force, but at some point I had to accept that you cannot communicate with him in the way you can with other people.”

The problem wasn’t shyness but the same intolerance for inefficient expenditure of mental resources that led to the invention of SuperMemo in the first place. By the mid-’90s, with SuperMemo growing more and more popular, Wozniak felt that his ability to rationally control his life was slipping away. “There were 80 phone calls per day to handle. There was no time for learning, no time for programming, no time for sleep,” he recalls. In 1994, he disappeared for two weeks, leaving no information about where he was. The next year he was gone for 100 days. Each year, he has increased his time away. He doesn’t own a phone. He ignores his email for months at a time. And though he holds a PhD and has published in academic journals, he never attends conferences or scientific meetings.

Instead, Wozniak has ridden SuperMemo into uncharted regions of self-experimentation. In 1999, he started making a detailed record of his hours of sleep, and now he’s working to correlate that data with his daily performance on study repetitions. Psychologists have long believed there’s a correlation between sleep and memory, but no mathematical law has been discovered. Wozniak has also invented a way to apply his learning system to his intake of unstructured information from books and articles, winnowing written material down to the type of discrete chunks that can be memorized, and then scheduling them for efficient learning. He selects a short section of what he’s reading and copies it into the SuperMemo application, which predicts when he’ll want to read it again so it sticks in his mind. He cuts and pastes completely unread material into the system, assigning it a priority. SuperMemo shuffles all his potential knowledge into a queue and presents it to him on a study screen when the time is right. Wozniak can look at a graph of what he’s got lined up to learn and adjust the priority rankings if his goals change.
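The reading queue Wozniak describes behaves like a priority queue keyed on due date and priority. The fields and ordering below are my own simplification for illustration, not SuperMemo’s actual data model:

```python
import heapq
from dataclasses import dataclass, field

# Toy incremental-reading queue; fields and ordering are an illustrative
# simplification, not SuperMemo's actual data model.
@dataclass(order=True)
class Snippet:
    due_day: int           # when the algorithm wants it seen again
    priority: int          # lower number = more important
    text: str = field(compare=False, default="")

queue = []
heapq.heappush(queue, Snippet(3, 1, "key passage on forgetting curves"))
heapq.heappush(queue, Snippet(1, 5, "unread article, low priority"))
heapq.heappush(queue, Snippet(1, 2, "half-read chapter"))

today = 1
read_today = []
while queue and queue[0].due_day <= today:
    snippet = heapq.heappop(queue)
    read_today.append(snippet.text)
    # ...read, extract what matters, reschedule the remainder further out...
```

Items due today surface in priority order; everything else waits, no matter how recently it arrived.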

These techniques are designed to overcome steep learning curves through automated steps, like stairs on a hill. He calls it incremental reading, and it has come to dominate his intellectual life. Wozniak no longer wastes time worrying that he hasn’t gotten to some article he wants to read; once it’s loaded into the system, he trusts his algorithm to apportion it to his consciousness at the appropriate time.

The appropriate time, that is, for him. Having turned over his mental life to a computerized system, he refuses to be pushed around by random inputs and requests. Naturally, this can be annoying to people whose messages tend to sift to the bottom. “After four months,” Biedalak says sadly, “you sometimes get a reply to some sentence in an email that has been scrambled in his incremental reading process.”

For Wozniak, these misfires were less a product of scrambling than of an inevitable clash of goals. A person who understands the exact relationship between learning and time is forced to measure out his hours with a certain care. SuperMemo was like a genie that granted Wozniak a wish: unprecedented power to remember. But the value of what he remembered depended crucially on what he studied, and what he studied depended on his goals, and the selection of his goals rested upon the efficient acquisition of knowledge, in a regressive function that propelled him relentlessly along the path he had chosen. The guarantee that he would not forget what he learned was both a gift and a demand, requiring him to sacrifice every extraneous thing.

From the business side of SuperMemo, Wozniak’s priorities can sometimes look selfish. Janusz Murakowski, one of Wozniak’s friends who worked as a manager at the company during its infancy, thinks that Wozniak’s focus on his own learning has stunted the development of his invention. “Piotr writes this software for himself,” says Murakowski, now a professor of electrical engineering at the University of Delaware. “The interface is just impossible.” This is perhaps a bit unfair. SuperMemo comes in eight flavors, some of which were coded by licensees: SuperMemo for Windows, for Palm devices, for several cell phones, even an Internet version. It’s true that Wozniak is no Steve Jobs, and his software has none of the viral friendliness of a casual game like Brain Age for Nintendo DS. Still, it can hardly be described as the world’s most difficult program. After all, photographers can learn to produce the most arcane effects in Photoshop. Why shouldn’t more people be able to master SuperMemo?

“It was never a feel-good product,” Murakowski says, and here he may be getting closer to the true conflict that lies at the heart of the struggle to optimize intelligence, a conflict that transcends design and touches on some curious facts about human nature. We are used to the idea that normal humans can perform challenging feats of athleticism. We all know someone who has run a marathon or ridden a bike cross-country. But getting significantly smarter — that seems to be different. We associate intelligence with pure talent, and academic learning with educational experiences dating far back in life. To master a difficult language, to become expert in a technical field, to make a scientific contribution in a new area — these seem like rare things. And so they are, but perhaps not for the reason we assume.

The failure of SuperMemo to transform learning uncannily repeats the earlier failures of cognitive psychology to influence teachers and students. Our capacity to learn is amazingly large. But optimal learning demands a kind of rational control over ourselves that does not come easily. Even the basic demand for regularity can be daunting. If you skip a few days, the spacing effect, with its steady march of sealing knowledge in memory, begins to lose its force. Progress limps. When it comes to increasing intelligence, our brain is up to the task and our technology is up to the task. The problem lies in our temperament.

The Baltic Sea is dark as an unlit mirror. Wozniak and I walk along the shore, passing the wooden snack stands that won’t be open until spring, and he tells me how he manages his life. He’s married, and his wife shares his lifestyle. They swim together in winter, and though Polish is their native language, they communicate in English, which she learned with SuperMemo. Wozniak’s days are blocked into distinct periods: a creative period, a reading and studying period, an exercise period, an eating period, a resting period, and then a second creative period. He doesn’t get up at a regular hour and is passionate against alarm clocks. If excitement over his research leads him to work into the night, he simply shifts to sleeping in the day. When he sits down for a session of incremental reading, he attends to whatever automatically appears on his computer screen, stopping the instant his mind begins to drift or his comprehension falls too low and then moving on to the next item in the queue. SuperMemo graphs a distribution of priorities that he can adjust as he goes. When he encounters a passage that he thinks he’ll need to remember, he marks it; then it goes into a pattern of spaced repetition, and the information it contains will stay in his brain indefinitely.

“Once you get the snippets you need,” Wozniak says, “your books disappear. They gradually evaporate. They have been translated into knowledge.”

As a science fiction fan, I had always assumed that when computers supplemented our intelligence, it would be because we outsourced some of our memory to them. We would ask questions, and our machines would give oracular — or supremely practical — replies. Wozniak has discovered a different route. When he entrusts his mental life to a machine, it is not to throw off the burden of thought but to make his mind more swift. Extreme knowledge is not something for which he programs a computer but for which his computer is programming him.

I’ve already told Wozniak that I am not optimistic about my ability to tame old reading habits in the name of optimized knowledge. Books, for me, are not merely sources of information I might want to load into memory but also subjective companions, almost substitute people, and I don’t see why I would want to hold on to them in fragments. Still, I tell him I would like to give it a shot.

“So you believe in trying things for yourself?” he asks.

I tell him yes.

This provides his opening. “In that case, let’s go swimming.”

At the edge of the sea, I become afraid. I’m a strong swimmer, but there’s something about standing on the beach in the type of minuscule bathing suit you get at the gift shop of a discount resort in Eastern Europe, and watching people stride past in their down parkas, that smacks of danger.

“I’m already happy with anticipation,” Wozniak says.

“Will I have a heart attack?”

“There is less risk than on your drive here,” he answers.

I realize he must be correct. Poland has few freeways, and in the rural north, lines of cars jockey behind communist-era farm machinery until they defy the odds and try to pass. There are spectacular wrecks. Wozniak gives close attention to the quantitative estimate of fatal risks. By graphing the acquisition of knowledge in SuperMemo, he has realized that in a single lifetime one can acquire only a few million new items. This is the absolute limit on intellectual achievement defined by death. So he guards his health. He rarely gets in a car. The Germans on the beach are staring at me. I dive in.

Philosopher William James once wrote that mental life is controlled by noticing. Climbing out of the sea and onto the windy beach, my skin purple and my mind in a reverie provoked by shock, I find myself thinking of a checklist Wozniak wrote a few years ago describing how to become a genius. His advice was straightforward yet strangely terrible: You must clarify your goals, gain knowledge through spaced repetition, preserve health, work steadily, minimize stress, refuse interruption, and never resist sleep when tired. This should lead to radically improved intelligence and creativity. The only cost: turning your back on every convention of social life. It is a severe prescription. And yet now, as I grin broadly and wave to the gawkers, it occurs to me that the cold rationality of his approach may be only a surface feature and that, when linked to genuine rewards, even the chilliest of systems can have a certain visceral appeal. By projecting the achievement of extreme memory back along the forgetting curve, by provably linking the distant future — when we will know so much — to the few minutes we devote to studying today, Wozniak has found a way to condition his temperament along with his memory. He is making the future noticeable. He is trying not just to learn many things but to warm the process of learning itself with a draft of utopian ecstasy.

[First published in Wired 16.05, 4.21.08]

The Church of the Non-Believers – Richard Dawkins, Sam Harris, and Daniel Dennett


It’s a question you may prefer not to be asked. But I’m afraid I have no choice. We find ourselves, this very autumn, three and a half centuries after the intellectual martyrdom of Galileo, caught up in a struggle of ultimate importance, when each one of us must make a commitment. It is time to declare our position.

This is the challenge posed by the New Atheists. We are called upon, we lax agnostics, we noncommittal nonbelievers, we vague deists who would be embarrassed to defend antique absurdities like the Virgin Birth or the notion that Mary rose into heaven without dying, or any other blatant myth; we are called out, we fence-sitters, and told to help exorcise this debilitating curse: the curse of faith.

The New Atheists will not let us off the hook simply because we are not doctrinaire believers. They condemn not just belief in God but respect for belief in God. Religion is not only wrong; it’s evil. Now that the battle has been joined, there’s no excuse for shirking.

Three writers have sounded this call to arms. They are Richard Dawkins, Sam Harris, and Daniel Dennett. A few months ago, I set out to talk with them. I wanted to find out what it would mean to enlist in the war against faith.

OXFORD IS THE CAPITAL of reason, its Jerusalem. The walls glint gold in the late afternoon, as waves or particles of light scatter off the ancient bricks. Logic Lane, a tiny road under a low, right-angled bridge, cuts sharply across to the place where Robert Boyle formulated his law on gases and Robert Hooke first used a microscope to see a living cell. A few steps away is the memorial to Percy Bysshe Shelley. Here he lies, sculpted naked in stone, behind the walls of the university that expelled him almost 200 years ago – for atheism.

Richard Dawkins, the leading light of the New Atheism movement, lives and works in a large brick house just 20 minutes away from the Shelley memorial. Dawkins, formerly a fellow at New College, is the Charles Simonyi Professor of the Public Understanding of Science. He is 65 years old, and the book that made him famous, The Selfish Gene, dates from well back in the last century. The opposition it earned from rival theorizers and popularizers of Charles Darwin, such as Stephen Jay Gould, is fading into history. Gould died in 2002, and Dawkins, while acknowledging their battles, praised his influence on scientific culture. They were allies in the battle against creationism. Dawkins, however, has been far more belligerent in counterattack. His most recent book is called The God Delusion.

Dawkins’ style of debate is as maddening as it is reasonable. A few months earlier, in front of an audience of graduate students from around the world, Dawkins took on a famous geneticist and a renowned neurosurgeon on the question of whether God was real. The geneticist and the neurosurgeon advanced their best theistic arguments: Human consciousness is too remarkable to have evolved; our moral sense defies the selfish imperatives of nature; the laws of science themselves display an order divine; the existence of God can never be disproved by purely empirical means.

Dawkins rejected all these claims, but the last one – that science could never disprove God – provoked him to sarcasm. “There’s an infinite number of things that we can’t disprove,” he said. “You might say that because science can explain just about everything but not quite, it’s wrong to say therefore we don’t need God. It is also, I suppose, wrong to say we don’t need the Flying Spaghetti Monster, unicorns, Thor, Wotan, Jupiter, or fairies at the bottom of the garden. There’s an infinite number of things that some people at one time or another have believed in, and an infinite number of things that nobody has believed in. If there’s not the slightest reason to believe in any of those things, why bother? The onus is on somebody who says, I want to believe in God, Flying Spaghetti Monster, fairies, or whatever it is. It is not up to us to disprove it.”

Science, after all, is an empirical endeavor that traffics in probabilities. The probability of God, Dawkins says, while not zero, is vanishingly small. He is confident that no Flying Spaghetti Monster exists. Why should the notion of some deity that we inherited from the Bronze Age get more respectful treatment?

Dawkins has been talking this way for years, and his best comebacks are decades old. For instance, the Flying Spaghetti Monster is a variant of the tiny orbiting teapot used by Bertrand Russell for similar rhetorical duty back in 1952. Dawkins is perfectly aware that atheism is an ancient doctrine and that little of what he has to say is likely to change the terms of this stereotyped debate. But he continues to go at it. His true interlocutors are not the Christians he confronts directly but the wavering nonbelievers or quasi believers among his listeners – people like me, potential New Atheists who might be inspired by his example.

“I’m quite keen on the politics of persuading people of the virtues of atheism,” Dawkins says, after we get settled in one of the high-ceilinged, ground-floor rooms. He asks me to keep an eye on his bike, which sits just behind him, on the other side of a window overlooking the street. “The number of nonreligious people in the US is something nearer to 30 million than 20 million,” he says. “That’s more than all the Jews in the world put together. I think we’re in the same position the gay movement was in a few decades ago. There was a need for people to come out. The more people who came out, the more people had the courage to come out. I think that’s the case with atheists. They are more numerous than anybody realizes.”

Dawkins looks forward to the day when the first US politician is honest about being an atheist. “Highly intelligent people are mostly atheists,” he says. “Not a single member of either house of Congress admits to being an atheist. It just doesn’t add up. Either they’re stupid, or they’re lying. And have they got a motive for lying? Of course they’ve got a motive! Everybody knows that an atheist can’t get elected.”

When atheists finally begin to gain some power, what then? Here is where Dawkins’ analogy breaks down. Gay politics is strictly civil rights: Live and let live. But the atheist movement, by his lights, has no choice but to aggressively spread the good news. Evangelism is a moral imperative. Dawkins does not merely disagree with religious myths. He disagrees with tolerating them, with cooperating in their colonization of the brains of innocent tykes.

“How much do we regard children as being the property of their parents?” Dawkins asks. “It’s one thing to say people should be free to believe whatever they like, but should they be free to impose their beliefs on their children? Is there something to be said for society stepping in? What about bringing up children to believe manifest falsehoods?”

Dawkins is the inventor of the concept of the meme, that is, a cultural replicator that spreads from brain to brain, like a virus. Dawkins is also a believer in democracy. He understands perfectly well that there are practical constraints on controlling the spread of bad memes. If the solution to the spread of wrong ideas and contagious superstitions is a totalitarian commissariat that would silence believers, then the cure is worse than the disease. But such constraints are no excuse for the weak-minded pretense that religious viruses are trivial, much less benign. Bad ideas foisted on children are moral wrongs. We should think harder about how to stop them.

It is exactly this trip down Logic Lane, this conscientious deduction of conclusions from premises, that makes Dawkins’ proclamations a torment to his moderate allies. While frontline warriors against creationism are busy reassuring parents and legislators that teaching Darwin’s theory does not undermine the possibility of religious devotion, Dawkins is openly agreeing with the most stubborn fundamentalists that evolution must lead to atheism. I tell Dawkins what he already knows: He is making life harder for his friends.

He barely shrugs. “Well, it’s a cogent point, and I have to face that. My answer is that the big war is not between evolution and creationism, but between naturalism and supernaturalism. The sensible” – and here he pauses to indicate that sensible should be in quotes – “the ‘sensible’ religious people are really on the side of the fundamentalists, because they believe in supernaturalism. That puts me on the other side.”

THREE YEARS AGO, Dawkins adopted a new word to demarcate the types of things he couldn’t believe in. The word is bright, a noun. Coined by Sacramento, California, educators Paul Geisert and Mynga Futrell to designate a person with a naturalistic worldview, bright was designed to be broader than the atheist movement; it is not merely God that is untenable, but superstition, credulity, and magical thinking in general. Dawkins happened to be present in the spring of 2003 when Geisert and Futrell unveiled their proposal at an atheist conference in Florida, and he subsequently issued a public call in The Guardian and in Wired urging its use.

The monthly Brights meetup in London is among the largest. The main organizer, Glen Slade, is a 41-year-old entrepreneur who studied computer science at the University of Cambridge and management at INSEAD, Europe’s leading business school. Slade points out that political developments in Europe and the US have created new opportunities for consciousness-raising. “The war on terror wakes people up to the fact that there is more than one religion in the world,” Slade says. “I think we’re at a crucial point, when we admit that certain types of religion are incompatible with certain rights. At what point does society say, ‘Hey, that’s insane’?”

Like Dawkins, Slade rejects those who might once have been his allies: agnostics and liberal believers, the type of people who may go to church but who are skeptical of doctrine. “Moderates give a power base to extremists,” Slade says. “A lot of Catholics use condoms, a lot of Catholics are divorced, and a lot don’t have a particular opinion about whether you are homosexual. But when the Pope stands up and says, ‘This is what Catholics believe,’ he still gets credit for speaking for more than a billion people.”

Now that people are more worried about the fatwas of Muslim clerics, Slade says, this concern could spread, become more general, and wake people up to the damage caused by the Pope.

For the New Atheists, the problem is not any specific doctrine, but religion in general. Or, as Dawkins writes in The God Delusion, “As long as we accept the principle that religious faith must be respected simply because it is religious faith, it is hard to withhold respect from the faith of Osama bin Laden and the suicide bombers.”

The New Atheist insight is that one might start anywhere – with an intellectual argument, with a visceral rejection of Islamic or Christian fundamentalism, with political disgust – and then, by relentless and logical steps, renounce every supernatural crutch.

I RETURN FROM OXFORD enthusiastic for argument. I immediately begin trying out Dawkins’ appeal in polite company. At dinner parties or over drinks, I ask people to declare themselves. “Who here is an atheist?” I ask.

Usually, the first response is silence, accompanied by glances all around in the hope that somebody else will speak first. Then, after a moment, somebody does, almost always a man, almost always with a defiant smile and a tone of enthusiasm. He says happily, “I am!”

But it is the next comment that is telling. Somebody turns to him and says: “You would be.”

“Why?”

“Because you enjoy pissing people off.”

“Well, that’s true.”

This type of conversation takes place not in central Ohio, where I was born, or in Utah, where I was a teenager, but on the West Coast, among technical and scientific people, possibly the social group that is least likely among all Americans to be religious. Most of these people call themselves agnostic, but they don’t harbor much suspicion that God is real. They tell me they reject atheism not out of piety but out of politeness. As one said, “Atheism is like telling somebody, ‘The very thing you hinge your life on, I totally dismiss.'” This is the type of statement she would never want to make.

This is the statement the New Atheists believe must be made – loudly, clearly, and before it’s too late. I continue to invite my friends for a nice, invigorating stroll down Logic Lane. For the most part, they just laugh and wave me on.

AS I TEST OUT the New Atheist arguments, I realize that the problem with logic is that it doesn’t quicken the blood sufficiently – even my own. But if logic by itself won’t do the trick, how about the threat of apocalypse? The apocalyptic argument for atheism is the province of Sam Harris, who released a book two years ago called The End of Faith: Religion, Terror, and the Future of Reason.

Harris argues that, unless we renounce faith, religious violence will soon bring civilization to an end. Between 2004 and 2006, his book sold more than a quarter million copies.

This autumn, Harris has a new book out, Letter to a Christian Nation. In it, he demonstrates the behavior he believes atheists should adopt when talking with Christians. “Nonbelievers like myself stand beside you,” he writes, addressing his imaginary opponent, “dumbstruck by the Muslim hordes who chant death to whole nations of the living. But we stand dumbstruck by you as well – by your denial of tangible reality, by the suffering you create in service to your religious myths, and by your attachment to an imaginary God.”

In midsummer, Harris and I overlap for a few days in Southern California, so we arrange to meet for lunch. I am not looking for more atheist arguments. I am already steeped in them. I have by now read my David Hume, my Bertrand Russell, even my Shelley. I want to talk to Harris about emotion, about politics, about his conviction that the days of civilization are numbered unless we renounce irrational belief. Given the way things are going, I want to know if he is depressed. Is he preparing for the end?

He is not. “Look at slavery,” he says. We are at a beautiful restaurant in Santa Monica, near the public lots from which Americans – nearly 80 percent of whom believe the Bible is the true word of God, if polls are correct – walk happily down to the beach in various states of undress. “People used to think,” Harris says, “that slavery was morally acceptable. The most intelligent, sophisticated people used to accept that you could kidnap whole families, force them to work for you, and sell their children. That looks ridiculous to us today. We’re going to look back and be amazed that we approached this asymptote of destructive capacity while allowing ourselves to be balkanized by fantasy. What seems quixotic is quixotic – on this side of a radical change. From the other side, you can’t believe it didn’t happen earlier. At some point, there is going to be enough pressure that it is just going to be too embarrassing to believe in God.”

Suddenly I notice in myself a protective feeling toward Harris. Here is a man who believes that a great global change, perhaps the most important cultural change in the history of humanity, will occur out of sheer intellectual embarrassment.

We discuss what it might look like, this world without God. “There would be a religion of reason,” Harris says. “We would have realized the rational means to maximize human happiness. We may all agree that we want to have a Sabbath that we take really seriously – a lot more seriously than most religious people take it. But it would be a rational decision, and it would not be just because it’s in the Bible. We would be able to invoke the power of poetry and ritual and silent contemplation and all the variables of happiness so that we could exploit them. Call it prayer, but we would have prayer without bullshit.”

I do call it prayer. Here is the atheist prayer: that our reason will subjugate our superstition, that our intelligence will check our illusions, that we will be able to hold at bay the evil temptation of faith.

THAT WEEK in Los Angeles it is very hot. Temperatures in the San Fernando Valley, where I’m staying, set a record of 119 degrees. Intermittent power outages kill the lights, and the region is bathed in an old-fashioned brown smog that blurs the outlines of the trees. In the evening, as it cools to 102, I decide to enter the emplacements of the adversary.

I am headed for the Angelus Temple, in Echo Park. A landmark of modern Christianity, it is one of the original churches of the surging charismatic movement. It is not the richest church, nor the most powerful, nor the most famous. But Angelus, founded by Aimee Semple McPherson in the 1920s, pioneered that combination of high production values and uplifting theology that began to purge the stain of hickdom from evangelical faith. Aside from being a historical shrine, the Angelus Temple is a case study in religious evolution. While the New Atheists are arming themselves against faith, faith itself renews its arms. Superstition, it turns out, is a moving target.

In 2001, a merger with a thriving church downtown, run by the young son of a powerful pastor in Phoenix, brought renewal – not merely in the form of massive social outreach and volunteer programs, youth events, and Bible study groups, but also, as the church explains on its Web site, in the form of “new cushioned theater seats, Ferrari-red carpet, modern stainless steel fixtures, and acoustical absorbers hung decoratively from the ceiling similar to the Royal Albert Hall in England.”

It is Saturday night, and I am greeted at the door by a blast of air-conditioning and a wave of sound. It looks like a rock concert. It is a rock concert. More than 500 teenagers are crowding the stage, hands uplifted, singing along. There is a 12-member band, four huge videoscreens, and a crane that allows the camera to swoop through the air, projecting images of the believers back to themselves.

“How many people are excited to give to the Lord tonight?” asks a young man who saunters up to the front. He handles his microphone naturally; he is not self-conscious. “How many people are pumped up? You have a destiny. God has a plan. But you have got to sow some seeds tonight, or it is never going to happen.” Text flashes across the overhead screens, telling the teenagers how to make out their checks.

Behind the lighting rigs and the acoustic panels, stained glass peeks out, a relic of McPherson’s era. McPherson was personally wild and doctrinally flexible. She had visions and spoke in tongues, but she tried to put aside sectarian disputes. Even today, the charismatic movement is somewhat careless of doctrine. There is room for theistic evolutionists, for nonliteralists who hold that each of God’s days in Genesis was the equivalent of a geological epoch, even for the notion that a check made out properly to the Lord can influence divine whim in the matter of a raise at work or a scholarship to college. Of course, evolutionary accommodation is controversial in the seminaries, and the idea of bribing God is rank heresy – no trained theologian in any Christian tradition would endorse it. But such deviations are generously tolerated in practice. The forces at work in a living church have little to do with intellectual disputes over the meaning of the Lord’s word. Having agreed that the Bible is inerrant, one is permitted to put it to use.

This use is supremely practical. Pastor Matthew Barnett, onstage, wears the uniform of America – jeans with loafers, a short-sleeved knit shirt. It’s one of the costumes Kanye West wore on his Touch the Sky Tour, the same costume kids put on to go fold clothes at the mall. Like Kanye, like the kids at the mall, like millions of sober alcoholics, like Jesus, Pastor Matt – as he’s called – does not traffic in proofs. Instead he tells stories. For instance, Pastor Matt used to be fat. Every night at 10 pm, it was off to an orgy of junk food at Jack in the Box. Two monster tacos, curly fries, a chocolate shake. He was programmed. He was helpless. He could not resist. “The devil is a lion seeking whom he may devour,” Pastor Matt says. On the other hand, strength to resist temptation is an explicit promise from the Lord. Let us read from 1 Corinthians: God is faithful, who will not suffer you to be tempted above that ye are able; but will with the temptation also make a way to escape, that ye may be able to bear it.

Anybody who has ever been a teenager will recognize the relevance of Pastor Matt’s sermon. These are the years of confusion, temptation, struggles with self-control. Pastor Matt openly shares with the teenagers the great humiliation he faced when trying to lose weight. The pastor is trim and handsome now. He talks intimately with the teenagers about food, about sex, about drugs. He boosts them up. He helps them cope with their shame. He tells them that they are kings anointed by God, that they simply need to pray, and have faith, and be honest, and express their vulnerability, and work hard, and if they do these things they are guaranteed their reward.

When he calls them to the stage, hundreds go. He puts his hands on their heads, and some cry. The altar call is a moving spectacle, and even we adults, we readers of Dawkins and Harris, we practiced reasoners and sincere pilgrims on the path of nonbelief, may find something in it that makes sense. Notwithstanding the banality of the doctrine, its canned anecdotes, and its questionable fundraising, Pastor Matthew offers a gift to his flock. They sow their seeds, and he blesses them. It is a direct exchange.

THE NEXT MORNING, I seek to cleanse my intellectual conscience among the freethinkers. The Center for Inquiry is also a storied landmark. True, it is not as striking as the Angelus Temple, being only a bland, low structure at the far end of Hollywood Boulevard, miles away from the tourists. But this building is the West Coast branch of one of the greatest anti-supernatural organizations in the world. My favorite thing about the Center for Inquiry is that it is affiliated with the Committee for the Scientific Investigation of Claims of the Paranormal, founded 30 years ago by Isaac Asimov, Paul Kurtz, and Carl Sagan and dedicated to spreading misery among every species of quack.

I have become a connoisseur of atheist groups – there are scores of them, mostly local, linked into a few larger networks. There are some tensions, as is normal in the claustrophobia of powerless subcultures, but relations among the different branches of the movement are mostly friendly. Typical atheists are hardly the rabble-rousing evangelists that Dawkins or Harris might like. They are an older, peaceable, quietly frustrated lot, who meet partly out of idealism and partly out of loneliness. Here in Los Angeles, every fourth Sunday at 11 am, there is a meeting of Atheists United. More than 50 people have shown up today, which is a very good turnout for atheism. Many are approaching retirement age. The speaker this morning, a younger activist named Clark Adams, encourages them with the idea that their numbers are growing. Look at South Park, Adams urges. Look at Howard Stern. Look at Penn & Teller. These are signs of an infidel upsurge.

Still, Adams admits some marketing concerns. Atheists are predominant among the “upper 5 percent,” he says. “Where we’re lagging is among the lower 95 percent.”

This is a true problem, and it goes beyond the difficulty of selling your ideas among those to whom you so openly condescend. The sociologist Rodney Stark has argued that the rise and fall of religions can be understood in economic terms. Believers sacrifice time and money in exchange for both spiritual and material benefits. In other words, religion is rational, but it is governed by the rationality of trade rather than of argument. Stark’s theory is academically controversial, but here, in the Sunday morning meeting of Atheists United, it seems obvious that the narrow reasonableness of Adams can hardly compete with the deal on offer at the Angelus Temple.

“We’re lagging among the lower 95 percent,” says Adams.

“You are kings anointed by God,” says Pastor Matt.

As the tide of faith rises, atheists, who have no church to buoy them, cling to one another. That a single celebrity, say, Keanu Reeves, is known to care nothing about God is counted as a victory. This parochial and moralistic self-regard begins to inspire in me a feeling of oppression. When Adams starts to recite the names of atheists who may have contributed to the television program Mr. Show With Bob and David between 1995 and 1998, I leave. Standing in the half-empty parking lot is a relief, though I am drenched from the heat.

MY PILGRIMAGE is about to become more difficult. On the one hand, it is obvious that the political prospects of the New Atheism are slight. People see a contradiction in its tone of certainty. Contemptuous of the faith of others, its proponents never doubt their own belief. They are fundamentalists. I hear this protest dozens of times. It comes up in every conversation. Even those who might side with the New Atheists are repelled by their strident tone. (The founders of the Brights, Geisert and Futrell, became grim at the mention of Sam Harris. “We don’t endorse anything from him,” Geisert said. We had talked for nearly three hours, and this was the only dark cloud.) The New Atheists never propose realistic solutions to the damage religion can cause. For instance, the Catholic Church opposes condom use, which makes it complicit in the spread of AIDS. But among the most powerful voices against this tragic mistake are liberals within the Church – exactly those allies the New Atheists reject. The New Atheists care mainly about correct belief. This makes them hopeless, politically.

But on the other hand, the New Atheism does not aim at success by conventional political means. It does not balance interests, it does not make compromises, it does not seek common ground. The New Atheism, outwardly at least, is a straightforward appeal to our intellect. Atheists make their stand upon the truth.

So is atheism true?

There’s good evidence from research by anthropologists such as Pascal Boyer and Scott Atran that a grab bag of cognitive predispositions makes us natural believers. We hear leaves rustle and we imagine that some airy being flutters up there; we see a corpse and continue to fear the judgment and influence of the person it once was. Remarkable progress has been made in understanding why faith is congenial to human nature – and of course that still says nothing about whether it is true. Harris is typically severe in his rejection of the idea that evolutionary history somehow justifies faith. There is, he writes, “nothing more natural than rape. But no one would argue that rape is good, or compatible with a civil society, because it may have had evolutionary advantages for our ancestors.” Like rape, Harris says, religion may be a vestige of our primitive nature that we must simply overcome.

A variety of rebuttals to atheism have been tried over the years. Religious fundamentalists stand on their canonized texts and refuse to budge. The wisdom of this approach – strategically, at least – is evident when you see the awkward positions nonfundamentalists find themselves in. The most active defender of faith among scientists right now is Francis Collins, head of the Human Genome Project. His most recent book is called The Language of God: A Scientist Presents Evidence for Belief. In defiance of the title, Collins never attempts to show that science offers evidence for belief. Rather, he argues only that nothing in science prohibits belief. Unsolved problems in diverse fields, along with a skepticism about knowledge in general, are used to demonstrate that a deity might not be impossible. The problem with this, for defenders of faith, is that they’ve implicitly accepted science as the arbiter of what is real. This leaves the atheists with the upper hand.

That’s because when secular investigations take the lead, sacred doctrines collapse. There’s barely a field of modern research – cosmology, biology, archaeology, anthropology, psychology – in which competing religious explanations have survived unscathed. Even the lowly humanities, which began the demolition job more than 200 years ago with textual criticism of the Bible, continue to make things difficult for believers through careful analysis of the historical origins of religious texts. While Collins and his fellow reconcilers can defend the notion of faith in the abstract, as soon as they get down to doctrine, the secular professors show up with their corrosive arguments. When it comes to concrete examples of exactly what we should believe, reason is a slippery slope, and at the bottom – well, at the bottom is atheism.

I spend months resisting this slide. I turn to the great Oxford professor of science and religion John Hedley Brooke, who convinces me that, contrary to myth, Darwin did not become an atheist because of evolution. Instead, his growing resistance to Christianity came from his moral criticism of 19th-century doctrine, compounded by the tragedy of his daughter’s death. Darwin did not believe that evolution proved there was no God. This is interesting, because the story of Darwin’s relationship to Christianity has figured in polemics for and against evolution for more than a century. But in the context of a real struggle with the claims of atheism, an accurate history of Darwin’s loss of faith counts for little more than celebrity gossip.

From Brooke, I get pointers on the state of the art in academic theology, particularly those philosophers of religion who write in depth about science, such as Willem Drees and Philip Clayton. There is a certain illicit satisfaction in this scholarly work, which to an atheist is no better than astrology. (“The entire thrust of my position is that Christian theology is a nonsubject,” Dawkins has written. “Vacuous. Devoid of coherence or content.”) On the contrary, I find the best of these books to be brilliant, detailed, self-assured. I learn about kenosis, the deliberate decision of God not to disturb the natural order. I learn about panentheism, which says God is both the world and more than the world, and about emergentist theology, which holds that a God might have evolved. There are deep passages surveying theories of knowledge, glossing Kant, Schelling, and Spinoza. I discover a daunting diversity of belief, and of course I’m just beginning. I haven’t even gotten started with Islam, or the Vedic texts, or Zoroastrianism. It is all admirable and stimulating and lacks only the real help anybody in my position would need: reasons to believe that specific religious ideas are true. Even the most careful theologians seem to pose the question backward, starting out with their beliefs and clinging to those fragments that science and logic cannot overturn. The most rigorous of them jettison huge portions of doctrine along the way.

If trained theologians can go this far, who am I to defend supernaturalism on their behalf? Why not be an atheist? I’ve sought aid far and wide, from Echo Park to Harvard, and finally I am almost ready to give in. Only one thing is still bothering me. Were I to declare myself an atheist, what would this mean? Would my life have to change? Would it become my moral obligation to be uncompromising toward fence-sitting friends? That person at dinner, pissing people off with his arrogance, his disrespect, his intellectual scorn – would that be me?

Besides, do we really understand all that religion means? Would it be easy to excise it, even assuming it is false? Didn’t they try a cult of reason once, in France, at the close of the 18th century, and didn’t it turn out to be too ugly even for Robespierre?

THE DOCTOR for these difficulties looks like Santa Claus. His name is Daniel Dennett. He is a renowned philosopher, an atheist, and the possessor of a full white beard. I suspect he must have designed this Father Christmas look intentionally, but in fact it just evolved. “In the ’60s, I looked like Rasputin,” he says. Children have come up to him in airports, checking to see if he is on vacation from the North Pole. When it happens, he does not torment them with knowledge that the person they mistake him for is not real. Instead, the philosopher puts his fingers to his lips and says conspiratorially: “Shhhh.”

Dennett summers on a farm in Maine. Flying in, I have a fine view of the old New England tapestry, which grows more and more rural as we move north: symmetrical fields with pale borders like the membranes of cells, barns and outbuildings like organelles, and, at the center of every thickening cluster of life, always the same vestigial structure, whose black dot of a cupola is offset by a whitish gleam. I know something of the history of the New England church, which began in fanaticism and ended in reform – from witch burning to softest Presbyterianism in a few hundred years. Now, according to the atheists, these structures serve no useful purpose, and besides, they may be conduits for disease. Perhaps it is best that we do away with them all. But can it be done without harm?

Among the New Atheists, Dennett holds an exalted but ambiguous place. Like Dawkins and Harris, he is an evangelizing nonbeliever. He has campaigned in writing on behalf of the Brights and has written a book called Breaking the Spell: Religion as a Natural Phenomenon. In it, the blasting rhetoric of Dawkins and Harris is absent, replaced by provocative, often humorous examples and thought experiments. But like the other New Atheists, Dennett gives no quarter to believers who resist subjecting their faith to scientific evaluation. In fact, he argues that neutral, scientifically informed education about every religion in the world should be mandatory in school. After all, he argues, “if you have to hoodwink – or blindfold – your children to ensure that they confirm their faith when they are adults, your faith ought to go extinct.”

When I arrive at the farm, I find him in the midst of a difficult task. He has been asked by the President’s Council on Bioethics to write an essay reflecting on human dignity. In grappling with these issues, Dennett knows that he can’t rely on faith or scripture. He will not say that life begins when an embryo is ensouled by God. He will not say that hospitals must not invite the indigent to sell their bodies for medical experiments because humans are endowed by their creator with inalienable rights. Ethical problems must be solved by reason, not arbitrary rules. And yet, on the other hand, Dennett knows that reason alone will fail.

We sit in his study, in some creaky chairs, with the deep silence of an August morning around us, and Dennett tells me that he takes very seriously the risk of overreliance on thought. He doesn’t want people to lose confidence in what he calls their “default settings,” by which he means the conviction that their ethical intuitions are trustworthy. These default settings give us a feeling of security, a belief that our own sacrifices will be reciprocated. “If you shatter this confidence,” he says, “then you get into a deep hole. Without trust, everything goes wrong.”

It interests me that, though Dennett is an atheist, he does not see faith merely as a useless vestige of our primitive nature, something we can, with effort, intellectualize away. No rational creature, he says, would be able to do without unexamined, sacred things.

“Would intelligent robots be religious?” it occurs to me to ask.

“Perhaps they would,” he answers thoughtfully. “Although, if they were intelligent enough to evaluate their own programming, they would eventually question their belief in God.”

Dennett is an advocate of admitting that we simply don’t have good reasons for some of the things we believe. Although we must guard our defaults, we still have to admit that they may be somewhat arbitrary. “How else do we protect ourselves?” he asks. “With absolutisms? This means telling lies, and when the lies are exposed, the crash is worse. It’s not that science can discover when the body is ensouled. That’s nonsense. We are not going to tolerate infanticide. But we’re not going to put people in jail for onanism. Instead of protecting stability with a brittle set of myths, we can defend a deep resistance to mucking with the boundaries.”

This sounds to me a little like the religion of reason that Harris foresees.

“Yes, there could be a rational religion,” Dennett says. “We could have a rational policy not even to think about certain things.” He understands that this would create constant tension between prohibition and curiosity. But the borders of our sacred beliefs could be well guarded simply by acknowledging that it is pragmatic to refuse to change them.

I ask Dennett if there might not be a contradiction in his scheme. On the one hand, he aggressively confronts the faithful, attacking their sacred beliefs. On the other hand, he proposes that our inherited defaults be put outside the limits of dispute. But this would make our defaults into a religion, unimpeachable and implacable gods. And besides, are we not atheists? Sacred prohibitions are anathema to us.

Dennett replies that exceptions can be made. “Philosophers are the ones who refuse to accept the sacred values,” he says. For instance, Socrates.

I find this answer supremely odd. The image of an atheist religion whose sacred objects, called defaults, are taboo for all except philosophers – this is the material of the cruelest parody. But that’s not what Dennett means. In his scenario, the philosophers are not revered authorities but mental risk-takers and scouts. Their adventures invite ridicule, or worse. “Philosophers should expect to be hooted at and reviled,” Dennett says. “Socrates drank the hemlock. He knew what he was doing.”

With this, I begin to understand what kind of atheist I want to be. Dennett’s invocation of Socrates is a reminder that there are certain actors in history who change the world by staging their own defeat. Having been raised under Christianity, we are well schooled in this tactic of belated victory. The world has reversed its judgment on Socrates, as on Jesus and the fanatical John Brown. All critics of fundamental values, even those who have no magical beliefs, will find themselves tempted to retrace this path. Dawkins’ tense rhetoric of moral choice, Harris’ vision of apocalypse, their contempt for liberals, the invocation of slavery – this is not the language of intellectual debate but of prophecy.

In Breaking the Spell, Dennett writes about the personal risk inherent in attacking faith. Harris veils his academic affiliation and hometown because he fears for his physical safety. But in truth, the cultural neighborhoods where they live and work bear little resemblance to Italy under Pope Urban VIII, or New England in the 17th century, or Saudi Arabia today. Dennett spends the academic year at Tufts University and summers with family and students in Maine. Dawkins occupies an endowed Oxford chair and walks his dog on the wide streets, alone. Harris sails forward this fall with his second well-publicized book. There have been no fatwas, no prison cells, no gallows or crosses.

Prophecy, I’ve come to realize, is a complex meme. When prophets provoke real trouble, bring confusion to society by sowing reverberant doubts, spark an active, opposing consensus everywhere – that is the sign they’ve hit a nerve. But what happens when they don’t hit a nerve? There are plenty of would-be prophets in the world, vainly peddling their provocative claims. Most of them just end up lecturing to undergraduates, or leading little Christian sects, or getting into Wikipedia edit wars, or boring their friends. An unsuccessful prophet is not a martyr, but a sort of clown.

Where does this leave us, we who have been called upon to join this uncompromising war against faith? What shall we do, we potential enlistees? Myself, I’ve decided to refuse the call. The irony of the New Atheism – this prophetic attack on prophecy, this extremism in opposition to extremism – is too much for me.

The New Atheists have castigated fundamentalism and branded even the mildest religious liberals as enablers of a vengeful mob. Everybody who does not join them is an ally of the Taliban. But, so far, their provocation has failed to take hold. Given all the religious trauma in the world, I take this as good news. Even those of us who sympathize intellectually have good reasons to wish that the New Atheists continue to seem absurd. If we reject their polemics, if we continue to have respectful conversations even about things we find ridiculous, this doesn’t necessarily mean we’ve lost our convictions or our sanity. It simply reflects our deepest, democratic values. Or, you might say, our bedrock faith: the faith that no matter how confident we are in our beliefs, there’s always a chance we could turn out to be wrong.

Wired, Issue 14.11, November 2006

Reinventing 911

It’s another dangerous day in America. Bird flu is spreading, the North Koreans have a nuclear bomb, and Osama bin Laden is still at large. The federal security threat-warning system points to “elevated.” Citizens nationwide have been told to be extra vigilant against new terror attacks.

Meanwhile, in the midsize city of Portland, Oregon, the authorities have other things on their minds. A little before 6 pm on this ordinary Saturday evening, there is a hit-and-run in the city’s western suburbs. A moment later, a silent alarm goes off in a building near downtown. At 6:03, there’s trouble with a drunk on the north side, and at almost the same time there’s a report of a disturbance at a Home Depot. Three quiet minutes go by, and then at 6:07 comes news of another hit-and-run.

From a room on the 10th floor of the old Heathman Hotel downtown, I follow the action as it scrolls across the screen of my laptop, little exclamation points popping up on a detailed satellite photo of the town. Each alert is attached to a short bit of text. I can zoom out, watching multiple traumas light up across the whole metropolitan area of 1.7 million people, or zoom in, finding nearly silent places where nothing that requires attention from the police happens for a long time. The resolution is so good, I can pick out individual buildings.

At 7:38 pm there’s news of a robbery downtown. At 7:53 pm another robbery occurs, across the Willamette River. Between these incidents, there’s a motorist in distress, an audible burglar alarm, and a problem with an “unwanted person” serious enough for the police to dispatch three units.

I stay in front of my map for hours, watching a swift, unceasing flow of local problems. While there is an undeniable voyeuristic appeal to a real-time data feed of break-ins, auto thefts, fisticuffs, and public drunkenness, the true value of this experimental system lies elsewhere. For several months, I’ve been talking with security experts about one of the thorniest problems they face: How can we protect our complex society from massive but unpredictable catastrophes? The homeland security establishment has spent an immeasurable fortune vainly seeking an answer, distributing useless, highly specialized equipment, and toggling its multicolored Homeland Security Advisory System back and forth between yellow, for elevated, and orange, for high. Now I’ve come to take a look at a different set of tools, constructed outside the control of the federal government and based on the notion that the easier it is for me to find out about a loose dog tying up traffic, the safer I am from a terrorist attack.

Art Botterell is a 51-year-old former bureaucrat whose outwardly earnest, well-formulated sentences have just the degree of excessive precision that functions among technical people as sarcasm. At one time, Botterell worked for the State of California, in the Governor’s Office of Emergency Services. But he quit that job in 1995. Today, Botterell is supported by his wife, a teacher, leaving him time to save America.

I first met Botterell earlier this year at a discussion of the book Safe: The Race to Protect Ourselves in a Newly Dangerous World. (Safe has four authors, including Katrina Heron, the former editor in chief of this magazine, and Evan Ratliff, whose story, “Fear, Inc.,” appears in this issue.) He caught my attention because, in an evening of discouraging commentary on the security establishment, he alone expressed optimism. There are enormous public safety resources that remain untapped, Botterell argued. “The focus in homeland security is on the idea of America as an invincible fortress,” he told me later. “Most of the effort goes into prevention, law enforcement, and the military. But those of us in emergency management tend to think, ‘Well, stuff happens. So, what are you going to do about it?'”

In the world of disaster management, here is some of the stuff that happens: Levees burst, power grids go dark, oil tankers run aground, railcars full of toxic chemicals tumble off their tracks, tornadoes sweep houses into the sky. In dealing with such catastrophes, emergency managers have experience in the cascade of consequences: Phone service vanishes, hospitals are jammed, highways slow to a crawl, shelters overflow. No matter how much advance planning may have been done, disaster response becomes an improvisation, and society eventually rights itself through the cumulative effect of many separate acts of intelligence.

Obviously, if you want citizens to improvise intelligently, it is wise to let them know as soon as possible when something goes wrong. Back in 1989, when he was working for the state of California, Botterell started creating an innovative warning system called the Emergency Digital Information Service. Botterell’s system – still in use – aggregates weather alerts, natural disaster information, and other official warnings into a common database, then makes them available through multiple media: pager, email, the Web, and digital radio broadcast. Because EDIS warnings are picked up by television newsrooms, local police, school principals, building management firms – anybody who wants them – the system injects massive redundancy into the public warning system and ensures that any serious news will immediately be bouncing around multiple communication channels.

EDIS was designed to fix two flaws in traditional warnings like tsunami sirens, telephone trees, and old-fashioned broadcast alerts. The first problem is that specialized warning systems are infrequently used, and usually fail under stress. But the second problem is more serious: Humans are encoded with a tendency to pause. When we receive new information that requires urgent action, we hesitate, testing the reality of the news and thinking about what to do. Emergency managers are all too familiar with this feature of human nature. They call it milling.

Milling is rational – and dangerous. Even when a warning is successfully delivered, there are deadly delays before people respond. What are they doing in these minutes, hours, and even days? They are talking to friends and family, watching the news, listening to the radio, calling the police, counting their money, and trying to balance the costs of leaving against the risks of staying. When alerts are given through rarely used pipelines, milling increases. And when the information distributed by hard-pressed government officials is confusing or contradictory, milling increases even more.

During a large disaster, like Hurricane Katrina, warnings get hopelessly jumbled. The truth is that, for warnings to work, it’s not enough for them to be delivered. They must also overcome that human tendency to pause; they must trigger a series of effective actions, mobilizing the informal networks that we depend on in a crisis.

To understand the true nature of warnings, it helps to see them not as single events, like an air-raid siren, but rather as swarms of messages racing through overlapping social networks, like the buzz of gossip. Residents of New Orleans didn’t just need to know a hurricane was coming. They also needed to be informed that floodwaters were threatening to breach the levees, that not all neighborhoods would be inundated, that certain roads would become impassable while alternative evacuation routes would remain open, that buses were available for transport, and that the Superdome was full.

No central authority possessed this information. Knowledge was fragmentary, parceled out among tens of thousands of people on the ground. There was no way to gather all these observations and deliver them to where they were needed. During Hurricane Katrina, public officials from top to bottom found themselves locked within conventional channels, unable to receive, analyze, or redistribute news from outside. In the most egregious example, Homeland Security secretary Michael Chertoff said in a radio interview that he had not heard that people at the New Orleans convention center were without food or water. At that point they’d been stranded two days.

By contrast, in the system Botterell created for California, warnings are sucked up from an array of sources and sent automatically to users throughout the state. Messages are squeezed into a standard format called the Common Alerting Protocol, designed by Botterell in discussion with scores of other disaster experts. CAP gives precise definitions to concepts like proximity, urgency, and certainty. Using CAP, anyone who might respond to an emergency can choose to get warnings for their own neighborhood, for instance, or only the most urgent messages. Alerts can be received by machines, filtered, and passed along. The model is simple and elegant, and because warnings can be tagged with geographical coordinates, users can customize their cell phones, pagers, BlackBerries, or other devices to get only those relevant to their precise locale. The EDIS system proved itself in the 1994 Northridge earthquake, carrying more than 2,000 news releases and media advisories, and it has only grown more robust in the decade since.
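CAP itself is an XML format, but the filtering idea described here — let each recipient choose only the alerts for their own neighborhood, above a given urgency — can be sketched in a few lines. Everything in this sketch (the field names, the coordinates, the urgency ranking) is illustrative shorthand, not the actual CAP schema:

```python
import math

# Hypothetical urgency scale, loosely mirroring the CAP notion of urgency.
URGENCY_RANK = {"Immediate": 3, "Expected": 2, "Future": 1, "Past": 0}

def distance_km(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def relevant_alerts(alerts, home, radius_km, min_urgency="Expected"):
    """Keep only alerts near `home` that meet the urgency threshold."""
    threshold = URGENCY_RANK[min_urgency]
    return [a for a in alerts
            if URGENCY_RANK[a["urgency"]] >= threshold
            and distance_km(a["point"], home) <= radius_km]

# Illustrative alerts: two near downtown Portland, one on the Oregon coast.
alerts = [
    {"event": "Chemical fire", "urgency": "Immediate", "point": (45.52, -122.68)},
    {"event": "Storm watch",   "urgency": "Future",    "point": (45.52, -122.68)},
    {"event": "Tsunami",       "urgency": "Immediate", "point": (44.63, -124.05)},
]

# A user a few blocks from downtown sees only the urgent, nearby alert.
nearby = relevant_alerts(alerts, home=(45.53, -122.67), radius_km=10)
```

The point of the sketch is the one Botterell's design makes: once alerts carry structured urgency and location fields, the filtering can happen at the receiving end, on any device, with no human dispatcher in the loop.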

Anyone who has paid close attention to the evolution of the Internet will recognize the underlying power of the Common Alerting Protocol. Good standards and widespread access, not hardware or software, bring social networks to life. CAP provided the first proven warning standard, but when it comes to participation, California’s EDIS remained strikingly primitive. To this day only certain agencies – like the US Geological Survey, law enforcement and fire departments, and the National Weather Service – are permitted to send out information. This increases trust, but at the expense of scope.

Until recently, CAP was like the markup languages that existed before the invention of the Web – a useful set of technical rules whose potential to change society looked like nothing more than the exaggerated enthusiasm of a few geeks. Open data standards aren’t sexy. You can’t sell them to the government for a pile of cash. And it’s hard to pose in front of them for celebratory photographs.

On May 11, 2005, a small plane took off from an airfield in Pennsylvania, wandered around for a bit, then aimed straight for the Capitol. This was the type of incident the homeland security establishment had been preparing for ever since 9/11. An evacuation began, and reporters caught sight of members of Congress rushing down the steps of the Capitol. Just over half an hour later, the plane was on the ground. As the pilot explained that he was merely lost, rounds of congratulations began to circulate; the government’s quick reaction had proven that new investments in public safety were paying off. Then the DC mayor, Anthony Williams, told reporters that nobody had alerted his administration to the threat until after the all-clear was sounded. There are more than half a million civilians in the District of Columbia. Wasn’t anybody thinking about them?

Washington’s emergency protocols, it turned out, were a jumble after all. And the same is true across the nation. Thousands of vulnerable targets have been identified, but there is no credible plan for protecting them. The reason is simple: Any plan would be inherently incomplete. The possibilities for disruption are too numerous. You could plan forever and still not account for all of them.

The word that security experts use to describe simple threats to complicated systems is asymmetry. As Stephen Jay Gould pointed out in his essay “The Great Asymmetry,” catastrophe is favored by nature. Species diversity increases for millennia, and then an asteroid extinguishes many forms of life; a skyscraper that takes years to build can be destroyed in an hour. The wreck of a city by a hurricane is an example of asymmetry. So is terrorism – the relative ease of destruction is the edge terrorists use to compensate for their small numbers.

On the other hand, software designers have gotten pretty good at increasing resistance to asymmetrical threats. The principles are well known: Use uncomplicated parts, encourage redundancy, and open the system to public examination so flaws can be discovered and fixed before they become catastrophic. The key is not to anticipate every problem, but to create flexible networks that can route around failure. Yet ever since 9/11, the security establishment has gone in the opposite direction, building highly specialized tools, centralizing control, and increasing secrecy.

After the debacle of the errant Cessna, federal officials pointed out that a system to coordinate response to aerial attacks had already been installed. The system, called the Domestic Events Network, involves an always-open conference call. A dedicated speakerphone sits in the DC police headquarters. In this case, a human error had occurred – some idiot hung up the line.

But of course the problem goes deeper than that. Such rarely used systems actually produce idiocy. Who could remain ready to act on a signal that seldom, if ever, comes through? Eventually, people zone out. They stop paying attention. They become idiots.

Real reactions to real threats take an entirely different form. In the case of the Cessna flyover, plenty of citizens knew that there was an evacuation, even those with no special access to government communications. Why? Because as soon as the evacuation of the Capitol began, it was noted by reporters and bystanders. Within minutes, it was on the Internet. Wherever they occur, major threats nearly always trigger instant ripples through electronic networks. Bursts of communication are unleashed as witnesses spread the word.

This is the raw material of warning. The good thing is that the signal is immediate. The bad thing is that it comes with a lot of noise. A formal structure for warnings, like Art Botterell’s Common Alerting Protocol, eases transmission but doesn’t make the information more reliable. We still need a way to analyze the warnings, to sort the raw cries of amazement and confusion, the requests for aid, and the coolly professional descriptions of experts, and assemble these records into a real-time portrait of a bad event. We need a system to boost intelligence everywhere, providing the kind of distributed, networked resistance crucial for surviving asymmetrical attacks. Such work could hardly be performed by machines. Operators would have to take calls from people on the ground, separate out the cranks, dampen the hysteria, and keep a precise record. In theory, all that information could then easily be pushed back out to the public. Such a system would be expensive, difficult to build, and extremely valuable. Fortunately, in most cities, it already exists.

“A 911 call center is a resource of awesome power,” says Carl Simpson, the director of emergency communications for the city of Portland, “because when something goes wrong, everybody dials 911.”

I was talking with Simpson at the entrance to the metropolitan area’s hypermodern Bureau of Emergency Communications. He led me up to the call center, a large, theatrical, open space where dozens of operators were taking incoming emergency calls and dispatching police, fire, or medical response teams.

Being a 911 operator means balancing seemingly contradictory skills. On one hand, operators have to be fanatically precise and well-organized. On the other, they must be able to establish rapport with panicky callers. Operators need excellent spatial memories so that they can keep a map of an ongoing crisis clear in their minds. But they cannot be wedded to an old picture of reality, because the city is constantly changing. It takes more hours to become a fully trusted operator in Simpson’s center than it does to become a licensed helicopter pilot. The washout rate during training is 40 percent.

I spent most of the day listening to calls, hearing how the narratives of people in distress are taken in, rearranged, stripped of irrelevancies, compared to known data (“There’s a parking lot on the north side there, ma’am, is that where you are?”), and coded for urgency. Simpson pointed out that most people think of a 911 call center only in terms of the data coming in. Very few people have considered what would happen if, after collecting all those public cries of alarm, you extracted the essentials, tagged them for easy distribution, then reversed the flow and pumped that information back out.

In 2002, Simpson went to lunch with a Portland businessman named Charles Jennings. A serial entrepreneur, Jennings made his first product 28 years ago; it was a little booklet called Drought Gardening that included a back-cover photo of the author in a full hippie beard. Later, he helped run his wife’s company, which sold pastries at a street market under one of Portland’s downtown bridges. After stints as a newspaper columnist, a comic strip writer, and a film and television producer, Jennings got into the software business, creating three companies in 10 years.

After September 11, Jennings pulled together several large public meetings in Portland to discuss how the local tech community could help out. Counterterrorism expert Richard Clarke appeared at one of them and spoke about one of the biggest but least-glamorous public safety problems: Emergency personnel – police, firefighters, paramedics – can’t share information easily in a crisis. A handful of projects emerged around that time, including a nonprofit founded by Jennings called the Regional Alliances for Infrastructure and Network Security, and a private software firm called Swan Island Networks (Jennings is CEO).

Their goal was to create a system linking public safety agencies. Jennings’ engineers discovered Art Botterell’s CAP standard, in which they saw a lingua franca of emergency communication. They added mapping, messaging, and security features and set out to license the package to public safety agencies for a fee under the name Connect & Protect. But this plan, seemingly straightforward, included a twist that turned out to be a radical breakthrough. The twist was in the very definition of a public safety agency.

What is a public safety agency? Obviously, the police count, but what about, say, hospitals? If hospitals are included, then why not clinics? If clinics, why not schools, senior housing, and neighborhood groups? Connect & Protect was designed to link people who need to share information in a crisis. But this turned out to be a lot of people.

During that lunch in 2002, Jennings pitched Carl Simpson the idea of capturing all the cries of distress pouring into the 911 center and using them to warn the public. He wanted to use Connect & Protect to give his swarm of public agencies a real-time picture of the region’s emergency activity. At first, Simpson was dubious, but a few weeks later, after a visit to a local school, he changed his mind. Simpson had been standing out on the grounds with the principal when a teacher walked up and asked where the kids would be having lunch that day. The principal squinted up at the clouds and said, “Outside.” Simpson, whose job puts him in the middle of a complex, highly effective sensor network, found this style of information gathering unimpressive. “His sole basis for deciding whether to put his kids inside or outside is a glance at the sky?” he told me later. “What if there was a chemical fire nearby? What if the police were combing the neighborhood for a criminal?” Such emergencies are rare, but when they happen the principal ought to know. Simpson called Jennings back and offered access to the 911 data on two conditions. First, there had to be no additional effort on the part of the dispatchers. And second, it had to be offered to the public schools for free.

Jennings’ company automated the process of reformatting the 911 call records into the CAP standard, and he and Simpson started inviting people to sign on. The schools got access, of course. They invited the security officers at the Oregon Zoo to join the network – it gets 1.3 million visitors a year. The county parole officers got access so they could keep an eye on incidents that might lead them to violators. Then they went further. They provided the 911 data to a private property manager responsible for three high-rises on the east side of the Willamette River, and they also gave access to the management of Lloyd Center, Portland’s biggest shopping complex. The public libraries and the county transportation officials and even the dogcatchers got the warnings.

Meanwhile, the evangelists at the nonprofit that Jennings had founded were out peddling the idea that Connect & Protect wasn’t just for receiving alerts, it was for sending them, too. The raw data of warning and public safety didn’t have to come from 911 alone. Almost everyone receiving information could contribute information.

Network effects began to take hold, and by late 2005 recipients of the 911 alerts were sending warnings directly to one another every day. Messages about auto break-ins at the mall went to high-rises across the street, where the security office had 32 guards on staff. Parole officers sent alerts to the schools. On the Oregon coast, hotel managers used Connect & Protect to pass along news of storm threats. During a recent tsunami warning for the West Coast, Connect & Protect beat the beach siren in one coastal town by 24 minutes.

Connect & Protect is now a large conglomeration of overlapping alerts stretching across nine Oregon counties. Each stream of warnings is controlled by the agency that issues it. Fairly strict security features attempt to limit abuse of the warnings: certain categories of calls, such as reports of sexual crimes, are not transmitted publicly; the alerts can’t easily be copied or pasted; anonymity is forbidden.

Despite these controls, Connect & Protect blatantly undermines privacy. Pick up the phone and call 911, and your address flashes across screens around the city – maybe even your neighbor’s. Then again, if you have a real need for help, your neighbor might be just the person you want to know about it.

Like a charcoal rubbing that reveals the pattern of a relief, the spread of Connect & Protect exposes the region’s real security network, a ubiquitous but previously hidden tangle of private and public groups. The lines of authority through which the alerts travel on Connect & Protect do not form a simple pyramid, but extend in a mycelial net that grows thicker in some places, thinner in others. The network copies – but also broadens and blurs – the existing web of governance. Eventually, most people may be touched by such a network, but the origin and route of any message is unpredictable and constantly changing.

Many of the important nodes of this network are run by people like Derek Bliss. The tall, skinny 36-year-old is the regional manager of First Response, the largest private security firm in the Northwest. “Let’s say there’s a high school football game that doesn’t go so well,” Bliss says, noting that he has security contracts with 10 percent of the Portland schools. “Remarks are made, and our guys have to keep people apart. We send out an alert to all the other schools.” Bliss plays no official role in his region’s crisis management bureaucracy. Yet his office takes about 16,000 calls per year. The 15 cars he has on duty, his secure dispatch center equipped with a generator, his contacts with property owners around the city – none of these count as public resources, even though his team would almost certainly be active in any emergency. Nationally, the employees of private security firms like First Response outnumber public law enforcement officers four to one.

The traditional way to tap into such private security firms – and the rest of the unseen resources that might help in a disaster – is by staging elaborate drills. But you can’t drill for every type of threat, and you can’t drill all the time. Everybody has better things to do. Laborious training sessions are forgotten during the long stretches when everything’s fine. That is the true nature of citizens. Even with constant propaganda, it’s impossible to keep us safe by keeping us scared. Weeks, months, and years pass, and we insist on living normally again.

If national safety – the ability to respond to hurricanes, terrorist attacks, earthquakes – depends on the execution of explicit plans, on soldierly obedience, and on showy security drills, then a decentralized security scheme is useless. But if it depends on improvised reactions to unknown threats, that’s a different story. A deeply textured, unmapped system is hard to bring down. A system that encourages improvisation is quick to recover. Ubiquitous networks of warning may constitute our own asymmetrical advantage, and, like the terrorist networks that occasionally carry out spectacular attacks, their power remains obscure until they’re called into action.

Wired, Issue 13.12, December 2005

Steve Jobs Interview – 1996

In the years since this interview appeared, many people have mentioned how unusual it is that Steve Jobs talked at such length about general topics to a reporter he had never met before. Jobs is known to be strategic in his interviews; that is, he does not range widely beyond the topics on his own agenda. What here appears to be very good luck was in fact due largely to the influence of Kevin Kelly, then my editor at Wired. Kevin and I traveled together down to the NeXT offices in Redwood City, and while there were a few moments when Jobs seemed to interpret my questions about his changing world view as personal attacks, Kevin’s calm and innocent presence helped to convert these tense exchanges into a genuine intellectual conversation. It is interesting to look back on his predictions and compare them to the future that unfolded, but perhaps even more interesting to encounter Jobs at this moment of mixed success and failure, before his return to Apple.

Steve Jobs has been right twice. The first time we got Apple. The second time we got NeXT. The Macintosh ruled. NeXT tanked. Still, Jobs was right both times. Although NeXT failed to sell its elegant and infamously buggy black box, Jobs’s fundamental insight – that personal computers were destined to be connected to each other and live on networks – was just as accurate as his earlier prophecy that computers were destined to become personal appliances.

Now Jobs is making a third guess about the future. His passion these days is for objects. Objects are software modules that can be combined into new applications, much as pieces of Lego are built into toy houses. Jobs argues that objects are the key to keeping up with the exponential growth of the World Wide Web. And it’s commerce, he says, that will fuel the next phase of the Web explosion.

On a foggy morning last year, I drove down to the headquarters of NeXT Computer Inc. in Redwood City, California, to meet with Jobs. The building was quiet and immaculate, with that atmosphere of low-slung corporate luxury typical of successful Silicon Valley companies heading into their second decade. Ironically, NeXT is not a success. After burning through hundreds of millions of dollars from investors, the company abandoned the production of computers, focusing instead on the sale and development of its Nextstep operating system and on extensions into object-oriented technology.

Here at NeXT, Jobs was not interested in talking about Pixar Animation Studios, the maker of the world’s first fully computer-generated feature movie, Toy Story. Jobs founded Pixar in 1986 when he bought out a computer division of Lucasfilm Ltd. for US$60 million, and with Pixar’s upcoming public stock offering, he was poised to become a billionaire in a single day. To Jobs, Pixar was a done deal, Toy Story was in the can, and he was prepared to let his IPO do the talking.

A different type of executive might have talked only about Pixar. But even when given the chance to crow, Jobs kept talking about Web objects and his ambitions for NeXT. He was fixed on the next big thing. And that was fine. After all, people often become more interesting when they’ve failed at something, and with his fall from Apple, the struggle at NeXT, and the triumph of Pixar, Jobs is now moving into his second circuit around the wheel of fortune. What has he learned?

As we began our interview, Jobs was testy. He told me that he didn’t care anymore about revolutionizing society, and that he didn’t believe changes in technology could solve the most important problems we face. The future of the Web was in the hands of big corporations, he said. This was where the money was going to be made. This was where NeXT was pitching its products.

I couldn’t help but wonder how this incarnation of Steve Jobs jibed with the old revolutionary of Apple and the early years of NeXT. As the conversation deepened, some of the connections slowly grew clear. Jobs’s testiness faded, and he allowed himself to speculate on the democratizing effects of the Web and his hope for defending it against the threat of Microsoft. Jobs’s obsession with his old rival took the form of an unusual proposal for all parties to voluntarily keep the Web simple and avoid increasingly popular client-side enhancements like HotJava.

In the old days, Jobs was an evangelist for American education and worked hard to get computers in schools. The partnership between Apple and educators was key in establishing a market for the Macintosh, while the NeXT machine was originally designed to serve primarily as a tool for students and teachers. Now, Jobs flatly concludes, technology can’t help fix the problems with our education system. His new solutions are decidedly low-tech.

The new Steve Jobs scoffs at the naïve idealism of Web partisans who believe the new medium will turn every person into a publisher. The heart of the Web, he said, will be commerce, and the heart of commerce will be corporate America serving custom products to individual consumers. The implicit message of the Macintosh, as unforgettably expressed in the great “1984” commercial, was Power to the People. Jobs’s vision of Web objects serves a different mandate: Give the People What They Want.

The Macintosh computer set the tone for 10 years. Do you think the Web may be setting the tone today?

The desktop computer industry is dead. Innovation has virtually ceased. Microsoft dominates with very little innovation. That’s over. Apple lost. The desktop market has entered the dark ages, and it’s going to be in the dark ages for the next 10 years, or certainly for the rest of this decade.

It’s like when IBM drove a lot of innovation out of the computer industry before the microprocessor came along. Eventually, Microsoft will crumble because of complacency, and maybe some new things will grow. But until that happens, until there’s some fundamental technology shift, it’s just over.

The most exciting things happening today are objects and the Web. The Web is exciting for two reasons. One, it’s ubiquitous. There will be Web dial tone everywhere. And anything that’s ubiquitous gets interesting. Two, I don’t think Microsoft will figure out a way to own it. There’s going to be a lot more innovation, and that will create a place where there isn’t this dark cloud of dominance.

Why do you think the Web has sprouted so fast?

One of the major reasons for the Web’s proliferation so far is its simplicity. A lot of people want to make the Web more complicated. They want to put processing on the clients, they want to do this and that. I hope not too much of that happens too quickly.

It’s much like the old mainframe computing environment, where a Web browser is like a dumb terminal and the Web server is like the mainframe where all the processing’s done. This simple model has had a profound impact by starting to become ubiquitous.

And objects?

When I went to Xerox PARC in 1979, I saw a very rudimentary graphical user interface. It wasn’t complete. It wasn’t quite right. But within 10 minutes, it was obvious that every computer in the world would work this way someday. And you could argue about the number of years it would take, and you could argue about who would be the winners and the losers, but I don’t think you could argue that every computer in the world wouldn’t eventually work this way.

Objects are the same way. Once you understand objects, it’s clear that all software will eventually be written using objects. Again, you can argue about how many years it will take, and who the winners and losers will be during this transition, but you can’t argue about the inevitability of this transition.

Objects are just going to be the way all software is going to be written in five years or – pick a time. It’s so compelling. It’s so obvious. It’s so much better that it’s just going to happen.

How will objects affect the Web?

Think of all the people now bringing goods and services directly to customers through the Web. Every company that wants to vend its goods and services on the Web is going to have a great deal of custom-application software to write. You’re not just going to be able to buy something off the shelf. You’re going to have to hook the Web into your order-management systems, your collection systems. It’s going to be an incredible amount of work.

The number of applications that need to be written is growing exponentially. Unless we can find a way to write them in a tenth of the time, we’re toast.

The end result of objects – this repackaging of software – is that we can develop applications with only about 10 to 20 percent of the software development required any other way.

We see how people won the battle of the desktop by owning the operating system. How does one win on the Web?

There are three parts to the Web. One is the client, the second is the pipes, and the third is the servers.

On the client side, there’s the browser software. In the sense of making money, it doesn’t look like anybody is going to win on the browser software side, because it’s going to be free. And then there’s the typical hardware. It’s possible that some people could come out with some very interesting Web terminals and sell some hardware.

On the pipe side, the RBOCs [regional Bell operating companies] are going to win. In the coming months, you’re going to see a lot of them offering a service for under $25 a month. You get ISDN strung into your den, you get a little box to hook it into your PC, and you get an Internet account, which is going to be very popular. The RBOCs are going to be the companies that get you on the Web. They have a vested interest in doing that. They’d like to screw the cable companies; they’d like to preserve the customers. This is all happening right now. You don’t see it. It’s under the ground like the roots of a tree, but it’s going to spring up and you’re going to see this big tree within a few years.

As for the server market, companies like Sun are doing a nice business selling servers. But with Web server software, no one company has more than a single-digit market share yet. Netscape sells hardly any, because you can get free public-domain software and it’s very good. Some people say that it’s even better than what you can buy.

Our company decided that people are going to layer stuff above this very simple Web server to help others build Web applications, which is where the bottleneck is right now. There’s some real opportunity there for making major contributions and a lot of money. That’s what WebObjects is all about.

What other opportunities are out there?

Who do you think will be the main beneficiary of the Web? Who wins the most?

People who have something…

To sell!

To share.

To sell!

You mean publishing?

It’s more than publishing. It’s commerce. People are going to stop going to a lot of stores. And they’re going to buy stuff over the Web!

What about the Web as the great democratizer?

If you look at things I’ve done in my life, they have an element of democratizing. The Web is an incredible democratizer. A small company can look as large as a big company and be as accessible as a big company on the Web. Big companies spend hundreds of millions of dollars building their distribution channels. And the Web is going to completely neutralize that advantage.

What will the economic landscape look like after that democratic process has gone through another cycle?

The Web is not going to change the world, certainly not in the next 10 years. It’s going to augment the world. And once you’re in this Web-augmented space, you’re going to see that democratization takes place.

The Web’s not going to capture everybody. If the Web got up to 10 percent of the goods and services in this country, it would be phenomenal. I think it’ll go much higher than that. Eventually, it will become a huge part of the economy.

What’s the biggest surprise this technology will deliver?

The problem is I’m older now, I’m 40 years old, and this stuff doesn’t change the world. It really doesn’t.

That’s going to break people’s hearts.

I’m sorry, it’s true. Having children really changes your view on these things. We’re born, we live for a brief instant, and we die. It’s been happening for a long time. Technology is not changing it much – if at all.

These technologies can make life easier, can let us touch people we might not otherwise. You may have a child with a birth defect and be able to get in touch with other parents and support groups, get medical information, the latest experimental drugs. These things can profoundly influence life. I’m not downplaying that. But it’s a disservice to constantly put things in this radical new light – that it’s going to change everything. Things don’t have to change the world to be important.

The Web is going to be very important. Is it going to be a life-changing event for millions of people? No. I mean, maybe. But it’s not an assured Yes at this point. And it’ll probably creep up on people.

It’s certainly not going to be like the first time somebody saw a television. It’s certainly not going to be as profound as when someone in Nebraska first heard a radio broadcast. It’s not going to be that profound.

Then how will the Web impact our society?

We live in an information economy, but I don’t believe we live in an information society. People are thinking less than they used to. It’s primarily because of television. People are reading less and they’re certainly thinking less. So, I don’t see most people using the Web to get more information. We’re already in information overload. No matter how much information the Web can dish out, most people get far more information than they can assimilate anyway.

The problem is television?

When you’re young, you look at television and think, “There’s a conspiracy.” The networks have conspired to dumb us down. But when you get a little older, you realize that’s not true. The networks are in business to give people exactly what they want. That’s a far more depressing thought. Conspiracy is optimistic! You can shoot the bastards! We can have a revolution! But the networks are really in business to give people what they want. It’s the truth.

So Steve Jobs is telling us things are going to continue to get worse.

They are getting worse! Everybody knows that they’re getting worse! Don’t you think they’re getting worse?

I do, but I was hoping I could come here and find out how they were going to get better. Do you really believe that the world is getting worse? Or do you have a feeling that the things you’re involved with are making the world better?

No. The world’s getting worse. It has gotten worse for the last 15 years or so. Definitely. For two reasons. On a global scale, the population is increasing dramatically and all our structures, from ecological to economic to political, just cannot deal with it. And in this country, we seem to have fewer smart people in government, and people don’t seem to be paying as much attention to the important decisions we have to make.

But you seem very optimistic about the potential for change.

I’m an optimist in the sense that I believe humans are noble and honorable, and some of them are really smart. I have a very optimistic view of individuals. As individuals, people are inherently good. I have a somewhat more pessimistic view of people in groups. And I remain extremely concerned when I see what’s happening in our country, which is in many ways the luckiest place in the world. We don’t seem to be excited about making our country a better place for our kids.

The people who built Silicon Valley were engineers. They learned business, they learned a lot of different things, but they had a real belief that humans, if they worked hard with other creative, smart people, could solve most of humankind’s problems. I believe that very much.

I believe that people with an engineering point of view as a basic foundation are in a pretty good position to jump in and solve some of these problems. But in society, it’s not working. Those people are not attracted to the political process. And why would somebody be?

Could technology help by improving education?

I used to think that technology could help education. I’ve probably spearheaded giving away more computer equipment to schools than anybody else on the planet. But I’ve had to come to the inevitable conclusion that the problem is not one that technology can hope to solve. What’s wrong with education cannot be fixed with technology. No amount of technology will make a dent.

It’s a political problem. The problems are sociopolitical. The problems are unions. You plot the growth of the NEA [National Education Association] and the dropping of SAT scores, and they’re inversely proportional. The problems are unions in the schools. The problem is bureaucracy. I’m one of these people who believes the best thing we could ever do is go to the full voucher system.

I have a 17-year-old daughter who went to a private school for a few years before high school. This private school is the best school I’ve seen in my life. It was judged one of the 100 best schools in America. It was phenomenal. The tuition was $5,500 a year, which is a lot of money for most parents. But the teachers were paid less than public school teachers – so it’s not about money at the teacher level. I asked the state treasurer that year what California pays on average to send kids to school, and I believe it was $4,400. While there are not many parents who could come up with $5,500 a year, there are many who could come up with $1,000 a year.

If we gave vouchers to parents for $4,400 a year, schools would be starting right and left. People would get out of college and say, “Let’s start a school.” You could have a track at Stanford within the MBA program on how to be the businessperson of a school. And that MBA would get together with somebody else, and they’d start schools. And you’d have these young, idealistic people starting schools, working for pennies.

They’d do it because they’d be able to set the curriculum. When you have kids you think, What exactly do I want them to learn? Most of the stuff they study in school is completely useless. But some incredibly valuable things you don’t learn until you’re older – yet you could learn them when you’re younger. And you start to think, What would I do if I set a curriculum for a school?

God, how exciting that could be! But you can’t do it today. You’d be crazy to work in a school today. You don’t get to do what you want. You don’t get to pick your books, your curriculum. You get to teach one narrow specialization. Who would ever want to do that?

These are the solutions to our problems in education. Unfortunately, technology isn’t it. You’re not going to solve the problems by putting all knowledge onto CD-ROMs. We can put a Web site in every school – none of this is bad. It’s bad only if it lulls us into thinking we’re doing something to solve the problem with education.

Lincoln did not have a Web site at the log cabin where his parents home-schooled him, and he turned out pretty interesting. Historical precedent shows that we can turn out amazing human beings without technology. Precedent also shows that we can turn out very uninteresting human beings with technology.

It’s not as simple as you think when you’re in your 20s – that technology’s going to change the world. In some ways it will, in some ways it won’t.

If you go back five years, the Web was hardly on anybody’s horizon. Maybe even three years ago, it wasn’t really being taken seriously by many people. Why is the sudden rise of the Web so surprising?

Isn’t it great? That’s exactly what’s not happening in the desktop market.

Why was everyone, including NeXT, surprised, though?

It’s a little like the telephone. When you have two telephones, it’s not very interesting. And three is not very interesting. And four. And, well, a hundred telephones perhaps becomes slightly interesting. A thousand, a little more. It’s probably not until you get to around ten thousand telephones that it really gets interesting.

Many people didn’t foresee, couldn’t imagine, what it would be like to have a million, or a few tens of thousands of Web sites. And when there were only a hundred, or two hundred, or when they were all university ones, it just wasn’t very interesting. Eventually, it went beyond this critical mass and got very interesting very fast. You could see it. And people said, “Wow! This is incredible.”

The Web reminds me of the early days of the PC industry. No one really knows anything. There are no experts. All the experts have been wrong. There’s a tremendous open possibility to the whole thing. And it hasn’t been confined, or defined, in too many ways. That’s wonderful.

There’s a phrase in Buddhism, “beginner’s mind.” It’s wonderful to have a beginner’s mind.

Earlier, you seemed to say there’s a natural affinity between the Web and objects. That these two things are going to come together and make something very new, right?

Let’s try this another way. What might you want to do on a Web server? We can think of four things:

One is simple publishing. That’s what 99 percent of the people do today. If that’s all you want to do, you can get one of a hundred free Web-server software packages off the Net and just use it. No problem. It works fine. Security’s not a giant issue because you’re not doing credit card transactions over the Web.

The next thing you can do is complex publishing. People are starting to do complex publishing on the Web – very simple forms of it. This will absolutely explode in the next 12 to 18 months. It’s the next big phase of the Web. Have you seen the Federal Express Web site where you can track a package? It took Federal Express about four months to write that program – and it’s extremely simple. Four months. It would be nice to do that in four days, or two days, or one day.

The third thing is commerce, which is even harder than complex publishing because you have to tie the Web into your order-management system, your collection system, things like that. I think we’re still two years away. But that’s also going to be huge.

Last is internal Web sites. Rather than the Internet, it’s intranet. Rather than write several different versions of an application for internal consumption – one for Mac, one for PC, one for Unix – people can write a single version and have a cross-platform product. Everybody uses the Web. We’re going to see companies have dozens – if not hundreds – of Web servers internally as a means to communicate with themselves.

Three of those four functions of the Web require custom applications. And that’s what we do really well with objects. Our new product, WebObjects, allows you to write Web applications 10 times faster.

How does the Web affect the economy?

We live in an information economy. The problem is that information’s usually impossible to get, at least in the right place, at the right time.

The reason Federal Express won over its competitors was its package-tracking system. For the company to bring that package-tracking system onto the Web is phenomenal. I use it all the time to track my packages. It’s incredibly great. Incredibly reassuring. And getting that information out of most companies is usually impossible.

But it’s also incredibly difficult to give information. Take auto dealerships. So much money is spent on inventory – billions and billions of dollars. Inventory is not a good thing. Inventory ties up a ton of cash, it’s open to vandalism, it becomes obsolete. It takes a tremendous amount of time to manage. And, usually, the car you want, in the color you want, isn’t there anyway, so they’ve got to horse-trade around. Wouldn’t it be nice to get rid of all that inventory? Just have one white car to drive and maybe a laserdisc so you can look at the other colors. Then you order your car and you get it in a week.

Today a dealer says, “We can’t get your car in a week. It takes three months.” And you say, “Now wait a minute, I want to order a pink Cadillac with purple leather seats. Why can’t I get that in a week?” And he says, “We gotta make it.” And you say, “Are you making Cadillacs today? Why can’t you paint a pink one today?” And he says, “We didn’t know you wanted a pink one.” And you say, “OK. I’m going to tell you I want a pink one now.” And he says, “We don’t have any pink paint. Our paint supplier needs some lead time on that paint.” And you say, “Is your paint supplier making paint today?” And he says, “Yeah, but by the time we tell him, it takes two weeks.” And you say, “What about leather seats?” And he says, “God, purple leather. It’ll take three months to get that.”

You follow this back, and you find that it’s not how long it takes to make stuff; it’s how long it takes the information to flow through the system. And yet electronics move at the speed of light – or very close to it.

So pushing information into the system is sometimes immensely frustrating, and the Web is going to be just as much of a breakthrough in terms of pushing information in as getting information out.

Your view about the Web is an alternative to the commonly held one that it’s going to be the renaissance of personal publishing. The person who can’t get published through the broadcast media will get a chance to say something.

There’s nothing wrong with that. The Web is great because that person can’t foist anything on you – you have to go get it. They can make themselves available, but if nobody wants to look at their site, that’s fine. To be honest, most people who have something to say get published now.

But when we ask how a person’s life is changed by these technologies, pushing information to customize products makes marginal differences. You go to the store and there’s a lot of different kinds of toilet paper – some have tulips embossed on them and some don’t. You’re standing there making a choice, and you want the one with the embossed tulips.

I like the ones without the tulips.

I do, too – and unscented. But that customization is relevant to you for that second but in no other way. For the average person, the chance to participate as a publisher or a producer holds greater value.

I don’t necessarily agree. The best way to think of the Web is as a direct-to-customer distribution channel, whether it’s for information or commerce. It bypasses all middlemen. And, it turns out, there are a lot of middlepersons in this society. And they generally tend to slow things down, muck things up, and make things more expensive. The elimination of them is going to be profound.

Do you think large institutions are going to be the center of the economy, basically driving it as they are now? Some people say the big company is going to fragment.

I don’t see that. There’s nothing wrong with big companies. A lot of people think big business in America is a bad thing. I think it’s a really good thing. Most people in business are ethical, hard-working, good people. And it’s a meritocracy. There are very visible examples in business of where it breaks down, but it’s probably a lot less common than in most other areas of society.

You don’t think that structural economic changes will tend to shrink the size of these large companies?

Large companies not paying attention to change will get hurt. The Web will be one more area of significant change and those who don’t pay attention will get hurt, while those who see it early enough will get rewarded.

The Web is just going to be one more of those major change factors that businesses face every decade. This decade, in the next 10 years, it’s going to be the Web. It’s going to be one of them.

But doesn’t the Web foster more freedom for individuals?

It is a leveling of hierarchy. An individual can put up a Web site that, if they put enough work into it, looks just as impressive as that of the largest company in the world.

I love things that level hierarchy, that bring the individual up to the same level as an organization, or a small group up to the same level as a large group with much greater resources. And the Web and the Internet do that. It’s a very profound thing, and a very good thing.

Yet the majority of your customers for WebObjects seem to be corporations.

That’s correct. And big ones.

Does that cause you any kind of conflict?

Sure. And that’s why we’re going to be giving our WebObjects software away to individuals and educational institutions for noncommercial use. We’ve made the decision to give it away.

What do you think about HotJava and the like?

It’s going to take a long time for that stuff to become a standard on the Web. And that may shoot the Web in the foot. If the Web becomes too complicated, too fraught with security concerns, then its proliferation may stop – or slow down. The most important thing for the Web is to stay ahead of Microsoft. Not to become more complicated.

That’s very interesting. Java pushes the technology toward the client side. Do you find that wrong?

In my opinion? In the next two years? It’s dead wrong. Because it may slow down getting to ubiquity. And anything that slows down the Web reaching ubiquity allows Microsoft to catch up. If Microsoft catches up, it’s far worse than the fact the Web can’t do word processing. Those things can be fixed later.

There’s a window now that will close. If you don’t cross the finish line in the next two years, Microsoft will own the Web. And that will be the end of it.

Let’s assume for a second that many people share an interest in a standard Web that provides a strong alternative to Microsoft. However, when it comes to every individual Web company or Web publisher, they have an interest in making sure that their Web site stays on the edge. I know we do at HotWired. And so we have to get people into HotJava – we have to stay out there – which doesn’t bode well for retaining simplicity. We’re going to be part of that force pushing people toward a more complicated Web, because we have no choice.

The way you make it more complex is not by throwing stuff on the client side but by providing value, like Federal Express does, by becoming more complex on the server side.

I’m just very concerned that if the clients become smart, the first thing this will do is fracture the Web. There won’t be just one standard. There’ll be several; they’re all going to fight; each one has its problems. So it’s going to be very easy to say why just one shouldn’t be the standard. And a fractured Web community will play right into Microsoft’s hands.

The client-server relationship should be frozen for the next two years, and we shouldn’t take it much further. We should just let it be.

By collective agreement?

Yeah. By collective agreement. Sure. Go for ubiquity. If Windows can become ubiquitous, so can the existing Web.

How did Windows become ubiquitous?

A force of self-interest throughout the industry made Windows ubiquitous. Compaq and all these different vendors made Windows ubiquitous. They didn’t know how to spell software, but they wanted to put something on their machines. That made Windows ubiquitous.

So it just kind of happened.

No, it was sort of an algorithm that got set in motion when everyone’s self-interest aligned toward making this happen. And I claim that the same sort of self-interest algorithm is present on the Web. Everyone has a self-interest in making this Web ubiquitous and not having anyone own it – especially not Microsoft.

Is the desktop metaphor going to continue to dominate how we relate to computers, or is there some other metaphor you like better?

To have a new metaphor, you really need new issues. The desktop metaphor was invented because one, you were a stand-alone device, and two, you had to manage your own storage. That’s a very big thing in a desktop world. And that may go away. You may not have to manage your own storage. You may not store much before too long.

I don’t store anything anymore, really. I use a lot of e-mail and the Web, and with both of those I don’t have to ever manage storage. As a matter of fact, my favorite way of reminding myself to do something is to send myself e-mail. That’s my storage.

The minute that I don’t have to manage my own storage, and the minute I live primarily in a connected versus a stand-alone world, there are new options for metaphors.

You have a reputation for making well-designed products. Why aren’t more products made with the aesthetics of great design?

Design is a funny word. Some people think design means how it looks. But of course, if you dig deeper, it’s really how it works. The design of the Mac wasn’t what it looked like, although that was part of it. Primarily, it was how it worked. To design something really well, you have to get it. You have to really grok what it’s all about. It takes a passionate commitment to really thoroughly understand something, chew it up, not just quickly swallow it. Most people don’t take the time to do that.

Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something. It seemed obvious to them after a while. That’s because they were able to connect experiences they’ve had and synthesize new things. And the reason they were able to do that was that they’ve had more experiences or they have thought more about their experiences than other people.

Unfortunately, that’s too rare a commodity. A lot of people in our industry haven’t had very diverse experiences. So they don’t have enough dots to connect, and they end up with very linear solutions without a broad perspective on the problem. The broader one’s understanding of the human experience, the better design we will have.

Is there anything well designed today that inspires you?

Design is not limited to fancy new gadgets. Our family just bought a new washing machine and dryer. We didn’t have a very good one so we spent a little time looking at them. It turns out that the Americans make washers and dryers all wrong. The Europeans make them much better – but they take twice as long to do clothes! It turns out that they wash them with about a quarter as much water and your clothes end up with a lot less detergent on them. Most important, they don’t trash your clothes. They use a lot less soap, a lot less water, but they come out much cleaner, much softer, and they last a lot longer.

We spent some time in our family talking about what’s the trade-off we want to make. We ended up talking a lot about design, but also about the values of our family. Did we care most about getting our wash done in an hour versus an hour and a half? Or did we care most about our clothes feeling really soft and lasting longer? Did we care about using a quarter of the water? We spent about two weeks talking about this every night at the dinner table. We’d get around to that old washer-dryer discussion. And the talk was about design.

We ended up opting for these Miele appliances, made in Germany. They’re too expensive, but that’s just because nobody buys them in this country. They are really wonderfully made and one of the few products we’ve bought over the last few years that we’re all really happy about. These guys really thought the process through. They did such a great job designing these washers and dryers. I got more thrill out of them than I have out of any piece of high tech in years.