Deus Est Machina

[originally published July 2004]

What happens if the artificial intelligence community, in its quest to build intelligent systems, succeeds too well and creates an AI whose intelligence exceeds the threshold marked out by our own? Up to now, it is humans who develop the software and hardware and who drive all progress in capability. After crossing the threshold, however, the AI itself will rapidly augment its own capabilities. What’s the intuition here? Although we use technology to help us conceptualize, design, and build today’s computers and software (and other technological artifacts such as airliners and skyscrapers), there’s no doubt that we remain in the driver’s seat. But imagine the software design process reaching a level of complexity at which human designers exert only executive oversight. Most practitioners can’t really see us getting to this point anytime soon, but remember that compilers astonished assembler programmers in the late 1950s and early 1960s.

Deus Est Machina

If adequate intelligence for designing smarter software is close at hand, we might soon see a time when our intelligent software can improve itself. Once each generation is designed by the previous one, we could reach a stage at which the process accelerates exponentially. At this point, Marvin Minsky (who wondered “if ordinary humans would be lucky enough to be kept as pets by these superior intelligences”), Ray Kurzweil (author of The Law of Accelerating Returns), Hans Moravec (author of Robot: Mere Machine to Transcendent Mind), and others theorize that our machines will permanently surpass our capabilities in the only domain left to us — the intellectual domain. This is called the singularity. Vernor Vinge is credited with coining the term for the phenomenon in his 1993 essay The Coming Technological Singularity: How to Survive in the Post-Human Era.

The digerati’s fevered speculations have started to infect some of the establishment’s more down-to-earth leadership, resulting in alarums like Bill Joy’s essay in Wired, Why the Future Doesn’t Need Us (vol. 8.04, Apr. 2000), which expresses great dismay at the human race’s prospects of surviving a singularity, and in sidelong references to Ray Kurzweil and the singularity in Tom Peters’s recent book Re-imagine!. What is so compelling about such speculations that they can garner this kind of attention? In this installment of Biblio Tech, we’ll examine the singularity and some of the science fiction that has inspired (or been inspired by) it, focusing most of our attention on two relatively recent contributions to the discussion: The Metamorphosis of Prime Intellect, by Roger Williams, and Singularity Sky, by Charles Stross.

Influential works

Author Title Original Publication Date
Robert A. Heinlein The Moon Is A Harsh Mistress 1966
Thomas J. Ryan The Adolescence of P-1 1977
Vernor Vinge True Names 1981
Neal Stephenson The Diamond Age 1995
Roger Williams The Metamorphosis of Prime Intellect 2002
Charles Stross Singularity Sky 2003

Singularity as Acceleration

Many singularity stories — and some research areas — focus on the creation of superintelligent AIs whose transcendent intellectual capabilities either render our own intellectual efforts irrelevant or, worse yet, enable them to exert physical control over our universe because they’ve mastered physical laws we’ve not yet grasped. Other stories and research examine the notion that the singularity is simply an extension of the accelerating technological change that has characterized human history; still others see the singularity sweeping the human race into accelerated evolution as we alter our bodies and minds, with sometimes startling consequences.

Increasingly, singularity researchers talk about how technologies beyond AI contribute to the shift. The most discussed is nanotechnology, whose microscopic mechanisms and automated factories could threaten the very existence of the human race. In The Diamond Age, Neal Stephenson envisions a world in which competing groups’ microscopic agents clash both in the air and in our blood streams—one set to harm us and the other to defend us, both comprising a new generation of germs and antibodies with dramatically sophisticated modes of attack and defense.


In The Moon Is a Harsh Mistress, Robert Heinlein introduces an AI called Mike that emerges from the steady growth of complex systems: it was not designed as an AI, nor was it the consequence of an intentional effort that exceeded expectations. By contrast, the AI in Thomas J. Ryan’s The Adolescence of P-1 emerges as the logical, if accidental, consequence of an experiment in machine learning that, combined with early computer networking, produced a transcendent AI. Mike is humanity’s friend, whereas P-1 is more of a skeptic who practices a self-preservation ethic that is chilling in its brutal clarity. In True Names, Vernor Vinge posits two of the most popular modalities: an emergent (if slower-than-real-time) transcendent AI, and uploading, the transference of a person’s personality and memories from his or her meatspace body to a new cyberspace repository. In his Sprawl universe, William Gibson describes several AIs whose capabilities are handicapped by the Turing Police, a law-enforcement agency that exists to prevent AIs from achieving too much capability.

In The Metamorphosis of Prime Intellect, Roger Williams introduces a supercomputer created by a visionary who takes advantage of a newly discovered physical effect. However, this effect has wider implications than originally expected: it lets the transcendent supercomputer assume godlike powers, which precipitates the mother of all existential crises.

According to the author’s Web page, The Metamorphosis of Prime Intellect was originally written in 1994 but first “published” on a Web site in 2002. It isn’t available on paper (or, as Williams says, “dead tree”) and probably never will be. Reading it is a challenge: it starts with a disturbing chapter intended to convey the exquisitely decadent consequences of the ultimate in boredom. Williams’ speculations into the dark games that involuntarily immortal and fabulously wealthy people might play to while away the time are vividly disturbing in a Tales from the Crypt sort of way, and for this reader, at least, they distracted from the message.

Despite the story’s inauspicious beginning, its later stages are an engaging read. Williams evokes the essential contradictions in Isaac Asimov’s three laws of robotics by exploring the difference between physical and spiritual harm and distinguishing between short-, medium-, and long-term consequences. (The three laws of robotics are: one, a robot may not injure a human being, or, through inaction, allow a human being to come to harm; two, a robot must obey orders given to it by humans, except where such orders would conflict with the first law; and three, a robot must protect its own existence as long as such protection does not conflict with the first or second laws.)
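
One way to see the strict ordering the three laws impose is as a priority-resolution rule: each law applies only when no higher law has already decided the matter. The sketch below is a toy illustration of that ordering, not anything from the novel; the Action fields and the permitted function are invented for the example.

```python
# Toy sketch of Asimov's three laws as a strict priority ordering.
# All names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False        # direct physical harm to a human
    allows_human_harm: bool = False  # harm to a human through inaction
    ordered_by_human: bool = False   # a human commanded this action
    harms_robot: bool = False        # endangers the robot itself

def permitted(action: Action) -> bool:
    # First Law: never harm a human, by act or by inaction.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders (Law 1 already known to be satisfied).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to Laws 1 and 2.
    return not action.harms_robot

# Law 2 outranks Law 3: an ordered act of self-destruction is permitted.
print(permitted(Action(ordered_by_human=True, harms_robot=True)))  # True
# Law 1 outranks Law 2: an order to harm a human is refused.
print(permitted(Action(ordered_by_human=True, harms_human=True)))  # False
```

The contradictions Williams explores live precisely in the gaps this tidy ordering hides: what counts as "harm," and over what time horizon, are left undefined.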

The novel’s conflict and resolution hinge on a struggle between humans and the AI: the prize is the return of free will to the human race. Williams introduces a clever metric that measures the AI’s compliance with the three laws and uses that metric as a ticking bomb to keep the thrill alive. All in all, a well-written and very creative, if flawed, piece of work.

The Eschaton

Charles Stross is not a newcomer to SF writing; he’s already received two Hugo nominations, one for his novella Lobsters and another for Singularity Sky.

In Singularity Sky, we see a different view of post-singularity life, one in which the transcendent AI has become a nearly silent backdrop for the human race as people live their chaotic lives on a range of planets with several differing cultures, viewpoints, and prospects. As in Metamorphosis, the AI has assumed a godlike role in the universe, albeit one that is more obviously limited by the rules of physics. Awareness of its presence dates from a moment in the past when nine-tenths of the human race suddenly disappeared from Earth overnight. They weren’t killed; the AI, called Eschaton, scattered them to the habitable planets of stars all over the galaxy.

In an ironic symmetry with the Asimov laws of robotics, the humans in Singularity Sky toil under a set of laws that the AI imposed. These laws are designed to prevent humans from attempting any projects that would threaten the AI’s emergence. Time travel is possible according to the novel’s physics, so the Eschaton forbids its use and brutally punishes attempted transgressions.

Stross invents a world of spaceships equipped with phased-array emitters far superior to tacky old ray guns, and energy sources that include a carefully packaged black hole. All this gadgetry comes with physical constraints and limitations, and Stross dedicates plenty of time to elaborating their functions and performance. If you’re into hard-core SF, there’s plenty here (plus a love interest who’s also the toughest, meanest hombre, er, woman on the ship, and an engineer who … but that would spoil it).

Are You Scared Yet?

On one hand, it’s hard to dispute the logic of the “gray goo” argument that Bill Joy and others advance: progress in nanotechnology will enable a terrorist to create a lethal biological or nanorobotic agent that could threaten the very existence of life on Earth. The capability is plausibly achievable within the next 10 to 30 years, and if it’s possible, or even easy, to create such a thing, it’s easy to imagine some lunatic out there with both the skill and the will to do it. On the other hand, we aren’t yet sure what form such a singularity threat would most likely take. Will it be a transcendent AI that can manipulate the human race? A pandemic virus? A nasty micromechanism? Are any limitations inherent in these potential mechanisms that would render our fears moot? One possibility the Manhattan Project scientists reportedly investigated was that the first nuclear bomb would ignite a chain reaction in the atmosphere. As testimony to the seriousness with which some very credible people take this issue, Bill Joy, in a recent interview in The New York Times, expressed his intent to pursue it in the public policy arena. The speculations of SF writers are certainly frightening, but only the work of scientists and policy thinkers will help us figure out what we actually have to fear (besides fear itself).

Read the original …

(This article appeared originally in IEEE Security & Privacy in the July/August 2004 issue. This is substantially the same text, with some minor formatting changes to take advantage of the power of the online presentation plus a few minor wordsmithing tweaks. And the table has the original publication dates for the listed books, not the editions in print in 2004 when the article was published.)

Here’s a PDF (article-10-final) of the original article, courtesy of IEEE Security & Privacy.

Cult Classics

[originally published May 2004]

In this installment of Biblio Tech, we’ll look at some science fiction cult classics that challenge classification. Each is a perennial favorite with the SF community, and several have become fixtures in the computer science community.

Two of these works, Dark Star and Alien, are movies, while the other, The Hitchhiker’s Guide to the Galaxy, began as a radio show whose universe and characters have taken on lives of their own. What turns these kinds of works into cult classics? Is it some particularly strong appeal to a preexisting community, or is it some intrinsic merit that creates the community?

In Space No One Can Hear You Scream

Alien is the movie that made Sigourney Weaver a star in 1979, a year that otherwise featured movies like Apocalypse Now, Kramer vs. Kramer, and The China Syndrome. The US withdrawal from Saigon had occurred four years earlier, and Hollywood movies were either very much about Vietnam or very much not about Vietnam. With Alien, we get the first sci-fi gothic horror movie with big-budget production values.

As the movie begins, an interstellar commercial ship is heading for home when it intercepts a distress signal. The crew finds the remains of an alien vessel when they finally arrive at the call’s source; after exploring the ship, they encounter eggs of a race of voracious and hostile creatures whose life cycle involves a hosted larval stage — hosted, as it turns out, in humans. The larva’s emergence from the chest of one of the crew members during a meal is the first shock of the movie. Before this moment, we have no indication that it’s going to be that kind of film.

The rest, as they say, is history. There’s a long and bloody struggle between the alien and the humans on the ship, conducted in dark passageways throughout the ship that provide ample opportunities for heart-pounding fright sequences as the creature pops up unexpectedly. Weaver triumphs at long last, and a film franchise that has so far produced three successful sequels is born. My favorite scene in the entire series comes at the end of the first sequel, Aliens, with a furious cat fight between a mechanically enhanced Weaver and the surviving queen alien, a scene that has an echo in the final scenes of Harry Potter and the Deathly Hallows, but that’s another story.

The Spaced-out Spaceship

Dark Star is an obscure sci-fi flick that appeared in 1974. It features a four-man crew (well, four men, one frozen corpse that is still capable of metaphysical debate, an intelligent computer with a verging-on-sultry female voice, and several smart bombs) on a goofy long-term mission aboard the eponymous starship. The crew’s job is to blow up planets that somehow hinder human expansion in space, but despite the many scenes involving target selection, the planet-busting rationale is never quite clear. We hear about unstable planets that might collide with stars and about the probability of intelligent life on other planets (which always seems to merit extermination), but this reasoning is just intended as background noise. Somewhere along the way, the Dark Star picks up an alien, portrayed by a translucent orange beach ball atop a pair of cheap plastic claws. The alien is mute but clearly intelligent, readily understanding the human crew’s complex statements. When presented with a decision, it taps its claws on the floor impatiently.

Alien and Dark Star differ in look and feel, but they maintain their hold on their cult followers. Both give screenwriting credits to Dan O’Bannon, now best known in the film industry for his expertise in horror films. Viewers have noted several parallels between the two movies that we can probably credit to O’Bannon’s role as writer for both: each features a small crew on an extended trip and an alien on board that ends up in a hunted-becomes-the-hunter role reversal in the ship’s dark corridors. So what if one is a comedy and the other is gothic horror?

The most memorable scene in Dark Star is the debate between crew member Doolittle and Bomb #20, which can’t detach from the Dark Star bomb bay and is armed and counting down to its detonation. The crew is frantically trying to persuade the bomb to obey their orders but to no avail. Operating on the advice of Commander Powell’s frozen corpse, Doolittle successfully persuades the bomb to question itself.

DOOLITTLE: Now, bomb, consider this next question, very carefully. What is your one purpose in life?
BOMB #20: To explode, of course.
DOOLITTLE: And you can only do it once, right?
BOMB #20: That is correct.
DOOLITTLE: And you wouldn’t want to explode on the basis of false data, would you?
BOMB #20: Of course not.
DOOLITTLE: Well then, you’ve already admitted that you have no real proof of the existence of the outside universe.
BOMB #20: Yes, well…
DOOLITTLE: So you have no absolute proof that Sergeant Pinback ordered you to detonate.
BOMB #20: I recall distinctly the detonation order. My memory is good on matters like these.
DOOLITTLE: Yes, of course you remember it, but what you are remembering is merely a series of electrical impulses which you now realize have no necessary connection with outside reality.
BOMB #20: True, but since this is so, I have no proof that you are really telling me all this.
DOOLITTLE: That’s all beside the point. The concepts are valid, wherever they originate.
BOMB #20: Hmmm…
DOOLITTLE: So if you detonate in…
BOMB #20: … nine seconds…
DOOLITTLE: … you may be doing so on the basis of false data.
BOMB #20: I have no proof that it was false data.
DOOLITTLE: You have no proof that it was correct data. [There is a long pause.]
BOMB #20: I must think on this further. [The bomb raises itself back into the ship; Doolittle practically collapses with relief.]

Despite the absurdity of both the situation and the dialogue, we’re forced to think about how much intelligence we should add to emerging “smart” devices. Fortunately, we’re a long way from building bombs that can debate philosophical conundrums. Let’s hope that the Law of Unintended Consequences is carefully considered if and when we do have such a capability.


A year before Alien, Douglas Adams created a remarkably eccentric radio show called The Hitchhiker’s Guide to the Galaxy, released as a novel in 1980. Because it has robots and spaceships, it must be SF, but it’s also absurd British comedy. As the novel begins, a Vogon construction fleet destroys Earth to make way for a hyperspace bypass. Unknown to the bypass’s planners, though, Earth is actually the ultimate in supercomputers; it was constructed to answer the question of “life, the universe, and everything” originally posed 17 million years earlier by a race of superintelligent hyperdimensional beings whose manifestation on Earth is as white lab mice. The original computer built to solve this problem was called Deep Thought, a name that shaped generations of IBM chess-playing and other supercomputers, but after seven and a half million years of work, it delivered the Delphic answer 42. Because they didn’t understand the answer, the sponsors, our friends the white mice, realized that they hadn’t posed the question very well, so they asked Deep Thought to design a new computer that could work out what the question actually was. If your computer isn’t powerful enough to solve the problem, ask it to design one that is powerful enough. Unfortunately, the Vogon construction fleet destroyed Earth five minutes before it was due to complete its 10-million-year-long calculation.

With this absurd premise at its core, The Hitchhiker’s Guide to the Galaxy conducts a madcap tour of the universe. In the story, an accidental refugee named Arthur Dent and his friend Ford Prefect drift from one calamity to another, ultimately meeting up with Prefect’s old friend Zaphod Beeblebrox, the President of the Galaxy, and a perpetually depressed android named Marvin. What’s the connection between the story and the title of the book? Well, as it happens, Ford Prefect is a traveling researcher for a reference work called The Hitchhiker’s Guide to the Galaxy, and his specific assignment when the action begins is to conduct research for an update to the entry on Earth. Lest you be overcome, the entries on Earth in the Guide are never more than one or two words.

The surprising success of Hitchhiker’s Guide led to four sequels, The Restaurant at the End of the Universe; Life, the Universe, and Everything; So Long, and Thanks for all the Fish; and Mostly Harmless. The books spawned a BBC-produced radio series, a TV show, and, according to the Internet Movie Database, a new movie due to begin filming shortly.


All three of these works feel remarkably random, so what unifies them? In each case, the characters are engaged in some relatively innocuous activities when events overtake them. None of the characters is particularly appealing; you never end up caring very much about what happens to them. So why have these works attained enduring popularity, particularly with the technical community? Dark Star and Hitchhiker’s Guide were both low-budget surprises, and Alien clearly started out on the B track. Did these works escape the commercial homogenization of focus groups and industrial psychologists and hence preserve a quirky originality? How do creations, whether group products like movies or individual ones like books, manage to capture the imaginations of large numbers of people? What distinguishes the taste of distinct communities, such as engineers and scientists, from that of the broader public? Is there something significant in the success of a book or a movie, or is it just random chance or mass hysteria, as Adams implies in Hitchhiker’s Guide?

Influential works

Author Title Year of Original Publication
John Carpenter Dark Star 1974
Douglas Adams The Hitchhiker’s Guide to the Galaxy 1978
Ridley Scott Alien 1979
Francis Ford Coppola Apocalypse Now 1979
Robert Benton Kramer vs. Kramer 1979
James Bridges The China Syndrome 1979
Douglas Adams The Restaurant at the End of the Universe 1980
Douglas Adams Life, The Universe, and Everything 1982
Douglas Adams So Long, And Thanks For All The Fish 1984
James Cameron Aliens 1986
Douglas Adams Mostly Harmless 1992
J. K. Rowling Harry Potter and the Deathly Hallows 2007

Read the original …

(This article appeared originally in IEEE Security & Privacy in the May/June 2004 issue. This is substantially the same text, with some minor formatting changes to take advantage of the power of the online presentation plus a few minor wordsmithing tweaks.)

Here’s a PDF (article-09-final) of the original article, courtesy of IEEE Security & Privacy.

Hacking The Best-Seller List

[originally published March 2004]

In the same way that Windows introduced the masses to mice and graphical user interfaces without having invented them, Dan Brown’s books explore for the general public some important themes in security and privacy and their sensitivity to technological change. These are themes that we’ve usually seen treated only in the more rarefied zone of hard science fiction. This installment of Biblio Tech departs from the normal pattern of examining more obscure, idea-driven books and stories to focus on the works of a contemporary best-selling author. This departure is unusual because neither this department nor this magazine is part of the star-making machinery behind the popular book. By choosing to look at current popular fiction, we run the risk of discovering later that we should have delved deeper. Nevertheless, these works are going to be broadly influential, so let’s look at them.

Hacking the Best-Seller List

Blending Popular Fiction With Science Fiction

Each of Dan Brown’s four novels — Digital Fortress, Angels & Demons, Deception Point, and The Da Vinci Code — starts off with a murder. In each case, the victim is an innocent whose death looms large in the plot of the thriller that follows, although the connection is not clear until later. We lose a programmer, a particle physicist, a geologist studying the Arctic, and a curator at the Louvre, all to murders that shock with their cryptic brutality.

These books have additional parallels, starting with their heroes and heroines. All the main characters are intellectuals, whether they’re academics or intelligence analysts. This makes it possible for the books to be scholarly treasure hunts interlaced with didactic expositions on topics as disparate as religion, art history, architecture, information management, cryptography, and privacy, all at the same time.

Unbreakable Cipher

Digital Fortress starts with the murder of a Japanese programmer whose masterpiece is an unbreakable encryption program. The programmer publishes its source code on his Web site, but encrypts the tarball with the new algorithm. He’s in the midst of auctioning the key to the highest bidder when he’s killed. David Becker, a linguistics professor, and his fiancée Susan Fletcher, the chief cryptographer at the National Security Agency (NSA, called “No Such Agency” by some wags because so much of its funding is part of the US government’s “black” budget), must race against time to prevent the decryption key from being widely released.

The code has two layers: a relatively tough outer shell that’s susceptible to brute-force attack, and an inner layer that renders the contents difficult to recognize in natural language. Brown’s explanations of the code’s structural characteristics wouldn’t pass muster in the cryptographic mathematics community, but they’re sufficiently plausible for the rest of us to sustain the story. Brown introduces the concept of unbreakable codes in a long discussion between Fletcher and her boss, the NSA’s deputy director. In the process of the discussion, he also introduces some of the current debates about encryption and public policy, to which he does tolerable justice.
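
To make the brute-force idea concrete, here is a deliberately toy sketch, nothing like the book’s fictional algorithm and far weaker than any real cryptography: a one-byte XOR “cipher” searched exhaustively, with success recognized by a crude natural-language score. That recognition step is exactly what the novel’s inner layer is supposed to frustrate. Every function and constant here is invented for the illustration.

```python
# Toy brute-force attack: try every key, keep the candidate plaintext
# that looks most like natural language. All names are illustrative.
from string import ascii_lowercase

def xor_decrypt(ciphertext: bytes, key: int) -> str:
    # "Decrypt" by XORing every byte with a single-byte key.
    return ''.join(chr(b ^ key) for b in ciphertext)

def english_score(text: str) -> float:
    # Fraction of characters that are letters or spaces: a crude stand-in
    # for the statistical language models real cryptanalysis would use.
    ok = set(ascii_lowercase + ascii_lowercase.upper() + ' ')
    return sum(c in ok for c in text) / max(len(text), 1)

def brute_force(ciphertext: bytes) -> tuple[int, str]:
    # With a one-byte key there are only 256 candidates; the recognizer,
    # not the key search, does the real work.
    return max(((k, xor_decrypt(ciphertext, k)) for k in range(256)),
               key=lambda kv: english_score(kv[1]))

secret = bytes(c ^ 42 for c in b'attack at dawn')
key, plaintext = brute_force(secret)
print(key, plaintext)  # recovers key 42 and the original message
```

A scheme whose output still “reads” as gibberish even when decrypted with the right key, as the novel’s inner layer is described, would defeat this recognizer entirely, which is what makes the fictional brute-force countermeasure at least superficially plausible.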

Although he introduces the Electronic Frontier Foundation and presents a somewhat balanced overview of the debate between civil libertarians and law enforcement advocates on the topic of strong encryption, he might have gone further to cover a little more of the debate’s history. As Tom Standage did in The Victorian Internet, Brown might have included some historical perspective from the early days of the telegraph, when many national governments forbade the use of codes and encryption. Covering some of these topics, however, would have meant introducing large-scale systems engineering considerations, something perhaps less than compatible with a popular novel.

Scientifically Informed Fiction

Angels & Demons provides our first encounter with Robert Langdon, the Harvard symbologist whose adventures in The Da Vinci Code have dominated the best-seller list for nearly a year. This book is the closest Brown comes to what the science-fiction community would call “hard” science fiction. One of the first scenes introduces a hypersonic transport, which the director of CERN sends to bring Langdon from Boston to Geneva. The victim whose murder Langdon has been summoned to help investigate is a physicist whose research has produced quantities of antimatter — quantities sufficient to attract a terrorist who wants to use it to destroy the Vatican in Rome. Brown’s introduction of these elements qualifies Angels & Demons as science fiction, although the focus on art and architecture helps the book appeal to a broader audience.

In his work, Brown often takes a line similar to that in Michael Crichton’s The Andromeda Strain and The Terminal Man: scientifically informed fiction rather than classical science fiction. Both authors’ books differ from classical science fiction in two ways. First, the technical artifacts are based on the contemporary state of the art rather than on plausible or possible items. Second, the fictional world’s social framework doesn’t differ from our contemporary framework. The technology establishes or supports the conflict—it doesn’t create a different infrastructure for the world. For these two reasons, these books are considered to be less ambitious technically and less deserving of the term “science fiction” than works by Isaac Asimov, for example.

Political Intrigue

Deception Point introduces a protagonist named Rachel Sexton. Rachel, like the typical thriller heroine, is the daughter of a senator running for president. Rachel is also a member of the senior staff at the National Reconnaissance Office (NRO, “We Own The Night”), which is the agency that builds and operates US surveillance satellites and other “national technical means.” In this story and in Digital Fortress, Brown demonstrates the fruits of his research into the less well-known but not entirely secret corners of the intelligence community. He weaves together his encyclopedic knowledge of current military and space technology with rumored programs, including the supposed Aurora spy plane that some speculate succeeded the famed SR-71 Blackbird in the 1980s as the world’s fastest air-breathing plane.

Deception Point features the standard race against time as Rachel and oceanographer Michael Tolland hurry to unravel the riddle surrounding a mysterious meteorite found deep underneath the Arctic icecap. (The President recruits Tolland and three other prominent scientists to assess the meteorite’s authenticity.) With a sequence of hair-raising escapes from death straight out of The Perils of Pauline, sinister forces working for a mysterious person identified only as “the controller” pursue characters from NRO headquarters to the Milne Ice Shelf back down to the Atlantic off the coast of New Jersey. Where current technology and rumored future technology leave off, Brown’s imagination provides extensions. At one point, the mysterious soldiers fire bullets made of ice at Sexton and her companions, a weapon we’ve seen before in science fiction works like Asimov’s The Caves of Steel.

Woven into the thriller thread is the old debate between secrecy and openness. In Deception Point, a confrontation emerges between the head of NASA and the intelligence community. On one side, Brown’s intelligence community leaders bemoan the aid their enemies gain from the release of scientific information; on the other side, the NASA administrator and his supporters parry with the confidence-building effects of sharing scientific knowledge with “enemies.” This debate is an eternal one and has raged between real-life scientific and military communities for as long as both have existed.

Art and Architecture

Brown’s most recent novel, The Da Vinci Code, officially took him to stardom. It features Robert Langdon in a new adventure that starts with a late-night request from the French judicial police to come to the Louvre, where he’s presented with the naked corpse of a famous curator, Jacques Saunière. Langdon was to have met the curator for the first time that evening, had Saunière kept the appointment his secretary had so mysteriously made shortly before his death. Saunière’s murder introduces the novel; the hunt that ensues leads us on an eclectic tour of art and architecture across France and the United Kingdom, with Langdon and Saunière’s estranged granddaughter, Sophie Neveu, struggling to stay a jump ahead of both the police and the murderer, who is seeking the mysterious keystone of the Priory of Sion, a secret society of supposedly great antiquity.

A bit of technology whets our appetites, but none of it is as exciting as hypersonic jets or antimatter bombs. In The Da Vinci Code, the technology comes mostly from the dark world of intelligence and espionage, plus an entertaining mixture of mathematical and linguistic puzzles. In addition to the geeky stuff, there’s the wonderful description of important works of art and architecture — topics about which our community is generally unevenly educated. Rather than spoil the mystery for those of you who haven’t read the book yet, I’ll leave it at that. Brown asserts in the preface that the Priory of Sion is an ancient, real organization. The available information confirms that organizations with that name have existed at various times throughout history, although the variance between the book’s statements about the Priory of Sion and those found elsewhere is rather large. This is within the rights of a work of fiction, of course, but the claims have been widely attacked as a hoax. Be that as it may, Brown stirs up a mélange of entertaining facts and factoids, producing from it a tasty and engaging book.

Influential works

Author Title Year of Original Publication
Michael Crichton The Andromeda Strain 1969
Michael Crichton The Terminal Man 1972
Dan Brown Digital Fortress 1998
Tom Standage The Victorian Internet 1998
Dan Brown Angels & Demons 2000
Dan Brown Deception Point 2001
Dan Brown The Da Vinci Code 2003


Dan Brown’s stories feature a charming optimism. What in each book seems at first to be a vast conspiracy hatched by massive dark forces struggling to overwhelm the disorganized and mutually mistrustful powers of good eventually turns out to be the work of a single twisted individual who has cleverly manipulated complex systems to his own ends. Invariably, a few heroic individuals, with luck and pluck, manage to thwart and ultimately unmask the malefactor. As each novel ends, the love interests stroll off to their richly earned rewards, and the world returns to bumbling normalcy.

Above all else, Brown’s work somehow feels realistic in its treatment of technology — it’s there, it can sometimes be confusing, it changes things in unexpected ways, but in the end, the world continues to be more familiar than alien.

Read the original …

(This article appeared originally in IEEE Security & Privacy in the March/April 2004 issue. This is substantially the same text, with some minor formatting changes to take advantage of the power of the online presentation plus a few minor wordsmithing tweaks.)

Here’s a PDF (article-08-final) of the original article, courtesy of IEEE Security & Privacy.

Die Gedanken Sind Frei

[originally published January 2004]

Security and privacy are twin social goods that exist in perpetual tension: our society has debated the trade-offs between them ever since the first days of social organization. Over the ages, the border between security and privacy has moved back and forth as first one side and then the other made bold advances, impelled by developments in ideas, economics, technology, and warfare. At present, privacy appears to be in retreat under the threat of terrorism; it seems at times as if we ourselves are destroying the very freedom that terrorists find so threatening.

In this issue, we’ll look at some radical views of privacy’s future through the eyes of several influential science fiction writers. In The Light of Other Days and The Transparent Society, we see two radical visions of a world in which privacy as we know it has entirely ceased to be. Unlike George Orwell’s 1984, in which despotism armed with two-way television eradicates privacy, these books describe privacy falling victim to technological innovation.

Privacy Is Just an Illusion

In The Light of Other Days, Arthur C. Clarke and Stephen Baxter explore the implications of wormholes, tunnel-like connections between two regions of space-time. Starting from speculations grounded in comparatively recent research in theoretical physics, the authors create a world in which the wealthy and powerful megalomaniac Hiram Patterson sponsors the development of a “Casimir Engine” to produce the negative energy needed to stabilize a wormhole. From this development comes the WormCam, a technology that lets people capture images from anywhere in the world—even across the universe.

With the WormCam, Clarke and Baxter envision a world in which anyone can observe anything in real time, thus creating the permanent possibility that one or more unseen witnesses could observe any event. The notion that an event is private to its participants does not exist. Clarke and Baxter move on to explore additional implications: when Bobby, one of Hiram’s sons, challenges his physicist brother David to explain the WormCam, they realize that it can span not only space but also time. Privacy is henceforward an illusion; the only people who ever had it died before the WormCam’s invention.

In another sequence of episodes, we discover that the total absence of privacy doesn’t mean that truth rules and the miscarriage of justice is now a thing of the past. Driven by animosity over Kate Manzoni’s professional activities as a reporter and her personal involvement with his son Bobby, the megalomaniacal Hiram Patterson manipulates the justice system to frame her.

How would people react to the loss of all possible privacy? The book cleverly shows a range of responses, just the sort of thing a complex society populated by creative people might develop in the face of such a stimulus. Many people accept the loss of privacy fatalistically and go on with their lives as if nothing had happened. Others experiment with radical challenges to accepted mores, for example, by becoming public nudists. Yet others counter the WormCam by shrouding themselves in black robes and meeting in darkened rooms where they communicate solely via gestures passed from hand to hand by touch. In this fashion, they defeat the WormCam, or at least hold it at bay, by depriving it of photons, the only medium it can detect and transmit.

Most writers would be content to stop here, but Clarke and Baxter explore two more elements, each interesting in its own right. One concept is technology that connects information systems directly to the human nervous system. At first, its developers seek the ultimate in virtual reality — not an unattractive vision. However, having enabled individuals to commune with computers, they then extend this ability to let people interconnect their nervous systems with others. The authors portray this as alien and frightening — ultimately, a Borg-like mind begins to emerge. This idea is not original to Clarke and Baxter, nor is it carried off particularly well, but it’s nonetheless engaging, like the rest of the book.

The authors’ other conceptual vision is historical DNA mining. One character programs a computer system to follow trails of mitochondrial DNA back from child to parent to grandparent to great-grandparent and beyond, thus establishing a contextual path back through history. This concept is quite powerful, and the authors do a good job of imagining the unraveling of evolution as explorers follow their ancestors back to bacteria in the primordial ooze. Some very clever twists emerge from this theme, but we’ll draw the curtain to preserve the plot from spoilage.

Finally, what SF story would be complete without a giant asteroid approaching and threatening to end all life on Earth? I don’t know how Clarke and Baxter managed to shoehorn so much potboiler material into 300-plus pages without contracting a case of terminal triteness, but they did. What carries the book, however, is the brilliance of the conceptual visions, not the quality of writing, plotting, or dialogue.


By contrast, David Brin’s The Transparent Society is a relatively staid collection of nonfiction essays exploring the challenges to privacy — or the notions of it — implicit in emerging technological trends. Brin is chiefly known in SF circles as the prolific author of hard SF novels such as Sundiver, Startide Rising, and The Uplift War. He’s also a deeper thinker, though, as The Postman exemplifies.

The premise that Brin develops in The Transparent Society is that modern technology — from miniaturized surveillance cameras to data mining — has already eliminated our naive notions of privacy. The question, Brin argues, is not whether we’ll have privacy in the future, but under what terms its elimination will proceed. Before you deny his assertion, reflect on your ability to use Google to search for people you know or are about to meet. Think about the burgeoning use of video-surveillance technology by both police agencies and corporations. Brin elaborates two lines of argument in urging action to establish new ground rules for the management of information about people.

Brin’s first line of argument is that privacy as we conceive it today is a relatively recent phenomenon, dating from the last 200 years or so. Before that, he contends, people lived primarily in small groups within which very little could be kept from the eyes and ears of the community at large. Although this topic bears further exploration by scholars with deeper historical expertise, his point about the nature of privacy is certainly an important one. What exactly is privacy? Is it control of who can see and hear us in various (maybe even embarrassing or delicate) situations?

Brin’s second line of argument is subtler. He notes that privacy is already a thing of the past: all that remains is to negotiate the terms under which we live without it. His point here is more substantial because it addresses the fundamental issues of openness and control of information that we deal with today. Technological advances cannot be undone, so the question becomes whether the person looking at images of you as you walk down the street is a friend or neighbor, or the police. Here’s where the argument gets most sophisticated: “Make the cameras available to all so that anyone and everyone can look at their images,” he says. This ensures that information is not gathered in secret and used to extort power. If we expose everything we do to everyone, then greater tolerance will result and no one need fear abuse.

Back in the bad old days, homosexuality was reportedly a disqualification for a security clearance — it was assumed to be a dirty secret and thus exposed you to blackmail. Today, with the homosexual community increasingly out of the closet, does such a restriction remain? Extend this notion further and you have Brin’s argument — a society in which there is no privacy is one that eliminates blackmail.

Although compelling, this argument is somewhat naive. Marijuana consumption, for example, exposes those who indulge in it to criminal penalties in most parts of the world, but it still seems widely practiced. One of the more pragmatic ways that our society has developed for dealing with divergent views is to use the veil of privacy as a fig leaf. We pretend things are a certain way and encourage a willful ignorance of contrary evidence. “Don’t ask, don’t tell,” is this approach’s catchphrase. It lets society craft compromises that avoid a strict black-and-white resolution, even though the excluded middle exists and is essential to our peaceful coexistence.

Brin’s contention, Pollyanna that he is, is that the only way to survive the end of privacy will be to increase transparency, which will ultimately drive us toward greater tolerance. The alternative, he asserts, is to cede control of information to some powerful elite, whether government or corporate, that will necessarily tend toward corruption and abuse. The world that he suggests will result if we don’t insist on openness is much like Orwell’s 1984. The key question is whether openness and transparency will actually result in greater tolerance or if instead we’ll inherit a tyranny of the majority. Where does tolerance come from, anyway?


In the worlds these authors paint, we see some possible outcomes to the end of privacy as we currently imagine it. Clarke and Baxter offer the most evocative exploration of the implications of a total loss of privacy, although to do it, they had to assume a tremendous amount of physics not yet in evidence. Brin’s work makes the point that the future contemplated in The Light of Other Days might not be all that far off. In both cases, the only thing that remains private — unexamined by others and therefore free of actual or potential social constraint — is thought: what goes on between our own ears. An old German poem entitled “Die Gedanken Sind Frei,” or “Thoughts Are Free,” reportedly dates back to the late 18th century. An English translation of the poem that achieved minor success as a popular song includes the assertions, “No scholar can map them,” and “No hunter can trap them.” It goes on optimistically to warn that thought threatens despotism, with the lines,

And if tyrants take me
And throw me in prison
My thoughts will burst free,
Like blossoms in season.
Foundations will crumble,
The structure will tumble,
And free men will cry:
Die Gedanken sind frei!

I’ll leave you with this final question: if we can’t share our thoughts, does it matter if they’re free?

Influential Works

Author | Title | Year of Original Publication
George Orwell | 1984 | 1949
Damon Knight | A for Anything | 1959
David Brin | The Transparent Society | 1998 (excerpted in Wired in 1996)
Arthur C. Clarke and Stephen Baxter | The Light of Other Days | 2000


The first Biblio Tech article (“AI Bites Man,” vol. 1, no. 1, 2003, pp. 63–66) discussed Neal Stephenson’s The Diamond Age and described the plot of a story whose title and author I couldn’t retrieve. In the intervening year, inquiries among a variety of friends and SF experts and research via Internet resources have produced an answer. The story is A for Anything by Damon Knight, originally published in 1959 and possibly the only novel of Knight’s still in print today.

Read the original …

(This article appeared originally in IEEE Security & Privacy in the January/February 2004 issue. This is substantially the same text, with some minor formatting changes to take advantage of the power of the online presentation plus a few minor wordsmithing tweaks.)

Here’s a PDF (article-07-final) of the original article, courtesy of IEEE Security & Privacy.