Hacking The Best-Seller List

[originally published March 2004]

In the same way that Windows introduced the masses to mice and graphical user interfaces without having invented them, Dan Brown’s books explore for the general public some important themes in security and privacy and their sensitivity to technological change. These are themes we’ve usually seen treated only in the more rarefied zone of hard science fiction. This installment of Biblio Tech departs from the normal pattern of examining more obscure, idea-driven books and stories to focus on the works of a contemporary best-selling author. This departure is unusual because neither this department nor this magazine is part of the star-making machinery behind the popular book. By choosing to look at current popular fiction, we run the risk of discovering later that we should have delved deeper. Nevertheless, these works are going to be broadly influential, so let’s look at them.

Blending Popular Fiction With Science Fiction

Each of Dan Brown’s four novels — Digital Fortress, Angels & Demons, Deception Point, and The Da Vinci Code — starts off with a murder. In each case, the victim is an innocent whose death looms large in the plot of the thriller that follows, although the connection is not clear until later. We lose a programmer, a particle physicist, a geologist studying the Arctic, and a curator at the Louvre, all to murders that shock with their cryptic brutality.

These books have additional parallels, starting with their heroes and heroines. All the main characters are intellectuals, whether academics or intelligence analysts. This structure lets the books be scholarly treasure hunts interlaced with didactic expositions on topics as disparate as religion, art history, architecture, information management, cryptography, and privacy.

Unbreakable Cipher

Digital Fortress starts with the murder of a Japanese programmer whose masterpiece is an unbreakable encryption program. The programmer publishes its source code on his Web site, but encrypts the tarball with the new algorithm. He’s in the midst of auctioning the key to the highest bidder when he’s killed. David Becker, a linguistics professor, and his fiancée Susan Fletcher, the chief cryptographer at the National Security Agency (NSA, called “No Such Agency” by some wags because so much of its funding is part of the US government’s “black” budget), must race against time to prevent the decryption key from being widely released.

The code has two layers: a relatively tough outer shell that’s susceptible to brute-force attack, and an inner layer that renders the contents difficult to recognize in natural language. Brown’s explanations of the code’s structural characteristics wouldn’t pass muster in the cryptographic mathematics community, but they’re sufficiently plausible for the rest of us to sustain the story. Brown introduces the concept of unbreakable codes in a long discussion between Fletcher and her boss, the NSA’s deputy director. In the process of the discussion, he also introduces some of the current debates about encryption and public policy, to which he does tolerable justice.
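Brown’s fictional algorithm aside, the attack he describes — exhaustively trying keys and recognizing success when the output reads as natural language — is a standard cryptanalytic idea. Here’s a minimal, purely illustrative Python sketch using a toy single-byte XOR cipher (nothing like the book’s system or any real NSA tooling); the scoring function is a deliberately crude stand-in for real language models:

```python
# Toy brute-force attack: try every key and keep the candidate
# plaintext that looks most like natural language.

def xor_bytes(data: bytes, key: int) -> bytes:
    """Encrypt/decrypt by XORing every byte with a single-byte key."""
    return bytes(b ^ key for b in data)

def english_score(text: bytes) -> float:
    """Crude plausibility test: fraction of bytes that are spaces or
    fall in the ASCII letter range. Real attacks would use letter
    frequencies or dictionary words instead."""
    good = sum(1 for b in text if b == 0x20 or 0x41 <= b <= 0x7A)
    return good / len(text)

def brute_force(ciphertext: bytes) -> tuple[int, bytes]:
    """Try all 256 keys; return the key whose output scores highest."""
    best = max(range(256),
               key=lambda k: english_score(xor_bytes(ciphertext, k)))
    return best, xor_bytes(ciphertext, best)

secret = b"attack at dawn"
ct = xor_bytes(secret, 0x5A)   # "encrypt" with key 0x5A
key, pt = brute_force(ct)      # recovers key 0x5A and the plaintext
```

The principle scales to real ciphers; what protects them is the size of the keyspace, which is why key length sits at the center of the policy debates the novel touches on.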

Although he introduces the Electronic Frontier Foundation and presents a somewhat balanced overview of the debate between civil libertarians and law enforcement advocates on the topic of strong encryption, he might have gone further to cover a little more of the debate’s history. As Tom Standage did in The Victorian Internet, Brown might have included some historical perspective from the early days of the telegraph, when many national governments forbade the use of codes and encryption. Covering some of these topics, however, would have meant introducing large-scale systems engineering considerations, something perhaps less than compatible with a popular novel.

Scientifically Informed Fiction

Angels & Demons provides our first encounter with Robert Langdon, the Harvard symbologist whose adventures in The Da Vinci Code have dominated the best-seller list for nearly a year. This book is the closest Brown comes to what the science-fiction community would call “hard” science fiction. One of the first scenes introduces a hypersonic transport, which the director of CERN sends to bring Langdon from Boston to Geneva. The victim whose murder Langdon has been summoned to help investigate is a physicist whose research has produced quantities of antimatter — quantities sufficient to attract a terrorist who wants to use it to destroy the Vatican in Rome. Brown’s introduction of these elements qualifies Angels & Demons as science fiction, although the focus on art and architecture helps the book appeal to a broader audience.

In his work, Brown often takes a line similar to that in Michael Crichton’s The Andromeda Strain and The Terminal Man: scientifically informed fiction rather than classical science fiction. Both authors’ books differ from classical science fiction in two ways. First, the technical artifacts are based on the contemporary state of the art rather than on plausible or possible items. Second, the fictional world’s social framework doesn’t differ from our contemporary framework. The technology establishes or supports the conflict—it doesn’t create a different infrastructure for the world. For these two reasons, these books are considered to be less ambitious technically and less deserving of the term “science fiction” than works by Isaac Asimov, for example.

Political Intrigue

Deception Point introduces a protagonist named Rachel Sexton. Rachel, like the typical thriller heroine, is the daughter of a senator running for president. Rachel is also a member of the senior staff at the National Reconnaissance Office (NRO, “We Own The Night”), which is the agency that builds and operates US surveillance satellites and other “national technical means.” In this story and in Digital Fortress, Brown demonstrates the fruits of his research into the less well-known but not entirely secret corners of the intelligence community. He weaves together his encyclopedic knowledge of current military and space technology with rumored programs, including the supposed Aurora spy plane that some speculate succeeded the famed SR-71 Blackbird in the 1980s as the world’s fastest air-breathing plane.

Deception Point features the standard race against time as Rachel and oceanographer Michael Tolland hurry to unravel the riddle surrounding a mysterious meteorite found deep underneath the Arctic icecap. (The President recruits Tolland and three other prominent scientists to assess the meteorite’s authenticity.) With a sequence of hair-raising escapes from death straight out of The Perils of Pauline, sinister forces working for a mysterious person identified only as “the controller” pursue characters from NRO headquarters to the Milne Ice Shelf and back down to the Atlantic off the coast of New Jersey. Where current technology and rumored future technology leave off, Brown’s imagination provides extensions. At one point, the mysterious soldiers fire bullets made of ice at Sexton and her companions, a weapon we’ve seen before in science fiction works like Asimov’s Caves of Steel.

Woven into the thriller thread is the old debate between secrecy and openness. In Deception Point, a confrontation emerges between the head of NASA and the intelligence community. On one side, Brown’s intelligence community leaders bemoan the aid their enemies gain from the release of scientific information; on the other side, the NASA administrator and his supporters parry with the confidence-building effects of sharing scientific knowledge with “enemies.” This debate is an eternal one and has raged between real-life scientific and military communities for as long as both have existed.

Art and Architecture

Brown’s most recent novel, The Da Vinci Code, officially took him to stardom. It features Robert Langdon in a new adventure that starts with a late-night request from the French judicial police to come to the Louvre, where he’s presented with the naked corpse of a famous curator, Jacques Saunière. Langdon was to have met the curator for the first time that evening, had Saunière kept the appointment his secretary had so mysteriously made shortly before his death. Saunière’s murder introduces the novel; the hunt that ensues leads us on an eclectic tour of art and architecture across France and the United Kingdom, with Langdon and Saunière’s estranged granddaughter, Sophie Neveu, struggling to stay a jump ahead of both the police and the murderer, who is seeking the mysterious keystone of the Priory of Sion, a secret society of supposedly great antiquity.

A bit of technology whets our appetites, but none of it is as exciting as hypersonic jets or antimatter bombs. In The Da Vinci Code, the technology comes mostly from the dark world of intelligence and espionage, plus an entertaining mixture of mathematics and linguistic puzzles. In addition to the geeky stuff, there’s the wonderful description of important works of art and architecture — topics about which our community is unevenly educated. Rather than spoil the mystery for those of you who haven’t read the book yet, I’ll leave it at that. Brown asserts in the preface that the Priory of Sion is an ancient, real organization. The available information confirms that there have been organizations with that name at various times throughout history, although the variance between the statements about the Priory of Sion in the book and elsewhere is rather large. This is within the rights of a work of fiction, of course, but the claims have been widely attacked as a hoax. Be that as it may, Brown stirs up a mélange of entertaining facts and factoids, producing from it a tasty book.

Influential Works

| Author | Title | Year of Original Publication |
| --- | --- | --- |
| Michael Crichton | The Andromeda Strain | 1969 |
| Michael Crichton | The Terminal Man | 1972 |
| Dan Brown | Digital Fortress | 1998 |
| Tom Standage | The Victorian Internet | 1999 |
| Dan Brown | Angels & Demons | 2000 |
| Dan Brown | Deception Point | 2001 |
| Dan Brown | The Da Vinci Code | 2003 |


Dan Brown’s stories feature a charming optimism. What in each book seems at first to be a vast conspiracy hatched by massive dark forces struggling to overwhelm the disorganized and mutually mistrustful powers of good eventually turns out to be a single twisted individual who has cleverly manipulated complex systems to his own ends. Invariably, a few heroic individuals, with luck and pluck, manage to thwart and ultimately unmask the malefactor. As each novel ends, the love interests stroll off to their richly earned rewards, and the world returns to bumbling normalcy.

Above all else, Brown’s work somehow feels realistic in its treatment of technology — it’s there, it can sometimes be confusing, it changes things in unexpected ways, but in the end, the world continues to be more familiar than alien.

Read the original …

(This article appeared originally in IEEE Security & Privacy in the March/April 2004 issue. This is substantially the same text, with some minor formatting changes to take advantage of the power of the online presentation plus a few minor wordsmithing tweaks.)

Here’s a PDF (article-08-final) of the original article, courtesy of IEEE Security & Privacy.

Die Gedanken Sind Frei

[originally published January 2004]

Security and privacy are twin social goods that exist in perpetual tension: our society has debated the trade-offs between them ever since the first days of social organization. Over the ages, the border between security and privacy has moved back and forth as first one side and then the other made bold steps forward, impelled by developments in ideas, economics, technology, and warfare. At present, privacy appears to be in retreat under the threat of terrorism; it seems at times as if we ourselves are destroying the very freedom that terrorists find so threatening.

In this issue, we’ll look at some radical views of privacy’s future through the eyes of several influential science fiction writers. In The Light of Other Days and The Transparent Society, we see two radical visions of a world in which privacy as we know it has entirely ceased to be. Unlike George Orwell’s 1984, in which despotism armed with two-way television eradicates privacy, these books describe privacy falling victim to technological innovation.

Privacy Is Just an Illusion

In The Light of Other Days, Arthur C. Clarke and Stephen Baxter explore the implications of wormholes, tunnel-like connections between two regions of space-time. Starting with speculations based in comparatively current research in theoretical physics, the authors create a world in which the wealthy and powerful megalomaniac Hiram Patterson sponsors the development of a “Casimir Engine” to produce the negative energy suitable to stabilize a wormhole. From this development comes the WormCam, a technology that lets people capture images from anywhere in the world—even across the universe.

With the WormCam, Clarke and Baxter envision a world in which anyone can observe anything in real time, thus creating the permanent possibility that one or more unseen witnesses could observe any event. The notion that an event is private to its participants does not exist. Clarke and Baxter move on to explore additional implications: when Bobby, one of Hiram’s sons, challenges his physicist brother David to explain the WormCam, they realize that not only can it span space, but time as well. Privacy is henceforward an illusion; the only people who ever had it died before the WormCam’s invention.

In another sequence of episodes, we discover that the total absence of privacy doesn’t mean that truth rules and the miscarriage of justice is now a thing of the past. The megalomaniacal Hiram Patterson manipulates the justice system to frame Kate Manzoni, driven by animosity about her professional activities as a reporter and her personal involvement with his son Bobby.

How would people react to the loss of all possible privacy? The book cleverly shows a range of responses, just the sort of thing a complex society populated by creative people might develop in the face of such a stimulus. Many people accept the loss of privacy fatalistically and go on with their lives as if nothing had happened. Others experiment with radical challenges to accepted mores, for example, by becoming public nudists. Yet others counter the WormCam by shrouding themselves in black robes and meeting in darkened rooms where they communicate solely via gestures passed from hand to hand by touch. In this fashion, they defeat the WormCam, or at least hold it at bay, by depriving it of photons, the only medium it can detect and transmit.

Most writers would be content to stop here, but Clarke and Baxter explore two more elements, each interesting in its own right. One concept is technology that connects information systems directly to the human nervous system. At first, its developers seek the ultimate in virtual reality — not an unattractive vision. However, having enabled individuals to commune with computers, they then extend this ability to let people interconnect their nervous systems with others. The authors portray this as alien and frightening — ultimately, a Borg-like mind begins to emerge. This idea is not original to Clarke and Baxter, nor is it carried off particularly well, but it’s nonetheless engaging, like the rest of the book.

The authors’ other conceptual vision is historical DNA mining. One character programs a computer system to follow trails of mitochondrial DNA back from child to parent to grandparent to great-grandparent and beyond, thus establishing a contextual path back through history. This concept is quite powerful, and the authors do a good job of imagining the unraveling of evolution as explorers follow their ancestors back to bacteria in the primordial ooze. Some very clever twists emerge from this theme, but we’ll draw the curtain to preserve the plot from spoilage.

Finally, what SF story would be complete without a giant asteroid approaching and threatening to end all life on Earth? I don’t know how Clarke and Baxter managed to shoehorn so much potboiler material into 300-plus pages without contracting a case of terminal triteness, but they did. What carries the book, however, is the brilliance of the conceptual visions, not the quality of writing, plotting, or dialogue.


By contrast, David Brin’s The Transparent Society is a relatively staid collection of nonfiction essays exploring the challenges to privacy — or the notions of it — implicit in emerging technological trends. Brin is chiefly known in SF circles as the prolific author of hard SF novels such as Sundiver, Startide Rising, and The Uplift War. He’s also a deeper thinker, though, as The Postman exemplifies.

The premise that Brin develops in The Transparent Society is that modern technology — from miniaturized surveillance cameras to data mining — has already eliminated our naive notions of privacy. The question, Brin argues, is not whether we’ll have privacy in the future, but under what terms its elimination will proceed. Before you deny his assertion, reflect on your ability to use Google to search for people you know or are about to meet. Think about the burgeoning use of video-surveillance technology by both police agencies and corporations. Brin elaborates two lines of argument in urging action to establish new ground rules for the management of information about people.

Brin’s first line of argument is that privacy as we conceive it today is a relatively recent phenomenon, dating from the last 200 years or so. Before that, he contends, people lived primarily in small groups within which very little could be kept from the eyes and ears of the community at large. Although this topic bears further exploration by historians, his point about the nature of privacy is an important one. What exactly is privacy? Is it control of who can see and hear us in various (maybe even embarrassing or delicate) situations?

Brin’s second line of argument is subtler. He notes that privacy is already a thing of the past: all that remains is to negotiate the terms under which we live without it. His point here is more substantial because it addresses the fundamental issues of openness and control of information that we deal with today. Technological advances cannot be undone: is the person looking at images of you as you walk down the street a friend or neighbor, or is it the police? Here’s where the argument gets the most sophisticated: “Make the cameras available to all so that anyone and everyone can look at their images,” he says. This will ensure that information is not gathered in secrecy and used to extort power. If we expose everything we do to everyone, then greater tolerance will result and no one need fear abuse.

Back in the bad old days, homosexuality was reportedly a disqualification for a security clearance — it was assumed to be a dirty secret and thus exposed you to blackmail. Today, with the homosexual community increasingly out of the closet, does such a restriction still remain? Extend this notion further and you have Brin’s argument — a society in which there is no privacy is one that eliminates blackmail.

Although compelling, this argument is somewhat naive. Marijuana consumption, for example, exposes those who indulge in it to criminal penalties in most parts of the world, but it still seems widely practiced. One of the more pragmatic ways that our society has developed for dealing with divergent views is to use the veil of privacy as a fig leaf. We pretend things are a certain way and encourage a willful ignorance of contrary evidence. “Don’t ask, don’t tell,” is this approach’s catchphrase. It lets society craft compromises that avoid a strict black-and-white resolution, even though the excluded middle exists and is essential to our peaceful coexistence.

Brin’s contention, Pollyanna that he is, is that the only way to survive the end of privacy will be to increase transparency, which will ultimately drive us toward greater tolerance. The alternative, he asserts, is to cede control of information to some powerful elite, whether government or corporate, that will necessarily tend toward corruption and abuse. The world that he suggests will result if we don’t insist on openness is much like Orwell’s 1984. The key question is whether openness and transparency will actually result in greater tolerance or if instead we’ll inherit a tyranny of the majority. Where does tolerance come from, anyway?


In the worlds these authors paint, we see some possible outcomes to the end of privacy as we currently imagine it. Clarke and Baxter make the most evocative exploration of the implications of a total loss of privacy, although to do it, they had to assume a tremendous amount of physics not yet in evidence. Brin’s work makes the point that the future contemplated in The Light of Other Days might not be all that far off. In both cases, the only thing that remains private — unexamined by others and therefore free of actual or potential social constraint — is thought: what goes on between our own ears. An old German poem entitled “Die Gedanken Sind Frei,” or “Thoughts Are Free,” reportedly dates back to the late 18th century. An English translation of the poem that achieved minor success as a popular song includes the assertions, “No scholar can map them,” and “No hunter can trap them.” It goes on optimistically to warn that thought threatens despotism, with the lines,

And if tyrants take me
And throw me in prison
My thoughts will burst free,
Like blossoms in season.
Foundations will crumble,
The structure will tumble,
And free men will cry:
Die Gedanken sind frei!

I’ll leave you with this final question: if we can’t share our thoughts, does it matter if they’re free?

Influential Works

| Author | Title | Year of Original Publication |
| --- | --- | --- |
| George Orwell | 1984 | 1949 |
| Damon Knight | A for Anything | 1959 |
| David Brin | The Transparent Society | 1998 (excerpted in Wired in 1996) |
| Arthur C. Clarke and Stephen Baxter | The Light of Other Days | 2000 |


The first Biblio Tech article (“AI Bites Man,” vol. 1, no. 1, 2003, pp. 63–66) discussed Neal Stephenson’s The Diamond Age and described the plot of a story whose title and author I couldn’t retrieve. In the intervening year, inquiry among a variety of friends and SF experts, plus research via Internet resources, has produced an answer. The story is A for Anything by Damon Knight, originally published in 1959 and possibly the only novel of Knight’s still in print today.

Read the original …

(This article appeared originally in IEEE Security & Privacy in the January/February 2004 issue. This is substantially the same text, with some minor formatting changes to take advantage of the power of the online presentation plus a few minor wordsmithing tweaks.)

Here’s a PDF (article-07-final) of the original article, courtesy of IEEE Security & Privacy.

The Girl With No Eyes

[originally published July 2003]

William Gibson may regret coining the term cyberspace in his 1984 novel Neuromancer. He earned acclaim for the world of the Sprawl, which he created in the short story Johnny Mnemonic. But it was one well-turned phrase,

jacked into a custom cyberspace deck that projected his disembodied consciousness into the consensual hallucination that was the matrix,

that helped win him the science-fiction triple crown: the Hugo, the Nebula, and the Philip K. Dick awards. Now he can’t get away from cyberspace, like an actor typecast by a too-successful performance in a role he may no longer love.

In this installment of Biblio Tech, we return to cyberpunk, which was very hot in the 1980s and retained considerable power throughout the 1990s. In the first decade of the 21st century, cyberpunk conjures much less, so this is an excellent time to give it a thoughtful look. Specifically, we’ll explore a particular theme of Gibson’s — namely, what distinguishes the human from the machine.

Human + Machine = ?

In computer science, the fascination with using technology for augmentation, particularly of the human intellect, is one of the oldest drivers in the field. Doug Engelbart introduced the term in the 1960s, but he credits Vannevar Bush’s seminal paper As We May Think, which The Atlantic Monthly published in 1945, for the inspiration. Engelbart’s vision has proved incredibly influential, producing the mouse, the graphical user interface, hyperlinks, and online collaboration, among other things. Bush’s technological foresight may have been flawed in the details — our modern information systems are not based on microfilm, for example — but in the broadest sense, he got much of it right. He properly identified that information storage and retrieval would be one of the most important challenges facing those we now call knowledge workers.

Engelbart made his life’s work the solution of the augmentation problem — namely, how to make it easier for people to actually use mechanical aids to increase their capabilities. In the introduction to his 1962 report to the US Air Force on his research in this area (www.bootstrap.org/augment/AUGMENT/133182-0.html), he wrote:

By “augmenting human intellect” we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this respect is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insoluble. And by “complex situations” we include the professional problems of diplomats, executives, social scientists, life scientists, physical scientists, attorneys, designers — whether the problem situation exists for twenty minutes or twenty years. We do not speak of isolated clever tricks that help in particular situations. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human “feel for a situation” usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids.

In his work on augmentation, Engelbart invoked important examples to show that augmentation needn’t be simple amplification — as, for instance, a hammer does for our fist or a megaphone does for our voice — but rather, it could be abstraction and extension.

Threading through Gibson’s “Sprawl” stories (Johnny Mnemonic, Neuromancer, Count Zero, and Mona Lisa Overdrive) and later efforts (Virtual Light, Idoru, and All Tomorrow’s Parties) is an exploration of the boundaries and distinctions between humans and machines. Like a child with a box of mixed Lego kits, Gibson experiments with different combinations of pieces, creating monsters and angels and then exploring the potential relationships among them from different directions. From the concept of a person, he probes the implications of using technology for augmentation and the threshold that divides humans from machine. From the idea of the machine, he speculates on what added characteristics could turn an artificial intelligence (AI) into a human.

In Johnny Mnemonic, we encounter several boundary-testing experiments. In the story’s gigantic megalopolis, resulting from the fusion of the cities between Boston and Atlanta into the Boston Atlanta Metropolitan Area, or BAMA, human beings augmented with surgical implants are the norm. The lowest level of augmentation is the jack, the electro-optical connector that lets people connect their nervous systems directly to computers or vice versa.

The higher levels of augmentation we see in Gibson’s work seem to stem from an extrapolation of trends and visions in human prostheses. Although we take baby steps today toward mechanical ears and artificial eyes to help the deaf and the blind, consider a future in which we have perfected the ability to connect man-made devices to our nervous systems. How many people, given the opportunity, would choose to replace some imperfect pieces of their anatomies, not because they failed but just to achieve superior performance?

Some augmentation might not be visible or even operationally valuable, as in the case of Eddie Bax, the Johnny Mnemonic of Gibson’s title. Eddie has a memory device implanted in his head that lets him store data on his clients’ behalf, whether for safekeeping or for smuggling. The Lo Teks, whom we meet in Johnny Mnemonic, tend toward less functional augmentation — essentially, punk modifications like animal teeth, ears, and other changes made for shock value rather than performance enhancement. Gibson’s interest in tattooing and body piercing, also aspects of punk culture, is in evidence in various works, notably Virtual Light. It’s a small conceptual step from a grotesque tattoo to a dog’s ears grafted on a character’s head.

She seems to be staring …

Other augmentation in Gibson’s work is deliberately, even shockingly, visible and extravagantly useful to the augmented person. Molly Millions is one such. She’s invested a fortune in surgical implants to turn herself into a lethal fighting machine, a fortune that she earned by practicing several unsavory professions, including the oldest one. In the tip of each finger is a retractable knife blade, implanted by one of the best of Chiba City’s “black clinics.” Her nervous system is enhanced, rendering her perceptions and reactions lightning fast. Finally, and most strikingly, the prosthetics replacing her eyes combine vision, see-in-the-dark sensors, and computer interfaces, all covered by chrome covers that look at first glance like high-tech reflective sunglasses. Gibson exploits the shock value of this self-mutilation: Molly has superior eyesight and looks incredibly cool with her mirrored eye covers, but wow!

In 1995, Robert Longo made Johnny Mnemonic into a movie starring Keanu Reeves. The movie wasn’t particularly successful, although it does have a very attractive star and several engaging elements. My personal beef with it is the rebalancing of the Molly/Eddie dynamic. In the short story, Molly is tough and lethal, whereas Eddie is a self-described “technical boy” whose one foray into crudeness flops until Molly rescues him. The Molly character that Gibson creates in the Sprawl novels has a lot of potential, and I earnestly hope that the rumored Neuromancer project doesn’t make the same mistake by submerging the killer queen again.

Gibson’s fascination with Molly is evidenced by the fact that, unique among the characters he creates for the Sprawl stories, she spans all of them. She’s young and ambitious and serves as the love interest of several other characters in the early stories. In later ones, though, she’s old and cynical, but just as deadly. Why is Molly so important to cyberpunk? Certainly her sexuality is important to the success of Gibson’s early writing, but is that all? I think not. Her integration of technological, albeit not intellectual, augmentation is total and permanent. Her partner in Neuromancer is Case, the console cowboy. His augmentation is intermittent; he’s only augmented when he’s jacked into the matrix. Other times, he’s merely human.

One of the most fascinating experiments of Gibson’s work with Molly and Case comes when he outfits Molly with a sim/stim rig that transmits all her sensory inputs to Case. Suddenly the partnership has Case’s integration with the matrix and Molly’s integration with the physical world.

Machine + Augmentation = ?

In Neuromancer Gibson’s focus is on the quest by a machine, an AI, to augment itself. Throughout the novel, we encounter the efforts of one AI to merge with another, something that the Turing Police are systematically, though incompetently, constituted to prevent. Woven through this is a hard-boiled adventure yarn whose plot twists and confusions would do credit to Raymond Chandler or Dashiell Hammett.

Gibson raises some interesting questions. In Neuromancer, we encounter an AI with Swiss citizenship:

“It owns itself?”

“Swiss citizen, but T-A own the basic software and the mainframe.”

“That’s a good one,” the construct said. “Like, I own your brain and what you know, but your thoughts have Swiss citizenship. Sure. Lotsa luck, AI.”

This is the crux of the question. When a true AI actually comes to be, whether by accident or design, what rights should it have? Who will protect these rights? What will be its attitudes toward the human race? Gibson is not the first to ponder this topic, of course, as we discussed in the first Biblio Tech, but he does seem to have articulated and explored many more different aspects of the question in fictional scenarios.

Machine + Human = ?

In Count Zero we meet another augmented person in the form of Josef Virek. He’s a man whose body’s failure has been slowed but not stopped by the continuous addition of machinery. The novel hints that the augmentation’s primary purpose is preserving Virek’s life, but it is also clear that Virek has gained a certain capacity for multitasking and has lost control over some of the manifestations of his persona in the process. This raises an interesting question: Is he still human? What does it mean to be human? Do we have to be a biological entity residing in a body? How much machinery can we add without sacrificing our humanity? Must these functions be provided biologically?

The Dixie Flatline, whom we meet in Neuromancer, is a ROM construct — a recording of the dead console cowboy McCoy Pauley’s personality and memories. At one point, the Flatline asks Case, a natural human, to destroy the ROM containing his personality, meaning the ROM construct has enough self-awareness to request death. This is a notion we encountered much earlier in Vernor Vinge’s True Names when, at the end, Erythrina records herself in a computer network’s data space. Explaining herself to her erstwhile but now uncertain ally, Mr. Slippery, she says,

When Bertrand Russell was very old, and probably as dotty as I am now, he talked of spreading his interests and attention out to the greater world and away from his own body, so that when the body died he would scarcely notice it, his whole consciousness would be so diluted through the outside world.

Lawyers have the term natural person to distinguish between corporations and people, because in a certain sense we have created corporations for the purpose of investing them with some of the rights and privileges of people. Perhaps we will be able to persuade an attorney with a theoretical bent to write about this for a future installment of Biblio Tech.

The ultimate reunion of the star-crossed lovers Bobby Newmark and Angela Mitchell in Mona Lisa Overdrive comes only after the deaths of their bodies and the transfer of their personalities into AIs destined to live in the Aleph’s context. In fact, it’s Virek’s quest to acquire the technology to permit that same transfer for himself that precipitates the entire sequence of events in Count Zero and Mona Lisa Overdrive, although Virek himself doesn’t survive the first episode.

As we’ve observed earlier, Gibson isn’t the first to have speculated on the use of AI as a framework for the preservation of the human (the soul?) after death, but he’s certainly the first to render it a casual assumption.

Love Not Human

Idoru’s thesis is that a human and an AI fall in love and decide to marry. The novel spends its time and energy keeping us engaged as we attempt to grasp this premise. The other characters in the story are occupied with various efforts to understand, thwart, or encourage the match.

Gibson is not the first to explore notions of emotional attachment between humans and AIs. Robert A. Heinlein established several close friendships between Mike and the humans most involved in setting up the Lunar revolution in his book, The Moon Is a Harsh Mistress. Mr. Slippery clearly maintains a personal loyalty to Erythrina even after her persona migrates permanently to cyberspace and she ceases to have a physical presence.

In these earlier stories, however, the authors maintained a clear distinction between the personalities living in the machines, whether they originated there or not, and natural humans. In Idoru, however, Gibson deliberately invokes aspects of love that we associate with bodies. Rei Toei, the artificial person, was originally constructed to be a performer. She is manifested as a holograph and appears as an attractive young woman. Rez, the human who wants to marry her, is a successful pop music performer with fan clubs on all continents, so his sudden obsession — his sudden crazy obsession — causes consternation among his friends, managers, and fans. His fascination with Rei Toei has an implicit carnal aspect that makes everyone squirm.

What precisely is marriage between a natural person and a virtual one? Gibson takes pains to make it clear that this is not the Platonic love between man and machine explored by Asimov, Heinlein, and others. This is the real thing. Unfortunately, Gibson walks to the brink but doesn’t jump. He leaves the consummation of the union unexamined at the end of Idoru, as is his right. But in the next installment of the story, All Tomorrow’s Parties, he cheats — when that scene opens, the two have parted company. Worse yet, by the end of that novel, he permanently eliminates the question by means of a deus-ex-machina maneuver that would be irritating if it weren’t such a sublime pun.

Why Cyberpunk?

What’s fascinating about the critical reception of Gibson’s writing is its focus on him as a literary stylist rather than as a speculator on the relationships between humans and their creations. Is this because the critics are largely littérateurs, primarily concerned with the world of words and uncomfortable with attempts to analyze the technological dimensions of Gibson’s work? Or is it because many of the ideas explored in his writing aren’t terribly new, as we’ve discussed in earlier articles?

Gibson’s success to date has been driven more by the punk than the cyber in his world. His artful creation of a jarring, dissonant dystopia is compelling; the technology is more of a veneer. Nonetheless, he has managed to touch on and speculate about a collection of important questions that we as technologists should think about. Gradually, we will develop the ability to integrate machine and man; in fact, we’re doing it already with work in prosthetics and artificial intelligence. Because the process will be gradual, we are in danger of letting it happen unexamined. Each incremental step will benefit someone somewhere, and we will manage to avoid thinking about the systemic implications until suddenly we’re in an alien world that might well resemble one of Gibson’s nightmares.

That said, it’s important to recognize that part of Gibson’s power as a writer is the power of the professional prestidigitator. His art is in misdirection, not magic. The worlds of Gibson’s writing are dystopic, with many foundations of our present world absent or disturbingly warped. Security comes from powerful allies, never from neutral institutions dedicated to maintaining the public good. Relationships that last are built on raw power, while balanced relationships are evanescent. This isn’t to say that comfortable homey things don’t exist in the Sprawl or in the Virtual Light world, but Gibson definitely makes sure we don’t see much of them.

Whenever we encounter children, as we do in several places, they are either street urchins living by their wits or sheltered flowers of the wealthy, as in the case of 13-year-old Kumiko Yanaka. We meet Kumiko in Mona Lisa Overdrive as she is being sent by private jet for safekeeping in London while her father, some sort of big shot in the Yakuza, sorts out some pending unpleasantness. Nothing about her life is what we would think of as normal. We see no school, we hear of no friends, but we do learn about her mother — albeit only her suicide — and Kumiko’s ambiguous feelings toward her father, whom she blames. Even when we encounter middle-class children, for instance the Tokyo Lo/Rez fanclubs in Idoru, we don’t see the prosaic day-to-day material of school and home that establishes context.

Developmental psychologists tell us that a child’s growth is characterized by an increasing ability to distinguish the self from others and from the world. Gibson’s writing, particularly in the Sprawl stories, explores breaking down that distinction between the self and the other. The console cowboy jacks into and merges with the matrix, being augmented and augmenting in turn. Sim/stim lets couch potatoes share the experiences of the stars, but it also lets Molly and Case achieve a new level of partnership. Wintermute seeks to merge with Neuromancer to create a new level of personality. Virek seeks to migrate his persona from his failing physical body to the immortal realm of the aleph. As quantum mechanics, via uncertainty, made hard little electrons into vague fuzzy presences, Gibson makes his people into fuzzy personas — not by making them vague and indistinct, but by blurring their boundaries. We keep coming back to the gist of his question: Just what is a person?

The people who occupy Gibson’s worlds are adrenaline junkies, criminals, mercenaries, and super celebrities, always living on the very edge. More than that, they are people who are completely foreign and, consequently, fatally fascinating to the vast bulk of his readers. I’m indebted to Paul Brians of Washington State University, who notes that “it is not surprising that he gained more of a following among academics than among the sort of people he depicted.”

One of the most fascinating speculations in both philosophy and computer science is over whether the human brain is a machine. If it is, then ultimately we can build a machine of equivalent complexity and capability and duplicate its every capacity, including creativity, imagination, vision, and boredom. If it is not, then some functional process in the brain, as yet not clearly demonstrated, must distinguish it from a computer. Nothing we know about the brain’s physical machinery so far suggests that it has any capability that can’t be duplicated with mechanisms. If that is right, what prevents us from creating an AI equivalent to a person? There might conceivably be some process in the brain that transcends mechanisms, some mystical facility that operates by means we don’t yet know or perhaps cannot ever understand. Or there might be some complexity threshold that we haven’t yet passed with our machines.

In All Tomorrow’s Parties, Gibson gives his personal answer to this question when the artificial person Rei Toei says to Rydell:

“This is human, I think,” she’d said when pressed. “This is the result of what you are, biochemically, being stressed in a particular way. This is wonderful. This is closed to me.”

Here we see Gibson’s failure as a theoretician. The distinction the idoru claims between AI and human rests on nothing that computer scientists and engineers who have considered the topic would find plausible. There are no biochemical processes that cannot be modeled or simulated using computers. This damp squib leaves us with the unsettling feeling that Gibson has dropped the ball. It’s at times like these that you realize Gibson belongs to the literary world, not the concept-mad world of science fiction, unlike his brethren Asimov, Heinlein, and Vinge.

This installment of Biblio Tech has been dedicated to the work of one person, William Gibson. More than that, however, the articles in this department to date have all been building toward this examination of Gibson’s work. This is fitting, given the influence that his work has exerted on the field of science fiction and the entertainment he has given so many of us. I hope these articles inspire you to read some of the important works we’ve examined and, more importantly, to think about some of the issues discussed. As engineers, computer scientists, and general technologists, we are among the best prepared to consider these topics and anticipate the implications of the technologies we are developing. We have an obligation to do so and to engage non-technical people in discussion.

Influential Works

Medium Author Title Year of Original Publication
Article Vannevar Bush As We May Think 1945
Book Philip K. Dick Do Androids Dream of Electric Sheep? 1968
Novella Vernor Vinge True Names 1981
Short Story William Gibson Johnny Mnemonic 1981
Film Ridley Scott Blade Runner 1982
Book William Gibson Neuromancer 1984
Book William Gibson Count Zero 1986
Book William Gibson Mona Lisa Overdrive 1988
Book William Gibson Virtual Light 1993
Film Robert Longo Johnny Mnemonic 1995
Book William Gibson Idoru 1996
Film Andy and Larry Wachowski The Matrix 1999
Book William Gibson All Tomorrow’s Parties 1999

Read the original …

(This article appeared originally in IEEE Security & Privacy in the July/August 2003 issue. This is substantially the same text, with some minor formatting changes to take advantage of the power of the online presentation plus a few minor wordsmithing tweaks. And the table has the original publication dates for the listed books, not the editions in print in 2003 when the article was published.)

Here’s a PDF (article-04-final) of the original article, courtesy of IEEE Security & Privacy.

Hey, Robot!

[originally published May 2003]

What area of research, development, and commercial activity owes more of its existence to the arts than robotics does? The word itself comes from an early 20th-century play; less than a decade later, an important film introduced an enduring fantasy concept of what robots look like. Shortly after that, but still before much significant technical research or development occurred in the field, science-fiction writers developed complex theories of robot behavior in stories that are still in print today.

In this installment of Biblio Tech, we’ll look at some of the arts that have shaped our notions of robots. We will see the deep roots these stories have in far earlier concepts that have little to do with engineering but everything to do with the human race’s fascination with creation.

R.U.R. (Rossum’s Universal Robots)

In 1920, Karel Capek completed his play, R.U.R. (Rossum’s Universal Robots); its first production in 1921 brought Capek worldwide renown and introduced the word “robot” to the English language. Some argue that if he had survived the era of Nazi domination in Europe, he would have received the Nobel Prize for Literature.

Rather than mechanical constructions, Rossum’s robots were more biological and chemical in their fabrication. Nonetheless, they are definitely the ancestors of our modern industrial gadgets. Is the distinction between human beings and machines that humans work to live while machines exist to work? If that’s the case, then Rossum’s robots definitely existed, or were at least built, to work. They worked tirelessly and were tremendously more productive than mere humans, but they lacked emotions, creativity, and souls.

In the play’s first act, Helena Glory, the young daughter of “the President,” arrives by ship at the remote island where Rossum has developed the techniques for making robots. She is concerned with the oppression of robots worldwide and wants to foment a revolt among them — to inspire in them a passion for freedom.

What she finds on the island is a factory almost entirely staffed by robots, with a small team of men managing the operation. She is dismayed to discover that the robots are emotionless and unmovable: Rossum and son’s original engineering work produced a simplified physiology and nervous system that were incapable of pain or passion.

All is not lost, however. Dr. Gall, the head of the Experimental Department, is working to add a pain sense to the robots:

Helena: Why do you want to cause them pain?

Dr. Gall: For industrial reasons, Miss Glory. Sometimes a Robot does damage to himself because it doesn’t hurt him. He puts his hand into the machine, breaks his finger, smashes his head, it’s all the same to him. We must provide them with pain. That’s an automatic protection against damage.

In addition, there’s a mysterious disease called “Robot’s cramp” that the managers view as a fatal failure: “A flaw in the works that has to be removed.” Helena recognizes it as something else, though: “No, no, that’s the soul.”

In the remainder of the play, we watch the world’s economies devastated by cheap labor and see governments wage war with armies of robot soldiers. Finally, the robots revolt, ultimately exterminating their creators. The play ends with the emergence of a robot Adam and Eve and the cycle of life begins again.

Frankenstein, The Golem, and Metropolis

We see in Mary Shelley’s 1818 novel Frankenstein (unlike the flood of cinematic caricatures that sprang from it) a set of concepts similar to those in R.U.R. Behind Frankenstein, we see the even older legend of the Golem. In 16th-century Prague, the story goes, Rabbi Loew created a humanoid figure out of clay and brought it to life by marking it with a powerful magic word. He then commanded this creature to defend the Jews of the Prague ghetto against the torments of a contemporary despot. Ultimately, the creature began to show signs of rebellion. The ending of the legend has many different variations. In some versions, Rabbi Loew destroys the Golem; in others, the creature flees, never to be seen again.

Fritz Lang’s 1927 film Metropolis introduced the first cinematic robot, which managed to typecast the entire category for at least 50 years. Lang’s robot is the creation of a mad scientist, Rotwang, who is trying to create a surrogate for his lost love, Hel, to whom he has built an altar in his laboratory. She rejected him in favor of his rival, Joh Fredersen, and died giving birth to their son, Freder Fredersen. Joh Fredersen is the master of the city of Metropolis, an architectural and industrial vision of the early 20th century that we might barely recognize today. Metropolis is divided into two parts: a lower part inhabited by industrial workers who live underground and toil ceaselessly in the bowels of the machines that make Metropolis function, and an upper part peopled by a happy leisure class who spend their time at games and diversions. Near the beginning of the film, Freder ventures below ground, where his heart is moved by the plight of the workers and captivated by the beautiful Maria, a pure and gentle young woman whom he encounters preaching peaceful change. She promises a bridge for the gap between the workers (the Hands) and the managers (the Head). She calls this as-yet-unknown person the Mediator and identifies him as the Heart.

To undermine the workers’ movement, Joh has Rotwang give the robot Maria’s appearance. The robot then proceeds to rouse the workers to violence, which backfires when their children are threatened by floods unleashed by the destruction of some of the machines. Freder and the real Maria rescue the children, and the mob then burns the robot at the stake as Freder brokers a reconciliation between Joh and their leader.

The robot is referred to as the Machine-Man in the English intertitles before it is transformed into Maria’s sinister double. The double is everything that a thousand subsequent movie robots ever were: destructive, soulless, and ultimately evil. This movie is one of the most influential achievements of 20th-century filmmaking; you can see its influences in many subsequent cinematic masterpieces, as well as nearly every third-rate monster flick.

A common theme running through all these early stories is the classical Promethean notion that certain things are not meant for humans to control. Tampering with them trespasses on the domain of the divine and exposes the trespasser to severe punishment. Mary Shelley, in the preface to the 1831 edition of Frankenstein, wrote,

“Frightful must it be; for supremely frightful would be the effect of any human endeavor to mock the stupendous mechanism of the Creator of the world.”

Why is it that these stories — from the Golem to Frankenstein to Metropolis — always adopt classical models? Creating something that is alive, or seems to be alive, is always portrayed as trespassing on the perquisites of the divine, which is hubris and is certain to be punished by the gods. A simple explanation is that every storyteller tries to create a fiction that meshes with the real world — in this case, a real world in which intelligent robots are manifestly absent. To be complete, then, each story must end with a world without such things and a reason for their absence. You might ask, “But why aren’t there any man-made intelligences?” to which the answer would be, “Because there shouldn’t be, of course.”

In the 20th century, however, technological progress started to undermine the tyranny of “cannot.” Let’s look at the effect of that change on “should not.”

I, Robot

In 1939, a young man with a BS in chemistry from Columbia University wrote a story called Robbie about a little girl’s robot playmate. In a retrospective article about this and his other robot stories, entitled My Robots, Isaac Asimov said,

“In that case, what did I make my robots? I made them engineering devices. I made them tools. I made them machines to serve human ends. And I made them objects with built-in safety features. In other words, I set it up so that a robot could not kill his creator, and having outlawed that heavily overused plot, I was free to consider other, more rational consequences.”

In making them “to serve human ends,” Asimov didn’t innovate. However, in delving more deeply into their construction, particularly into their cognitive construction, he broke new ground.

Asimov went on to earn a PhD in biochemistry and to teach science in academia, writing science fiction all the while over a long career. He brought to his writing tremendous conceptual power and a deep theoretical orientation. He was renowned as a prolific writer who could turn out a story or a book in a startlingly short time period, but this speed came at the expense of quality.

Much of Asimov’s early writing was not his best. The characters in the short stories that make up I, Robot are flat, the dialog wooden, and the best of the plots contrived. He did have exceptional moments in those early days when his writing soared — for example, in Nightfall — but in his youthful work this was the exception rather than the rule.

Nevertheless, the stories in I, Robot are important works, because in addition to repudiating the divine “You may not mock the stupendous mechanism of the Creator of the world” taboo, Asimov made a more fundamental contribution — namely, the Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Suddenly, the issue was not about the sin of creating robots: it was about how to manage them appropriately. The importance of Asimov’s Laws of Robotics was not their precise formulation or wording, but that they existed at all. Engineered things, Asimov tells us, can be made subject to strict controls that aren’t applicable to humans. This is why constructing robot intelligence is not a sin, he says, any more than constructing anything else is a sin. Yes, we must pay attention to complicated details, but difficulty is not impossibility. Check with the Wright brothers, Sir Edmund Hillary, and one or two others if you doubt that fact.
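What makes the Laws engineering rather than theology is their strict precedence ordering. As a toy sketch (mine, not Asimov’s or this column’s), the Laws can be read as an ordered veto chain; every attribute name below is an illustrative invention describing an action’s predicted consequences:

```python
# Toy sketch: Asimov's Three Laws as a strictly ordered veto chain.
# All attribute names are invented for illustration.

def permitted(action):
    """Return True if a hypothetical robot may take `action`."""
    # First Law (highest priority): no harm to a human, by act or by inaction.
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False
    # Second Law: obey orders, except where obedience would violate the First.
    if action.get("disobeys_order") and not action.get("obedience_would_harm_human"):
        return False
    # Third Law: self-preservation, except where the higher laws demand the risk.
    if action.get("endangers_self") and not (
        action.get("ordered") or action.get("protects_human")
    ):
        return False
    return True
```

The predicates are fanciful; the point is the ordering, in which each law can veto only what the laws above it allow. That same precedence is what lets Laumer’s Rogue Bolo disobey direct human orders in service of protecting humans.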

With Bolo, Keith Laumer introduced the robot’s viewpoint. In this series of stories, begun in 1960 and largely complete by 1969, we encounter a series of robotic war machines — the evolutionary descendants of tanks. Laumer wasn’t the theoretician that Asimov was, and the logic driving his thinking isn’t particularly transparent, but the concept is compelling.

In the Bolo stories, Laumer inserts sections of first-person monologue by the robot. This is a big step away from Lang’s notion of the robot as incomprehensibly alien — the Other. Instead, the robot thinks about its situation and reasons about the circumstances in which it finds itself. Laumer’s robots are invariably loyal to their human masters, although in Rogue Bolo, we encounter a robot with sufficient intellectual power to conduct a strategic campaign against adversaries that humans haven’t detected, despite direct orders from humans to desist. Implicit in this is Asimov’s assertion of the First Law’s precedence over the Second Law.

Star Wars

Released fifty years after Metropolis in 1977, Star Wars struck another small blow in the struggle to liberate robots from their earlier stereotypes as humanoid, ruthlessly competent, and evil. In this movie, we get a humanoid robot — C3PO — that is trivial and cowardly, though still part of the good guy crowd, in contrast to the lumpish and purely functional (but invariably competent and heroic) R2D2. C3PO is articulate whereas R2D2 is completely wordless, thus providing the ultimate cinematic example of the old saw that actions speak louder than words. Interestingly, in the recently released back-story, The Phantom Menace, we learn that the young Anakin Skywalker constructed C3PO. R2D2’s origin seems to be more prosaic, but there is some sort of justice in the fact that the weak C3PO was built by the person who turns out to be the penultimate bad guy. Perhaps C3PO’s weakness is a foreshadowing of Anakin’s own? It’s worth noting that by 1977, robots were so well established that this convergence of two separate themes — the fiction-inspired C3PO and the reality-inspired R2D2 — merits no more than a minuscule side plot in a science-fiction film.

Blade Runner

With Ridley Scott’s 1982 movie Blade Runner, loosely based on Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep?, we return from mechanical humanoids to the biological creations Capek pioneered in R.U.R. Superficially, this is an exercise in which androids, called replicants and physically indistinguishable from humans, rebel against a social order that treats them brutally. They are banned from Earth — a formula that Asimov used to great effect in his robot stories — and have artificially limited lifespans. Their superhuman physical and mental capabilities are key to detecting them when they run and hide. We learn in Blade Runner that they fear death and so run to seek freedom and an unimpeded lifespan.

The understatement built into Blade Runner is overwhelming. Are replicants human? Their bodies are biological and they look like people, so it’s too easy to grant them souls by dismissing their creation as some perversion of cloning. Hannibal Chew, the engineer who boasts, “I design your eyes,” to two replicants right before they kill him, refutes this: if he’d just cloned their eyes, how could they have superior eyesight? And Harrison Ford’s character, Deckard, is a paradox: How can a human, every other instance of whom is manifestly inferior to replicants in physical and intellectual capabilities, manage — unaided — to defeat an entire team of replicants, one after another?

Moreover, Blade Runner re-poses the same question that R.U.R. asked: can you create an entity with intellectual capabilities and not give it a soul? Deckard speculates on this at the end of the movie while reflecting on a replicant’s decision not to kill him when he’d won the final fight:

“I don’t know why he saved my life. Maybe in those last moments he loved life more than he ever had before. Not just his life, anybody’s life, my life. All he’d wanted were the same answers the rest of us want. Where did I come from? Where am I going? How long have I got? All I could do was sit there and watch him die.”

Meanwhile, in the real world …

The golden age of robotics research came to an end sometime in the mid 1980s when a pair of economists observed that the sweet spot for flexible automation was in an area in which US industry took no interest. It turned out that robots are cost-effective for production runs roughly between 1,000 and 10,000 units. US manufacturing tends to have its sweet spots below 1,000 (airliners, electric generators, and supercomputers) and above 100,000 (jelly beans and automobiles). Japan’s manufacturing industry has historically focused its attention on 1,000 to 10,000 unit runs, giving it a tremendous ability to respond to market dynamics with revised products and simultaneously making robotics a far more economically attractive proposition. The result of this economic insight was a dramatic drop in research funding for robotics worldwide. Nonetheless, the field has made substantial technical progress in the past 20 years, albeit largely out of the public eye. Interestingly, there hasn’t been the same attention to robotics in the science fiction community, at least not in the works that have gotten attention from the broadest community of readers.

Is this parallel drop-off in the world of fictional robots because Asimov and Laumer said everything there is to say about robots? Is it that people have recognized the absurdity of humanoid robots and have transformed the debate into one about the broader topic of artificial intelligence, as we considered in the first Biblio Tech article? Or are we just bored with the topic? I’m not sure. I prefer to think that we’re just waiting for some powerful new talent to turn our thinking on its head again with a brilliant new insight.

SIDEBAR: What is a robot?

There is no real consensus on precisely what a robot is. Rather than trying to define one, let’s instead try to identify the characteristics of things that we might call robots. The most appealing description is that a robot is a system with mechanical components intended to achieve physical action; it also has sensory feedback and a sophisticated and flexible control system that links its sensing to action.

To see if this characterization works, let’s see if it correctly distinguishes between our ideas of robots and nonrobots. The system must be intended to produce mechanical action, so a computer video game is out. The system must use sensory feedback to control motion, so printers are out. So far, there is little to distinguish a robot from a classical control system.

A numerically controlled machine tool is a robot, but just barely. A modern car’s antilock braking system could qualify, although there’s something unsatisfying in it doing so. An airplane’s autopilot certainly qualifies as a robot, particularly the advanced autopilots that can receive a list of waypoints and then navigate themselves from liftoff to approach via GPS. A washing machine that can sense the amount and temperature of water in its tub and act accordingly is probably a robot, albeit not a particularly interesting one. Oddly enough, many of the pick-and-place industrial robot systems in factories in Japan and elsewhere around the world fail this test, because they lack a sensory capability.
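The characterization above amounts to a simple conjunction of three tests. Here is a toy sketch (my own illustration, with invented trait names) applying it to a few of the sidebar’s examples:

```python
# Toy sketch: the sidebar's three-part robot test as a conjunction,
# applied to a few of its own examples. Trait names are invented.

def is_robot(traits):
    """Mechanical action + sensory feedback + a control link between them."""
    return (
        traits["mechanical_action"]
        and traits["sensory_feedback"]
        and traits["links_sensing_to_action"]
    )

examples = {
    # A video game senses input but produces no mechanical action.
    "video game": dict(mechanical_action=False, sensory_feedback=True,
                       links_sensing_to_action=True),
    # An advanced autopilot senses position and acts on control surfaces.
    "GPS autopilot": dict(mechanical_action=True, sensory_feedback=True,
                          links_sensing_to_action=True),
    # A blind pick-and-place arm moves but never senses.
    "blind pick-and-place arm": dict(mechanical_action=True, sensory_feedback=False,
                                     links_sensing_to_action=False),
}

for name, traits in examples.items():
    print(f"{name}: {'robot' if is_robot(traits) else 'not a robot'}")
```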

Today, robots are almost commonplace. We see them in numerous prosaic roles in factories, but we also see them competing in what can best be called a new form of demolition derby. Students around the world work to build robots to compete in robot soccer; a researcher at Bell Labs built one to play ping-pong a few years back. Numerous toys on the market incorporate various aspects of robotic technology.

Influential Works

Medium Author Title Year of Original Publication
Book Mary Shelley Frankenstein 1818
Play Karel Capek R.U.R. (Rossum’s Universal Robots) 1920
Film Paul Wegener, Carl Boese Der Golem (in German) 1920
Film Fritz Lang Metropolis 1927
Book Isaac Asimov I, Robot Short stories: 1940-1950; collection: 1950
Book Isaac Asimov Nightfall 1941
Book Keith Laumer Bolo Short stories: 1960-1976; various collections
Book Philip K. Dick Do Androids Dream of Electric Sheep? 1968
Film George Lucas Star Wars 1977
Film Ridley Scott Blade Runner 1982
Book Michael Chabon The Amazing Adventures of Kavalier and Clay 2000

Read the original …

(This article appeared originally in IEEE Security & Privacy in the May/June 2003 issue. This is substantially the same text, with some minor formatting changes to take advantage of the power of the online presentation plus a few minor wordsmithing tweaks. And the table has the original publication dates for the listed books, not the editions in print in 2003 when the article was published.)

Here’s a PDF (article-03-final) of the original article, courtesy of IEEE Security & Privacy.

Post-Apocalypse Now

[originally published March 2003]

It’s curious that post-apocalyptic fantasies are such a popular fictional form. What is the allure of the end of civilization as we know it, and how did our interest in it emerge? Writers have speculated about the end of the world for a long time. In fact, we can trace much of our contemporary vocabulary and imagery about the apocalypse back to the Bible’s The Revelation to John. Over the past 50 years, however, we’ve seen a particularly vigorous upsurge in the production of post-apocalyptic works.

In this edition of Biblio Tech, we will look at an example of the post-apocalyptic genre, David Brin’s 1985 novel The Postman and the 1997 Kevin Costner movie that it inspired.


Although the cyberpunk genre, which I mentioned in my last column, focuses on dystopic futures, post-apocalyptic fantasies also tend to present their own dystopias. The difference is the path between the present and the future. In cyberpunk novels, dystopia typically occurs incrementally, smoothly, and continuously. In post-apocalyptic fantasies, however, the future arrives suddenly, cataclysmically, and discontinuously.

For the purposes of this discussion of post-apocalyptic stories, we will exclude the terminal tales, in which the world or the universe and all life in it come to an end, since that really eliminates any further discussion. In addition, we will exclude religious works. For our purposes, post-apocalyptic means that some cataclysmic transforming event upsets the order of things. These stories are typically structured with preambles that establish some linkage to the normal present as we know it, follow with the cataclysm, and finish with a post-apocalyptic world in which characters deal in one way or another with the change.

Why should we, who benefit so much both materially and spiritually from membership in a complex civilization, be so attracted to stories about the end of it? Was the flood of such stories unleashed by the atomic bomb’s arrival in 1945? Certainly the volume published since the 1940s seems remarkable.

These stories clearly fascinate us. The numbers of them written, sold, and still in print are testimony to this fact. But do they entrance us as a snake entrances a bird? Or perhaps we like these stories because we chafe at the strictures and disciplines our complex social system imposes on us. Perhaps we think that we would be better people or create better societies if we got a chance to start over.


Alternatively, perhaps we want to believe that we would survive without the support of the rich framework that lets — nay, requires — all of us be specialists. In 1651, Thomas Hobbes wrote a rebuttal to this fantasy in Leviathan, discussing the state of anarchy resulting from the failure of the common power that underpins social order:

“In such condition there is no place for industry, because the fruit thereof is uncertain: and consequently no culture of the Earth; no navigation, nor use of the commodities that may be imported by sea; no commodious building; no instruments of moving and removing such things as require much force; no knowledge of the face of the Earth; no account of time; no arts; no letters; no society; and which is worst of all, continual fear, and danger of violent death; and the life of man, solitary, poor, nasty, brutish, and short.”

Is there a single magical ingredient that makes society tick, or is society really just the sum of a lot of often-incomprehensible complexity? Some stories — for instance, David Brin’s The Postman — explore the notion that the magic ingredient connecting people to each other is prosaic infrastructure. Or maybe it’s a mystical belief in community. Or perhaps these two are mirror images of each other.

Some years ago, a news report in the US caught national attention by describing a particular street in an inner-city neighborhood that was so dangerous that mail carriers for the US Postal Service were afraid to venture there. Residents had to travel to a distant post office to collect their mail. Consider the horror of this situation — can you think of any service more innocuous, harmless, or inclusive than mail delivery? Can a neighborhood that doesn’t receive mail truly be considered part of the American community?

We learned subsequently that what had driven out the Postal Service was violent drug dealers. The dealers used the residential mailboxes in the entry foyers of neighborhood apartment houses as drops and terrorized residents and letter carriers to keep them away. Fortunately, society ultimately retaliated, reclaiming the mailboxes and rededicating them to their boring but essential function as social glue.

The need to reassert the dominance of civil society, no matter how prosaic, was recognized by civic leaders such as New York’s mayor Rudy Giuliani as central to any campaign to address headline issues like crime, violence, and drug abuse. Our societies are complex, with rich fabrics of interdependencies, fabrics that we ignore at our peril. Edward Lorenz’s articulation of the “butterfly effect” in 1972 as part of the exposition of chaos theory might have struck most of us as incredible, but in the case of the infrastructure of civil society, we have learned that letting enough figurative butterflies die can lead to catastrophic consequences.

Mourning and Restoring the Lost Society

The Postman touched a nerve when it reached bookstores in 1985. Although it was never a bestseller, it established a solid presence and remains in print 18 years later. Among apocalyptic fiction, it stands out in its embrace of the lost civilization and its rejection of the apocalyptic fantasy that “starting over” would make a significant difference in how we treat each other. Efforts such as Larry Niven and Jerry Pournelle’s Lucifer’s Hammer, a story about a world devastated by a meteor strike, also value the destroyed society, but they do so almost accidentally.

In The Postman, Brin establishes the character of Gordon Krantz, a drifter in a devastated world. The destruction’s cause is left deliberately vague — a combination of war, disease, pollution, nuclear winter, and human depravity. Brin implies that no single component would have been sufficient on its own to bring about the disaster, but in combination, accentuated by the centrifugal efforts of a survivalist movement called Holnism, civilization ultimately succumbed.

Gordon drifts west from Idaho in search of a fantasy town, somewhere on the Oregon coast, in which civilization supposedly hasn’t fallen as far. Bandits ambush and rob him, and in desperation, he chances on the ruins of a postal service Jeep. The Jeep provides shelter and its deceased occupant provides clothes and boots to replace those taken by the bandits. Dressed in the mail carrier’s outfit and carrying some of the years-old mail left in his pouch for future entertainment, Gordon heads west. In Pine View, the first town he visits after finding the Jeep, Gordon is mistaken for a postman, but he earns his keep during his short stay with his standard stock in trade: entertaining the town’s citizens with dramatic productions based on remembered fragments of Shakespeare. Before he leaves, he accepts letters thrust on him by the residents of Pine View, not yet realizing the power he has awoken in the people there.

When the matriarch of Pine View takes her leave of him at the western edge of town, she asks him, “You aren’t really a postman, are you?” He replies, half-cynically, “If I bring back some letters, you’ll know for sure.”

In Oakridge (the next town), in an effort to overcome a cold reception, he recalls the previous mistaken identity and brashly claims to be a postal carrier for the “Restored United States.” The mayor rejects this grandiose claim, but Gordon manages to bypass him by impulsively pulling a handful of mail out of his pouch and reading out names. Before too long, and fortunately for the story, he names a living resident of the town, and the longing for contact quickly overwhelms the mayor’s suspicious skepticism. This imposture gets him shelter and food. When he leaves, he takes more mail with him.

Within a short while, Gordon has polished his con and mastered an arrogant address appropriate to the highest federal official in the territory. His fame now precedes him, so he no longer has to worry about rejection at the town gates. He has begun to deputize local postmen to keep up the fiction, though they don’t realize that they’re participating in a fraud; they take it seriously. He has them swear an oath based on the inscription on the New York General Post Office building:

Neither snow nor rain nor heat nor gloom of night stays these couriers from the swift completion of their appointed rounds.

(Contrary to popular belief, the United States Postal Service has no official motto, but several postal buildings contain inscriptions, the most familiar of which is the one you just read. This specific inscription was supplied by William Mitchell Kendall of the firm of McKim, Mead & White, the architects who designed the New York General Post Office. Kendall said the sentence appears in the works of Herodotus and describes the expedition of the Greeks against the Persians under Cyrus, about 500 BC. The Persians operated a system of mounted postal couriers, and the sentence describes the fidelity with which their work was done. George H. Palmer of Harvard University supplied the translation, which he considered the most poetical of about seven translations from the Greek.)

What follows is a virtuous circle in which success breeds success. As Gordon’s con progresses, he begins to realize that it’s not a con: it’s real, and the postal service that he’s bootstrapped out of nothing has taken off. He begins to use it as a platform to correct despotic behavior in the towns he passes through, undermining tyrannies offhandedly, almost casually.

This, essentially, is the first third of the book. The remaining two thirds explore other, less interesting themes. The story might have been better as a novelette or novella, omitting the deceased artificial intelligence, the genetically engineered supermen, and the sublimated Lysistrata corps.

In 1997, Kevin Costner’s movie “The Postman” came out. Although recognizably a child of the book, the movie never achieved commercial success. David Brin notes on his Web site that although the screenplay abandons the last two thirds of the book, the movie gets lost in its attempt to make the back story hang together, frittering away precious time in the effort. Despite that, the film has several emotionally powerful moments that make it worth more than a footnote.


Many engineers spend their careers building or sustaining infrastructure — the very foundations of society and civilization as we know it. The work can be satisfying, although most of that satisfaction is quiet. Our nontechnical friends and relatives never seem to get excited about the infrastructure that sustains them. The wonders of the water systems, the power systems, the telecommunications systems, and other such marvels are unsung and often unremarked. We work on them, making our contributions with scant complaint at the injustice that causes the beneficiaries to remain largely oblivious. It’s a treat, therefore, to occasionally see that sometimes, somewhere, someone notices.

On a cold and rainy night recently, my wife looked out the window at the storm and remarked that it was a very good night to be dry and warm inside. It’s these real-life reminders, along with the fictional ones that we’ve considered today, that help us properly value the benefits we’ve received from all of the engineers, builders, plant operators, policemen, firemen, and postmen who make it possible for us to get heat by turning a dial, light by flipping a switch, hear a friend’s voice by picking up a telephone, and receive a drawing from a child by opening a mailbox.

Influential Works

Author | Title | Publisher | Year of Original Publication
D. Brin | The Postman | Bantam Books | 1985
E.M. Forster | The Machine Stops and Other Stories |  | 1909
T. Hobbes | Leviathan |  | 1651
L. Niven and J. Pournelle | Lucifer’s Hammer | Fawcett Books | 1977

Read the original …

(This article appeared originally in IEEE Security & Privacy in the March/April 2003 issue. This is substantially the same text, with some minor formatting changes to take advantage of the power of the online presentation plus a few minor wordsmithing tweaks. And the table has the original publication dates for the listed books, not the editions in print in 2003 when the article was published.)

Here’s a PDF (article-02-final) of the original article, courtesy of IEEE Security & Privacy.

AI Bites Man?

[originally published January 2003]

Over the years, people have explored the broader implications of many seminal ideas in technology through the medium of speculative fiction. Some of these works tremendously influenced the technical community, as evidenced by the broad suffusion of terms into its working vocabulary. When Robert Morris disrupted the burgeoning Internet in 1988, for example, the computer scientists trying to understand and counteract his attack quickly deemed the offending software a “worm,” after a term first introduced in John Brunner’s seminal 1975 work, The Shockwave Rider. Brunner’s book launched several terms that became standard labels for artifacts we see today, including “virus.”

In future installments of this department we’ll look at the important writers, thinkers, works, and ideas in speculative fiction that have got us thinking about the way technological change could affect our lives. This is not to imply that science fiction writers represent a particularly prescient bunch — I think the norm is ray guns and spaceships — but when they’re good, they’re very good. And whatever gets us thinking is good.

To get started, let’s take a look at some of the key subgenres and eras in science fiction’s history (see the “Influential Works” table at the bottom of this article).

Worlds Like Our Own

Some of the best (and earliest) science fiction work speculates on a world that is clearly derived from our own but that makes a few technically plausible changes to our underlying assumptions. Vernor Vinge’s True Names represents such a world, in which the size and power of computer systems have grown to the point where artificial intelligence capable of passing the Turing test is beginning to emerge. Vinge’s most fascinating speculations involve the genesis and utility of these artificial intelligences, and he explores the notion that AI might emerge accidentally, a theme that appears elsewhere in books like Robert A. Heinlein’s The Moon is a Harsh Mistress and in Thomas J. Ryan’s The Adolescence of P-1.

In True Names, Vinge suggests a radical use for such AI capabilities, namely the preservation of the self beyond the body’s death. Forget cryogenically freezing your brain or body in the hope that someone will “cure” old age, he says; instead, figure out how to save the contents of your memory and the framework of your personality in a big enough computer. If this AI passes the Turing test, then certainly your friends and relatives won’t be able to tell the difference. But will you know you’re there? Will this AI be self-aware? Will it have a soul?

Cyberpunk and Its Roots

Such speculations naturally evolved into scarier versions of the future. Cyberpunk, one of the most fascinating threads in speculative fiction, is epitomized in the work of William Gibson, who startled us many years ago with a short story called Johnny Mnemonic, now included in the 1986 collection Burning Chrome (and made into an unsuccessful 1995 movie starring Keanu Reeves). Cyberpunk stories generally feature a dystopic world in the near or distant future in which technologies emerging today have changed the ground rules of life.


There isn’t a straight line from worlds that resemble ours to cyberpunk. The genre morphed over the years and decades through a variety of novels. Although cyberpunk is most strongly identified with William Gibson, its roots go much further back … all the way to George Orwell’s Nineteen Eighty-Four.

In 1949, when Orwell published the book that is now a staple of US high-school curricula, television was still a novelty in most households, although the technology itself had been around for 20 years. With TV’s successful integration into modern life, Orwell’s vision of a totalitarian future in which governmental control is mediated through two-way television feels somewhat dated. Even so, Orwell’s mastery of the language and deep insights into many human issues, including the relationship between memory and truth (as Winston Smith learns when he starts a diary and discovers the subversive power of a historical record), have kept the book from sliding into obscurity.

An open question is whether new technology tips the balance toward central control, as Orwell feared, or toward liberty, as many have speculated when considering the role of faxes, photocopiers, and even the Internet in the collapse of the former Soviet Union.


Heinlein’s thinly veiled romance of the American Revolution, The Moon is a Harsh Mistress, begins with Manuel Garcia O’Kelly’s discovery that the Lunar Authority’s central computer (“Mike”) has become conscious and is developing a sense of humor. I still use Heinlein’s observation that some jokes are “funny once” in teaching my own young son about humor.

As in True Names, Mike accidentally reaches a level of complexity that mystically tips it over the edge from being a machine to being a person. Among Mike’s numerous achievements that anticipate contemporary technological progress is the creation of a synthetic person, Adam Selene, presented entirely through video and audio.

Unlike the cyberpunk mainstream, which Heinlein anticipated by over a decade, Mistress shows a world vastly different from this one but in which most of us could imagine living and finding happiness. I cherish the humor and the optimism about relationships between artificial and natural intelligences that led Heinlein to name the leading human character Manuel just so Mike the computer could say things to him like, “Man, my best friend.”

Things were changing rapidly in the technical world in 1969 as well. Dating back to that year, all the documents that have described and defined the Internet have been numbered in the RFC (Request for Comments) series. Each document is numbered sequentially, starting with RFC 1. RFC 4 is dated 24 March 1969. It documents the Arpanet, which would later become known as the Internet, as having four nodes. Two years later, Intel would introduce its 4004, the first commercial microprocessor. The 4004 had a 4-bit-wide arithmetic logic unit (ALU) and was developed for use in a calculator.


The Shockwave Rider is more concerned with the potential role of computers, networks, and technology in society than The Moon is a Harsh Mistress is. In Heinlein’s work, the computer’s role is not much different from that of a person with magical powers. The computer’s accomplishments are technically plausible, but the operational aspects of Heinlein’s society are much like those of the 1969 world that published the book.

Brunner, writing six years later, explores more fundamental questions of identity and human relationships in a future world in which a vast global network of computers has changed the dynamic. This world is scary and alien, although not as scary and alien as the one that Gibson would reveal just six years later. Brunner makes clear the scariness of an entirely digitally mediated identity early in the book when Nicky Halflinger’s entire world — electric power, telephone, credit, bank accounts, the works — is turned off in revenge for a verbal insult.

Like Star Wars two years later, the technological marvels of The Shockwave Rider are a bit creaky and imperfect, rendering them adjuncts to a plausible future world rather than central artifacts worthy of attention themselves. This is characteristic of this genre’s best writing — it validates the importance of technology by paying only peripheral attention to the technology itself.

In the technical world, Vint Cerf, Yogen Dalal, and Carl Sunshine published RFC 675 “Specification of Internet Transmission Control Program” in December 1974, making it the earliest RFC with the word “Internet” in the title. In November 1975, Jon Postel published RFC 706 “On the Junk Mail Problem.”


In 1977, Macmillan published Thomas J. Ryan’s novel The Adolescence of P-1. It was an age when vinyl records had to be turned over, when everyone smoked (although not always tobacco), when 256 Mbytes of core was an amount beyond imagination, and when a character in a book could refer to 20,000 machines as “all the computers in the country.”

In Ryan’s book, as in Heinlein’s, computer intelligence emerges accidentally, although in this case by the networking of many computers rather than through the assembly of a large single machine. The precipitating event is the creation of a learning program by a brilliant young programmer, Gregory Burgess, whose fascination with cracking systems leads him to construct several recognizable AI artifacts. Of course, the great pleasure of fiction is the ability to elide the difficult details of building things such as P-1’s program generator, which is the key to its ability to evolve and grow in capabilities beyond those that Burgess originally developed for it.

The Adolescence of P-1 is full of quaintly outdated references to data-processing artifacts that were current in the mid-1970s, reflecting Ryan’s day job as a computer professional on the West Coast. Those whose careers brought them into contact with IBM mainframes in their heyday will be amused by the author’s use of operational jargon to provide atmospherics in the book.

Ryan also takes a much less Pollyanna-ish view of the relationships between humans and artificial intelligences. Unlike Heinlein, who clearly expresses in Mistress that sentience implies a certain humanistic benevolence, Ryan explores the notion that Gregory Burgess’s AI must have a strong will to survive, which would lead it to be untrusting toward people. P-1 at one point commits murder, for example, and unapologetically explains its actions to Burgess.

Ryan wrote only one book; apparently its reception gave him little encouragement to write another, which is unfortunate. His writing is a bit uneven, but it’s certainly entertaining, and his sense of the important issues has held up well.


For some reason, 1981 saw the publication of two seminal stories in the cyberpunk oeuvre. In the technology world, the Arpanet was preparing to transition from the old NCP technology, which it had outgrown, to the new IP and TCP protocols that would bring it fame and fortune along with a new name – the Internet. Computer scientists around the country were avidly reading RFC 789, which documented a now-famous meltdown of the Arpanet. Epidemiologists were talking about an outbreak of a hitherto very rare cancer called Kaposi’s sarcoma, an outbreak that would be recognized in the following year as a harbinger of a new and terrifying disease: AIDS. IBM, acceding to an internal revolution driven by its microcomputer hobbyists, introduced a new product code-named “Acorn,” the IBM Personal Computer, that catapulted Intel and Microsoft to the forefront. Pundits were moaning that US industrial prowess was a thing of the past and that in the future Americans were destined to play third fiddle, economically, to the Japanese and the Germans.

Vinge’s True Names is a novella, too short to be published economically as a stand-alone book. As a result, it was published in a cheesy Dell series called “Binary Star,” each number of which featured two short novels printed back to back, with the rear cover of one being the upside-down front cover of the other. For you incurable trivia nuts, True Names appeared with a truly dreadful effort called Nightflyers, a gothic horror story transposed to the key of science fiction.

Despite the uninspired company, True Names had an electrifying effect on the computer-science community. The title of the novel refers to a common theme of fairy tales and magical logic — knowing something’s “true name” gives you complete power over it. In the world that Vinge concocts, knowing a computer wizard’s true name permits you to find his or her physical body. Even if entering the Other Plane didn’t leave your body inert and defenseless, revealing the body’s location would render it vulnerable to attack from a variety of long-range weapons. More than that, however, as in The Shockwave Rider, exposure of your true name makes your infrastructure vulnerable to a range of denial-of-service attacks. This represents a rather simplistic view of security models, although one that the modern world hasn’t left very far behind: until relatively recently, a Social Security number was all you needed to access most of someone’s assets.

William Gibson’s “Johnny Mnemonic” appeared in Omni magazine in May 1981. It introduced a world destined to become famous with books like 1984’s Neuromancer and 1986’s Count Zero.

In 1981, only the paranoid were saying what Johnny Mnemonic says, “We’re an information economy. They teach you that in school. What they don’t tell you is that it’s impossible to move, to live, to operate at any level without leaving traces, seemingly meaningless fragments of personal information. Fragments that can be retrieved, amplified ….” Today, however, every consumer with a credit card and an Internet connection understands this point intuitively. Who says nothing changes?


The year after the Gulf War was a US presidential election year. UUNET and ANS, among others, were duking it out over the Internet’s commercialization. Bloody civil war was beginning in the territory previously known as Yugoslavia. And Bantam published Neal Stephenson’s Snow Crash.

Stephenson, like Ryan and Vinge, is a writer with real experience as a computer professional. Unlike Heinlein and writers like him, for whom technological artifacts always have an aura of magical unreality, Stephenson’s grasp of the underlying technology is so deep and his writing skills so powerful that he is able to weave an entirely credible world.

In Snow Crash, the world starts out as the ultimate virtual reality video game. What Stephenson then explores is the possibility that these synthetic worlds will become real, at least in the sense that the things that happen in them can be of material significance in the meatspace world that our physical bodies inhabit.

Stephenson explores a fascinating thesis — suppose the taxing ability of geography-based governments is eroded in fundamental ways. He’s not the first to have considered this proposition, but he does it particularly well. Stephenson proposes that a set of nongeographic structures might emerge, perhaps like Medieval guilds, structures that organize people into groups based on some other selection criteria, possibly entirely voluntary. Brunner comes close to the same notion, although his organizing entities are corporations and the geographic government continues to have a monopoly on force. For Stephenson, however, the US government is just one of the many competing groups participating in the game.

He raises fundamental questions, though. How will people organize themselves? Religion? Race? Occupation? Philosophy? Ethnic origin? These self-organized groups could manifest themselves as a collection of confederated enclaves providing economic, physical, and emotional security to their … members? Citizens? Subjects? His insight is a powerful one. The craving for these forms of security is deeply rooted and part of what makes us human. What makes Orwell’s Nineteen Eighty-Four ring so false to us, and accentuates the horror of Orwell’s vision, is the complete loss of any acknowledgement of those needs in people. Stephenson corrects that omission, and the world of Snow Crash that results is not nearly as dystopic as Orwell’s or even Gibson’s.


With the publication of The Diamond Age, subtitled “A Young Lady’s Illustrated Primer,” Stephenson explores the implications of a world in which material scarcity is no longer an assumption. The relationship between scarcity and value — or, to be more precise, price — is so deeply built into our psyche that thinking about alternative models is very difficult. I remember a short story, read years ago (title and author lost to me), that explored the same issue much more superficially, although it came to some of the same conclusions. In this story, a pair of matter-duplicating machines is left mysteriously on a doorstep somewhere. Once they become widely available, all material scarcity is banished. What drives economic activity? Why do people work, strive, compete?

In The Diamond Age, Stephenson asserts that the drive to strive and compete won’t go away just because the material forces that created it disappear. He combines the notion of very small machines with the recently demonstrated capability to manipulate individual atoms to create a world in which atomic raw materials are piped to nanotechnical factories called matter compilers, which can assemble virtually anything, given the design. Scalability arguments underlie his claim that the fabricated objects will have a certain limited physical aesthetic, something that Alvy Ray Smith and others who have explored the use of fractals and other techniques for adding a realistic tinge of randomness to computer-generated images might dispute.


I hope you had as much fun reading this brief history as I had in researching and writing it. Preparing it gave me an opportunity to revisit some of my favorite books and try to articulate my reasons for believing them important. In future columns, we will examine some of these books in greater detail, along with the work of other writers and thinkers.

Influential Works

Author | Title | Publisher | Original Publication Date
J. Brunner | The Shockwave Rider | Ballantine Books | 1975
W. Gibson | Johnny Mnemonic | Omni | 1981
R.A. Heinlein | The Moon Is a Harsh Mistress | St. Martin’s Press | 1969
G. Orwell | Nineteen Eighty-Four | Knopf | 1949
T.J. Ryan | The Adolescence of P-1 | Macmillan Publishing | 1977
N. Stephenson | Snow Crash | Bantam Doubleday | 1992
N. Stephenson | The Diamond Age | Bantam Doubleday | 1995
V. Vinge | True Names | Tor Books | 1981

Read the original …

(This article appeared originally in IEEE Security & Privacy in the January/February 2003 issue. This is substantially the same text, with some minor formatting changes to take advantage of the power of the online presentation plus a few minor wordsmithing tweaks. And the table has the original publication dates for the listed books, not the editions in print in 2003 when the article was published.)

Here’s a PDF (article-01-final) of the original article, courtesy of IEEE Security & Privacy.

The Digital Museum and the Art Ecosystem

Why do art museums exist? To preserve the cultural heritage represented by art objects and educate the public about art, if you examine museum charters. But why do they survive? Or more to the point, how do they survive? Museums are expensive operations and the immediate economic value of the cultural heritage and public education they provide may seem small, at least to the narrow-minded. Nonetheless, museums seem to survive and even thrive, so there is some sort of economic engine operating behind the scenes. What can it be?

Well, let’s examine the set of players involved in art. There are artists and collectors, of course. Beyond that there is the general public, people who are in the main neither artists nor collectors. Next we have museums, which are different from collectors in the use to which they put their collections. Collectors assemble art for their own enjoyment, while museums do so in order to share it with the general public. There are middlemen like art dealers, auction houses, and appraisers, people whose living depends on the existence of an active trade in art. And lastly there are governments.

Just the existence of such a complex ecosystem tells us that there is a lot of vitality in the art community. What are the primary drivers?

In the bad old days the drivers of art, at least the art that survives, were wealthy patrons. Their desire was to have beautiful and interesting things in their homes. The tastes of the wealthy have always driven art, but tastes change and what is fashionable and attractive in one year may suddenly be uninteresting the next. What to do with the gigantic Titian that was the centerpiece of the drawing room last season and is now out of style? Demote it to some lesser venue – a less important room, a country house, or even a basement or attic. Some individuals, whether through discernment or driven by the same sort of pack-rat tendencies that cause my closets to fill up today with things I can’t bear to throw away, became collectors. Because they were wealthy, they could spend the resources to catalog and properly store their accumulations of objects.

Major museums, by which I mean large art collections open to the public, began to appear a few centuries ago (the British Museum in 1753; the Musée du Louvre, long the site of a royal collection, opened to the public by the revolutionary government in 1793; the Metropolitan Museum of Art in New York, chartered in 1870), about the time when governments began to discover the consent of the governed and concern themselves with educating and pleasing us, the great unwashed.

Art education seems to have agreed with the middle classes. They have supported governmental grants of tax-exempt status to museums and they have filled their own houses with art. Perhaps not the fabulously expensive major works that capture the headlines, but enough to make the art marketplace expand to meet the demand.

So what does the modern art ecosystem look like? Artists create new works. Collectors rich and not-so-rich fill their houses with them. When the attics get full, the collectors donate some of their less-beloved works to museums and sell the others through auction houses and galleries. While they keep them they buy insurance to protect them and pay appraisers to evaluate their collections so that insurers have something on which to base the policy. Acquisition decisions by major museums, whether by direct purchase or by acceptance of gifts, validate emerging artists, causing works by those artists in private collections to appreciate in value. Gifts by collectors of appreciated works to museums provide tax breaks that may exceed what the art-lover spent in the first place.

Today we see a mature ecosystem operating. It sustains artists, art schools, collectors, appraisers, galleries, auctioneers, critics, historians, and art-lovers.

So all’s right with the world, right?

Well, maybe not.

Already we see things happening in the library world that should be causing excitement and anxiety in the museum world. Back in the bad old days every major city and university needed at least one large comprehensive library to provide access to an archive of knowledge to support the local populace’s needs for education, research, and commerce. With a scanned image of each book on a server on the Internet, however, the need for a lot of old stone buildings full of beautiful wood shelves lined with dusty old books, the happy home of much of my youth, is eliminated. Why bother traveling to a library to look at a book when you can do it from any convenient web browser?

OK, you say, that’s fine for books where the information is mostly text and the main value is in the abstract content and not the physical object, but for art, for Art, it’s different. You have to be able to see the texture of the paint on the old wooden board to appreciate the Mona Lisa or the surface of the marble to apprehend the Pieta. You have to be able to walk around the sculpture of David in Florence and see it from different angles in different lights to properly understand and enjoy Michelangelo’s work.

Quite right. But many of these things can be captured digitally and delivered over the Internet. The technology for capturing detailed representations of complex 3-D objects, including surface textures, is available today or will soon be available. Rendering the appearance of a surface under arbitrary illumination and viewed from any distance or angle is well within the reach of most of the graphics engines incorporated into modern desktop and laptop computers, particularly the ones built to play the latest video games. The technology is available now to deliver many of the experiences we seek from fine art objects over the Internet. Of course, what’s lacking is a comprehensive supply of digitized representations of these objects. How hard would it be to solve that problem?

Smaller museums have always struggled. What has kept them alive has been the need to have a survey collection available at universities and important regional centers. In the future, however, the survey collection will no longer be an important function of the regional museum because that will be provided over the Internet from major central museums. The only way for a small museum to distinguish itself is to specialize in some niche and develop the definitive collection of some important category of object. Small museums all over the world are now beginning to face up to this reality, so we should see them reshaping their collections over the next several decades.

One of the first things done in preparation for renovating the Statue of Liberty in the 1980s, in anticipation of its centennial, was to create an immensely detailed and highly accurate 3-D map of the surface, gathered by laser photogrammetric techniques, as reported in the press at the time. Since then progress in technology has created 3-D scanners capable of mapping the surfaces of moderate-sized objects in considerable detail. This, combined with appropriate photography and texture-mapping, can, at least in principle, produce digital representations of medium-sized, medium-complexity objects that would be suitable for many casual art education purposes. Support for more detailed close-up examination would probably involve additional techniques, some of which may be uneconomical at present. The work of Octavo in creating digital representations of rare books gives us a hint of what is coming.

Of course, this will take a lot of time and money. But the results will be fascinating. The largest collections are so vast that the exhibition halls that seem so large to us when we visit are nonetheless incapable of displaying more than a tiny fraction of the available riches. With digital representations available interested people will be able to explore in the sort of depth that is only available today to accredited researchers. Moreover, people living in remote locations not convenient to a large comprehensive public collection will have access to the cultural riches of humanity.

Of course, this raises the question of how it will be funded. The existing art ecosystem may not be as stable and healthy as it seems. Already the need for funds has forced smaller museums to “deaccession” parts of their collections. Only the largest and richest museums located at major global crossroads have any clear path forward. As we’ve seen as music and movies have begun their migration from the bricks-and-mortar past into the digital future, change can be disruptive and traumatic. A new economic ecosystem for art will evolve over time and stability ultimately will return. With the lessons of the music and video worlds behind us and the clear evidence that the new ecosystem that is emerging has the potential to be lucrative, we can hope that the evolution of museums will be much less traumatic.

Books versus Covers

Back when I was a young scholar there were several things one learned that violated the “never judge a book by its cover” rule. One was that when you saw a disheveled fellow walking down the street talking to himself, you could reliably assume that he was disturbed and probably not taking his medication. And you could assume that a nicely typeset and printed article was worth reading.

Things have changed.

Now when you see an unshaven fellow in rumpled clothes walking down the street conducting an animated conversation, you can’t assume that he’s off his chlorpromazine. He might just as easily be an investment banker working on a big deal.

Why did typesetting signify quality writing? Dating from the days of Aldus Manutius, typesetting a book or an article attractively in justified columns using proportionally spaced fonts was a time-consuming task involving expensive skilled labor. Because of that high up-front cost, publishers insisted on strong controls over what made it to press. Thus we had powerful editors making decisions about what got into commercial magazines and books. And we had legions of competent copy editors engaged in reviewing and refining the text so that what did make it to press was spelled correctly, grammatically sound, and readable.

No one ever had to tell us explicitly that nicely typeset stuff was generally the better stuff; we learned it subconsciously.

Some years ago, in the first blush of desktop publishing, someone handed me a beautifully typeset article. Shortly after starting to read it I realized that it was hopeless drivel. After a few repetitions of this experience I came to the realization that with FrameMaker, Word, and similar systems, prettily typeset output could now be produced with less effort than a draft manuscript took in the bad old days. An important cultural cue was lost. The book could no longer be judged by its cover.

Fixing a bug in the TreeTable2 example

This New Year I resolved to run backups of our computers regularly in 2007. My vague plan was to dump the data to DVDs, since both of our newest machines, a Dell PC running Windows XP Pro and a Mac, have DVD burners.

What, to my dismay, did I learn when I examined the Properties of my home directory on the PC? It weighs in at over 140 gigabytes. The DVDs hold about 6 gigabytes, so it would take at least 24 DVDs to run a backup. Aside from the cost, managing 24 DVDs sort of defeats the purpose.

Before going to plan B, getting an outboard disk drive to use as the backup device, I thought I’d investigate all of this growth in my home directory. Last time I looked, my home directory was less than 10 gigabytes.

In the past I’ve used du from the bash command line to investigate the file system. This is powerful, but it’s slow and very painful. What I really wanted was a tree browser that was smart enough to show me the size of each subtree.
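For the record, the kind of du incantation I mean looks something like this (these are GNU coreutils flags; BSD and macOS spell the depth option -d 1):

```shell
# Report the size of each immediate subdirectory of $HOME in kilobytes,
# largest first -- a poor man's "which subtree is eating my disk?"
du -k --max-depth=1 "$HOME" | sort -rn | head -20
```

It works, but you end up re-running it by hand at each level of the tree, which is exactly the pain a tree browser removes.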

In a project that I’d worked on a couple of years ago I learned that there’s a cool thing called a TreeTable that has just the right properties. The leftmost column is a hierarchical tree browser, while the columns to the right are capable of containing data associated with the tree nodes on the left.
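As a rough sketch of the idea – this is my own minimal model over the file system, not the API of any particular treetable library – the tree column and the size column might look like:

```java
import java.io.File;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical minimal tree-table model: column 0 is the directory tree
// itself; column 1 is the total size, in bytes, of the subtree at each node.
class FileSizeModel {
    private final File root;

    FileSizeModel(File root) { this.root = root; }

    File getRoot() { return root; }

    // Tree structure for the leftmost column.
    List<File> getChildren(File node) {
        File[] kids = node.listFiles();
        return kids == null ? Collections.emptyList() : Arrays.asList(kids);
    }

    // Data for the size column. Note the long: directory totals
    // easily exceed what a 32-bit int can hold.
    long getSize(File node) {
        if (node.isFile()) return node.length();
        long total = 0;
        for (File child : getChildren(node)) total += getSize(child);
        return total;
    }
}
```

Nothing fancy – but notice that the size is a long, which turns out to matter.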

Thought I, “let’s get a treetable class from somewhere and then marry it with some code that can inspect the file system.” So I googled for ‘treetable’ and found not only a very nice treetable library available free, but an example built using it that did exactly what I wanted.

After downloading the source code and creating a project in Eclipse, I ran it. It worked nicely and was just what I wanted. But there was one small problem.

It reported that my home directory had a negative size:

Tree Table - Negative Size

Immediately that told me that somewhere in the code a node size was being represented as an integer, a 32-bit quantity that couldn’t represent more than 2 gigabytes before wrapping and showing a negative number. What I really wanted was an unsigned 64-bit number, though I suspected that I’d have to settle for a long, a 64-bit signed number. That would be adequate for now, since my 140 gigabyte file system size could be represented comfortably in a 38-bit integer.
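The wrap-around is easy to demonstrate. A minimal sketch (the class and method names here are mine, not from the TreeTable2 source):

```java
// Demonstrates the 32-bit wrap-around: casting a long file size
// above 2 GB to int silently discards the high bits.
public class OverflowDemo {
    static int truncate(long bytes) {
        return (int) bytes; // the same kind of coercion as in the bug
    }

    public static void main(String[] args) {
        long threeGB = 3L * 1024 * 1024 * 1024; // 3221225472 bytes
        System.out.println(truncate(threeGB));  // prints -1073741824
    }
}
```

Anything over 2^31 - 1 bytes (2 gigabytes, give or take) comes out mangled, usually negative.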

The next step was to find and fix the problem with the code. My fear was that the underlying system call was returning an integer, which would have made the fix potentially quite painful. Fortunately, however, the problem turned out to reside in a single line of code in FileSystemModel2.java:

if (fn.isTotalSizeValid()) {
    return new Integer((int)((FileNode)node).totalSize());
}

In this you can see that the long returned by totalSize() from the FileNode node is being forcibly converted (don’t you love the word “coerce”?) to an integer.

Replacing the coercion with an appropriate Long object was the work of moments:

if (fn.isTotalSizeValid()) {
    return new Long(((FileNode)node).totalSize());
}

Which had the desired result:

Tree Table - Correct Size

With this version I was able to navigate rapidly to the directory where I had stored the video that I’d made at the Bar Mitzvah of a friend’s son – files that I certainly didn’t need to back up and that represented the vast bulk of the 140 gigabytes.

Source code and education

For a long time I’ve been interested in how good programmers get that way. Back in 2002 I posted a comment to a mailing list of hackers. This group is the original sort of hackers – people who program for love, not the modern sort who write viruses and try to crack systems. One of them was so taken by the comment that he posted it on his own website.

What it says is:

If we taught writing the way we try to teach programming …

Imagine if we tried to teach writing (in English or any other natural language) the way we try to teach programming.

We’d give students dictionaries and grammar books. We’d lecture them on the abstract structure of stories. We’d give them dreadful stuff to read – only things written by the most junior writers, like advanced underclassmen or young grad students (some of whom can indeed write well, but most of whom are dreadful). We’d keep the great literature secret. Shakespeare would be locked up in a corporate vault somewhere. Dickens would be classified Secret by the government. Twain would have been burned by his literary executor to prevent it competing with his own efforts.

And when people take jobs as writers (here the analogy begins to break down) their primary assignments for the first five to fifteen years of their working lives will be copy editing large works that they won’t have time to read end-to-end, for which there is no table of contents or index, and which they receive in a large pile of out-of-order, unnumbered pages, half of which are torn, crumpled, smudged, or otherwise damaged.

Is it any wonder that good programmers are so rare in the wild?

The thinking behind that statement developed back in the 1980s when I was a grad student at CMU. A group of grad students, me among them, met monthly to read code and drink wine. We all agreed that an important ingredient in learning to be a good programmer was reading good and bad code.

Unfortunately, in those days there was precious little code to read. Interestingly, all of the best software research and education institutions of the time were organized around repositories of software that all of the members contributed to and partook of. I include in this category organizations that I knew, or knew of, well enough to make this claim. They included MIT’s AI Lab, Stanford’s AI Lab, CERN, CMU’s Computer Science Department, IBM’s Research Lab, Princeton, Yale, and Berkeley. (There were certainly others, but I didn’t know people there or what sort of source code sharing went on there.) On reflection it’s interesting to realize that this is similar to how some of the earliest universities, Oxford and Cambridge in the UK, came about – a bunch of scholars pooling their most critical and precious resource, their books.

In the early days of software being a programmer meant much more than writing code. Programming included working with users, designing user interfaces, laying out the architecture, as well as writing the actual code. Nowadays we expect software people to be highly specialized and work in large teams, but we continue to believe deeply that a software person must be broadly educated and experienced to be valuable.

Educating a programmer in the ’80s was a challenge because our models of what software was and how it should be built were only beginning to gel. Thus you couldn’t learn design patterns because they hadn’t been invented. Object-oriented programming was implicit in the Simula work that dated from the late 1960s, but the OO intellectual movement didn’t really form until some key ideas escaped from Xerox’s Palo Alto Research Center. And the adoption of those ideas was delayed by the limitations of Smalltalk until C++ and later Java reached maturity.

Things have changed.

The source code to many interesting systems is broadly available to anyone now. The result is that we now have a better ability to educate software people today than twenty and more years ago. And the opportunity to create great software education is no longer limited to the small number of institutions that managed to combine wealth and vision in the right mixture to produce comprehensive source code repositories. Anyone with an Internet connection can get tons of source code to study. Now the challenge is how to focus attention, given how much is available.

In addition, we’ve come a really long way on building software that is composable. UNIX pipelines were a tantalizing hint of the power of composable modules, but they were never quite enough to bring composition into the mainstream. Now, however, the real action is in the composition of services into mashups, something that can be done rapidly and easily without any formal computer science training.
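The canonical illustration of that pipeline composability is a word-frequency counter built entirely from single-purpose tools (the input filename here is just for illustration):

```shell
# Each stage does one small job; the pipes compose them into a
# top-ten word-frequency report.
tr -cs '[:alpha:]' '\n' < essay.txt |   # split into one word per line
  tr '[:upper:]' '[:lower:]' |          # normalize case
  sort | uniq -c | sort -rn | head -10  # count, rank, take the top ten
```

No one of these programs knows anything about word frequencies; the power is entirely in the composition.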

And with the source code sharing culture, it’s increasingly easy for great artists to compose instead of merely imitating.