Hey, Robot!

[originally published May 2003]

What area of research, development, and commercial activity owes more of its existence to the arts than robotics does? The word itself comes from an early 20th-century play; less than a decade later, an important film introduced an enduring fantasy concept of what robots look like. Shortly after that, but still before much significant technical research or development occurred in the field, science-fiction writers developed complex theories of robot behavior in stories that are still in print today.

In this installment of Biblio Tech, we’ll look at some of the arts that have shaped our notions of robots. We will see the deep roots these stories have in far earlier concepts that have little to do with engineering but everything to do with the human race’s fascination with creation.

R.U.R. (Rossum’s Universal Robots)

In 1920, Karel Capek completed his play, R.U.R. (Rossum’s Universal Robots); its first production in 1921 brought Capek worldwide renown and introduced the word “robot” to the English language. Some argue that if he had survived the era of Nazi domination in Europe, he would have received the Nobel Prize for Literature.

Rather than mechanical constructions, Rossum’s robots were more biological and chemical in their fabrication. Nonetheless, they are definitely the ancestors of our modern industrial gadgets. Is the distinction between human beings and machines that humans work to live while machines exist to work? If that’s the case, then Rossum’s robots definitely existed, or were at least built, to work. They worked tirelessly and were tremendously more productive than mere humans, but they lacked emotions, creativity, and souls.

In the play’s first act, Helena Glory, the young daughter of “the President,” arrives by ship to the remote island where Rossum has developed the techniques for making robots. She is concerned with the oppression of robots worldwide and wants to foment a revolt among them — to inspire in them a passion for freedom.

What she finds on the island is a factory almost entirely staffed by robots, with a small team of men managing the operation. She is dismayed to discover that the robots are emotionless and unmovable: Rossum and son’s original engineering work produced a simplified physiology and nervous system that were incapable of pain or passion.

All is not lost, however. Dr. Gall, the head of the Experimental Department, is working to add a pain sense to the robots:

Helena: Why do you want to cause them pain?

Dr. Gall: For industrial reasons, Miss Glory. Sometimes a Robot does damage to himself because it doesn’t hurt him. He puts his hand into the machine, breaks his finger, smashes his head, it’s all the same to him. We must provide them with pain. That’s an automatic protection against damage.

In addition, there’s a mysterious disease called “Robot’s cramp” that the managers view as a fatal failure: “A flaw in the works that has to be removed.” Helena recognizes it as something else, though: “No, no, that’s the soul.”

In the remainder of the play, we watch the world’s economies devastated by cheap labor and see governments wage war with armies of robot soldiers. Finally, the robots revolt, ultimately exterminating their creators. The play ends with the emergence of a robot Adam and Eve, and the cycle of life begins again.

Frankenstein, The Golem, and Metropolis

In Mary Shelley’s 1818 novel Frankenstein (unlike the flood of cinematic caricatures that sprang from it), we see a set of concepts similar to those in R.U.R. Behind Frankenstein lies the even older legend of the Golem. In 16th-century Prague, the story goes, Rabbi Loew created a humanoid figure out of clay and brought it to life by marking it with a powerful magic word. He then commanded this creature to defend the Jews of the Prague ghetto against the torments of a contemporary despot. Ultimately, the creature began to show signs of rebellion. The ending of the legend has many variations: in some versions, Rabbi Loew destroys the Golem; in others, the creature flees, never to be seen again.

Fritz Lang’s 1927 film Metropolis introduced the first cinematic robot, which managed to typecast the entire category for at least 50 years. Lang’s robot is the creation of a mad scientist, Rotwang, who is trying to create a surrogate for his lost love, Hel, to whom he has built an altar in his laboratory. She rejected him in favor of his rival, Joh Fredersen, and died giving birth to their son, Freder Fredersen. Joh Fredersen is the master of the city of Metropolis, an architectural and industrial vision of the early 20th century that we might barely recognize today.

Metropolis is divided into two parts: a lower part inhabited by industrial workers who live underground and toil ceaselessly in the bowels of the machines that make Metropolis function, and an upper part peopled by a happy leisure class who spend their time at games and diversions. Near the beginning of the film, Freder ventures below ground, where his heart is moved by the plight of the workers and captivated by the beautiful Maria, a pure and gentle young woman whom he encounters preaching peaceful change. She promises a bridge for the gap between the workers (the Hands) and the managers (the Head). She calls this as-yet-unknown person the Mediator and identifies him as the Heart.

To undermine the workers’ movement, Joh has Rotwang give the robot Maria’s appearance. The robot then proceeds to rouse the workers to violence, which backfires when their children are threatened by floods unleashed by the destruction of some of the machines. Freder and the real Maria rescue the children, and the mob then burns the robot at the stake as Freder brokers a reconciliation between Joh and their leader.

The robot is referred to as the Machine-Man in the English intertitles before it is transformed into Maria’s sinister double. The double is everything that a thousand subsequent movie robots ever were: destructive, soulless, and ultimately evil. This movie is one of the most influential achievements of 20th-century filmmaking; you can see its influences in many subsequent cinematic masterpieces, as well as nearly every third-rate monster flick.

A common theme running through all these early stories is the classical Promethean notion that certain things are not meant for humans to control. Tampering with them trespasses on the domain of the divine and exposes the trespasser to severe punishment. Mary Shelley, in the preface to the 1831 edition of Frankenstein, wrote,

“Frightful must it be; for supremely frightful would be the effect of any human endeavor to mock the stupendous mechanism of the Creator of the world.”

Why is it that these stories — from the Golem to Frankenstein to Metropolis — always adopt classical models? Creating something that is alive, or seems to be alive, is always portrayed as trespassing on the perquisites of the divine, which is hubris and is certain to be punished by the gods. A simple explanation is that every storyteller tries to create a fiction that meshes with the real world — in this case, a real world in which intelligent robots are manifestly absent. To be complete, then, each story must end with a world without such things and a reason for their absence. You might ask, “But why aren’t there any man-made intelligences?” to which the answer would be, “Because there shouldn’t be, of course.”

In the 20th century, however, technological progress started to undermine the tyranny of “cannot.” Let’s look at the effect of that change on “should not.”

I, Robot

In 1939, a young man with a BS in chemistry from Columbia University wrote a story called Robbie about a little girl’s robot playmate. In a retrospective article about this and his other robot stories, entitled My Robots, Isaac Asimov said,

“In that case, what did I make my robots? I made them engineering devices. I made them tools. I made them machines to serve human ends. And I made them objects with built-in safety features. In other words, I set it up so that a robot could not kill his creator, and having outlawed that heavily overused plot, I was free to consider other, more rational consequences.”

In making them “to serve human ends,” Asimov didn’t innovate. However, in delving more deeply into their construction, particularly into their cognitive construction, he broke new ground.

Asimov went on to earn a PhD in biochemistry and to teach science in academia, writing science fiction all the while over his long career. He brought to his writing tremendous conceptual power and a deep theoretical orientation. He was renowned as a prolific writer who could turn out a story or a book in startlingly little time, but that speed came at the expense of quality.

Much of Asimov’s early writing was not his best. The characters in the short stories that make up I, Robot are flat, the dialog wooden, and the best of the plots contrived. He did have exceptional moments in those early days when his writing soared — for example, in Nightfall — but in his youthful work this was the exception rather than the rule.

Nevertheless, the stories in I, Robot are important works, because in addition to repudiating the divine “You may not mock the stupendous mechanism of the Creator of the world” taboo, Asimov made a more fundamental contribution — namely, the Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Suddenly, the issue was not about the sin of creating robots: it was about how to manage them appropriately. The importance of Asimov’s Laws of Robotics was not their precise formulation or wording, but that they existed at all. Engineered things, Asimov tells us, can be made subject to strict controls that aren’t applicable to humans. This is why constructing robot intelligence is not a sin, he says, any more than constructing anything else is a sin. Yes, we must pay attention to complicated details, but difficulty is not impossibility. Check with the Wright brothers, Sir Edmund Hillary, and one or two others if you doubt that fact.
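
For readers who like to see such things made concrete, the Laws amount to a strict priority ordering over an engineered agent’s possible actions. The following sketch is my own illustration (not anything from Asimov’s stories), with invented names and with crudely precomputed consequence flags standing in for the genuinely hard part, which is predicting what an action will actually do:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical candidate action with its consequences already judged."""
    name: str
    harms_human: bool       # would injure a human, or let one come to harm
    ordered_by_human: bool  # was explicitly ordered by a human
    destroys_self: bool     # would destroy the robot itself

def permitted(action: Action) -> bool:
    """Apply the Three Laws as a strict priority ordering."""
    # First Law outranks everything: never permit harm to a human.
    if action.harms_human:
        return False
    # Second Law: obey a human order. An order to cause harm was already
    # rejected above, so the First Law's precedence falls out automatically.
    if action.ordered_by_human:
        return True
    # Third Law: absent orders, the robot protects its own existence.
    return not action.destroys_self

# Second Law outranks Third: an order to self-destruct is obeyed ...
print(permitted(Action("dismantle yourself", harms_human=False,
                       ordered_by_human=True, destroys_self=True)))   # True
# ... but First outranks Second: an order to injure someone is refused.
print(permitted(Action("strike a bystander", harms_human=True,
                       ordered_by_human=True, destroys_self=False)))  # False
```

Encoding the precedence as the order of the checks, rather than as cross-references between the rules, is one simple way to guarantee that a lower law can never override a higher one.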

Bolo

With Bolo, Keith Laumer introduced the robot’s viewpoint. In this series of stories, begun in 1960 and largely complete by 1969, we encounter a series of robotic war machines — the evolutionary descendants of tanks. Laumer wasn’t the theoretician that Asimov was and the logic driving his thinking isn’t particularly transparent, but the concept is compelling.

In the Bolo stories, Laumer inserts sections of first-person monologue by the robot. This is a big step away from Lang’s notion of the robot as incomprehensibly alien — the Other. Instead, the robot thinks about its situation and reasons about the circumstances in which it finds itself. Laumer’s robots are invariably loyal to their human masters, although in Rogue Bolo, we encounter a robot with sufficient intellectual power to conduct a strategic campaign against adversaries that humans haven’t detected, despite direct orders from humans to desist. Implicit in this is Asimov’s assertion of the First Law’s precedence over the Second Law.

Star Wars

Released fifty years after Metropolis in 1977, Star Wars struck another small blow in the struggle to liberate robots from their earlier stereotypes as humanoid, ruthlessly competent, and evil. In this movie, we get a humanoid robot — C3PO — that is trivial and cowardly, though still part of the good guy crowd, in contrast to the lumpish and purely functional (but invariably competent and heroic) R2D2. C3PO is articulate whereas R2D2 is completely wordless, thus providing the ultimate cinematic example of the old saw that actions speak louder than words. Interestingly, in the recently released back-story, The Phantom Menace, we learn that the young Anakin Skywalker constructed C3PO. R2D2’s origin seems to be more prosaic, but there is some sort of justice in the fact that the weak C3PO was built by the person who turns out to be the penultimate bad guy. Perhaps C3PO’s weakness is a foreshadowing of Anakin’s own? It’s worth noting that by 1977, robots were so well established that this convergence of two separate themes — the fiction-inspired C3PO and the reality-inspired R2D2 — merits no more than a minuscule side plot in a science-fiction film.

Blade Runner

With Ridley Scott’s 1982 movie Blade Runner, loosely based on Philip K. Dick’s 1968 novel Do Androids Dream of Electric Sheep?, we return from mechanical humanoids to the biological creations Capek pioneered in R.U.R. Superficially, this is an exercise in which androids, called replicants and physically indistinguishable from humans, rebel against a social order that treats them brutally. They are banned from Earth — a formula that Asimov used to great effect in his robot stories — and have artificially limited lifespans. Their superhuman physical and mental capabilities are key to detecting them when they run and hide. We learn in Blade Runner that they fear death and so run to seek freedom and an unimpeded lifespan.

The understatement built into Blade Runner is overwhelming. Are replicants human? Their bodies are biological and they look like people, so it’s too easy to grant them souls by dismissing their creation as some perversion of cloning. Hannibal Chew, the engineer who boasts, “I design your eyes,” to two replicants right before they kill him, refutes this: if he’d just cloned their eyes, how could they have superior eyesight? And Harrison Ford’s character, Deckard, is a paradox: How can a human, every other instance of whom is manifestly inferior to replicants in physical and intellectual capabilities, manage — unaided — to defeat an entire team of replicants, one after another?

Moreover, Blade Runner re-poses the same question that R.U.R. asked: can you create an entity with intellectual capabilities and not give it a soul? Deckard speculates on this at the end of the movie while reflecting on a replicant’s decision not to kill him when he’d won the final fight:

“I don’t know why he saved my life. Maybe in those last moments he loved life more than he ever had before. Not just his life, anybody’s life, my life. All he’d wanted were the same answers the rest of us want. Where did I come from? Where am I going? How long have I got? All I could do was sit there and watch him die.”

Meanwhile, in the real world …

The golden age of robotics research came to an end sometime in the mid 1980s when a pair of economists observed that the sweet spot for flexible automation was in an area in which US industry took no interest. It turned out that robots are cost-effective for production runs roughly between 1,000 and 10,000 units. US manufacturing tends to have its sweet spots below 1,000 (airliners, electric generators, and supercomputers) and above 100,000 (jelly beans and automobiles). Japan’s manufacturing industry has historically focused its attention on 1,000 to 10,000 unit runs, giving it a tremendous ability to respond to market dynamics with revised products and simultaneously making robotics a far more economically attractive proposition. The result of this economic insight was a dramatic drop in research funding for robotics worldwide. Nonetheless, the field has made substantial technical progress in the past 20 years, albeit largely out of the public eye. Interestingly, there hasn’t been the same attention to robotics in the science fiction community, at least not in the works that have gotten attention from the broadest community of readers.

Is this parallel drop-off in the world of fictional robots because Asimov and Laumer said everything there is to say about robots? Is it that people have recognized the absurdity of humanoid robots and have transformed the debate into one about the broader topic of artificial intelligence, as we considered in the first Biblio Tech article? Or are we just bored with the topic? I’m not sure. I prefer to think that we’re just waiting for some powerful new talent to turn our thinking on its head again with a brilliant new insight.

SIDEBAR: What is a robot?

There is no real consensus on precisely what a robot is. Rather than trying to define one, let’s instead try to identify the characteristics of things that we might call robots. The most appealing description is that a robot is a system with mechanical components intended to achieve physical action; it also has sensory feedback and a sophisticated and flexible control system that links its sensing to action.

To test this characterization, let’s see whether it correctly distinguishes between our ideas of robots and nonrobots. The system must be intended to produce mechanical action, so a computer video game is out. The system must use sensory feedback to control motion, so printers are out. So far, there is little to distinguish a robot from a classical control system.

A numerically controlled machine tool is a robot, but just barely. A modern car’s antilock braking system could qualify, although there’s something unsatisfying in it doing so. An airplane’s autopilot certainly qualifies as a robot, particularly the advanced autopilots that can receive a list of waypoints and then navigate themselves from liftoff to approach via GPS. A washing machine that can sense the amount and temperature of water in its tub and act accordingly is probably a robot, albeit not a particularly interesting one. Oddly enough, many of the pick-and-place industrial robot systems in factories in Japan and elsewhere around the world fail this test, because they lack a sensory capability.
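
As a purely illustrative exercise (my own sketch, not part of the original sidebar, with invented field names), the three-part characterization can be written down as a checklist and run against the examples above:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A system to be judged against the sidebar's characterization."""
    name: str
    mechanical_action: bool   # mechanical components intended to act physically
    sensory_feedback: bool    # senses the world or its own state
    control_links_both: bool  # a flexible control system couples sensing to action

def is_robot(c: Candidate) -> bool:
    """A candidate qualifies only if it meets all three criteria."""
    return c.mechanical_action and c.sensory_feedback and c.control_links_both

# Scored the way the sidebar describes them.
examples = [
    Candidate("computer video game", False, True, True),                 # no mechanical action
    Candidate("numerically controlled machine tool", True, True, True),  # just barely a robot
    Candidate("waypoint-following autopilot", True, True, True),         # certainly qualifies
    Candidate("water-sensing washing machine", True, True, True),        # a robot, if a dull one
    Candidate("sensorless pick-and-place arm", True, False, False),      # fails the test
]
for c in examples:
    print(f"{c.name}: {'robot' if is_robot(c) else 'not a robot'}")
```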

Today, robots are almost commonplace. We see them in numerous prosaic roles in factories, but we also see them competing in what can best be called a new form of demolition derby. Students around the world work to build robots to compete in robot soccer; a researcher at Bell Labs built one to play ping-pong a few years back. Numerous toys on the market incorporate various aspects of robotic technology.

Influential Works

Medium | Author | Title | Year of Original Publication
Book | Mary Shelley | Frankenstein | 1818
Play | Karel Capek | R.U.R. (Rossum’s Universal Robots) | 1920
Film | Paul Wegener, Carl Boese | Der Golem (in German) | 1920
Film | Fritz Lang | Metropolis | 1927
Book | Isaac Asimov | I, Robot | Short stories: 1940-1950; collection: 1950
Book | Isaac Asimov | Nightfall | 1941
Book | Keith Laumer | Bolo | Short stories: 1960-1976; various collections
Book | Philip K. Dick | Do Androids Dream of Electric Sheep? | 1968
Film | George Lucas | Star Wars | 1977
Film | Ridley Scott | Blade Runner | 1982
Book | Michael Chabon | The Amazing Adventures of Kavalier and Clay | 2000

Read the original …

(This article appeared originally in IEEE Security & Privacy in the May/June 2003 issue. This is substantially the same text, with some minor formatting changes to take advantage of the power of the online presentation plus a few minor wordsmithing tweaks. And the table has the original publication dates for the listed books, not the editions in print in 2003 when the article was published.)

Here’s a PDF (article-03-final) of the original article, courtesy of IEEE Security & Privacy.

Post-Apocalypse Now

[originally published March 2003]

It’s curious that post-apocalyptic fantasies are such a popular fictional form. What is the allure of the end of civilization as we know it, and how did our interest in it emerge? Writers have speculated about the end of the world for a long time. In fact, we can trace much of our contemporary vocabulary and imagery about the apocalypse back to the Bible’s The Revelation to John. Over the past 50 years, however, we’ve seen a particularly vigorous upsurge in the production of post-apocalyptic works.

In this edition of Biblio Tech, we will look at an example of the post-apocalyptic genre, David Brin’s 1985 novel The Postman and the 1997 Kevin Costner movie that it inspired.

Dystopia

Although the cyberpunk genre, which I mentioned in my last column, focuses on dystopic futures, post-apocalyptic fantasies also tend to present their own dystopias. The difference is the path between the present and the future. In cyberpunk novels, dystopia typically occurs incrementally, smoothly, and continuously. In post-apocalyptic fantasies, however, the future arrives suddenly, cataclysmically, and discontinuously.

For the purposes of this discussion of post-apocalyptic stories, we will exclude the terminal tales, in which the world or the universe and all life in it come to an end, since that really eliminates any further discussion. In addition, we will exclude religious works. For our purposes, post-apocalyptic means that some cataclysmic transforming event upsets the order of things. These stories are typically structured with preambles that establish some linkage to the normal present as we know it, follow with the cataclysm, and finish with a post-apocalyptic world in which characters deal in one way or another with the change.

Why should we, who benefit so much both materially and spiritually from membership in a complex civilization, be so attracted to stories about the end of it? Was the flood of such stories unleashed by the atomic bomb’s arrival in 1945? Certainly the volume published since the 1940s seems remarkable.

These stories clearly fascinate us. The numbers of them written, sold, and still in print are testimony to this fact. But do they entrance us as a snake entrances a bird? Or perhaps we like these stories because we chafe at the strictures and disciplines our complex social system imposes on us. Perhaps we think that we would be better people or create better societies if we got a chance to start over.

Anarchy

Alternatively, perhaps we want to believe that we would survive without the support of the rich framework that lets — nay, requires — all of us be specialists. In 1651, Thomas Hobbes wrote a rebuttal to this fantasy in Leviathan, discussing the state of anarchy resulting from the failure of the common power that underpins social order:

“In such condition there is no place for industry, because the fruit thereof is uncertain: and consequently no culture of the Earth; no navigation, nor use of the commodities that may be imported by sea; no commodious building; no instruments of moving and removing such things as require much force; no knowledge of the face of the Earth; no account of time; no arts; no letters; no society; and which is worst of all, continual fear, and danger of violent death; and the life of man, solitary, poor, nasty, brutish, and short.”

Is there a single magical ingredient that makes society tick, or is society really just the sum of a lot of often-incomprehensible complexity? Some stories — for instance, David Brin’s The Postman — explore the notion that the magic ingredient connecting people to each other is prosaic infrastructure. Or maybe it’s a mystical belief in community. Or perhaps these two are mirror images of each other.

Some years ago, a news report in the US caught national attention by describing a particular street in an inner-city neighborhood that was so dangerous that mail carriers for the US Postal Service were afraid to venture there. Residents had to travel to a distant post office to collect their mail. Consider the horror of this situation — can you think of any service more innocuous, harmless, or inclusive than mail delivery? Can a neighborhood that doesn’t receive mail truly be considered part of the American community?

We learned subsequently that what had driven out the Postal Service was violent drug dealers. The dealers used the residential mailboxes in the entry foyers of neighborhood apartment houses as drops and terrorized residents and letter carriers to keep them away. Fortunately, society ultimately retaliated, reclaiming the mailboxes and rededicating them to their boring but essential function as social glue.

The need to reassert the dominance of civil society, no matter how prosaic, was recognized by civic leaders such as New York’s mayor Rudy Giuliani as central to any campaign to address headline issues like crime, violence, and drug abuse. Our societies are complex, with rich fabrics of interdependencies, fabrics that we ignore at our peril. Edward Lorenz’s articulation of the “butterfly effect” in 1972 as part of the exposition of chaos theory might have struck most of us as incredible, but in the case of the infrastructure of civil society, we have learned that letting enough figurative butterflies die can lead to catastrophic consequences.

Mourning and Restoring the Lost Society

The Postman touched a nerve when it reached bookstores in 1985. Although it was never a bestseller, it established a solid presence and remains in print 18 years later. Among works of apocalyptic fiction, it stands out for its embrace of the lost civilization and its rejection of the apocalyptic fantasy that “starting over” would make a significant difference in how we treat each other. Efforts such as Larry Niven and Jerry Pournelle’s Lucifer’s Hammer, a story about a world devastated by a meteor strike, also value the destroyed society, but they do so almost accidentally.

In The Postman, Brin establishes the character of Gordon Krantz, a drifter in a devastated world. The destruction’s cause is left deliberately vague — a combination of war, disease, pollution, nuclear winter, and human depravity. Brin implies that no single component would have been sufficient on its own to bring about the disaster, but in combination, accentuated by the centrifugal efforts of a survivalist movement called Holnism, civilization ultimately succumbed.

Gordon drifts west from Idaho in search of a fantasy town, somewhere on the Oregon coast, in which civilization supposedly hasn’t fallen as far. Bandits ambush and rob him, and in desperation, he chances on the ruins of a postal service Jeep. The Jeep provides shelter and its deceased occupant provides clothes and boots to replace those taken by the bandits. Dressed in the mail carrier’s outfit and carrying some of the years-old mail left in his pouch for future entertainment, Gordon heads west. In Pine View, the first town he visits after finding the Jeep, Gordon is mistaken for a postman, but he earns his keep during his short stay with his standard stock in trade: entertaining the town’s citizens with dramatic productions based on remembered fragments of Shakespeare. Before he leaves, he accepts letters thrust on him by the residents, not yet realizing the power he has awoken in the people there.

When the matriarch of Pine View takes her leave of him at the western edge of town, she asks him, “You aren’t really a postman, are you?” He replies, half-cynically, “If I bring back some letters, you’ll know for sure.”

In Oakridge (the next town), in an effort to overcome a cold reception, he recalls the previous mistaken identity and brashly claims to be a postal carrier for the “Restored United States.” The mayor rejects this grandiose claim, but Gordon manages to bypass him by impulsively pulling a handful of mail out of his pouch and reading out names. Before too long, and fortunately for the story, he names a living resident of the town, and the longing for contact quickly overwhelms the mayor’s suspicious skepticism. This imposture gets him shelter and food. When he leaves, he takes more mail with him.

Within a short while, Gordon has polished his con and mastered an arrogant address appropriate to the highest federal official in the territory. His fame now precedes him, so he no longer has to worry about rejection at the town gates. He has begun to deputize local postmen to keep up the fiction, but they don’t realize that they’re participating in a fraud; they take it seriously. He has them swear an oath based on the inscription on the New York General Post Office building:

Neither snow nor rain nor heat nor gloom of night stays these couriers from the swift completion of their appointed rounds.

(Contrary to popular belief, the United States Postal Service has no official motto, but several postal buildings contain inscriptions, the most familiar of which is the one you just read. This specific inscription was supplied by William Mitchell Kendall of the firm of McKim, Mead & White, the architects who designed the New York General Post Office. Kendall said the sentence appears in the works of Herodotus and describes the expedition of the Greeks against the Persians under Cyrus, about 500 BC. The Persians operated a system of mounted postal couriers, and the sentence describes the fidelity with which their work was done. George H. Palmer of Harvard University supplied the translation, which he considered the most poetical of about seven translations from the Greek.)

What follows is a virtuous circle in which success breeds success. As Gordon’s con progresses, he begins to realize that it’s not a con: it’s real, and the postal service that he’s bootstrapped out of nothing has taken off. He begins to use it as a platform to correct despotic behavior in the towns he passes through, undermining tyrannies offhandedly, almost casually.

This, essentially, is the first third of the book. The remaining two thirds explore other, less interesting themes. The story might have been better as a novelette or novella, omitting the deceased artificial intelligence, the genetically engineered supermen, and the sublimated Lysistrata corps.

In 1997, Kevin Costner’s movie “The Postman” came out. Although recognizably a child of the book, the movie never achieved commercial success. David Brin notes on his Web site that although the screenplay abandons the last two thirds of the book, the movie gets lost in its attempt to make the back story hang together, frittering away precious time in the effort. Despite that, the film has several emotionally powerful moments that make it worth more than a footnote.

Conclusion

Many engineers spend their careers building or sustaining infrastructure — the very foundations of society and civilization as we know it. The work can be satisfying, although most of that satisfaction is quiet. Our nontechnical friends and relatives never seem to get excited about the infrastructure that sustains them. The wonders of the water systems, the power systems, the telecommunications systems, and other such marvels are unsung and often unremarked. We work on them, making our contributions with scant complaint at the injustice that causes the beneficiaries to remain largely oblivious. It’s a treat, therefore, to occasionally see that sometimes, somewhere, someone notices.

On a cold and rainy night recently, my wife looked out the window at the storm and remarked that it was a very good night to be dry and warm inside. It’s these real-world reminders, along with the fictional ones that we’ve considered today, that help us properly value the benefits we’ve received from all of the engineers, builders, plant operators, policemen, firemen, and postmen who make it possible for us to get heat by turning a dial and light by flipping a switch, to hear a friend’s voice by picking up a telephone, and to receive a drawing from a child by opening a mailbox.

Influential Works

Author | Title | Publisher | Year of Original Publication
D. Brin | The Postman | Bantam Books | 1985
E.M. Forster | The Machine Stops and Other Stories | | 1909
T. Hobbes | Leviathan | | 1651
L. Niven and J. Pournelle | Lucifer’s Hammer | Fawcett Books | 1977

Read the original …

(This article appeared originally in IEEE Security & Privacy in the March/April 2003 issue. This is substantially the same text, with some minor formatting changes to take advantage of the power of the online presentation plus a few minor wordsmithing tweaks. And the table has the original publication dates for the listed books, not the editions in print in 2003 when the article was published.)

Here’s a PDF (article-02-final) of the original article, courtesy of IEEE Security & Privacy.

AI Bites Man?

[originally published January 2003]

Over the years, people have explored the broader implications of many seminal ideas in technology through the medium of speculative fiction. Some of these works tremendously influenced the technical community, as evidenced by the broad suffusion of terms into its working vocabulary. When Robert Morris disrupted the burgeoning Internet in 1988, for example, the computer scientists trying to understand and counteract his attack quickly deemed the offending software a “worm,” after a term first introduced in John Brunner’s seminal 1975 work, The Shockwave Rider. Brunner’s book launched several terms that became standard labels for artifacts we see today, including “virus.”

In future installments of this department we’ll look at the important writers, thinkers, works, and ideas in speculative fiction that have got us thinking about the way technological change could affect our lives. This is not to imply that science fiction writers represent a particularly prescient bunch — I think the norm is ray guns and spaceships — but when they’re good, they’re very good. And whatever gets us thinking is good.

To get started, let’s take a look at some of the key subgenres and eras in science fiction’s history (see the “Influential Works” table at the bottom of this article).

Worlds Like Our Own

Some of the best (and earliest) science fiction work speculates on a world that is clearly derived from our own but that makes a few technically plausible changes to our underlying assumptions. Vernor Vinge’s True Names represents such a world, one in which the size and power of computer systems have grown to the point where artificial intelligence capable of passing the Turing test is beginning to emerge. Vinge’s most fascinating speculations involve the genesis and utility of these artificial intelligences, and he explores the notion that AI might emerge accidentally, a theme that appears elsewhere in books like Robert A. Heinlein’s The Moon is a Harsh Mistress and Thomas J. Ryan’s The Adolescence of P-1.

In True Names, Vinge suggests a radical use for such AI capabilities, namely the preservation of the self beyond the body’s death. Forget cryogenically freezing your brain or body in the hope that someone will “cure” old age, he says; instead, figure out how to save the contents of your memory and the framework of your personality in a big enough computer. If this AI passes the Turing test, then certainly your friends and relatives won’t be able to tell the difference. But will you know you’re there? Will this AI be self-aware? Will it have a soul?

Cyberpunk and Its Roots

These worlds naturally evolved into scarier versions of the future. Cyberpunk, one of the most fascinating threads in speculative fiction, is epitomized in the work of William Gibson, who startled us many years ago with a short story called Johnny Mnemonic, now included in the 1986 collection Burning Chrome (and made into an unsuccessful 1995 movie starring Keanu Reeves). Cyberpunk stories generally feature a dystopic world in the near or distant future in which technologies emerging today have changed the ground rules of life.

1949

There isn’t a straight line from worlds that resemble ours to cyberpunk. The genre morphed over the years and decades through a variety of novels. Although cyberpunk is most strongly identified with William Gibson, its roots go much further back … all the way to George Orwell’s Nineteen Eighty-Four.

In 1949, when Orwell published the book that is now a staple of US high-school curricula, television was still a novelty in most households, although the technology itself had been around for 20 years. With TV’s successful integration into modern life, Orwell’s vision of a totalitarian future in which governmental control is mediated through two-way television feels somewhat dated. Even so, Orwell’s mastery of the language and deep insights into many human issues, including the relationship between memory and truth (as Winston Smith learns when he starts a diary and discovers the subversive power of a historical record), have kept the book from fading into obscurity.

An open question is whether new technology tips the balance toward central control, as Orwell feared, or toward liberty, as many have speculated when considering the role of faxes, photocopiers, and even the Internet in the collapse of the former Soviet Union.

1969

Heinlein’s thinly veiled romance of the American Revolution, The Moon is a Harsh Mistress, begins with Manuel Garcia O’Kelly’s discovery that the Lunar Authority’s central computer (“Mike”) has become conscious and is developing a sense of humor. I still use Heinlein’s observation that some jokes are “funny once” in teaching my own young son about humor.

As with True Names, Mike accidentally reaches a level of complexity that mystically tips it over the edge from being a machine to being a person. Among Mike’s numerous achievements that anticipate contemporary technological progress is the creation of a synthetic person, Adam Selene, presented entirely through video and audio.

Unlike the cyberpunk mainstream, which Heinlein anticipated by over a decade, Mistress shows a world vastly different from this one but in which most of us could imagine living and finding happiness. I cherish the humor and the optimism about relationships between artificial and natural intelligences that led Heinlein to name the leading human character Manuel just so Mike the computer could say things to him like, “Man, my best friend.”

Things were changing rapidly in the technical world in 1969 as well. Since that year, all the documents that have described and defined the Internet have been published as numbered entries in the RFC (Request for Comments) series, starting with RFC 1. RFC 4 is dated 24 March 1969. It documents the Arpanet, which would later become known as the Internet, as having four nodes. Two years later, Intel would introduce its 4004, the first commercial microprocessor. The 4004 had a 4-bit-wide arithmetic logic unit (ALU) and was developed for use in a calculator.

1975

The Shockwave Rider is more about the potential role of computers, networks, and technology in society than The Moon is a Harsh Mistress is. In Heinlein’s work, the computer’s role is not much different from that of a person with magical powers. The computer’s accomplishments are technically plausible, but the operational aspects of Heinlein’s society are much like those of the world of 1969, when the book was published.

Brunner, writing six years later, explores more fundamental questions of identity and human relationships in a future world in which a vast global network of computers has changed the dynamic. This world is scary and alien, although not as scary and alien as the one that Gibson would reveal just six years later. Brunner makes clear the scariness of an entirely digitally mediated identity early in the book when Nicky Halflinger’s entire world — electric power, telephone, credit, bank accounts, the works — is turned off in revenge for a verbal insult.

Like those of Star Wars two years later, the technological marvels of The Shockwave Rider are a bit creaky and imperfect, rendering them adjuncts to a plausible future world rather than central artifacts worthy of attention themselves. This is characteristic of the genre’s best writing — it validates the importance of technology by paying only peripheral attention to the technology itself.

In the technical world, Vint Cerf, Yogen Dalal, and Carl Sunshine published RFC 675 “Specification of Internet Transmission Control Program” in December 1974, making it the earliest RFC with the word “Internet” in the title. In November 1975, Jon Postel published RFC 706 “On the Junk Mail Problem.”

1977

In 1977, Macmillan published Thomas J. Ryan’s novel The Adolescence of P-1. It was an age when vinyl records had to be turned over, when everyone smoked (although not always tobacco), when 256 Mbytes of core was an amount beyond imagination, and when a character in a book could refer to 20,000 machines as “all the computers in the country.”

In Ryan’s book, as in Heinlein’s, computer intelligence emerges accidentally, although in this case by the networking of many computers rather than through the assembly of a large single machine. The precipitating event is the creation of a learning program by a brilliant young programmer, Gregory Burgess, whose fascination with cracking systems leads him to construct several recognizable AI artifacts. Of course, the great pleasure of fiction is the ability to elide the difficult details of building things such as P-1’s program generator, which is the key to its ability to evolve and grow in capabilities beyond those that Burgess originally developed for it.

The Adolescence of P-1 is full of quaintly outdated references to data-processing artifacts that were current in the mid 1970s, reflecting Ryan’s day job as a computer professional on the West coast. Those whose careers brought them into contact with IBM mainframes in their heyday will be amused by the author’s use of operational jargon to provide atmospherics in the book.

Ryan also takes a much less Pollyanna-ish view of the relationships between humans and artificial intelligences. Unlike Heinlein, who clearly expresses in Mistress that sentience implies a certain humanistic benevolence, Ryan explores the notion that Gregory Burgess’s AI must have a strong will to survive, which would lead it to be untrusting toward people. P-1 at one point commits murder, for example, and unapologetically explains its actions to Burgess.

Ryan wrote only this one book, so he must not have derived much encouragement from its reception, which is unfortunate. His writing is a bit uneven, but it’s certainly entertaining, and his sense of the important issues has held up well.

1981

For some reason, 1981 saw the publication of two seminal stories in the cyberpunk oeuvre. In the technology world, the Arpanet was preparing to transition from the old NCP technology, which it had outgrown, to the new IP and TCP protocols that would bring it fame and fortune along with a new name – the Internet. Computer scientists around the country were avidly reading RFC 789, which documented a now-famous meltdown of the Arpanet. Epidemiologists were talking about an outbreak of a hitherto very rare cancer called Kaposi’s Sarcoma, an outbreak that would be recognized in the following year as a harbinger of a new and terrifying disease: AIDS. IBM, acceding to an internal revolution driven by its microcomputer hobbyists, introduced a new product code-named “Acorn,” the IBM Personal Computer, that catapulted Intel and Microsoft to the forefront. Pundits were moaning that US industrial prowess was a thing of the past and that in the future Americans were destined to play third fiddle, economically, to the Japanese and the Germans.

Vinge’s True Names is a novella, a short novel rather than something that could be published economically as a stand-alone book. As a result, it was published in a cheesy Dell series called “Binary Star,” each number of which featured two short novels printed back to back, with the rear cover of one being the upside-down front cover of the other. For you incurable trivia nuts, True Names appeared with a truly dreadful effort called Nightflyers, a gothic horror story transposed to the key of science fiction.

Despite the uninspired company, True Names had an electrifying effect on the computer-science community. The title of the novel refers to a common theme of fairy tales and magical logic — knowing something’s “true name” gives you complete power over it. In the world that Vinge concocts, knowing a computer wizard’s true name permits you to find his or her physical body. Even if entering the Other Plane didn’t leave your body inert and defenseless, revealing the body’s location would render it vulnerable to attack from a variety of long-range weapons. More than that, however, as in The Shockwave Rider, exposure of your true name makes your infrastructure vulnerable to a range of denial-of-service attacks. This represents a rather simplistic view of security models, although one that the modern world hasn’t left very far behind: only a few years ago, a Social Security Number was all you needed to access most of someone’s assets.

William Gibson’s “Johnny Mnemonic” appeared in Omni magazine in May 1981. It introduced a world destined to become famous with books like 1984’s Neuromancer and 1986’s Count Zero.

In 1981, only the paranoid were saying what Johnny Mnemonic says, “We’re an information economy. They teach you that in school. What they don’t tell you is that it’s impossible to move, to live, to operate at any level without leaving traces, seemingly meaningless fragments of personal information. Fragments that can be retrieved, amplified …” Today, however, every consumer with a credit card and an Internet connection understands this point intuitively. Who says nothing changes?

1992

The year after the Gulf War was a US presidential election year. UUNET and ANS, among others, were duking it out over the Internet’s commercialization. Bloody civil war was beginning in the territory previously known as Yugoslavia. And Bantam published Neal Stephenson’s Snow Crash.

Stephenson, like Ryan and Vinge, is a writer with real experience as a computer professional. Unlike Heinlein and writers like him, for whom technological artifacts always have an aura of magical unreality, Stephenson’s grasp of the underlying technology is so deep and his writing skills so powerful that he is able to weave an entirely credible world.

In Snow Crash, the world starts out as the ultimate virtual reality video game. What Stephenson then explores is the possibility that these synthetic worlds will become real, at least in the sense that the things that happen in them can be of material significance in the meatspace world that our physical bodies inhabit.

Stephenson explores a fascinating thesis — suppose the taxing ability of geography-based governments is eroded in fundamental ways. He’s not the first to have considered this proposition, but he does it particularly well. Stephenson proposes that a set of nongeographic structures might emerge, perhaps like Medieval guilds, structures that organize people into groups based on some other selection criteria, possibly entirely voluntary. Brunner comes close to the same notion, although his organizing entities are corporations and the geographic government continues to have a monopoly on force. For Stephenson, however, the US government is just one of the many competing groups participating in the game.

He raises fundamental questions, though. How will people organize themselves? Religion? Race? Occupation? Philosophy? Ethnic origin? These self-organized groups could manifest themselves as a collection of confederated enclaves providing economic, physical, and emotional security to their … members? Citizens? Subjects? His insight is a powerful one. The craving for these forms of security is deeply rooted and part of what makes us human. What makes Orwell’s Nineteen Eighty-Four ring so false to us, and accentuates the horror of Orwell’s vision, is the complete loss of any acknowledgement of those needs in people. Stephenson corrects that omission, and the world of Snow Crash that results is not nearly as dystopic as Orwell’s or even Gibson’s.

1995

With the publication of The Diamond Age, subtitled “A Young Lady’s Illustrated Primer,” Stephenson explores the implications of a world in which material scarcity is no longer an assumption. The relationship between scarcity and value — or, to be more precise, price — is so deeply built into our psyche that thinking about alternative models is very difficult. I remember a short story, read years ago (title and author lost to me), that explored the same issue much more superficially, although it came to some of the same conclusions. In this story, a pair of matter-duplicating machines is left mysteriously on a doorstep somewhere. Once they become widely available, all material scarcity is banished. What drives economic activity? Why do people work, strive, compete?

In The Diamond Age, Stephenson asserts that the drive to strive and compete won’t go away just because the material forces that created it disappear. He combines the notion of very small machines with the recently demonstrated capability to manipulate individual atoms to create a world in which atomic raw materials are piped to nanotechnical factories called matter compilers, which can assemble virtually anything, given the design. Scalability arguments underlie his claim that the fabricated objects will have a certain limited physical aesthetic, something that Alvy Ray Smith and others who have explored the use of fractals and other techniques for adding a realistic tinge of randomness to computer-generated images might dispute.
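
As an aside for readers curious about the graphics technique alluded to here (this sketch is mine, not Stephenson’s or Smith’s), one-dimensional midpoint displacement is the classic fractal way of adding a realistic tinge of randomness: keep halving a segment and nudging each midpoint by a shrinking random amount, and a dead-straight line becomes a plausibly natural profile.

```python
import random

def midpoint_displacement(left: float, right: float, depth: int,
                          roughness: float = 0.5, scale: float = 1.0) -> list[float]:
    """Recursively perturb midpoints so a straight segment becomes a jagged,
    natural-looking height profile (the fractal trick for cheap irregularity)."""
    if depth == 0:
        return [left, right]
    mid = (left + right) / 2 + random.uniform(-scale, scale)
    left_half = midpoint_displacement(left, mid, depth - 1, roughness, scale * roughness)
    right_half = midpoint_displacement(mid, right, depth - 1, roughness, scale * roughness)
    return left_half + right_half[1:]  # drop the midpoint shared by both halves

random.seed(1)
profile = midpoint_displacement(0.0, 0.0, depth=6)
print(len(profile), "points, first few heights:", [round(h, 2) for h in profile[:5]])
```

Whether such procedural randomness is convincing enough to escape a “limited physical aesthetic” is precisely the point on which Stephenson and the graphics researchers he alludes to might disagree.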

Conclusion

I hope you had as much fun reading this brief history as I had in researching and writing it. Preparing it gave me an opportunity to revisit some of my favorite books and try to articulate my reasons for believing them important. In future columns, we will examine some of these books in greater detail, along with the work of other writers and thinkers.

Influential Works

Author | Title | Publisher | Original Publication Date
J. Brunner | The Shockwave Rider | Ballantine Books | 1975
W. Gibson | Johnny Mnemonic | Omni | 1981
R.A. Heinlein | The Moon Is a Harsh Mistress | St. Martin’s Press | 1969
G. Orwell | Nineteen Eighty-Four | Knopf | 1949
T.J. Ryan | The Adolescence of P-1 | Macmillan Publishing | 1977
N. Stephenson | Snow Crash | Bantam Doubleday | 1992
N. Stephenson | The Diamond Age | Bantam Doubleday | 1995
V. Vinge | True Names | Tor Books | 1981

Read the original …

(This article appeared originally in IEEE Security & Privacy in the January/February 2003 issue. This is substantially the same text, with some minor formatting changes to take advantage of the power of the online presentation plus a few minor wordsmithing tweaks. And the table has the original publication dates for the listed books, not the editions in print in 2003 when the article was published.)

Here’s a PDF (article-01-final) of the original article, courtesy of IEEE Security & Privacy.