Five Borough Bike Tour – 2011 May 1

The day was perfect for riding: not too hot, not too cold, not too humid. I rendezvoused with my teammates Jane and Tamara at the corner of 70th Street and Columbus Avenue at 6:20 AM. After pumping up our tires and adjusting our bicycles, we headed downtown five miles to the starting line. Because we were riding for BronxWorks, Noelle Ito of that organization arranged for us to start near the head of the pack, enabling us to get moving soon after the starting gun (it wasn’t really a gun, but rather big jets of flame emitted from the starting gate).

The first few miles, north on Sixth Avenue (Avenue of the Americas, for tourists) were slow, but we began to move more smoothly once we entered Central Park. We rode north along the eastern side of the Park Drive, exiting at 110th Street and continuing north through Harlem to 138th Street, where we cut over to the Madison Avenue Bridge and the Bronx. We didn’t spend long in the Bronx, returning to Manhattan by the Third Avenue Bridge and then south along the FDR Drive to the 59th Street Bridge.

We crossed over to Queens on the 59th Street Bridge (with me humming the famous Simon and Garfunkel song in my head) and then proceeded north up to Astoria Park where there was a mandatory rest stop. At this point we had traveled 18 of the 42 miles of the ride. After a ten minute break we headed back south through Queens and then across the Pulaski Bridge into Brooklyn.

We cruised through Williamsburg, past the Williamsburg Bridge, and then cut over towards the Manhattan and Brooklyn bridges. We passed through DUMBO, under the Manhattan Bridge overpass near Brooklyn Bridge Park, and then cut over to the BQE. The BQE stretch was fast and straight, if not the most visually exciting.

We paused for a rest stop at Fort Hamilton Park, at the foot of the Verrazano-Narrows Bridge. The ride organizers had set up a stand from which they were passing out bananas to the riders. We all remarked on how perfect these bananas were … cool but not cold, exquisitely ripe, neither hard and grainy nor soft and mushy … just perfect!

Finally we mounted up for the final stretch, the three or four miles that it took to cross the Verrazano-Narrows Bridge over to Staten Island. I had been nervously anticipating the climb over the bridge all during the ride after some daunting comments from my teammates, but in the event the slope was not so steep as to make riding up it particularly challenging … it was long but not particularly difficult.

Down the other side, carefully using my brakes to avoid approaching the speed of sound, and a final rest stop at Fort Wadsworth.

Then a final couple of miles over to the Staten Island Ferry terminal and a pleasant ferry ride back to Battery Park and the subway home.

In total I rode 47 miles – the 42 miles of the bike tour plus the five miles from home to the starting line. We completed the 42 miles of the tour by about 1 PM after the 8 AM start. Tamara’s trip computer, which recorded our speed and distance all the way, reported that we averaged 10 mph over the 42 miles.

It was a lot of fun. I neglected to put sunscreen on my exposed skin and picked up a sunburn, but that was my only mishap. Seeing the city up close on a bicycle this way is really a treat. The ride is quite level and not strenuous. And the ride organizers did a remarkable job of making it easy and safe. The route was well marked, there were repair and refreshment stations everywhere, and the riders were courteous and friendly. All in all, a great way to spend a Sunday.

The Digital Museum (part two)

Four years ago, just before I joined Google, I wrote “The Art Ecosystem and the Digital Museum” on this blog.

At Google I worked to promote the digital museum concept and found a number of similarly motivated folks. A team in Europe had worked with the Prado to put a number of the masterpieces from that museum online in a dramatic way with tremendously high resolution images. Others turned up from around Google and joined in. [By the way, you can look at the fourteen Prado pictures in amazingly high resolution using Google Earth. Just turn on 3D buildings in Earth and then navigate to the Prado and you’ll get a popup for the images.]

Today Google launched the Google Art Project, with participation from seventeen major museums around the world. The site is very cool.

Mr NYGeek’s Kindle – a year later

Almost exactly a year ago I wrote about the new Kindle that a dear friend had given me and the effect that it had had on me. I wrote that item only a few days after receiving it, so it is interesting now to look back at the Kindle after a full year. Let’s look at some of the significant events of the last year involving the Kindle and the entire electronic book space.

Not long after I had received the Kindle I chatted with a colleague, Teddy Kowalski, who had been involved with the Nook development at Barnes & Noble. Soon I ran over to a nearby Barnes & Noble shop, one destined to close in a few days, as it happens, and acquired a Nook. Now I had two different ebook readers.

I found the Nook to be quite comparable to the Kindle. The basic reading UI (forward and back buttons, primarily) is superior on the Nook, but the Kindle is a bit better on the less common functions like zooming around from chapter to chapter or searching.

The Kindle has a clever annotation facility that allows me to select text from whatever I am currently reading and post it to my Facebook wall with my comments. The first time I did this I was delighted to receive a bunch of interesting feedback from my circle of Facebook “friends” with replies and comments on my selection. I am not always interested in sharing my thoughts and notes socially, so the annotation feature is, at this point, cool but not quite as useful as I might like. It comes close to being a way to take notes on what I am reading.

My 2010 Books

Device | Title | Author | Read
Nook | Snow Crash | Neal Stephenson | Yes
Nook | Children of Jihad | Jared Cohen |
Nook | The Shape of Water | Andrea Camilleri |
Nook | Death of a Red Heroine | Qiu Xiaolong | Yes
Nook | Cyber War | Richard Clarke |
Nook | Dracula | Bram Stoker |
Nook | The Girl with the Dragon Tattoo | Stieg Larsson | Yes
Nook | Pride and Prejudice | Jane Austen |
Nook | The Girl Who Played with Fire | Stieg Larsson | Yes
Nook | The Girl Who Kicked the Hornet’s Nest | Stieg Larsson | Yes
Kindle | The Girl with the Dragon Tattoo | Stieg Larsson | Yes
Kindle | The Girl Who Played with Fire | Stieg Larsson | Yes
Kindle | The Girl Who Kicked the Hornet’s Nest | Stieg Larsson | Yes
Kindle | The Lord of the Rings (Trilogy) | J. R. R. Tolkien |
Kindle | The Hobbit | J. R. R. Tolkien | Some
Kindle | The Adventures of Tom Sawyer | Mark Twain |
Kindle | The Adventures of Huckleberry Finn | Mark Twain |
Kindle | The Adventures of Tom Sawyer | Mark Twain |
Kindle | The Adventures of Huckleberry Finn | Mark Twain |
Kindle | The Korean War: A History | Bruce Cumings | Some
Kindle | Autobiography of Mark Twain | Mark Twain | Some
Kindle | The Master Switch: The Rise and Fall of Information Empires | Tim Wu |
Kindle | Zero History | William Gibson | Yes
Kindle | I Remember Nothing | Nora Ephron | Yes
Kindle | The Botany of Desire: A Plant’s-Eye View of The World | Michael Pollan | Yes
Kindle | Essence of Decision | Philip Zelikow | Some
Kindle | Spook Country | William Gibson | Yes
Kindle | Tatja Grimm’s World | Vernor Vinge |
Kindle | The Red Mandarin Dress | Qiu Xiaolong | Yes
Kindle | When Red is Black | Qiu Xiaolong | Yes
Kindle | Blind Man’s Bluff: The Untold Story of American Submarine Espionage | Sherry Sontag and Christopher Drew | Some
Kindle | A Supposedly Fun Thing I’ll Never Do Again | David Foster Wallace |
Kindle | Victory in Tripoli | Joshua London | Yes
Kindle | The Pirate Coast | Richard Zacks | Yes
Kindle | The Bedwetter | Sarah Silverman |
Kindle | The Fuller Memorandum | Charles Stross | Yes
Kindle | The Greatest Trade Ever | Gregory Zuckerman | Some
Kindle | The God Engines | John Scalzi and Vincent Chong | Yes
Kindle | Postwar | Tony Judt | Some
Kindle | The Great Gatsby | F. Scott Fitzgerald | Yes
Kindle | What Women Want: The Global Market Turns Female Friendly | Paco Underhill |
Kindle | Case Histories: A Novel | Kate Atkinson | Yes
Kindle | Reflections on The Decline of Science In England | Charles Babbage |
Kindle | The Two Cultures | C. P. Snow |
Kindle | Leaves of Grass | Walt Whitman | Some

In 2010 I acquired 47 electronic books, including several duplicates. I bought most of them, though several were free. I read 18 of them completely and substantial parts of another nine. While this is nothing like the amount I read back in the days when I was single, when I would read one or two books each week, it feels like a very significant uptick compared to the pace of the last several years.


In practical terms the Kindle/Nook devices have made my commuting time available for reading. I travel from home to work by subway, and I generally have somewhere between ten and twenty minutes, counting both the wait on the platform and the actual travel time. In the past that time was wasted or spent playing simple games on my smartphone, but now it is some of my prime reading time. The device fits in my jacket pocket when the weather is cold enough to require that I wear one, or in my hand otherwise. Opening the device and getting to the place to resume reading is much quicker than it ever was with paper books.

Nook Gym!

Beyond the commuting time that I have reclaimed, I find that these devices have enabled me to significantly enhance my exercise. I have historically tried to spend some time regularly, three or more times per week, on the exercise machines in the basement gym in my apartment building. The limiting factor for me has been how long I could tolerate the boredom. I cannot stand to watch TV, a long-standing deficiency of mine, and I have never been able to read paper books while working out: between the challenge of keeping the book open to the right page, turning the page when I’ve finished it, and the difficulty of keeping the small fonts in focus while I’m moving vigorously on the machine, I have never been able to combat gym boredom with books.

With these electronic devices, however, everything is different. I make the font bigger so that I can keep my eye on the page while working out, and the device sits flat on the console of most of the machines. Turning the page is a simple button press. So when I go down to the gym to work out on the treadmill or the elliptical, I take along a Nook or Kindle, and I have no trouble staying on the machine for an hour at a time, returning from the gym drenched in sweat and feeling very satisfied that I have both spent an hour reading and contributed to my fitness. I have been tracking my exercise in Google Health since the new goals and diaries features were released this past summer, and I find that in the last four months I have worked out over 77 times, on almost 2/3 of the days.

Broken Books

Not all of the electronic versions of books are completely readable. Thanks to a recommendation from Chacho I started reading the wonderful police procedurals by Qiu Xiaolong set in 1990s Shanghai. When I got to “A Loyal Character Dancer” however, I discovered a problem with the book, which I communicated to Barnes & Noble by email:

I bought a copy of “A Loyal Character Dancer” for my Nook. I was reading it on my Nook today and I found that there is what appears to be a significant section of text missing at location 94 of 296.

In particular, the sentence begins:

‘… she paused to take a sip of her ‘

and continues

‘Zhu upstairs, something could have been done to the steps.’

It is clear that a significant quantity of text is missing from the book.

They responded promptly and courteously:

We apologize for the difficulties you are experiencing.

We have reviewed your order and downloaded the same eBook to our nook. On page 94 of 296, we see the same exact text as you do. Because this file is provided by the publisher, we are forwarding your feedback to them for review.

Please accept our sincere apologies for any inconveniences this may have caused.

We conducted a dialog over the course of a month or more afterwards, but they were unable to get the book corrected and ultimately refunded my money and removed the book from my Nook.

Of course they may have fixed the book by now, but they may not have done. The only way I can tell, I suppose, is to repurchase the book and look to see if it is defective. The process of getting this resolved was so protracted and unsatisfactory that I’m unwilling to start again. I could buy the book in paper, but I so much prefer to read on the Nook and Kindle that I’m loath to do that. So I have paused in my reading of the Inspector Chen Cao books for now.

This highlights a problem with electronic books that does not exist with paper books. In the past, when I had the misfortune to purchase a paper book that turned out to be defective, I could inspect the replacement copy and verify that it did not suffer from the defect. With electronic books, however, the only way to inspect a copy is to buy it, and since every copy comes from the same file, if one copy is defective, every copy will be, so there’s no point in buying another to see if it is any better.

Devices as far as the eye can see …

I have an iPod Touch and a Nexus 1 smart phone. Nook and Kindle applications are freely available for both, which permits me to read my Kindle and Nook libraries when I don’t have one of my ereaders otherwise available. Now we have an iPad and a Samsung Galaxy Tab and both of them have Kindle and Nook applications, so my wife and I can now read from our ebook library whenever and wherever convenient. This is quite nice, since the iPad and Galaxy Tab reading experiences are quite pleasant, though I’m not sure I have a strong preference for them over the Nook and Kindle eInk.

I recently stopped in to a Barnes & Noble store and played with the new color Nook. It has a gorgeous full-color touch screen. This machine is about half the price of an iPad or Galaxy Tab, so I can’t believe that we won’t see competition from B&N in the tablet market, though they will have to reposition the device in the marketplace.

How Many AI People Does It Take To Change A Lightbulb

[the original was posted in the early 1980s by Jeff Schrager, then a PhD student at CMU]

Q: How many Artificial Intelligence (AI) people does it take to change a lightbulb?

A: At least 55:

The problem space group (5):

  • One to define the goal state,
  • One to define the operators,
  • One to describe the universal problem solver,
  • One to hack the production system,
  • One to indicate about how it is a model of human lightbulb changing behavior


The logical formalism group (16):

  • One to figure out how to describe lightbulb changing in first order logic,
  • One to figure out how to describe lightbulb changing in second order logic,
  • One to show the adequacy of FOL,
  • One to show the inadequacy of FOL,
  • One to show that lightbulb logic is non-monotonic,
  • One to show that it isn’t non-monotonic,
  • One to show how non-monotonic logic is incorporated in FOL,
  • One to determine the bindings for the variables,
  • One to show the completeness of the solution,
  • One to show the consistency of the solution,
  • One to show that the two just above are incoherent,
  • One to hack a theorem prover for lightbulb resolution,
  • One to suggest a parallel theory of lightbulb logic theorem proving,
  • One to show that the parallel theory isn’t complete … ad infinitum (or ad absurdum, as you will),
  • One to indicate how it is a description of human lightbulb changing behavior,
  • One to call the electrician


The robotics group (10):

  • One to build a vision system to recognize the dead bulb,
  • One to build a vision system to locate a new bulb,
  • One to figure out how to grasp the lightbulb without breaking it,
  • One to figure out how to make a universal joint that will permit the hand to rotate 360+ degrees,
  • One to figure out how to make the universal joint go the other way,
  • One to figure out the arm solutions that will get the arm to the socket,
  • One to organize the construction teams,
  • One to hack the planning system,
  • One to get Westinghouse to sponsor the research,
  • One to indicate about how the robot mimics human motor behavior in lightbulb changing


The knowledge engineering group (6):

  • One to study electricians’ changing lightbulbs,
  • One to arrange for the purchase of the lisp machines,
  • One to assure the customer that this is a hard problem and that great accomplishments in theory will come from his support of this effort (The same one can arrange for the fleecing.),
  • One to study related research,
  • One to indicate about how it is a description of human lightbulb changing behavior,
  • One to call the lisp hackers


The Lisp hackers (13):

  • One to bring up the chaos net,
  • One to adjust the microcode to properly reflect the group’s political beliefs,
  • One to fix the compiler,
  • One to make incompatible changes to the primitives,
  • One to provide the Coke,
  • One to rehack the Lisp editor/debugger,
  • One to rehack the window package,
  • Another to fix the compiler,
  • One to convert code to the non-upward compatible Lisp dialect,
  • Another to rehack the window package properly,
  • One to flame on BUG-LISPM,
  • Another to fix the microcode,
  • One to write the fifteen lines of code required to change the lightbulb


The Psychological group (5):

  • One to build an apparatus which will time lightbulb changing performance,
  • One to gather and run subjects,
  • One to mathematically model the behavior,
  • One to call the expert systems group,
  • One to adjust the resulting system, so that it drops the right number of bulbs


Cyberassault on Estonia

[This editorial was published originally in “Security & Privacy” Volume 5 Number 4 July/August 2007]

Estonia recently survived a massive distributed denial-of-service (DDoS) attack that came on the heels of the Estonian government’s relocation of a statue commemorating Russia’s 1940s wartime role. This action inflamed the feelings of the substantial Russian population in Estonia, as well as those of various elements in Russia itself.

Purple prose then boiled over worldwide, with apocalyptic announcements that a “cyberwar” had been unleashed on the Estonians. Were the attacks initiated by hot-headed nationalists or by a nation state? Accusations and denials have flown, but no nation state has claimed authorship.

It’s not really difficult to decide if this was cyberwarfare or simple criminality. Current concepts of war require people in uniforms or a public declaration. There’s no evidence that such was the case. In addition, there’s no reason to believe that national resources were required to mount the attack. Michael Lesk’s piece on the Estonia attacks in this issue (see the Digital Protection department on p. 76) includes estimates that, at current botnet leasing prices, the entire attack could have been accomplished for US$100,000, a sum so small that any member of the upper middle class in Russia, or elsewhere, could have sponsored it.
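To give a feel for the scale of such an estimate, here is a hypothetical back-of-envelope calculation. The botnet size, leasing rate, and duration below are illustrative assumptions, not figures from Lesk’s article; they merely show how a six-figure total falls within an individual’s reach.

```python
# Hypothetical back-of-envelope check of a ~US$100,000 botnet campaign.
# All three inputs are illustrative assumptions, not figures from the article.
bots = 50_000             # assumed number of leased zombie PCs
rate_per_bot_week = 0.25  # assumed leasing price in US$ per bot per week
weeks = 8                 # assumed duration of the attack campaign

total_cost = bots * rate_per_bot_week * weeks
print(f"Estimated cost: ${total_cost:,.0f}")  # Estimated cost: $100,000
```

Under these assumptions the whole campaign costs about as much as a mid-range car, which is the editorial’s point: no nation-state budget is required.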

Was there national agency? It’s highly doubtful that Russian President Vladimir Putin or anyone connected to him authorized the attacks. If any Russian leader had anything to say about the Estonians, it was more likely an intemperate outburst like Henry II’s exclamation about Thomas Becket, “Will no one rid me of this troublesome priest?”

We can learn from this, however: security matters, even for trivial computers. A few tens of thousands of even fairly negligible PCs, when attached by broadband connections to the Internet and commanded in concert, can overwhelm all modestly configured systems — and most substantial ones.

Engineering personal systems so that they can’t be turned into zombies is a task that requires real attention. In the meantime, the lack of quality-of-service facilities in our network infrastructure will leave them vulnerable to future botnet attacks. Several avenues are available to address the weaknesses in our current systems, and we should be exploring all of them. Faced with epidemic disease, financial panic, and other mass threats to the common good, we’re jointly and severally at risk and have a definite and legitimate interest in seeing to it that the lower limits of good behavior aren’t violated.

From the Estonia attacks, we’ve also learned that some national military institutions are, at present, hard-pressed to defend their countries’ critical infrastructures and services. Historically, military responses to attacks have involved applying kinetic energy to the attacking forces or to the attackers’ infrastructure. But when the attacking force is tens or hundreds of thousands of civilian PCs hijacked by criminals, what is the appropriate response? Defense is left to the operators of the services and of the infrastructure, with the military relegated to an advisory role — something that both civilians and military must find uncomfortable. Of course, given the murky situations involved in cyberwar, we’ll probably never fully learn what the defense establishments could or did do.

Pundits have dismissed this incident, arguing that this is a cry of “wolf!” that should be ignored. Although it’s true that we’re unlikely to be blinded to an invasion by the rebooting of our PCs, it’s naïve to suggest that our vulnerability to Internet disruptions has passed its peak. Cyberwar attacks, as demonstrated in 2003 by Slammer, have the potential to disable key infrastructures. To ignore that danger is criminally naïve. Nevertheless, all is not lost.


Events like this have been forecast for several years, and as of the latest reports, there were no surprises in this attack. The mobilization of global expertise to support Estonia’s network defense was heartening and will probably be instructive to study. Planners of information defenses and drafters of future cyberdefense treaties should be contemplating these events very carefully. This wasn’t the first such attack — and it won’t be the last.

[Here is a PDF file of the original editorial.]

Insecurity through Obscurity

[This editorial was published originally in “Security & Privacy” Volume 4 Number 5 September/October 2006]

Settling on a design for a system of any sort involves finding a workable compromise among functionality, feasibility, and finance. Does it do enough of what the sponsor wants? Can it be implemented using understood and practical techniques? Is the projected cost reasonable when set against the anticipated revenue or savings?

In the case of security projects, functionality is generally stated in terms of immunity or resistance to attacks that seek to exploit known vulnerabilities. The first step in deciding whether to fund a security project is to assess whether its benefits outweigh the costs. This is easy to state but hard to achieve.

What are the benefits? Some set of exploits will be thwarted. But how likely would they be to occur if we did nothing? And how likely will they be to occur if we implement the proposed remedy? What is the cost incurred per incident to repair the damage if we do nothing? Armed with the answers to these often unanswerable questions, we can get some sort of quantitative handle on the benefits of implementation in dollars-and-cents terms.

What are the costs? Specification, design, implementation, deployment, and operation of the solution represent the most visible costs. What about the efficiency penalty that stems from the increased operational complexity the solution imposes? This represents an opportunity cost in production that you might have achieved if you hadn’t implemented the solution.

In the current world of security practice, it’s far too common, when faced with vast unknowns about benefits, to fall back on one of two strategies: either spend extravagantly to protect against all possible threats or ignore threats too expensive to fix. Protection against all possible threats is an appropriate goal when securing nuclear weapons or similar assets for which failure is unacceptable, but for most other situations, a more pragmatic approach is indicated.

Unfortunately, as an industry, we’re afflicted with a near complete lack of quantitative information about risks. Most of the entities that experience attacks and deal with the resultant losses are commercial enterprises concerned with maintaining their reputation for care and caution. This leads them to the observation that disclosing factual data can assist their attackers and provoke anxiety in their clients. The lack of data-sharing arrangements has resulted in a near-complete absence of incident documentation standards; as such, even if organizations want to compare notes, they face a painful exercise in converting apples to oranges.

If our commercial entities have failed, is there a role for foundations or governments to act? Can we parse the problem into smaller pieces, solve them separately, and make progress that way? Other fields, notably medicine and public health, have addressed this issue more successfully than we have. What can we learn from their experiences? Doctors almost everywhere in the world are required to report the incidence of certain diseases and have been for many years. California’s SB 1386, which requires disclosure of computer security breaches, is a fascinating first step, but it’s just that — a first step. Has anyone looked closely at the public health incidence reporting standards and attempted to map them to the computer security domain? The US Federal Communications Commission (FCC) implemented telephone outage reporting requirements in 1991 after serious incidents and in 2004 increased their scope to include all the communications platforms it regulates. What did it learn from those efforts, and how can we apply them to our field?

The US Census Bureau, because it’s required to share much of the data that it gathers, has developed a relatively mature practice in anonymizing data. What can we learn from the Census Bureau that we can apply to security incident data sharing? Who is working on this? Is there adequate funding?


These are all encouraging steps, but they’re long in coming and limited in scope. Figuring out how to gather and share data might not be as glamorous as cracking a tough cipher or thwarting an exploit, but it does have great leverage.

[Here is a PDF file of the original editorial.]

The Impending Debate

[This editorial was published originally in “Security & Privacy” Volume 4 Number 2 March/April 2006]

There’s some scary stuff going on in the US right now. President Bush says that he has the authority to order, without a warrant, eavesdropping on telephone calls and emails from and to people who have been identified as terrorists. The question of whether the president has this authority will be resolved by a vigorous debate among the government’s legislative, executive, and judicial branches, accompanied, if history is any guide, by copious quantities of impassioned rhetoric and perhaps even the rending of garments and tearing of hair. This is as it should be.

The president’s assertion is not very far, in some ways, from Google’s claims that although its Gmail product examines users’ email for the purpose of presenting targeted advertisements, user privacy isn’t violated because no natural person will examine your email. The ability of systems to mine vast troves of data for information has now arrived, but policy has necessarily lagged behind. The clobbering of Darpa’s Total Information Awareness initiative (now renamed Terrorism Information Awareness) in 2004 was a lost opportunity to explore these topics in a policy debate, an opportunity we may now regain. Eavesdropping policy conceived in an era when leaf-node monitoring was the only thing possible isn’t necessarily the right one in this era of global terrorism. What the correct policy should be, however, requires deep thought and vigorous debate lest the law of unintended consequences take over.

Although our concerns in IEEE Security & Privacy are perhaps slightly less momentous, we are, by dint of our involvement with and expertise in the secure transmission and storage of information, particularly qualified to advise the participants in the political debate about the realities and the risks associated with specific assumptions such as what risks are presented by data mining. As individuals, we’ll be called on to inform and advise both the senior policy makers who will engage in this battle and our friends and neighbors who will watch it and worry about the outcome. It behooves us to do two things to prepare for this role. One, we should take the time now to inform ourselves of the technical facts, and two, we should analyze the architectural options and their implications.

Unlike classical law enforcement wiretapping technology (covered in depth in S&P’s November/December 2005 issue), which operates at the leaves of the communication interconnection tree, this surveillance involves operations at or close to the root. When monitoring information at the leaves, only information directed to the specific leaf node is subject to scrutiny. It’s difficult when monitoring at the root to see only communications involving specific players — monitoring at the root necessarily involves filtering out the communications not being monitored, something that involves looking at them. When examining a vast amount of irrelevant information, we haven’t yet demonstrated a clear ability to separate signal (terrorist communication, in this case) from noise (innocuous communication). By tracking down false leads, we waste expensive skilled labor, and might even taint innocent people with suspicion that could feed hysteria in some unfortunate future circumstance.

Who’s involved in the process of examining communications and what are the possible and likely outcomes of engaging in this activity? The security and privacy community has historically developed scenario analysis techniques in which we hypothesize several actors, both well- and ill-intentioned, and contemplate their actions toward one another as if they were playing a game. Assume your adversary makes his best possible move. Now assume you make your best possible response. And so on. In the case of examining communications at the root, we have at least four actors to consider.

One is the innocent communicator whom we’re trying to protect, another is the terrorist whom we’re trying to thwart. The third is the legitimate authority working to protect the innocent from the terrorist, and the fourth, whom we ignore at our peril, is the corrupted authority who, for some unknown reason, is tempted to abuse the information available to him to the detriment of the innocent. We could choose, in recognition of the exigencies of a time of conflict, to reduce our vigilance toward the corrupted authority, but history has taught us that to ignore the concept puts us and our posterity in mortal peril.


Our community’s challenge in the coming debate is to participate effectively, for we occupy two roles at once. We are technical experts to whom participants turn for unbiased fact-based guidance and insight, and we are simultaneously concerned global citizens for whom this debate is meaningful and important. We must avoid the temptation to use our expertise to bias the debate, but we must also avoid being passive bystanders. We must engage thoughtfully and creatively. We owe this to our many countries, our colleagues, our neighbors, our friends, our families, and ourselves.

[Here is a PDF file of the original editorial.]

What’s in a Name?

[This editorial was published originally in “Security & Privacy” Volume 3 Number 2 March/April 2005]

“What’s in a name? That which we call a rose
By any other name would smell as sweet;”
— Romeo and Juliet, Act II, Scene ii

In ancient times, when the economy was agrarian and people almost never traveled more than a few miles from their places of birth, most people made do with a single personal name. Everyone you met generally knew you, and if there did happen to be two Percivals in town, people learned to distinguish between “tall Percival” and “short Percival.”

The development of travel and trade increased the number of different people you might meet in a lifetime and led to more complex names. By the Greek classical period, an individual’s name had become a three-part structure including a personal name, a patronymic, and a demotic, which identified the person’s deme, roughly one’s village or clan.

This represented the end of the line in the evolution of names for several thousand years. During that time, people developed a range of concepts to enrich names with extra capabilities. Letters of introduction enabled travelers to enter society in a distant city almost as if they were locals. Renaissance banking developed the early ancestors of the letter of credit and the bank account, allowing money to be transferred from place to place without the attendant risk of physically carrying the gold. In response to these innovations, clever people invented novel ways to manage their names, for both legitimate and illegitimate purposes, giving us the alias, the doing business as, and the cover name. Americans invented personal reinvention, or at least made it a central cultural artifact, and developed a strong distaste for central management of the personal namespace.

Enter the computer

With the computer era came the user ID: first one, then two, and then infinity. With the Internet boom, we got retail e-commerce and the proliferation of user IDs and passwords. The venerable letter of introduction reemerged as an identity certificate, and the bank account evolved into dozens of different glittering creatures. While enabling online services to an increasingly mobile population, this explosion in user IDs created inconvenience and risk for people and institutions. As shopping and banking moved online, identity theft went high tech. We responded with two- and three-factor authentication, public key infrastructure, cryptographically strong authentication, and single-sign-on technologies such as Microsoft’s Passport and federated authentication from the Liberty Alliance. We’re currently trapped between Scylla and Charybdis. On one side, civil libertarians warn that a centralized authentication service comprising a concentration of power and operational and systemic risk represents an unacceptable threat to a free society. On the other, we have a chaotic morass of idiosyncratic user ID and password implementations that inconvenience people and invite attack.

The King is dead! Long live the King!

With its controversial Passport technology, Microsoft attempted to address the visible need by offering a single user ID and password framework to sites across the Internet. With eBay’s recent defection, it’s increasingly clear that Passport isn’t winning over large e-commerce sites. Ultimately, Passport failed commercially not because of competitors’ hostility or civil libertarians’ skepticism, or even because of technical problems in the software, but rather because enterprises proved unwilling to cede management of their clients’ identities to a third party. This is an important lesson, but not a reason to give up on the effort to create a usable framework.

Who or what will step up and make the next attempt to meet the need? Did we learn enough from the debate about Passport to clearly identify the salient characteristics of what comes next? Have we made enough progress toward a consensus on the need for “a” solution that the next company up to bat will be willing to hazard the amount of treasure that Microsoft spent on Passport? Now is the time for a vigorous dialogue to get clarity. We aren’t likely again to see a comparable exercise of courage, however misguided, so it behooves us to reduce the risk for the next round of competitors.

A successful Internet identity service framework must admit multiple independent authorities. Some industries have a strong need to establish a common identity and will insist on controlling the credential. Some governments will decide to do likewise, whereas others will leave it to the private sector. But identity services shouldn’t be tied to any individual vendor, country, or technology. They should allow the dynamic assembly of sets of privileges, permitting participating systems to assign rights and augment verification requirements.

Thus, a level of proof sufficient for my ISP to permit me to send a social email could be overlaid with an extra layer by my bank before allowing me to transfer money. It should be possible to migrate my identity from one ISP to another without losing all of my privileges, although I might have to re-verify them. It should be possible to easily firewall segments of my identity from others so that losing control over one component doesn’t result in the loss of the others.


This can’t be all that’s required, or we wouldn’t still be scratching our heads about it at this late date. It’s clear that there are thorny policy issues in addition to some very challenging technical questions. Getting to a workable Internet identity framework will take hard work, so let’s get going.

[Here is a PDF file of the original editorial.]

From the Editors: A Witty Lesson

[This editorial was published originally in “Security & Privacy,” Volume 2, Number 4, July/August 2004]

Archaeologists wonder why the city of Naachtun, capital of the Mayan kingdom of Masuul, was abandoned suddenly, with no evidence of natural or manmade disaster. No volcanic eruption. No invading hordes. Why, after more than 250 years of growth and economic vigor was this city abruptly evacuated? Did the leading people in the city fail to react to some important change? What happened?

Two recent Internet worms, Slammer and Witty, have sounded an alarm to the entire computer security industry. To date, however, we have failed to respond to the alarm with the vigor warranted. Could we be dooming the Internet itself to the fate of Naachtun?

When Slammer hit in January 2003, it shocked the security community by growing with unprecedented rapidity, doubling every eight seconds or so. The bulk of the machines destined to be infected were hit within 10 minutes, although the impact on the Internet peaked after only three.

Oh my gosh, we all said; this is really bad. Later, we breathed a sigh of relief, thinking the worm’s virulence had been a fluke. We thought we’d never again see an exploit that could be distributed in a single UDP packet. And it was really our own fault, we acknowledged, because the vulnerability and its patch were published in July 2002, six months prior to the attack; the lesson is that we have to tighten up our system-management capabilities.

The good news is that we haven’t yet seen another major worm propagated via single UDP packets.

Now for the bad news.

Media reports indicate that some new virus toolkits make malware construction as easy as running a computer game’s installation wizard. While such toolkits might not be very serious threats in themselves, they warn us that we can no longer assume that the time scale for virus and worm propagation is slow enough to analyze, plan, and execute in the way we’re used to doing.

And now for the really bad news.

On 8 March 2004, a vulnerability was discovered in a popular security product. Ten days later, the vendor released a vulnerability notice along with a patch. The Witty worm, designed to exploit this vulnerability, struck the following day. The Witty worm is notable for four things:

It was released one day after the publication of the vulnerability with the associated patch.

It pretargeted a set of vulnerable machines, thus accelerating its initial growth.

It was actively destructive.

It targeted a security product.

Colleen Shannon and David Moore of the Cooperative Association for Internet Data Analysis (CAIDA) completed an excellent analysis of the Witty worm shortly after it hit; their report is included as a special feature in this issue of IEEE Security & Privacy. As they note, the key point is that, “the patch model for Internet security has failed spectacularly…. When end users participating in the best security practice that can be reasonably expected get infected with a virulent and damaging worm, we must reconsider the notion that end-user behavior can solve or even effectively mitigate the malicious software problem ….”

So now what?

The US National Cyber Security Partnership has recently completed a set of recommendations in response to the National Strategy to Secure Cyberspace report. One of its top recommendations is “developing best practices for putting security at the heart of the software design process.” Denial is right out, though we will continue with business as usual for a while. Meanwhile, we’d better get cracking on replumbing the software-development infrastructure so that we can confidently know that any program that can hang out a socket won’t be vulnerable.

[Here is a PDF file of the original editorial.]

From the Editors: Whose Data Are These, Anyway?

[This editorial was published originally in “Security & Privacy,” Volume 2, Number 3, May/June 2004]

A few years ago I had lunch with Ray Cornbill, a friend of mine who is a distinguished professor, though not a physician, at a major medical school. Ray’s unique sideline is as an international rugby coach. We chatted about our work and compared notes on current events. As we finished our lunch and prepared to depart, he made a remarkable statement: “I’m going over to the radiology practice to pick up my old x-rays.”

What did he mean by that? It turns out that the radiology lab that had taken his x-rays for the past couple of decades decided that it could no longer afford to keep the old ones around. Because he was a well-known professor at an affiliated medical school, a staff member had given him the heads up about the imminent disposal. Why did he care? Well, before becoming a rugby coach, he was an active rugby player for many years. Rugby is, shall we say, a rather vigorous sport, and he had suffered several injuries to various parts of his musculoskeletal system. His x-rays represented his physical history; he felt strongly that losing this history would make it harder for his doctors to care for him in the future, so he wanted to collect them before they were discarded. Notice that he wasn’t endangered by disclosure of confidential information, but rather by the loss of valuable historical data.

You go to a doctor or a hospital for an x-ray, but did you ever wonder who “owns” that x-ray? Your doctor ordered it and analyzed it, the radiologist made it and stored it, your insurance company paid for it — but it’s a picture of your body and is probably of the most value to you. But possession is nine-tenths of the law, as they say.

More than who has ownership claims, however, this question raises the issue of what ownership is. If we were to express the rights of ownership as methods in C++, they’d have names like these: Copy. Move. Destroy. Read. Modify. Transmit to someone else. The lab invoked its “ownership by possession” right in planning to destroy the x-rays. This is not entirely unfair; after all, the lab had spent money for years to store them, and it’s reasonable to want to stop spending money without a return. But in doing so it ran up against others’ unarticulated but valid ownership claims.
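The thought experiment can be made literal. Here is a minimal sketch, with the caveat that the class and its behaviors are illustrative assumptions rather than any real records-management API; the interesting part is that the methods are separable, so different parties could hold different subsets of them:

```cpp
#include <string>
#include <utility>
#include <vector>

// Ownership of a record as a bundle of separable rights, each expressed
// as a method. The lab's contested right in the x-ray story is destroy();
// the patient's interest is chiefly in read() and copy().
class OwnedRecord {
public:
    explicit OwnedRecord(std::string data) : data_(std::move(data)) {}

    OwnedRecord copy() const { return OwnedRecord(data_); }   // Copy
    void destroy() { data_.clear(); }                         // Destroy
    const std::string& read() const { return data_; }         // Read
    void modify(const std::string& d) { data_ = d; }          // Modify
    void transmit(std::vector<OwnedRecord>& to) const {       // Transmit to someone else
        to.push_back(copy());
    }
    // "Move" in the editorial's sense: hand possession to another party,
    // leaving nothing behind (modeled here with C++ move semantics).
    OwnedRecord move_out() { return OwnedRecord(std::move(data_)); }

private:
    std::string data_;
};
```

Once the rights are written out this way, the x-ray dispute becomes concrete: the lab exercising destroy() extinguishes everyone else’s read() and copy() unless a copy was transmitted first.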

Lately, much discussion in our community has focused on digital rights management (DRM), which seems mostly an examination of copyright law and how it might change in light of current technological advances. The debate has grown heated as the ability to use technology to recast the terms and conditions of, for example, the sale of a book or recording has penetrated the awareness of both sellers and buyers. As the likes of Lawrence Lessig of Creative Commons and Clifford Lynch of the Coalition for Networked Information point out, the doctrine of first sale can be overthrown technologically. There might come a day when the fact that you’ve paid for a book doesn’t mean that you can lend it to a friend or even reread it years later. This has huge implications for libraries and educational institutions, as well as for people who ever replace their computers or DVD players. These important issues are currently being debated in a wide range of fora.

What is most troubling, however, is that the only model that seems to get serious attention is wrapped around broadcast-like models of distribution, models with a few producers and many consumers, such as books, music, movies, cable TV, broadcast TV, and radio. Questions such as rights management for things like x-ray images and medical records in general don’t attract much attention. For these sorts of intellectual property, we have to think more deeply about rights such as the right to destroy, the right to disclose, to publish or otherwise render unsecret, and others. Some work has been done in this area, such as the rules for news photos, which tend to distinguish between the rights of celebrities and those of the rest of the population. The owners of important buildings have begun to assert rights to pictures that professional photographers take of them. And the performing arts community has established, by contract and practice, a system of fractional rights (primarily for managing royalty streams) that has some interesting characteristics.


Of course, this debate is not just about x-rays. It’s about any of your information sitting in a folder in your doctor’s office or that your bank has on file. Certainly the medical community, struggling with new HIPAA rules governing medical privacy, hasn’t managed to provoke engagement from the computer-science and software-engineering research community. This is unfortunate for several reasons: our community has the ability to make a unique and valuable contribution to the discussion, and now is the time to resolve conceptual issues and design decisions so that the next generation of infrastructure can accommodate the emerging requirements.

[Here is a PDF file of the original editorial.]