The Impending Debate

[This editorial was published originally in “Security & Privacy” Volume 4 Number 2 March/April 2006]

There’s some scary stuff going on in the US right now. President Bush says that he has the authority to order, without a warrant, eavesdropping on telephone calls and emails from and to people who have been identified as terrorists. The question of whether the president has this authority will be resolved by a vigorous debate among the government’s legislative, executive, and judicial branches, accompanied, if history is any guide, by copious quantities of impassioned rhetoric and perhaps even the rending of garments and tearing of hair. This is as it should be.

The president’s assertion is not very far, in some ways, from Google’s claim that although its Gmail product examines users’ email for the purpose of presenting targeted advertisements, user privacy isn’t violated because no natural person examines the messages. The ability of systems to mine vast troves of data for information has now arrived, but policy has necessarily lagged behind. The clobbering of Darpa’s Total Information Awareness initiative (now renamed Terrorism Information Awareness; http://searchsecurity.techtarget.com/sDefinition/0,,sid14_gci874056,00.html) in 2004 was a lost opportunity to explore these topics in a policy debate, an opportunity we may now regain. Eavesdropping policy conceived in an era when leaf-node monitoring was the only possibility isn’t necessarily the right policy in this era of global terrorism. What the correct policy should be, however, requires deep thought and vigorous debate, lest the law of unintended consequences take over.

Although our concerns in IEEE Security & Privacy are perhaps slightly less momentous, we are, by dint of our involvement with and expertise in the secure transmission and storage of information, particularly qualified to advise participants in the political debate about the realities and risks associated with specific assumptions, such as the risks data mining presents. As individuals, we’ll be called on to inform and advise both the senior policy makers who will engage in this battle and our friends and neighbors who will watch it and worry about the outcome. It behooves us to do two things to prepare for this role: first, take the time now to inform ourselves of the technical facts; second, analyze the architectural options and their implications.

Unlike classical law enforcement wiretapping technology (covered in depth in S&P’s November/December 2005 issue), which operates at the leaves of the communication interconnection tree, this surveillance involves operations at or close to the root. When monitoring at the leaves, only information directed to the specific leaf node is subject to scrutiny. When monitoring at the root, it’s difficult to see only the communications involving specific players: the monitor must filter out the communications it is not seeking, which requires looking at them. We haven’t yet demonstrated a clear ability to separate signal (terrorist communication, in this case) from noise (innocuous communication) when examining a vast amount of irrelevant information. Tracking down false leads wastes expensive skilled labor and might even taint innocent people with suspicion that could feed hysteria in some unfortunate future circumstance.
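To make the architectural difference concrete, here is a minimal sketch, with an invented Message type and selection rule standing in for whatever a real intercept system would use; nothing here describes any actual surveillance program. The structural point is that a leaf tap receives only the traffic already addressed to its endpoint, while a root monitor must evaluate every message in the aggregate stream, including all the innocuous ones it ultimately discards.

```cpp
#include <string>
#include <vector>

// Hypothetical message type, invented purely for illustration.
struct Message {
    std::string from, to, body;
};

// Leaf-node tap: placed at one endpoint, it sees only traffic the network
// has already delivered to that endpoint; no selection decision is made,
// so no other traffic is ever examined.
std::vector<Message> leafTap(const std::vector<Message>& deliveredToEndpoint) {
    return deliveredToEndpoint;
}

// Root-level monitor: to isolate one target's traffic from the aggregate
// stream, it must apply its selection predicate to every message, so
// filtering out the unmonitored communications requires looking at them.
std::vector<Message> rootMonitor(const std::vector<Message>& aggregateStream,
                                 const std::string& target) {
    std::vector<Message> selected;
    for (const Message& m : aggregateStream) {   // every message is inspected
        if (m.from == target || m.to == target)  // the selection decision
            selected.push_back(m);
    }
    return selected;
}
```

Nothing in the sketch is specific to email or telephony; the asymmetry is inherent in where the tap sits.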

Who’s involved in the process of examining communications, and what are the possible and likely outcomes of engaging in this activity? The security and privacy community has historically developed scenario analysis techniques in which we hypothesize several actors, both well- and ill-intentioned, and contemplate their actions toward one another as if they were playing a game. Assume your adversary makes his best possible move. Now assume you make your best possible response. And so on. In the case of examining communications at the root, we have at least four actors to consider.

One is the innocent communicator whom we’re trying to protect; another is the terrorist whom we’re trying to thwart. The third is the legitimate authority working to protect the innocent from the terrorist, and the fourth, whom we ignore at our peril, is the corrupted authority who, for whatever reason, is tempted to abuse the information available to him to the detriment of the innocent. We could choose, in recognition of the exigencies of a time of conflict, to reduce our vigilance toward the corrupted authority, but history has taught us that to ignore that possibility puts us and our posterity in mortal peril.

Conclusion

Our community’s challenge in the coming debate is to participate effectively, for we occupy two roles at once. We are technical experts to whom participants turn for unbiased fact-based guidance and insight, and we are simultaneously concerned global citizens for whom this debate is meaningful and important. We must avoid the temptation to use our expertise to bias the debate, but we must also avoid being passive bystanders. We must engage thoughtfully and creatively. We owe this to our many countries, our colleagues, our neighbors, our friends, our families, and ourselves.

[Here is a PDF file of the original editorial.]

What’s in a Name?

[This editorial was published originally in “Security & Privacy” Volume 3 Number 2 March/April 2005]

“What’s in a name? That which we call a rose
By any other name would smell as sweet;”
— Romeo and Juliet, Act II, Scene ii

In ancient times, when the economy was agrarian and people almost never traveled more than a few miles from their places of birth, most people made do with a single personal name. Everyone you met generally knew you, and if there did happen to be two Percivals in town, people learned to distinguish between “tall Percival” and “short Percival.”

The development of travel and trade increased the number of different people you might meet in a lifetime and led to more complex names. By the Greek classical period, an individual’s name had become a three-part structure comprising a personal name, a patronymic, and a demotic, which identified the person’s deme, roughly one’s village or clan.

This three-part structure represented the end of the line in the evolution of names for several thousand years. During that time, people developed a range of concepts to enrich names with extra capabilities. Letters of introduction enabled travelers to enter society in a distant city almost as if they were locals. Renaissance banking developed the early ancestors of the letter of credit and the bank account, allowing money to be transferred from place to place without the attendant risk of physically carrying the gold. In response to these innovations, clever people invented novel ways to manage their names, for both legitimate and illegitimate purposes, giving us the alias, the “doing business as,” and the cover name. Americans invented personal reinvention, or at least made it a central cultural artifact, and developed a strong distaste for central management of the personal namespace.

Enter the computer

With the computer era came the user ID: first one, then two, and then infinity. With the Internet boom, we got retail e-commerce and the proliferation of user IDs and passwords. The venerable letter of introduction reemerged as an identity certificate, and the bank account evolved into dozens of different glittering creatures. While enabling online services for an increasingly mobile population, this explosion of user IDs created inconvenience and risk for people and institutions. As shopping and banking moved online, identity theft went high tech. We responded with two- and three-factor authentication, public key infrastructure, cryptographically strong authentication, and single sign-on technologies such as Microsoft’s Passport and federated authentication from the Liberty Alliance. We’re currently trapped between Scylla and Charybdis. On one side, civil libertarians warn that a centralized authentication service, concentrating power along with operational and systemic risk, represents an unacceptable threat to a free society. On the other, we have a chaotic morass of idiosyncratic user ID and password implementations that inconvenience people and invite attack.

The King is dead! Long live the King!

With its controversial Passport technology, Microsoft attempted to address the visible need by offering a single user ID and password framework to sites across the Internet. With eBay’s recent defection, it’s increasingly clear that Passport isn’t winning over large e-commerce sites. Ultimately, Passport failed commercially not because of competitors’ hostility or civil libertarians’ skepticism, or even because of technical problems in the software, but rather because enterprises proved unwilling to cede management of their clients’ identities to a third party. This is an important lesson, but not a reason to give up on the effort to create a usable framework.

Who or what will step up and make the next attempt to meet the need? Did we learn enough from the debate about Passport to clearly identify the salient characteristics of what comes next? Have we made enough progress toward a consensus on the need for “a” solution that the next company up to bat will be willing to hazard the amount of treasure that Microsoft spent on Passport? Now is the time for a vigorous dialogue to get clarity. We aren’t likely again to see a comparable exercise of courage, however misguided, so it behooves us to reduce the risk for the next round of competitors.

A successful Internet identity service framework must admit multiple independent authorities. Some industries have a strong need to establish a common identity and will insist on controlling the credential. Some governments will decide to do likewise, whereas others will leave it to the private sector. But identity services shouldn’t be tied to any individual vendor, country, or technology. They should allow the dynamic assembly of sets of privileges, permitting participating systems to assign rights and augment verification requirements.

Thus, a level of proof sufficient for my ISP to permit me to send a social email could be overlaid with an extra layer by my bank before allowing me to transfer money. It should be possible to migrate my identity from one ISP to another without losing all of my privileges, although I might have to re-verify them. It should be possible to easily firewall segments of my identity from others so that losing control over one component doesn’t result in the loss of the others.
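One way to picture this, purely as a sketch with invented names (Assurance, Credential, authorize) rather than any real identity framework: each segment of an identity carries its own credential from its own issuing authority, and each relying party overlays its own required level of proof.

```cpp
#include <map>
#include <string>

// Invented types for illustration; no real identity framework is implied.
enum class Assurance { Low, Medium, High };

struct Credential {
    std::string issuer;  // which independent authority vouches for this segment
    Assurance   level;   // how strongly the holder was verified
};

// Segments are firewalled from one another: each is a separate credential,
// so losing control of one does not surrender the others.
struct Identity {
    std::map<std::string, Credential> segments;  // e.g., "email", "banking"
};

// A relying party overlays its own requirement on the base proof: an ISP
// might accept Low for social email, while a bank insists on High before
// permitting a money transfer.
bool authorize(const Identity& id, const std::string& segment,
               Assurance required) {
    auto it = id.segments.find(segment);
    return it != id.segments.end() && it->second.level >= required;
}
```

Migrating from one ISP to another then amounts to replacing the issuer of a single segment, possibly after re-verification, while the other segments remain untouched.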

Conclusion

This can’t be all that’s required, or we wouldn’t still be scratching our heads about it at this late date. It’s clear that there are thorny policy issues in addition to some very challenging technical questions. Getting to a workable Internet identity framework will take hard work, so let’s get going.

[Here is a PDF file of the original editorial.]

From the Editors: A Witty Lesson

[This editorial was published originally in “Security & Privacy” Volume 2 Number 4 July/August 2004]

Archaeologists wonder why the city of Naachtun, capital of the Mayan kingdom of Masuul, was abandoned suddenly, with no evidence of natural or manmade disaster. No volcanic eruption. No invading hordes. Why, after more than 250 years of growth and economic vigor, was this city abruptly evacuated? Did the leading people in the city fail to react to some important change? What happened?

Two recent Internet worms, Slammer and Witty, have sounded an alarm to the entire computer security industry. To date, however, we have failed to respond to the alarm with the vigor warranted. Could we be dooming the Internet itself to the fate of Naachtun?

When Slammer hit in January 2003, it shocked the security community by growing with unprecedented rapidity, doubling every eight seconds or so. The bulk of the machines destined to be infected were hit within 10 minutes, although the impact on the Internet peaked after only three.
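The arithmetic makes the alarm concrete. As a back-of-the-envelope calculation, assuming idealized exponential growth at the editorial’s eight-second doubling time and CAIDA’s published estimate of roughly 75,000 Slammer victims:

```latex
% Idealized exponential growth with an 8-second doubling time:
n(t) = n_0 \cdot 2^{t/8}
% Starting from a single infected host (n_0 = 1), reaching the estimated
% victim population of roughly 75,000 machines takes
t = 8 \log_2(75000) \approx 8 \times 16.2 \approx 130 \text{ s} \approx 2.2 \text{ min}
```

That squares with the observed three-minute peak; the 10-minute figure reflects the slower mopping up of the last vulnerable stragglers.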

Oh my gosh, we all said; this is really bad. Later, we breathed a sigh of relief, thinking the worm’s virulence had been a fluke. We thought we’d never again see an exploit that could be distributed in a single UDP packet. And it was really our own fault, we acknowledged, because the vulnerability and its patch had been published in July 2002, six months prior to the attack. The lesson, we concluded, was that we had to tighten up our system-management capabilities.

The good news is that we haven’t yet seen another major worm propagated via single UDP packets.

Now for the bad news.

Media reports indicate that some new virus toolkits make malware construction as easy as running a computer game’s installation wizard. Although such toolkits might not be very serious threats in themselves, they warn us that we can no longer assume that the time scale of virus and worm propagation is slow enough for us to analyze, plan, and execute responses in the way we’re used to doing.

And now for the really bad news.

On 8 March 2004, a vulnerability was discovered in a popular security product. Ten days later, the vendor released a vulnerability notice along with a patch. The Witty worm, designed to exploit this vulnerability, struck the following day. The Witty worm is notable for four things:

It was released one day after the publication of the vulnerability with the associated patch.

It pretargeted a set of vulnerable machines, thus accelerating its initial growth.

It was actively destructive.

It targeted a security product.

Colleen Shannon and David Moore of the Cooperative Association for Internet Data Analysis (CAIDA) completed an excellent analysis of the Witty worm shortly after it hit; their report is included as a special feature in this issue of IEEE Security & Privacy. As they note, the key point is that, “the patch model for Internet security has failed spectacularly…. When end users participating in the best security practice that can be reasonably expected get infected with a virulent and damaging worm, we must reconsider the notion that end-user behavior can solve or even effectively mitigate the malicious software problem ….”

So now what?

The US National Cyber Security Partnership has recently completed a set of recommendations in response to the National Strategy to Secure Cyberspace report. One of its top recommendations is, “developing best practices for putting security at the heart of the software design process” (www.cyberpartnership.org/SDLCFULL.pdf). Denial is right out, though we will continue with business as usual for a while. Meanwhile, we’d better get cracking on replumbing the software-development infrastructure so that we can confidently know that any program that can hang out a socket won’t be vulnerable.

[Here is a PDF file of the original editorial.]

From the Editors: Whose Data Are These, Anyway?

[This editorial was published originally in “Security & Privacy” Volume 2 Number 3 May/June 2004]

A few years ago I had lunch with Ray Cornbill, a friend of mine who is a distinguished professor, though not a physician, at a major medical school. Ray’s unique sideline is as an international rugby coach. We chatted about our work and compared notes on current events. As we finished our lunch and prepared to depart, he made a remarkable statement: “I’m going over to the radiology practice to pick up my old x-rays.”

What did he mean by that? It turns out that the radiology lab that had taken his x-rays for the past couple of decades had decided that it could no longer afford to keep the old ones around. Because he was a well-known professor at an affiliated medical school, a staff member had given him a heads-up about the imminent disposal. Why did he care? Well, before becoming a rugby coach, he had been an active rugby player for many years. Rugby is, shall we say, a rather vigorous sport, and he had suffered several injuries to various parts of his musculoskeletal system. His x-rays represented his physical history; he felt strongly that losing this history would make it harder for his doctors to care for him in the future, so he wanted to collect the films before they were discarded. Notice that he wasn’t endangered by the disclosure of confidential information, but rather by the loss of valuable historical data.

You go to a doctor or a hospital for an x-ray, but have you ever wondered who “owns” that x-ray? Your doctor ordered it and analyzed it, the radiologist made it and stored it, your insurance company paid for it — but it’s a picture of your body and is probably of most value to you. Still, possession is nine-tenths of the law, as they say.

Beyond who holds ownership claims, however, this question raises the issue of what ownership is. If we were to express the rights of ownership as methods in C++, they’d have names like these: Copy. Move. Destroy. Read. Modify. Transmit to someone else. The lab invoked its “ownership by possession” right in planning to destroy the x-rays. This is not entirely unfair; after all, the lab had spent money for years to store them, and it is reasonable for it to want to stop spending money without a return. But in doing so, it ran up against others’ unarticulated but valid ownership claims.
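A sketch of that interface, with names invented here purely to make the list concrete, might look like this:

```cpp
#include <string>

// Illustrative only: the bundle of ownership rights listed above, expressed
// as an abstract C++ interface. In practice, different claimants (patient,
// lab, doctor, insurer) would each hold only a subset of these methods.
class OwnedRecord {
public:
    virtual ~OwnedRecord() = default;

    virtual OwnedRecord* copy() const = 0;                     // duplicate the record
    virtual void moveTo(const std::string& newHolder) = 0;     // change possession
    virtual void destroy() = 0;                                // the lab's disputed right
    virtual std::string read() const = 0;                      // inspect the contents
    virtual void modify(const std::string& change) = 0;        // alter the record
    virtual void transmitTo(const std::string& recipient) = 0; // disclose to another party
};
```

Framed this way, the dispute over the x-rays is a dispute over who holds destroy(): the lab claimed it by possession, while the patient’s interest lay in read() and copy() remaining available indefinitely.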

Lately, much discussion in our community has focused on digital rights management (DRM), which seems mostly an examination of copyright law and how it might change in light of current technological advances. The debate has grown heated as the ability to use technology to recast the terms and conditions of, for example, the sale of a book or recording has penetrated the awareness of both sellers and buyers. As the likes of Lawrence Lessig of Creative Commons and Clifford Lynch of the Coalition for Networked Information point out, the doctrine of first sale can be overthrown technologically. There might come a day when the fact that you’ve paid for a book doesn’t mean that you can lend it to a friend or even reread it years later. This has huge implications for libraries and educational institutions, as well as for anyone who ever replaces a computer or DVD player. These important issues are currently being debated in a wide range of fora.

What is most troubling, however, is that the only model that seems to get serious attention is wrapped around broadcast-like models of distribution — models with a few producers and many consumers, such as books, music, movies, cable TV, broadcast TV, and radio. Questions such as how to manage rights for x-ray images and medical records in general don’t attract much attention. For these sorts of intellectual property, we have to think more deeply about rights such as the right to destroy, the right to disclose, and the right to publish or otherwise render unsecret. Some work has been done in this area; practice around news photos, for example, tends to distinguish between the rights of celebrities and those of the rest of the population. The owners of important buildings have begun to assert rights over pictures that professional photographers take of them. And the performing arts community has established, by contract and practice, a system of fractional rights (primarily for managing royalty streams) that has some interesting characteristics.

Conclusion

Of course, this debate is not just about x-rays. It’s about any of your information sitting in a folder in your doctor’s office or on file at your bank. Certainly the medical community, struggling with new HIPAA rules governing medical privacy, doesn’t seem to have managed to provoke an engagement from the computer-science and software-engineering research community. This is unfortunate for at least two reasons: our community has the ability to make a unique and valuable contribution to the discussion, and now is the time to resolve conceptual issues and design decisions so that the next generation of infrastructure can accommodate the emerging requirements.

[Here is a PDF file of the original editorial.]

From the Editors: Toward a Security Ontology

[This editorial was published originally in “Security & Privacy” Volume 1 Number 3 May/June 2003]

There comes a point in the life of any new discipline when it realizes that it must begin to grow up. That time has come for the security field, as this magazine’s founding indicates. Many things come with adulthood — some desirable and some less so. If we’re to establish a place for ourselves in the engineering community as practitioners with expertise in security and privacy issues, we must be clear about what we do and what we don’t do, what can be expected of us, and the boundaries of our capabilities.

Today, far too much security terminology is vaguely defined. We find ourselves confused when we communicate with our colleagues and, worse yet, we confuse the people we’re trying to serve. Back in the bad old days, it seemed clearer. The Orange Book (see the related sidebar) was new and seemed relevant, and the industry agreed on the nature of the security problem. Today, we find the Orange Book, developed near the end of mainframes’ golden age and before the widespread networking of everything with a program counter, less helpful.

In the midst of a security incident, we have a responsibility to communicate clearly and calmly about what’s happening. We must be able to explain during incidents (and at other times) to fellow security experts, to other technologists, and to the general public in a clear and effective way just what it is that we do, how we do it, and how they benefit from our work. For this conversation, simple appeals for better security are too trivial, but detailed analyses of cryptographic key lengths are too fine-grained.

There have been several attempts at assembling glossaries of terms in the field. Although these have been useful contributions, glossaries are inherently unable to give form and direction to a field. A glossary is generally a collection of known terms and should be inclusive in scope. This means that it naturally includes contradictory or subtly overlapping terms, leaving it to the reader to decide which to use and which to discard. Independent practitioners will innocently make different choices, and suddenly we’re in comp.tower.of.babel.

It is the nature of an active technical field that there be continuing change. New systems are built, old ones are modified, and both have new vulnerabilities. Attacks are developed that exploit these vulnerabilities, letting bad guys wreak a certain amount of havoc before we can mobilize and close them off. Some of these exploits are not conceptually new: we’ve seen them before and we can classify them with other like things. This helps us predict outcomes and set expectations. Other things truly are new: we must name them so that we can talk about them later. What’s missing is a broader context that we can use to organize our thinking and discussion.

What the field needs is an ontology — a set of descriptions of the most important concepts and the relationships among them. Such an ontology would include at least these concepts: data, secrecy, privacy, availability, integrity, threats, exploits, vulnerabilities, detection, defense, cost, policy, encryption, response, value, owner, authorization, authentication, roles, methods, and groups. It should also contain these relationships: “owns,” “is an instance of,” “acts on,” “controls,” “values,” “characterizes,” “makes sets of,” “identifies,” and “quantifies.” A good ontology will help us organize our thinking and writing about the field and help us teach our students and communicate with our clients. A great ontology will help us report incidents more effectively, share data and information across organizations, and discuss issues among ourselves. Just as students of medicine must learn a medical ontology as part of their education, to avoid mistakes and improve the quality of care, so ultimately should all information technologists learn the meanings and implications of these terms and their relationships.
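Whatever the concrete encoding (the conclusion below suggests XML), the essential content is a set of typed relationships. As one small, hypothetical illustration, a fragment of such an ontology might be sketched as C++ types; the names mirror the concepts just listed and are not drawn from any standard security ontology.

```cpp
#include <string>
#include <vector>

// Hypothetical ontology fragment; names mirror the concepts listed above.
struct Vulnerability { std::string name; };     // a weakness in some system

struct Exploit {                                // an exploit "acts on" a vulnerability
    std::string name;
    const Vulnerability* actsOn;
};

struct Threat {                                 // a threat "controls" a set of exploits
    std::string name;
    std::vector<const Exploit*> controls;
};

struct Owner { std::string name; };

struct Asset {                                  // data that an owner "owns" and "values"
    std::string name;
    const Owner* ownedBy;
    double value;                               // "quantifies" the owner's valuation
    std::vector<const Vulnerability*> characterizedBy;
};
```

Once such relationships are explicit, two organizations reporting the same incident can at least agree on which concept each field refers to.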

Conclusion

There has been a substantial amount of good work along the lines of developing an ontology, starting at least with the Orange Book. However, recent rapid growth in the field has left the old ontology behind; as a result, it increasingly feels like we’re entering the precincts of the Tower of Babel. We need a good ontology. Maybe we can set the example by building our ontology in a machine-usable form using XML and developing it collaboratively. Is there a Linnaeus, a father of taxonomy, for our field waiting in the wings somewhere?

[Here is a PDF file of the original editorial.]