From the Editors: What’s in a Name?

[This editorial was published originally in “Security & Privacy” Volume 3 Number 2 March/April 2005]

“What’s in a name? That which we call a rose
By any other name would smell as sweet;”
—Romeo and Juliet, Act II, Scene ii

In ancient times, when the economy was agrarian and people almost never traveled more than a few miles from their places of birth, most people made do with a single personal name. Everyone you met generally knew you, and if there did happen to be two Percivals in town, people learned to distinguish between “tall Percival” and “short Percival.”

The development of travel and trade increased the number of different people you might meet in a lifetime and led to more complex names. By the Greek classical period, an individual’s name had become a three-part structure including a personal name, a patronymic, and a demotic, which identified the person’s deme—roughly, one’s village or clan.

This represented the end of the line in the evolution of names for several thousand years. During that time, people developed a range of concepts to enrich names with extra capabilities. Letters of introduction enabled travelers to enter society in a distant city almost as if they were locals. Renaissance banking developed the early ancestors of the letter of credit and the bank account, allowing money to be transferred from place to place without the attendant risk of physically carrying the gold. In response to these innovations, clever people invented novel ways to manage their names, for both legitimate and illegitimate purposes, giving us the alias, the doing business as, and the cover name. Americans invented personal reinvention, or at least made it a central cultural artifact, and developed a strong distaste for central management of the personal namespace.

Enter the computer

With the computer era came the user ID: first one, then two, and then infinity. With the Internet boom, we got retail e-commerce and the proliferation of user IDs and passwords. The venerable letter of introduction reemerged as an identity certificate, and the bank account evolved into dozens of different glittering creatures. While enabling online services to an increasingly mobile population, this explosion in user IDs created inconvenience and risk for people and institutions. As shopping and banking moved online, identity theft went high tech. We responded with two- and three-factor authentication, public key infrastructure, cryptographically strong authentication, and single-sign-on technologies such as Microsoft’s Passport and federated authentication from the Liberty Alliance.

We’re currently trapped between Scylla and Charybdis. On one side, civil libertarians warn that a centralized authentication service comprising a concentration of power and operational and systemic risk represents an unacceptable threat to a free society. On the other, we have a chaotic morass of idiosyncratic user ID and password implementations that inconvenience people and invite attack.

The King is dead! Long live the King!

With its controversial Passport technology, Microsoft attempted to address the visible need by offering a single user ID and password framework to sites across the Internet. With eBay’s recent defection, it’s increasingly clear that Passport isn’t winning large e-commerce sites. Ultimately, Passport failed commercially not because of competitors’ hostility or civil libertarians’ skepticism—or even because of the technical problems in the software—but rather because enterprises proved unwilling to cede management of their clients’ identities to a third party. This is an important lesson, but not a reason to give up on the effort to create a usable framework.

Who or what will step up and make the next attempt to meet the need? Did we learn enough from the debate about Passport to clearly identify the salient characteristics of what comes next? Have we made enough progress toward a consensus on the need for “a” solution that the next company up to bat will be willing to hazard the amount of treasure that Microsoft spent on Passport? Now is the time for a vigorous dialogue to get clarity. We aren’t likely again to see a comparable exercise of courage, however misguided, so it behooves us to reduce the risk for the next round of competitors.

A successful Internet identity service framework must admit multiple independent authorities. Some industries have a strong need to establish a common identity and will insist on controlling the credential. Some governments will decide to do likewise, whereas others will leave it to the private sector. But identity services shouldn’t be tied to any individual vendor, country, or technology. They should allow the dynamic assembly of sets of privileges, permitting participating systems to assign rights and augment verification requirements.

Thus, a level of proof sufficient for my ISP to permit me to send a social email could be overlaid with an extra layer by my bank before allowing me to transfer money. It should be possible to migrate my identity from one ISP to another without losing all of my privileges, although I might have to re-verify them. It should be possible to easily firewall segments of my identity from others so that losing control over one component doesn’t result in the loss of the others.
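To make that layering concrete, here is a minimal sketch, in C++ for definiteness, of how participating systems might each impose their own verification requirement on top of a shared identity. The assurance levels, names, and services below are illustrative assumptions, not features of any existing framework.

```cpp
// Illustrative sketch only: the assurance levels, services, and names below
// are assumptions for the example, not part of any real identity framework.
#include <iostream>
#include <string>

enum class Assurance { Password = 1, Token = 2, InPersonProofing = 3 };

struct Identity {
    std::string subject;
    Assurance   proven;   // strongest verification this identity has completed
};

struct Service {
    std::string name;
    Assurance   required; // each participating system sets its own bar
};

// A relying service accepts the shared identity only if the proof it carries
// meets or exceeds that service's own, independently chosen, requirement.
bool authorize(const Identity& id, const Service& svc) {
    return static_cast<int>(id.proven) >= static_cast<int>(svc.required);
}

int main() {
    Identity alice{"alice@example.net", Assurance::Password};
    Service  email{"ISP email", Assurance::Password};
    Service  transfer{"bank funds transfer", Assurance::Token};

    std::cout << email.name    << ": " << (authorize(alice, email)    ? "allowed" : "needs more proof") << '\n';
    std::cout << transfer.name << ": " << (authorize(alice, transfer) ? "allowed" : "needs more proof") << '\n';
}
```

The point of the sketch is only that the bank’s extra layer is the bank’s own decision; nothing about the base identity has to change for it to be imposed.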

Conclusion

This can’t be all that’s required, or we wouldn’t still be scratching our heads about it at this late date. It’s clear that there are thorny policy issues in addition to some very challenging technical questions. Getting to a workable Internet identity framework will take hard work, so let’s get going.

From the Editors: Charge of the Light Brigade

[Originally published in the January/February 2008 issue (Volume 6 number 1) of IEEE Security & Privacy magazine.]

In 1970, the late Per Brinch Hansen wrote a seminal article (“The Nucleus of a Multiprogramming System”) that articulated and justified what today we call policy/mechanism separation. He introduced the concept in the context of an operating system’s design at a time when experts felt we lacked a clear understanding of what the ultimate shape of operating systems would be. The concept, like other powerful memes, was so compelling that it took on a life of its own and is now an article of faith in CS education—taught without reference to the original context.

The idea isn’t original to computer science—it has existed for thousands of years. In martial terms, it’s reflected in the popular paraphrase from Alfred, Lord Tennyson’s poem, Charge of the Light Brigade: “ours is not to reason why; ours is but to do and die.” Separation of policy and mechanism has become an article of faith in our field because it’s so powerful at helping us distinguish between the things that we can decide now and the things that we might need to change later. The key is identifying the knobs and dials of potential policy choices during system design and implementing a flexible way to allow those choices to be bound late.

In this issue of IEEE Security & Privacy, the article “Risking Communications Security: Potential Hazards of the Protect America Act” (p. 24) explores some of the hazards associated with a blind application of this principle to large infrastructures such as the Internet and the telephone system. Although this analysis is conducted in terms of a specific US law, it raises universal questions faced by free societies when considering the tension between individual privacy rights and the collective “right” to security.

What we see at play here are three large policy objectives in conflict: first, allowing the security establishment to scrutinize communications it legitimately believes would-be terrorists could use; second, protecting the privacy of innocent people using communications networks; and third, safeguarding the communications systems themselves so that their continuing operation isn’t jeopardized. The article cites several recent examples from around the world in which the introduction of ill-considered monitoring systems has led to disastrous unintended consequences.

To date, the public debate on the Protect America Act has focused on the first two issues: the trade-off between security and privacy. Whether a piece of legislation drafted in haste by one of the most divided US congresses in history could have found a wise balance will be learned only retrospectively. What the authors of “Risking Communications Security” clearly demonstrate is that the third policy objective certainly isn’t addressed in the law.

But that’s okay, you might be tempted to say: policy should be set independently from how it is implemented. As Tennyson illustrates, it’s a good theory—but in practice, the unintended consequences can be shocking. One of the preconditions to doing a good job of separating policy from mechanism is that the knobs and dials offered to policy writers can be implemented. As the article’s authors observe, the law assumes that communications networks can deliver high-quality information about the geographic location of end points. This isn’t easy and might not be possible for many modes of communication, particularly for voice over IP (VoIP), cell phones, and WiFi connections.

Leaving issues to be addressed later, as this law does, is a time-honored tradition in legislation. The Protect America Act expires in February 2008, so there will be at least one chance to renegotiate the terms. This means that this specific act won’t affect capital spending by infrastructure builders in the US until after the rewrite. It gives interested people in the US and elsewhere a chance to mitigate the systemic risk by influencing the next rewrite, assuming that a more rational political conversation emerges in the US Congress sometime in the next few years. This article should be a key input to that conversation.

Patch Management – Bits, Bad Guys, and Bucks!

(This article was originally published in 2003 by Secure Business Quarterly, a now-defunct publication.  Not having an original copy handy and not being able to refer people to the original site, I have retrieved a copy from the Internet Archive Wayback Machine (dated 2006 in their archive).  The text of the original article is reproduced here for convenience.)

After the flames from Slammer’s attack were doused and the technology industry caught up on its lost sleep, we started asking questions. Why did this happen? Could we have prevented it? What can we do to keep such a thing from happening again?

These are questions we ask after every major security incident, of course. We quickly learned that the defect in SQL Server had been identified and patches prepared for various platforms more than six months before, so attention turned to system administrators. Further inquiry, however, shows that things are more complex.

Several complicating factors conspired to make successful patching problematic. First, there were several different patches out, none of which had been widely or well publicized. In addition, confusing version incompatibilities turned the patching of some systems into much larger endeavors, as chains of dependencies had to be unraveled and entire sets of patches applied and tested. And finally, to add insult to injury, at least one patch introduced a memory leak into SQL Server.

As if that weren’t enough, MSDE includes an invisible SQL Server. MSDE ships as a component of Visual Studio, which made that product vulnerable even though it included neither an explicit SQL Server license nor any DBA visibility. That shouldn’t have added risk, except that some pieces of software were shipped with MSDE and other no-longer-needed parts of the development environment included. As we all know, many software products are shipped with development artifacts intertwined with the production code because disk space is cheaper than keeping track of and subsequently removing all of the trash lying around in the development tree. And those development tools are really useful when tech support has to diagnose a problem.

To compound the challenge, patches in general can’t be trusted without testing. A typical large environment runs multiple versions of desktop operating systems, say NT4, 2000, XP Home, and XP Pro. If the patch addresses multiple issues across several versions of a common application, you’re talking about a product that has about fifty configuration permutations. Testing that many cases represents significant time and cost.

Finally, there’s the sheer volume of patches flowing from the vendor community. There’s no easy way for an administrator to tell whether a particular patch is ‘really serious,’ ‘really, really, serious,’ or ‘really, really, really, serious.’ The industry hasn’t yet figured out how to normalize all of the verbiage. Even so, knowing that a weakness exists doesn’t give any insight into how virulent a particular exploit of that weakness might prove to be. Slammer was remarkably virulent, but its patch went out to the systems community along with thousands of remedies for other weaknesses that haven’t been exploited nearly so effectively.

Unfortunately, an aggressive strategy of applying all patches is uneconomical under the industry’s current operating model. Let’s look at some numbers. These are benchmark figures, broadly typical of costs and performance across the entire industry rather than specific to any individual company. The state of automation in the desktop OS world has improved dramatically in the last ten years. A decade ago, an upgrade to a large population of desktop machines required a human visit to each machine. Today the automated delivery of software is dramatically better, but not yet where it ultimately needs to be.

Let’s say, for the purpose of argument, that automated patch installation for a large network is 90% successful. (A colleague suggests that today a more realistic number is 80%, “even assuming no restriction on network capacity and using the latest version of SMS”; he characterized 90% as the “go out and get drunk” level of success.) A person must visit each of the 10% of machines for which the automated installation failed, figure out what went wrong, and install the patch by hand. For a large corporate network with, say, 50,000 machines to be patched, that translates to 5,000 individual visits. A benchmark number for human support at the desktop is about $50 per visit. Thus, a required patch in a modern environment translates into a $250,000 expenditure. That’s not a trivial amount of money, and it makes the role of a system manager, who faces tight budgets and skeptical customers, even more challenging.

The costs aside, how close to 100% is required to close a loophole? Informal comments from several CISOs suggest that Slammer incapacitated corporate networks with roughly 50,000 hosts by infecting only about two hundred machines. That’s 0.4%. With NIMDA, it was worse: one enterprise disconnected itself from the Internet for two weeks because just two copies of NIMDA actually triggered.

What can we do to make our systems less vulnerable and reduce both the probability of another Slammer incident and, more importantly, the harm that such an incident threatens? We can work together in the industry to improve the automated management of systems. Every 1% improvement in the performance of automated patch installation systems translates directly into $25,000 in cash savings for the required patches in our example. An improvement of 9%, from 90% to 99%, translates into a savings of $225,000 for each patch that must be distributed. Do that a few times and, as Everett Dirksen noted, “pretty soon you’re talking about real money.”
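The arithmetic behind those figures is simple enough to write down explicitly. The sketch below only restates the benchmark numbers quoted above (50,000 machines, roughly $50 per desk-side visit, automated success rates of 90% versus 99%); nothing in it goes beyond the text.

```cpp
// Back-of-the-envelope restatement of the patch-cost figures quoted in the text.
#include <iostream>

int main() {
    const double machines     = 50000;   // hosts that need the patch
    const double costPerVisit = 50.0;    // benchmark desk-side support cost

    // Manual cleanup cost for the machines the automated installation missed.
    auto manualCost = [&](double successRate) {
        return machines * (1.0 - successRate) * costPerVisit;
    };

    std::cout << "90% automated success: $" << manualCost(0.90) << '\n';  // $250,000 per patch
    std::cout << "99% automated success: $" << manualCost(0.99) << '\n';  // $25,000 per patch
    std::cout << "Savings per patch from that 9-point improvement: $"
              << (manualCost(0.90) - manualCost(0.99)) << '\n';           // $225,000
}
```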

Improving the effectiveness of patch application requires that we improve both our software packaging and our patch distribution and execution automation. After we improve packaging and distribution, we can figure out a way to easily and quickly tell what components reside on each system, ideally by asking the system to tell us. Databases get out of date but the system itself usually won’t lie. We can build automated techniques to identify all of the patches from all relevant vendors that need to be applied to a given system — and then apply them. We can either simplify our system configurations, which has obvious benefits, or figure out ways to ensure that components are better behaved, which reduces the combinatorial complexity of applying and testing the applied patches.

The methods and practices of the security industry, particularly the high-technology security world, have for years been derived from those developed for national security problems. These are problems for which the cost of failure is so enormous, as in the theft or misuse of nuclear weapons, that failure is not an option. For this class of problem, the commercial practice of balancing risk and cost is impossible, except in the vacuous limiting case in which cost is infinite. Now we are working through practical security management in a commercial environment, and we are beginning to get our hands around some of the quantitative aspects of the problems we face. If getting a patch out to all computers in our environment will cost us a quarter of a million dollars and we have X patches per year, then we have to weigh that cost against the cost of the harm suffered and the cleanup expense incurred if we don’t distribute all of the patches. That calculus tells us to spend some money, though not an arbitrarily large amount, on improving the quality of our automated patch distribution and application processes, with an objective of absolute 100% coverage for automatic updates. It tells us to work for a better system of quantifying threat severity. It tells us to spend effort on strengthening our incident response capabilities.

Cyberassault on Estonia

[This editorial was published originally in “Security & Privacy” Volume 5 Number 4 July/August 2007]

Estonia recently survived a massive distributed denial-of-service (DDoS) attack that came on the heels of the Estonian government’s relocation of a statue commemorating Russia’s 1940s wartime role. This action inflamed the feelings of the substantial Russian population in Estonia, as well as those of various elements in Russia itself.

Purple prose then boiled over worldwide, with apocalyptic announcements that a “cyberwar” had been unleashed on the Estonians. Were the attacks initiated by hot-headed nationalists or by a nation state? Accusations and denials have flown, but no nation state has claimed authorship.

It’s not really difficult to decide if this was cyberwarfare or simple criminality. Current concepts of war require people in uniforms or a public declaration. There’s no evidence that such was the case. In addition, there’s no reason to believe that national resources were required to mount the attack. Michael Lesk’s piece on the Estonia attacks in this issue (see the Digital Protection department on p. 76) includes estimates that, at current botnet leasing prices, the entire attack could have been accomplished for US$100,000, a sum so small that any member of the upper middle class in Russia, or elsewhere, could have sponsored it.

Was there national agency? It’s highly doubtful that Russian President Vladimir Putin or anyone connected to him authorized the attacks. If any Russian leader had anything to say about the Estonians, it was more likely an intemperate outburst like Henry II’s exclamation about Thomas Becket, “Will no one rid me of this troublesome priest?”

We can learn from this, however: security matters, even for trivial computers. A few tens of thousands of even fairly negligible PCs, when attached by broadband connections to the Internet and commanded in concert, can overwhelm all modestly configured systems — and most substantial ones.

Engineering personal systems so that they can’t be turned into zombies is a task that requires real attention. In the meantime, the lack of quality-of-service facilities in our network infrastructure will leave us vulnerable to future botnet attacks. Several avenues are available to address the weaknesses in our current systems, and we should be exploring all of them. Faced with epidemic disease, financial panic, and other mass threats to the common good, we’re jointly and severally at risk and have a definite and legitimate interest in seeing to it that the lower limits of good behavior aren’t violated.

From the Estonia attacks, we’ve also learned that some national military institutions are, at present, hard-pressed to defend their countries’ critical infrastructures and services. Historically, military responses to attacks have involved applying kinetic energy to the attacking forces or to the attackers’ infrastructure. But when the attacking force is tens or hundreds of thousands of civilian PCs hijacked by criminals, what is the appropriate response? Defense is left to the operators of the services and of the infrastructure, with the military relegated to an advisory role, something that both civilians and military must find uncomfortable. Of course, given the murky situations involved in cyberwar, we’ll probably never fully learn what the defense establishments could or did do.

Pundits have dismissed this incident, arguing that this is a cry of “wolf!” that should be ignored (see www.nytimes.com/2007/06/24/weekinreview/24schwartz.html). Although it’s true that we’re unlikely to be blinded to an invasion by the rebooting of our PCs, it’s naïve to suggest that our vulnerability to Internet disruptions has passed its peak. Cyberwar attacks, as demonstrated in 2003 by Slammer, have the potential to disable key infrastructures. To ignore that danger is criminally naïve. Nevertheless, all is not lost.

Conclusion

Events like this have been forecast for several years, and as of the latest reports, there were no surprises in this attack. The mobilization of global expertise to support Estonia’s network defense was heartening and will probably be instructive to study. Planners of information defenses and drafters of future cyberdefense treaties should be contemplating these events very carefully. This wasn’t the first such attack — and it won’t be the last.

[Here is a PDF file of the original editorial.]

Insecurity through Obscurity

[This editorial was published originally in “Security & Privacy” Volume 4 Number 5 September/October 2006]

Settling on a design for a system of any sort involves finding a workable compromise among functionality, feasibility, and finance. Does it do enough of what the sponsor wants? Can it be implemented using understood and practical techniques? Is the projected cost reasonable when set against the anticipated revenue or savings?

In the case of security projects, functionality is generally stated in terms of immunity or resistance to attacks that seek to exploit known vulnerabilities. The first step in deciding whether to fund a security project is to assess whether its benefits outweigh the costs. This is easy to state but hard to achieve.

What are the benefits? Some set of exploits will be thwarted. But how likely would they be to occur if we did nothing? And how likely will they be to occur if we implement the proposed remedy? What is the cost incurred per incident to repair the damage if we do nothing? Armed with the answers to these often unanswerable questions, we can get some sort of quantitative handle on the benefits of implementation in dollars-and-cents terms.

What are the costs? Specification, design, implementation, deployment, and operation of the solution represent the most visible costs. What about the efficiency penalty that stems from the increased operational complexity the solution imposes? It represents an opportunity cost: production you might have achieved if you hadn’t implemented the solution.
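Read one way, the questions above are the inputs to an expected-loss comparison. The sketch below only illustrates that framing; every number in it is invented for the example, and in practice these are exactly the quantities we rarely know.

```cpp
// Illustrative expected-loss comparison; every figure below is invented.
#include <iostream>

int main() {
    // Guesses at the answers to the questions posed above, per year.
    const double incidentsIfWeDoNothing = 4.0;      // expected incidents with no remedy
    const double incidentsWithRemedy    = 0.5;      // expected incidents with the remedy
    const double repairCostPerIncident  = 80000.0;  // damage repair and cleanup per incident

    // Visible and hidden costs of the proposed remedy, per year.
    const double buildAndOperateCost    = 120000.0; // specify, design, implement, deploy, operate
    const double lostProductivityCost   = 30000.0;  // opportunity cost of added operational complexity

    const double benefit = (incidentsIfWeDoNothing - incidentsWithRemedy) * repairCostPerIncident;
    const double cost    = buildAndOperateCost + lostProductivityCost;

    std::cout << "Expected annual benefit: $" << benefit << '\n';   // $280,000
    std::cout << "Expected annual cost:    $" << cost    << '\n';   // $150,000
    std::cout << (benefit > cost ? "The project pays for itself"
                                 : "Look for a cheaper remedy") << '\n';
}
```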

In the current world of security practice, it’s far too common, when faced with vast unknowns about benefits, to fall back on one of two strategies: either spend extravagantly to protect against all possible threats or ignore threats too expensive to fix. Protection against all possible threats is an appropriate goal when securing nuclear weapons or similar assets for which failure is unacceptable, but for most other situations, a more pragmatic approach is indicated.

Unfortunately, as an industry, we’re afflicted with a near complete lack of quantitative information about risks. Most of the entities that experience attacks and deal with the resultant losses are commercial enterprises concerned with maintaining their reputation for care and caution. This leads them to the observation that disclosing factual data can assist their attackers and provoke anxiety in their clients. The lack of data-sharing arrangements has resulted in a near-complete absence of incident documentation standards; as such, even if organizations want to compare notes, they face a painful exercise in converting apples to oranges.

If our commercial entities have failed, is there a role for foundations or governments to act? Can we parse the problem into smaller pieces, solve them separately, and make progress that way? Other fields, notably medicine and public health, have addressed this issue more successfully than we have. What can we learn from their experiences? Doctors almost everywhere in the world are required to report the incidence of certain diseases and have been for many years. California’s SB 1386, which requires disclosure of computer security breaches, is a fascinating first step, but it’s just that — a first step. Has anyone looked closely at the public health incidence reporting standards and attempted to map them to the computer security domain? The US Federal Communications Commission (FCC) implemented telephone outage reporting requirements in 1991 after serious incidents and in 2004 increased their scope to include all the communications platforms it regulates. What did it learn from those efforts, and how can we apply them to our field?

The US Census Bureau, because it’s required to share much of the data that it gathers, has developed a relatively mature practice in anonymizing data. What can we learn from the Census Bureau that we can apply to security incident data sharing? Who is working on this? Is there adequate funding?

Conclusion

These are all encouraging steps, but they’re long in coming and limited in scope. Figuring out how to gather and share data might not be as glamorous as cracking a tough cipher or thwarting an exploit, but it does have great leverage.

[Here is a PDF file of the original editorial.]

The Impending Debate

[This editorial was published originally in “Security & Privacy” Volume 4 Number 2 March/April 2006]

There’s some scary stuff going on in the US right now. President Bush says that he has the authority to order, without a warrant, eavesdropping on telephone calls and emails from and to people who have been identified as terrorists. The question of whether the president has this authority will be resolved by a vigorous debate among the government’s legislative, executive, and judicial branches, accompanied, if history is any guide, by copious quantities of impassioned rhetoric and perhaps even the rending of garments and tearing of hair. This is as it should be.

The president’s assertion is not very far, in some ways, from Google’s claims that although its Gmail product examines users’ email for the purpose of presenting to them targeted advertisements, user privacy isn’t violated because no natural person will examine your email. The ability of systems to mine vast troves of data for information has now arrived, but policy has necessarily lagged behind. The clobbering of Darpa’s Total Information Awareness initiative (now renamed Terrorism Information Awareness; http://searchsecurity.techtarget.com/sDefinition/0,,sid14_gci874056,00.html) in 2004 was a lost opportunity to explore these topics in a policy debate, an opportunity we may now regain. Eavesdropping policy conceived in an era when leaf-node monitoring was the only thing possible isn’t necessarily the right one in this era of global terrorism. What the correct policy should be, however, requires deep thought and vigorous debate lest the law of unintended consequences take over.

Although our concerns in IEEE Security & Privacy are perhaps slightly less momentous, we are, by dint of our involvement with and expertise in the secure transmission and storage of information, particularly qualified to advise the participants in the political debate about the realities and risks associated with specific assumptions, such as the risks that data mining presents. As individuals, we’ll be called on to inform and advise both the senior policy makers who will engage in this battle and our friends and neighbors who will watch it and worry about the outcome. It behooves us to do two things to prepare for this role. One, we should take the time now to inform ourselves of the technical facts, and two, we should analyze the architectural options and their implications.

Unlike classical law enforcement wiretapping technology (covered in depth in S&P’s November/December 2005 issue), which operates at the leaves of the communication interconnection tree, this surveillance involves operations at or close to the root. When monitoring information at the leaves, only information directed to the specific leaf node is subject to scrutiny. It’s difficult when monitoring at the root to see only communications involving specific players: monitoring at the root necessarily involves filtering out the communications not being monitored, something that involves looking at them. When examining a vast amount of irrelevant information, we haven’t yet demonstrated a clear ability to separate signal (terrorist communication, in this case) from noise (innocuous communication). By tracking down false leads, we waste expensive skilled labor, and might even taint innocent people with suspicion that could feed hysteria in some unfortunate future circumstance.

Who’s involved in the process of examining communications and what are the possible and likely outcomes of engaging in this activity? The security and privacy community has historically developed scenario analysis techniques in which we hypothesize several actors, both well- and ill-intentioned, and contemplate their actions toward one another as if they were playing a game. Assume your adversary makes his best possible move. Now assume you make your best possible response. And so on. In the case of examining communications at the root, we have at least four actors to consider.

One is the innocent communicator whom we’re trying to protect; another is the terrorist whom we’re trying to thwart. The third is the legitimate authority working to protect the innocent from the terrorist, and the fourth, whom we ignore at our peril, is the corrupted authority who, for some unknown reason, is tempted to abuse the information available to him to the detriment of the innocent. We could choose, in recognition of the exigencies of a time of conflict, to reduce our vigilance toward the corrupted authority, but history has taught us that ignoring that possibility puts us and our posterity in mortal peril.

Conclusion

Our community’s challenge in the coming debate is to participate effectively, for we occupy two roles at once. We are technical experts to whom participants turn for unbiased fact-based guidance and insight, and we are simultaneously concerned global citizens for whom this debate is meaningful and important. We must avoid the temptation to use our expertise to bias the debate, but we must also avoid being passive bystanders. We must engage thoughtfully and creatively. We owe this to our many countries, our colleagues, our neighbors, our friends, our families, and ourselves.

[Here is a PDF file of the original editorial.]

From the Editors: A Witty Lesson

[This editorial was published originally in “Security & Privacy” Volume 2 Number 4 July/August 2004]

Archaeologists wonder why the city of Naachtun, capital of the Mayan kingdom of Masuul, was abandoned suddenly, with no evidence of natural or manmade disaster. No volcanic eruption. No invading hordes. Why, after more than 250 years of growth and economic vigor, was this city abruptly evacuated? Did the leading people in the city fail to react to some important change? What happened?

Two recent Internet worms, Slammer and Witty, have sounded an alarm to the entire computer security industry. To date, however, we have failed to respond to the alarm with the vigor warranted. Could we be dooming the Internet itself to the fate of Naachtun?

When Slammer hit in January 2003, it shocked the security community by growing with unprecedented rapidity, doubling every eight seconds or so. The bulk of the machines destined to be infected were hit within 10 minutes, although the impact on the Internet peaked after only three.

Oh my gosh, we all said; this is really bad. Later, we breathed a sigh of relief, thinking the worm’s virulence had been a fluke. We thought we’d never again see an exploit that could be distributed in a single UDP packet. And it was really our own fault, we acknowledged, because the vulnerability and its patch were published in July 2002, six months prior to the attack; the lesson is that we have to tighten up our system-management capabilities.

The good news is that we haven’t yet seen another major worm propagated via single UDP packets.

Now for the bad news.

Media reports indicate that some new virus toolkits make malware construction as easy as running a computer game’s installation wizard. While such toolkits might not be very serious threats in themselves, they warn us that we can no longer assume that the time scale for virus and worm propagation is slow enough to analyze, plan, and execute in the way we’re used to doing.

And now for the really bad news.

On 8 March 2004, a vulnerability was discovered in a popular security product. Ten days later, the vendor released a vulnerability notice along with a patch. The Witty worm, designed to exploit this vulnerability, struck the following day. The Witty worm is notable for four things:

It was released one day after the publication of the vulnerability with the associated patch.

It pretargeted a set of vulnerable machines, thus accelerating its initial growth.

It was actively destructive.

It targeted a security product.

Colleen Shannon and David Moore of the Cooperative Association for Internet Data Analysis (CAIDA) completed an excellent analysis of the Witty worm shortly after it hit; their report is included as a special feature in this issue of IEEE Security & Privacy. As they note, the key point is that, “the patch model for Internet security has failed spectacularly…. When end users participating in the best security practice that can be reasonably expected get infected with a virulent and damaging worm, we must reconsider the notion that end-user behavior can solve or even effectively mitigate the malicious software problem ….”

So now what?

The US National Cyber Security Partnership has recently completed a set of recommendations in response to the National Strategy to Secure Cyberspace report. One of its top recommendations is, “developing best practices for putting security at the heart of the software design process” (www.cyberpartnership.org/SDLCFULL.pdf). Denial is right out, though we will continue with business as usual for a while. Meanwhile, we’d better get cracking on replumbing the software-development infrastructure so that we can confidently know that any program that can hang out a socket won’t be vulnerable.

[Here is a PDF file of the original editorial.]

From the Editors: Whose Data Are These, Anyway?

[This editorial was published originally in “Security & Privacy” Volume 2 Number 3 May/June 2004]

A few years ago I had lunch with Ray Cornbill, a friend of mine who is a distinguished professor, though not a physician, at a major medical school. Ray’s unique sideline is as an international rugby coach. We chatted about our work and compared notes on current events. As we finished our lunch and prepared to depart, he made a remarkable statement: “I’m going over to the radiology practice to pick up my old x-rays.”

What did he mean by that? It turns out that the radiology lab that had taken his x-rays for the past couple of decades decided that it could no longer afford to keep the old ones around. Because he was a well-known professor at an affiliated medical school, a staff member had given him the heads up about the imminent disposal. Why did he care? Well, before becoming a rugby coach, he was an active rugby player for many years. Rugby is, shall we say, a rather vigorous sport, and he had suffered several injuries to various parts of his musculoskeletal system. His x-rays represented his physical history; he felt strongly that losing this history would make it harder for his doctors to care for him in the future, so he wanted to collect them before they were discarded. Notice that he wasn’t endangered by disclosure of confidential information, but rather by the loss of valuable historical data.

You go to a doctor or a hospital for an x-ray, but did you ever wonder who “owns” that x-ray? Your doctor ordered it and analyzed it, the radiologist made it and stored it, your insurance company paid for it — but it’s a picture of your body and is probably of the most value to you. But possession is nine-tenths of the law, as they say.

More than who has ownership claims, however, this question raises the issue of what ownership is. If we were to express the rights of ownership as methods in C++, they’d have names like these: Copy. Move. Destroy. Read. Modify. Transmit to someone else. The lab invoked its “ownership by possession” right in planning to destroy the x-rays. This is not entirely unfair; after all, the lab had spent money for years to store them, and it’s reasonable for it to want to stop spending that money without a return. But in doing so, it ran up against others’ unarticulated but valid ownership claims.
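Taking that C++ framing literally, a minimal sketch of such an interface might look like the following; the class and method names simply echo the list above and are not drawn from any real rights-management system.

```cpp
// Sketch of the ownership rights named above, expressed as an abstract interface.
// Each method corresponds to a distinct right that a given party may or may not hold.
#include <string>

class OwnedRecord {  // e.g., an x-ray image or another medical record
public:
    virtual ~OwnedRecord() = default;

    virtual OwnedRecord* Copy() const = 0;                          // duplicate the record
    virtual void Move(const std::string& newCustodian) = 0;         // change who stores it
    virtual void Destroy() = 0;                                     // the right the lab invoked
    virtual std::string Read() const = 0;                           // inspect the contents
    virtual void Modify(const std::string& change) = 0;             // alter the contents
    virtual void Transmit(const std::string& recipient) const = 0;  // send to someone else
};
```

Seen this way, the dispute over the x-rays is a dispute over which parties hold which of these methods; the lab’s planned disposal was simply an exercise of Destroy by the party with possession.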

Lately, much discussion in our community has focused on digital rights management (DRM), which seems mostly an examination of copyright law and how it might change in light of current technological advances. The debate has grown heated as the ability to use technology to recast the terms and conditions of, for example, the sale of a book or recording has penetrated the awareness of both sellers and buyers. As the likes of Lawrence Lessig of Creative Commons and Clifford Lynch of the Coalition for Networked Information point out, the doctrine of first sale can be overthrown technologically. There might come a day when the fact that you’ve paid for a book doesn’t mean that you can lend it to a friend or even reread it years later. This has huge implications for libraries and educational institutions, as well as for people who ever replace their computers or DVD players. These important issues are currently being debated in a wide range of fora.

What is most troubling, however, is that the only model that seems to get serious attention is wrapped around broadcast-like models of distribution — models with a few producers and many consumers, such as books, music, movies, cable TV, broadcast TV, and radio. Questions such as rights management for things like x-ray images and medical records in general don’t attract much attention. For these sorts of intellectual property, we have to think more deeply about rights such as the right to destroy, the right to disclose, to publish or otherwise render unsecret, and others. Some work has been done in this area, such as for items like news photos, which tend to distinguish between the rights of celebrities and those of the rest of the population. The owners of important buildings have begun to assert rights to pictures that professional photographers take of them. And the performing arts community has established, by contract and practice, a system of fractional rights (primarily for managing royalty streams) that has some interesting characteristics.

Conclusion

Of course, this debate is not just about x-rays. It’s about any of your information sitting in a folder in your doctor’s office or that your bank has on file. Certainly the medical community, struggling with new HIPAA rules governing medical privacy, doesn’t seem to have managed to provoke an engagement from the computer-science and software-engineering research community. This is unfortunate for several reasons: our community has the ability to make a unique and valuable contribution to the discussion, and now is the time to resolve conceptual issues and design decisions so that the next generation of infrastructure can accommodate the emerging requirements.

[Here is a PDF file of the original editorial.]

From the Editors: Toward a Security Ontology

[This editorial was published originally in “Security & Privacy” Volume 1 Number 3 May/June 2003]

There comes a point in the life of any new discipline when it realizes that it must begin to grow up. That time has come to the security field, as this magazine’s founding indicates. Many things come with adulthood — some desirable and some less so. If we’re to establish a place in the engineering community for ourselves as practitioners with expertise in security and privacy issues, we must be clear about what it is that we do and what we don’t do; what can be expected of us and the boundaries of our capabilities.

Today, far too much security terminology is vaguely defined. We find ourselves confused when we communicate with our colleagues and, worse yet, we confuse the people we’re trying to serve. Back in the bad old days, it seemed clearer. The Orange Book (see the related sidebar) was new and seemed relevant, and the industry agreed on the nature of the security problem. Today, we find the Orange Book, developed near the end of mainframes’ golden age and before the widespread networking of everything with a program counter, less helpful.

In the midst of a security incident, we have a responsibility to communicate clearly and calmly about what’s happening. We must be able to explain during incidents (and at other times) to fellow security experts, to other technologists, and to the general public in a clear and effective way just what it is that we do, how we do it, and how they benefit from our work. For this conversation, simple appeals for better security are too trivial, but detailed analyses of cryptographic key lengths are too fine-grained.

There have been several attempts at assembling glossaries of terms in the field. Although these have been useful contributions, glossaries are inherently unable to give form and direction to a field. A glossary is generally a collection of known terms and should be inclusive in scope. This means that it naturally includes contradictory or subtly overlapping terms, leaving it to the reader to decide which to use and which to discard. Independent practitioners will innocently make different choices, and suddenly we’re in comp.tower.of.babel.

It is the nature of an active technical field that there be continuing change. New systems are built, old ones are modified, and both have new vulnerabilities. Attacks are developed that exploit these vulnerabilities, letting bad guys wreak a certain amount of havoc before we can mobilize and close them off. Some of these exploits are not conceptually new: we’ve seen them before and we can classify them with other like things. This helps us predict outcomes and set expectations. Other things truly are new: we must name them so that we can talk about them later. What’s missing is a broader context that we can use to organize our thinking and discussion.

What the field needs is an ontology — a set of descriptions of the most important concepts and the relationships among them. Such an ontology would include at least these concepts: data, secrecy, privacy, availability, integrity, threats, exploits, vulnerabilities, detection, defense, cost, policy, encryption, response, value, owner, authorization, authentication, roles, methods, and groups. It should also contain these relationships: “owns,” “is an instance of,” “acts on,” “controls,” “values,” “characterizes,” “makes sets of,” “identifies,” and “quantifies.” A good ontology will help us organize our thinking and writing about the field and help us teach our students and communicate with our clients. A great ontology will help us report incidents more effectively, share data and information across organizations, and discuss issues among ourselves. Just as students of medicine must learn a medical ontology as part of their education, to avoid mistakes and improve the quality of care, so ultimately should all information technologists learn the meanings and implications of these terms and their relationships.
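As a hint of what a machine-usable form might eventually look like, the fragment below encodes a handful of the concepts and relationships listed above as subject-relation-object triples. The editorial suggests XML; the fragment uses a small C++ structure purely to keep the illustration in one language, and the particular triples are illustrative only, not a proposed standard.

```cpp
// A few of the concepts and relationships above, encoded as simple triples.
// The selection is illustrative only; a real ontology would be far richer.
#include <iostream>
#include <string>
#include <vector>

struct Triple {
    std::string subject;
    std::string relation;
    std::string object;
};

int main() {
    const std::vector<Triple> ontology = {
        {"owner",          "owns",              "data"},
        {"exploit",        "is an instance of", "threat"},
        {"exploit",        "acts on",           "vulnerability"},
        {"policy",         "controls",          "authorization"},
        {"owner",          "values",            "data"},
        {"authentication", "identifies",        "owner"},
    };

    for (const auto& t : ontology)
        std::cout << t.subject << " --[" << t.relation << "]--> " << t.object << '\n';
}
```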

Conclusion

There has been a substantial amount of good work along the lines of developing an ontology, starting at least with the Orange Book. However, recent rapid growth in the field has left the old ontology behind; as a result, it increasingly feels like we’re entering the precincts of the Tower of Babel. We need a good ontology. Maybe we can set the example by building our ontology in a machine-usable form using XML and developing it collaboratively. Is there a Linnaeus, a father of taxonomy, for our field waiting in the wings somewhere?

[Here is a PDF file of the original editorial.]