From the Editors: Reading (with) the Enemy

[Originally published in the January/February 2009 issue (Volume 7 number 1) of IEEE Security & Privacy magazine.]

Back in the July/August 2006 issue of IEEE Security & Privacy, the editors of the Book Reviews department wrote an essay entitled, “Why We Won’t Review Books by Hackers.” They argued that to review such books would be to “tacitly endorse a convicted criminal who now wants to pass himself off as a consultant.” We published two letters to the editor in the subsequent issue, and that was the end of the topic. Or so you thought.

In this issue, I argue that whether or not S&P reviews them, you should read the writings of bad guys, with the usual caveat: read them only if they have something useful to say and are well written. This topic has been debated for many years, and the objections boil down to four basic arguments:

  • The writings of bad guys are morally tainted.
  • We should not reward bad guys for bad behavior.
  • The writings of bad guys provide “how to” information for the next generation of bad guys.
  • The writings of bad guys glamorize bad behavior and should be eschewed along with other attractive nuisances (to steal a term from the legal community).

If moral taint doesn’t stop scholars from reading Mein Kampf, there’s no reason to let it stop us from reading the works of lesser criminals. Fundamentally, any writing that gives the good guys insight into the behavior of the bad guys is useful.

In the case of black hat computer adventurers, there’s no legitimate employment, so a book’s economic importance to the bad guy might be quite significant. On balance, however, this is a red herring. Negligibly few books are popular enough to change their authors’ fortunes. Most books enjoy no more than modest success that, in the best case, produces a few hundred or perhaps a few thousand dollars for the author. This isn’t enough to make a real behavioral difference. Moreover, if a book becomes incredibly successful, it’s likely that the book’s value to society outweighs the harm that comes from rewarding the bad guy. A more subtle argument is that bad guys write books to market their skills for later employment as security experts. This argument is similarly bogus because it’s really “moral taint” in disguise. Without getting into an imponderable debate on ethics, it comes down to the assertion that a bad guy can never be reformed and that skills learned from bad behavior should never be used for gain.

The third argument — that bad-guy writing passes evil skills on to future bad guys — falls apart similarly on deeper analysis. It reduces to the old security-through-obscurity chestnut, which our community has been at the forefront of rebutting. Besides, cybercrime is a fast-paced arms race, and most of last week’s tools and techniques are ineffective and irrelevant this week. Of course, the more general techniques that bad guys use to develop attacks are as valuable to defenders as they are to attackers.

The last argument (about attractive nuisance) is an interesting one. The world of cybercriminal-authored books clearly breaks into two parts: those whose authors have been caught and convicted and those whose authors have not. All the bad-guy books I can think of have been written by convicted criminals. Books written by unconvicted criminals lack, to put it delicately, a certain credibility, wouldn’t you say? After all, it’s hard to believe that an uncaught and unconvicted bad guy would reveal all the vulnerabilities he knew. And if you’re willing to trade time in jail and the permanent status of a convicted criminal for the dubious chance at fame that writing a true cybercrime book brings, then you probably already have severe problems.

Most fundamentally, however, the department editors noted that the book they were refusing to review was uninformative and badly written. This makes the book a waste of time by violating my rule that bad-guy books should be “useful and well written” to be worth reading. So if you hear about a good book by a bad guy, by all means read it.

From the Editors: Cyberassault on Estonia

[This editorial was published originally in IEEE Security & Privacy, Volume 5, Number 4, July/August 2007.]

Estonia recently survived a massive distributed denial-of-service (DDoS) attack that came on the heels of the Estonian government’s relocation of a statue commemorating Russia’s 1940s wartime role. This action inflamed the feelings of the substantial Russian population in Estonia, as well as those of various elements in Russia itself.
Purple prose then boiled over worldwide, with apocalyptic announcements that a “cyberwar” had been unleashed on the Estonians. Were the attacks initiated by hot-headed nationalists or by a nation state? Accusations and denials have flown, but no nation state has claimed authorship.

It’s not really difficult to decide whether this was cyberwarfare or simple criminality. Current concepts of war require people in uniforms or a public declaration. There’s no evidence that such was the case. In addition, there’s no reason to believe that national resources were required to mount the attack. Michael Lesk’s piece on the Estonia attacks in this issue (see the Digital Protection department on p. 76) includes estimates that, at current botnet leasing prices, the entire attack could have been accomplished for US$100,000, a sum so small that any member of the upper middle class in Russia, or elsewhere, could have sponsored it.

Was there national agency? It’s highly doubtful that Russian President Vladimir Putin or anyone connected to him authorized the attacks. If any Russian leader had anything to say about the Estonians, it was more likely an intemperate outburst like Henry II’s exclamation about Thomas Becket, “Will no one rid me of this troublesome priest?”

We can learn from this, however: security matters, even for trivial computers. A few tens of thousands of even fairly negligible PCs, when attached by broadband connections to the Internet and commanded in concert, can overwhelm all modestly configured systems—and most substantial ones.
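
To see why, consider a rough back-of-the-envelope calculation in Python; the figures are illustrative assumptions, not numbers reported from the Estonia incident.

# Aggregate flood from a modest botnet (all figures assumed for illustration).
bots = 30_000                 # "a few tens of thousands" of compromised PCs
upstream_kbps = 256           # assumed per-PC broadband upstream capacity, circa 2007
aggregate_gbps = bots * upstream_kbps / 1_000_000
print(f"Aggregate flood: roughly {aggregate_gbps:.1f} Gbps")   # about 7.7 Gbps
# Far more traffic than a modestly configured server or its access link can absorb.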

Engineering personal systems so that they can’t be turned into zombies is a task that requires real attention. In the meantime, the lack of quality-of-service facilities in our network infrastructure will leave our systems and services vulnerable to future botnet attacks. Several avenues are available to address the weaknesses in our current systems, and we should be exploring all of them. Faced with epidemic disease, financial panic, and other mass threats to the common good, we’re jointly and severally at risk and have a definite and legitimate interest in seeing to it that the lower limits of good behavior aren’t violated.

From the Estonia attacks, we’ve also learned that some national military institutions are, at present, hard-pressed to defend their countries’ critical infrastructures and services. Historically, military responses to attacks have involved applying kinetic energy to the attacking forces or to the attackers’ infrastructure. But when the attacking force is tens or hundreds of thousands of civilian PCs hijacked by criminals, what is the appropriate response? Defense is left to the operators of the services and of the infrastructure, with the military relegated to an advisory role—something that both civilians and military must find uncomfortable. Of course, given the murky situations involved in cyberwar, we’ll probably never fully learn what the defense establishments could or did do.

Pundits have dismissed this incident, arguing that this is a cry of “wolf!” that should be ignored (see www.nytimes.com/2007/06/24/weekinreview/24schwartz.html). Although it’s true that we’re unlikely to be blinded to an invasion by the rebooting of our PCs, it’s naïve to suggest that our vulnerability to Internet disruptions has passed its peak. Cyberwar attacks, as demonstrated in 2003 by Slammer, have the potential to disable key infrastructures. To ignore that danger is criminally naïve. Nevertheless, all is not lost.
Conclusion

Events like this have been forecast for several years, and as of the latest reports, there were no surprises in this attack. The mobilization of global expertise to support Estonia’s network defense was heartening and will probably be instructive to study. Planners of information defenses and drafters of future cyberdefense treaties should be contemplating these events very carefully. This wasn’t the first such attack—and it won’t be the last.

From the Editors: Insecurity through Obscurity

[This editorial was published originally in IEEE Security & Privacy, Volume 4, Number 5, September/October 2006.]

Settling on a design for a system of any sort involves finding a workable compromise among functionality, feasibility, and finance. Does it do enough of what the sponsor wants? Can it be implemented using understood and practical techniques? Is the projected cost reasonable when set against the anticipated revenue or savings?
In the case of security projects, functionality is generally stated in terms of immunity or resistance to attacks that seek to exploit known vulnerabilities. The first step in deciding whether to fund a security project is to assess whether its benefits outweigh the costs. This is easy to state but hard to achieve.

What are the benefits? Some set of exploits will be thwarted. But how likely would they be to occur if we did nothing? And how likely will they be to occur if we implement the proposed remedy? What is the cost incurred per incident to repair the damage if we do nothing? Armed with the answers to these often unanswerable questions, we can get some sort of quantitative handle on the benefits of implementation in dollars-and-cents terms.

What are the costs? Specification, design, implementation, deployment, and operation of the solution represent the most visible costs. What about the efficiency penalty that stems from the increased operational complexity the solution imposes? That penalty is an opportunity cost: production you might have achieved had you not implemented the solution.
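
Put in dollars-and-cents terms, the funding decision reduces to a comparison of expected values. The minimal Python sketch below shows the shape of that comparison; every number in it is a hypothetical placeholder, since (as the next paragraphs note) real incident data is scarce.

# A back-of-the-envelope benefit/cost comparison (all figures hypothetical).
def expected_annual_loss(incidents_per_year, cost_per_incident):
    # Expected yearly damage: likelihood of incidents times the cost to repair each.
    return incidents_per_year * cost_per_incident

baseline = expected_annual_loss(incidents_per_year=4.0, cost_per_incident=50_000)  # do nothing
residual = expected_annual_loss(incidents_per_year=0.5, cost_per_incident=50_000)  # with the remedy

benefit = baseline - residual          # value of the exploits thwarted, per year
cost = 60_000 + 25_000                 # build and deploy, plus the yearly efficiency penalty

print(f"Expected benefit: ${benefit:,.0f}/yr versus cost: ${cost:,.0f}/yr")
print("Fund it" if benefit > cost else "Don't fund it")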

In the current world of security practice, it’s far too common, when faced with vast unknowns about benefits, to fall back on one of two strategies: either spend extravagantly to protect against all possible threats or ignore threats too expensive to fix. Protection against all possible threats is an appropriate goal when securing nuclear weapons or similar assets for which failure is unacceptable, but for most other situations, a more pragmatic approach is indicated.

Unfortunately, as an industry, we’re afflicted with a near-complete lack of quantitative information about risks. Most of the entities that experience attacks and deal with the resultant losses are commercial enterprises concerned with maintaining their reputation for care and caution. This leads them to conclude that disclosing factual data can assist their attackers and provoke anxiety in their clients. The lack of data-sharing arrangements has resulted in a near-complete absence of incident documentation standards, so even if organizations want to compare notes, they face a painful exercise in converting apples to oranges.

If our commercial entities have failed, is there a role for foundations or governments to act? Can we parse the problem into smaller pieces, solve them separately, and make progress that way? Other fields, notably medicine and public health, have addressed this issue more successfully than we have. What can we learn from their experiences? Doctors almost everywhere in the world are required to report the incidence of certain diseases and have been for many years. California’s SB 1386, which requires disclosure of computer security breaches, is a fascinating first step, but it’s just that—a first step. Has anyone looked closely at the public health incidence reporting standards and attempted to map them to the computer security domain? The US Federal Communications Commission (FCC) implemented telephone outage reporting requirements in 1991 after serious incidents and in 2004 increased their scope to include all the communications platforms it regulates. What did it learn from those efforts, and how can we apply them to our field?

The US Census Bureau, because it’s required to share much of the data that it gathers, has developed a relatively mature practice in anonymizing data. What can we learn from the Census Bureau that we can apply to security incident data sharing? Who is working on this? Is there adequate funding?

Conclusion

These are all encouraging steps, but they’re long in coming and limited in scope. Figuring out how to gather and share data might not be as glamorous as cracking a tough cipher or thwarting an exploit, but it does have great leverage.

From the Editors: The Impending Debate

[This editorial was published originally in IEEE Security & Privacy, Volume 4, Number 2, March/April 2006.]

There’s some scary stuff going on in the US right now. President Bush says that he has the authority to order, without a warrant, eavesdropping on telephone calls and emails from and to people who have been identified as terrorists. The question of whether the president has this authority will be resolved by a vigorous debate among the government’s legislative, executive, and judicial branches, accompanied, if history is any guide, by copious quantities of impassioned rhetoric and perhaps even the rending of garments and tearing of hair. This is as it should be.

The president’s assertion is not very far, in some ways, from Google’s claims that although its Gmail product examines users’ email for the purpose of presenting to them targeted advertisements, user privacy isn’t violated because no natural person will examine your email. The ability of systems to mine vast troves of data for information has now arrived, but policy has necessarily lagged behind. The clobbering of Darpa’s Total Information Awareness initiative (now renamed Terrorism Information Awareness; http://searchsecurity.techtarget.com/sDefinition/0,,sid14_gci874056,00.html) in 2004 was a lost opportunity to explore these topics in a policy debate, an opportunity we may now regain. Eavesdropping policy conceived in an era when leaf-node monitoring was the only thing possible isn’t necessarily the right one in this era of global terrorism. What the correct policy should be, however, requires deep thought and vigorous debate lest the law of unintended consequences take over.

Although our concerns in IEEE Security & Privacy are perhaps slightly less momentous, we are, by dint of our involvement with and expertise in the secure transmission and storage of information, particularly qualified to advise the participants in the political debate about the realities and the risks associated with specific assumptions, such as the risks that data mining presents. As individuals, we’ll be called on to inform and advise both the senior policymakers who will engage in this battle and our friends and neighbors who will watch it and worry about the outcome. It behooves us to do two things to prepare for this role: first, take the time now to inform ourselves of the technical facts, and second, analyze the architectural options and their implications.

Unlike classical law enforcement wiretapping technology (covered in depth in S&P’s November/December 2005 issue), which operates at the leaves of the communication interconnection tree, this surveillance involves operations at or close to the root. When monitoring at the leaves, only information directed to the specific leaf node is subject to scrutiny. Monitoring at the root, by contrast, makes it difficult to see only the communications of specific players: filtering out the traffic that isn’t being monitored necessarily involves looking at it. And when examining a vast amount of irrelevant information, we haven’t yet demonstrated a clear ability to separate signal (terrorist communication, in this case) from noise (innocuous communication). By tracking down false leads, we waste expensive skilled labor, and might even taint innocent people with suspicion that could feed hysteria in some unfortunate future circumstance.
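
A small illustrative calculation makes the signal-to-noise problem concrete; every rate below is an assumption chosen for the example, not a figure from any real monitoring program.

# Hypothetical false-positive arithmetic for filtering at the root.
daily_messages = 1_000_000_000     # assumed traffic crossing the monitored backbone each day
true_targets = 100                 # assumed genuinely suspect communications in that traffic
false_positive_rate = 0.0001       # assumed: filter wrongly flags 0.01% of innocent messages
detection_rate = 0.99              # assumed: filter catches 99% of the real targets

flagged_innocent = (daily_messages - true_targets) * false_positive_rate
flagged_targets = true_targets * detection_rate

print(f"Innocent messages flagged per day: {flagged_innocent:,.0f}")   # about 100,000
print(f"Target messages flagged per day:   {flagged_targets:,.0f}")    # about 99
# Even a very accurate filter buries the signal under false leads, each of which
# consumes expensive skilled labor to run down.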

Who’s involved in the process of examining communications and what are the possible and likely outcomes of engaging in this activity? The security and privacy community has historically developed scenario analysis techniques in which we hypothesize several actors, both well- and ill-intentioned, and contemplate their actions toward one another as if they were playing a game. Assume your adversary makes his best possible move. Now assume you make your best possible response. And so on. In the case of examining communications at the root, we have at least four actors to consider.

One is the innocent communicator whom we’re trying to protect; another is the terrorist whom we’re trying to thwart. The third is the legitimate authority working to protect the innocent from the terrorist, and the fourth, whom we ignore at our peril, is the corrupted authority who, for some unknown reason, is tempted to abuse the information available to him to the detriment of the innocent. We could choose, in recognition of the exigencies of a time of conflict, to reduce our vigilance toward the corrupted authority, but history has taught us that to ignore that possibility puts us and our posterity in mortal peril.

Conclusion

Our community’s challenge in the coming debate is to participate effectively, for we occupy two roles at once. We are technical experts to whom participants turn for unbiased fact-based guidance and insight, and we are simultaneously concerned global citizens for whom this debate is meaningful and important. We must avoid the temptation to use our expertise to bias the debate, but we must also avoid being passive bystanders. We must engage thoughtfully and creatively. We owe this to our many countries, our colleagues, our neighbors, our friends, our families, and ourselves.

From the Editors: There Ain’t No Inside, There Ain’t No Outside …

[This editorial was published originally in IEEE Security & Privacy, Volume 3, Number 5, September/October 2005.]

There ain’t no good guys, there ain’t no bad guys,
There’s only you and me and we just disagree
—”We Just Disagree,” words and music by Jim Krueger

Although Jim Krueger might be right that there are no good guys or bad guys in a romantic disagreement, in computer security, there are definitely good guys and bad guys. What there isn’t, however, is an inside or an outside.

In the early 1990s, before Web browsers became pervasive, corporations owned or licensed all of the information on a corporate computer display—it was inside. Being inside was a proxy for trusted, whereas being outside meant being untrusted … for everything. Security experts warned that these simple rules were too coarse, and that the principles of least privilege and separation of concerns should be adopted, but the advice was largely ignored.

After the Web became pervasive, however, users could reach outside from the corporate network and its computer systems. It became easier and cheaper to share information with clients and suppliers. Corporations embraced these opportunities to speed up their businesses, increase their reach, and cut their costs. Economics pushed businesses to pursue these efficiencies through outsourcing, supply-chain management, and other business innovations, and governments supported them with deregulation. In the process, all this activity eliminated the distinction between inside and outside on corporate networks: “Oh, that application is hosted at a partner colo.” “Yeah, we outsourced our statement and confirm printing.” “That server is owned by a data provider—we let them station it in our machine room.”

Businesses increased their reliance on external partners, and the number of exceptions to the “only employees inside the firewall” rule grew rapidly. Nonetheless, we blithely carried on as if the “Tootsie Roll Pop” security model—hard crunchy outside, soft chewy inside—was still meaningful. That’s partly because we didn’t have mature alternatives (a few consultants and vendors notwithstanding) and partly because the threats were still manageable despite the progressive failure of our core approach. Moreover, we still don’t really know how to secure our systems’ components while keeping them robust, flexible, and easy to use.

A few years ago, Stu Feldman of IBM Research observed that when working with one or two people, the problems are computer science, but when working with 1,000 people, the issues are sociology. (Stu claims he actually said “crowd control.”) Translated, this means that adapting our system architectures to an inside-less world involves more than just figuring out how to secure a database server or control access to an application host. It means educating the larger community of people who use and rely on our systems about the implications of architectural choices and the costs and timescales of system migrations. The recent press coverage of major corporations losing large volumes of data contributes to that educational process. One of the larger losses, at CardSystems, involved data held in the course of its work clearing credit-card transactions. CardSystems didn’t own the data it held, and the hacking incident exposed a flaw in its data-retention policies: if it hadn’t held on to information that it didn’t need and wasn’t authorized to retain, the loss suffered when its system was cracked would have had much less impact.

Although we might scorn the subsequent press coverage as superficial and sensational, it tells the public that data loss is happening and that we all better be concerned. Suddenly, nonspecialists are realizing that such arcane and abstract things as data-management policies and security architectures can and should matter to them.

Conclusion

So, how are we doing? Unfortunately, we’re still too focused on computer science topics and not enough on sociology ones. Ralph Gomory, back when he directed IBM Research, said that real problems, the ones you encounter when you rub shoulders with people out in the world, stimulate the most interesting research advances. What can you do to help? Get out of your office. Collaborate with someone who’s not a computer scientist or engineer. Think about the overall outcomes and interactions of technology, policy, and the behavior of people, and then act accordingly.

From the Editors: What’s in a Name?

[This editorial was published originally in IEEE Security & Privacy, Volume 3, Number 2, March/April 2005.]

“What’s in a name? That which we call a rose
By any other name would smell as sweet;”
—Romeo and Juliet, Act II, Scene ii

In ancient times, when the economy was agrarian and people almost never traveled more than a few miles from their places of birth, most people made do with a single personal name. Everyone you met generally knew you, and if there did happen to be two Percivals in town, people learned to distinguish between “tall Percival” and “short Percival.”

The development of travel and trade increased the number of different people you might meet in a lifetime and led to more complex names. By the Greek classical period, an individual’s name had become a three-part structure including a personal name, a patronymic, and a demotic, which identified the person’s deme—roughly, one’s village or clan.

This represented the end of the line in the evolution of names for several thousand years. During that time, people developed a range of concepts to enrich names with extra capabilities. Letters of introduction enabled travelers to enter society in a distant city almost as if they were locals. Renaissance banking developed the early ancestors of the letter of credit and the bank account, allowing money to be transferred from place to place without the attendant risk of physically carrying the gold. In response to these innovations, clever people invented novel ways to manage their names, for both legitimate and illegitimate purposes, giving us the alias, the doing business as, and the cover name. Americans invented personal reinvention, or at least made it a central cultural artifact, and developed a strong distaste for central management of the personal namespace.

Enter the computer

With the computer era came the user ID: first one, then two, and then infinity. With the Internet boom, we got retail e-commerce and the proliferation of user IDs and passwords. The venerable letter of introduction reemerged as an identity certificate, and the bank account evolved into dozens of different glittering creatures. While enabling online services to an increasingly mobile population, this explosion in user IDs created inconvenience and risk for people and institutions. As shopping and banking moved online, identity theft went high tech. We responded with two- and three-factor authentication, public key infrastructure, cryptographically strong authentication, and single-sign-on technologies such as Microsoft’s Passport and federated authentication from the Liberty Alliance.

We’re currently trapped between Scylla and Charybdis. On one side, civil libertarians warn that a centralized authentication service, with its concentration of power and of operational and systemic risk, represents an unacceptable threat to a free society. On the other, we have a chaotic morass of idiosyncratic user ID and password implementations that inconvenience people and invite attack.

The King is dead! Long live the King!

With its controversial Passport technology, Microsoft attempted to address the visible need by offering a single user ID and password framework to sites across the Internet. With eBay’s recent defection, it’s increasingly clear that Passport isn’t winning over large e-commerce sites. Ultimately, Passport failed commercially not because of competitors’ hostility or civil libertarians’ skepticism—or even because of the technical problems in the software—but rather because enterprises proved unwilling to cede management of their clients’ identities to a third party. This is an important lesson, but not a reason to give up on the effort to create a usable framework.
Who or what will step up and make the next attempt to meet the need? Did we learn enough from the debate about Passport to clearly identify the salient characteristics of what comes next? Have we made enough progress toward a consensus on the need for “a” solution that the next company up to bat will be willing to hazard the amount of treasure that Microsoft spent on Passport? Now is the time for a vigorous dialogue to get clarity. We aren’t likely again to see a comparable exercise of courage, however misguided, so it behooves us to reduce the risk for the next round of competitors.
A successful Internet identity service framework must admit multiple independent authorities. Some industries have a strong need to establish a common identity and will insist on controlling the credential. Some governments will decide to do likewise, whereas others will leave it to the private sector. But identity services shouldn’t be tied to any individual vendor, country, or technology. They should allow the dynamic assembly of sets of privileges, permitting participating systems to assign rights and augment verification requirements.
Thus, a level of proof sufficient for my ISP to permit me to send a social email could be overlaid with an extra layer by my bank before allowing me to transfer money. It should be possible to migrate my identity from one ISP to another without losing all of my privileges, although I might have to re-verify them. It should be possible to easily firewall segments of my identity from others so that losing control over one component doesn’t result in the loss of the others.
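
To make the layering idea concrete, here is a hypothetical sketch in Python; the class and method names are invented for illustration and don’t correspond to any real identity framework.

# Hypothetical sketch of firewalled identity segments with per-relying-party assurance overlays.
from dataclasses import dataclass, field

@dataclass
class IdentitySegment:
    name: str               # e.g., "email" or "banking"
    issuer: str             # the independent authority vouching for this segment
    assurance_level: int    # 1 = weak proof, 3 = strong proof

@dataclass
class PortableIdentity:
    subject: str
    segments: dict = field(default_factory=dict)

    def add_segment(self, seg: IdentitySegment):
        self.segments[seg.name] = seg

    def authorize(self, segment: str, required_level: int) -> bool:
        # A relying party (ISP, bank) overlays its own requirement on the base identity.
        seg = self.segments.get(segment)
        return seg is not None and seg.assurance_level >= required_level

me = PortableIdentity("alice")
me.add_segment(IdentitySegment("email", issuer="example-isp", assurance_level=1))
me.add_segment(IdentitySegment("banking", issuer="example-bank", assurance_level=3))

print(me.authorize("email", required_level=1))     # True: enough proof to send a social email
print(me.authorize("email", required_level=3))     # False: the bank's extra layer demands more
print(me.authorize("banking", required_level=3))   # True: the banking segment carries stronger proof
# Losing control of the "email" segment wouldn't compromise "banking": the segments are firewalled.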

Conclusion

This can’t be all that’s required, or we wouldn’t still be scratching our heads about it at this late date. It’s clear that there are thorny policy issues in addition to some very challenging technical questions. Getting to a workable Internet identity framework will take hard work, so let’s get going.

A Fine Tunnel

Some years ago I went skiing with some Italian friends.  I flew to their home in Pisa and we drove north along the west coast of the country to Sestriere, at the triple juncture of Switzerland, France, and Italy, a lovely ski area.

The road we took went through the area of Genoa.  This particular road is a high-speed limited access highway, probably the A12 according to modern maps.  In this section the coast is very steep, almost cliffs running down to the compact city of Genoa.  One section of the road is particularly dramatic, an alternating sequence of bridges and tunnels through the steeply inclined terrain above Genoa.

Anyway, as we drove north we went through one of the longer tunnels.  As we were in the middle of this particular tunnel I saw a sign that gave me a peculiar Alice in Wonderland sensation.  The sign, a professionally executed one with all the hallmarks of the highway system, showed an outline of a cup of coffee, a little wisp of steam proceeding from its top.  With the picture of the cup of coffee was the text, “A fine tunnel.”

Yes, I reflected for a moment, it is indeed a fine tunnel, but why have such a sign?  Shortly thereafter I realized that I had read the sign in the wrong language.  It was not praising the qualities of the tunnel in English but rather alerting tired drivers to the fact that there was a place to get a cup of coffee at the end of the tunnel (read it aloud as ‘ah feen-eh toon-nel’ to hear how it would sound to Italians).

My host, the driver of the car, and I laughed at my initial reaction.  A fine tunnel, indeed!

From the Editors: Charge of the Light Brigade

[Originally published in the January/February 2008 issue (Volume 6 number 1) of IEEE Security & Privacy magazine.]

In 1970, the late Per Brinch Hansen wrote a seminal article (“The Nucleus of a Multiprogramming System”) that articulated and justified what today we call policy/mechanism separation. He introduced the concept in the context of an operating system’s design at a time when experts felt we lacked a clear understanding of what the ultimate shape of operating systems would be. The concept, like other powerful memes, was so compelling that it took on a life of its own and is now an article of faith in CS education—taught without reference to the original context.

The idea isn’t original to computer science—it has existed for thousands of years. In martial terms, it’s reflected in the popular paraphrase of Alfred, Lord Tennyson’s poem, The Charge of the Light Brigade: “ours is not to reason why; ours is but to do and die.” Separation of policy and mechanism has become an article of faith in our field because it’s so powerful at helping us distinguish between the things that we can decide now and the things that we might need to change later. The key is identifying the knobs and dials of potential policy choices during system design and implementing a flexible way to allow those choices to be bound late.
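
As a toy illustration of the principle (and only that; it has nothing to do with Brinch Hansen’s actual nucleus design), the Python sketch below separates a dispatching mechanism from the late-bound policy that decides which job runs next.

# Mechanism: dispatch a job. Policy: decide which job, supplied as a late-bound parameter.
from typing import Callable, List

Job = dict                                   # e.g., {"name": "backup", "priority": 1, "submitted": 1}
Policy = Callable[[List[Job]], Job]

def dispatch(queue: List[Job], choose: Policy) -> Job:
    # The mechanism never embeds a scheduling decision; it defers to the policy.
    job = choose(queue)
    queue.remove(job)
    return job

# Two interchangeable policies, bound at the call site rather than at design time.
fifo = lambda q: min(q, key=lambda j: j["submitted"])
highest_priority = lambda q: max(q, key=lambda j: j["priority"])

queue = [{"name": "backup", "priority": 1, "submitted": 1},
         {"name": "alert",  "priority": 9, "submitted": 2}]
print(dispatch(queue[:], fifo)["name"])               # backup
print(dispatch(queue[:], highest_priority)["name"])   # alert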

In this issue of IEEE Security & Privacy, the article “Risking Communications Security: Potential Hazards of the Protect America Act” (p. 24) explores some of the hazards associated with a blind application of this principle to large infrastructures such as the Internet and the telephone system. Although this analysis is conducted in terms of a specific US law, it raises universal questions faced by free societies when considering the tension between individual privacy rights and the collective “right” to security.

What we see at play here are three large policy objectives in conflict: first, allowing the security establishment to scrutinize communications they legitimately believe would-be terrorists could use; second, protecting the privacy of the innocent people who use communications networks; and third, safeguarding the communications systems themselves so that their continuing operation isn’t jeopardized. The article cites several recent examples from around the world in which the introduction of ill-considered monitoring systems has led to disastrous unintended consequences.

To date, the public debate on the Protect America Act has focused on the first two issues: the trade-off between security and privacy. Whether a piece of legislation drafted in haste by one of the most divided US congresses in history could have found a wise balance will be learned only retrospectively. What the authors of “Risking Communications Security” clearly demonstrate is that the third policy objective certainly isn’t addressed in the law.

But that’s okay, you might be tempted to say: policy should be set independently from how it is implemented. As Tennyson illustrates, it’s a good theory—but in practice, the unintended consequences can be shocking. One of the preconditions to doing a good job of separating policy from mechanism is that the knobs and dials offered to policy writers can be implemented. As the article’s authors observe, the law assumes that communications networks can deliver high-quality information about the geographic location of end points. This isn’t easy and might not be possible for many modes of communication, particularly for voice over IP (VoIP), cell phones, and WiFi connections.

Leaving issues to be addressed later, as this law does, is a time-honored tradition in legislation. The Protect America Act expires in February 2008, so there will be at least one chance to renegotiate the terms. This means that this specific act won’t affect capital spending by infrastructure builders in the US until after the rewrite. It gives interested people in the US and elsewhere a chance to mitigate the systemic risk by influencing the next rewrite, assuming that a more rational political conversation emerges in the US Congress sometime in the next few years. This article should be a key input to that conversation.

Hornblower and Aubrey

How much did C. S. Forester’s successful Horatio Hornblower series have to do with the launching of Patrick O’Brian’s even more successful Aubrey/Maturin series?

Back in the late 1980s a friend introduced me to Patrick O’Brian’s Aubrey/Maturin novels. I started Master and Commander but did not get excited about the story and abandoned the book after a chapter or two. Some years later, in an airport, about to board a plane and desperate for something to read, I picked up a copy of The Surgeon’s Mate. This time I was hooked. I devoured the first seventeen novels over the next few years, and then hung around the bookstore door impatiently as the rest were published, snatching first editions of the final three novels practically from the hands of the bookbinders.

After O’Brian’s death in 2000 I despaired. No more stories of life aboard wooden ships. Finally, I decided to try C. S. Forester’s Horatio Hornblower stories. I had rejected the recommendation that I read Forester in my teens, partly because I thought the name Hornblower particularly silly and feared that the books might be satires or farces or worse. With low expectations I found a copy of Beat to Quarters (the US title of the first novel, published in the UK as The Happy Return). To my delight, the book was enjoyable.

The dynamic of Forester’s writing was quite different from O’Brian’s, of course. Forester was never the scholar of the era in terms of science, diplomacy, cuisine, and fashion that O’Brian proved himself to be, so the stories are not as rich and textured. Beyond that, Hornblower was a solitary creature, always alone in command, whereas Aubrey always had Maturin as a friend and confidant, giving the reader a perspective that Forester could never provide. We are aware of Hornblower’s internal agonies at times as he wrestles with decisions, but we never hear him articulate issues or explain himself.

Ironically, it’s Aubrey who seems the more self-assured of the two fictional captains. I’m not sure whether this is the result of the divergent styles of the two writers or, paradoxically, a consequence of the insight into Aubrey’s mind that the conversations with Maturin provide.

Anyway, after reading the Hornblower stories I reflected on the relationship between the two series. There is some evidence, purely circumstantial, to suggest that the proposal that O’Brian write the Aubrey/Maturin stories was triggered by Forester’s passing. In the rest of this blog post I’ll outline the evidence.

In the author’s note that introduces The Far Side of the World, the tenth Aubrey/Maturin novel, O’Brian writes, referring to himself in the third person, “Some ten or eleven years ago a respectable American publisher suggested that he should write a book about the Royal Navy of Nelson’s time …” O’Brian was already known as a good writer with a particular interest in the era, so it would have been natural to suggest that he try his hand at writing such books.

The chronology provides some even stronger support for the hypothesis. Forester died in 1966 and the final Hornblower stories were published posthumously the next year. If the 1984 author’s note had been penned a few years before, then it might have referred to the time between 1967 and 1970, when Master and Commander appeared for the first time.

So while I have seen no documentary evidence to prove that the project was urged on O’Brian by a publisher mindful of the success of Forester’s Hornblower books, it is no great stretch to connect the easily available dots and conclude that O’Brian was asked to fill the gap left by Forester’s passing.

Patch Management – Bits, Bad Guys, and Bucks!

(This article was originally published in 2003 by Secure Business Quarterly, a now-defunct publication.  Not having an original copy handy and not being able to refer people to the original site, I have retrieved a copy from the Internet Archive Wayback Machine (dated 2006 in their archive).  The text of the original article is reproduced here for convenience.)

After the flames from Slammer’s attack were doused and the technology industry caught up on its lost sleep, we started asking questions. Why did this happen? Could we have prevented it? What can we do to keep such a thing from happening again?

These are questions we ask after every major security incident, of course. We quickly learned that the defect in SQL Server had been identified and patches prepared for various platforms more than six months before, so attention turned to system administrators. Further inquiry, however, shows that things are more complex.

Several complicating factors conspired to make patching this system problematic. First of all, there were several different patches out, none of which had been widely or well publicized. In addition, there were confusing version incompatibilities that turned the patching of some systems into much larger endeavors, as chains of dependencies had to be unraveled and entire sets of patches applied and tested. And finally, to add insult to injury, at least one patch introduced a memory leak into SQL Server.

As if that weren’t enough, MSDE includes an invisible SQL Server.  MSDE ships with Visual Studio, which made that product vulnerable even though it included neither an explicit SQL Server license nor any visibility to a DBA. That shouldn’t have added risk, except that some software products shipped with MSDE and other no-longer-needed parts of the development environment still in place. As we all know, many software products are shipped with development artifacts intertwined with the production code because disk space is cheaper than keeping track of and subsequently removing all of the trash lying around in the development tree. And those development tools are really useful when tech support has to diagnose a problem.

To compound the challenge, patches in general can’t be trusted without testing. A typical large environment runs multiple versions of desktop operating systems, say NT4, 2000, XP Home, and XP Pro. If the patch addresses multiple issues across several versions of a common application, you’re talking about a product that has about fifty configuration permutations. Testing that many cases represents significant time and cost.
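
One plausible way to arrive at a number of that magnitude (the breakdown below is a guess for illustration, not the author’s actual accounting):

# Rough combinatorics behind "about fifty configuration permutations" (assumed breakdown).
os_versions = 4            # NT4, 2000, XP Home, XP Pro
app_versions = 4           # assumed versions of the common application still in the field
issues_addressed = 3       # assumed distinct issues bundled into the patch
configurations_to_test = os_versions * app_versions * issues_addressed
print(configurations_to_test)    # 48 -- each one a test case before the patch can be trusted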

Finally, there’s the sheer volume of patches flowing from the vendor community. There’s no easy way for an administrator to tell whether a particular patch is ‘really serious,’ ‘really, really serious,’ or ‘really, really, really serious.’ The industry hasn’t yet figured out how to normalize all of the verbiage. Even so, knowing that a weakness exists doesn’t give any insight into how virulent a particular exploit of that weakness might prove to be. Slammer was remarkably virulent, but its patch went out to the systems community along with thousands of remedies for other weaknesses that haven’t been exploited nearly so effectively.

Unfortunately, an aggressive strategy of applying all patches is uneconomical under the industry’s current operating model. Let’s look at some numbers. These are benchmark figures broadly typical of costs and performance across the industry, not specific to any individual company. The state of automation in the desktop OS world has improved dramatically in the last ten years. A decade ago, an upgrade to a large population of desktop machines required a human visit to each machine. Today the automated delivery of software is dramatically better, but not where it ultimately needs to be. Let’s say, for the purpose of argument, that automated patch installation for a large network is 90% successful.  (A colleague suggests that today a more realistic number is 80%, “even assuming no restriction on network capacity and using the latest version of SMS”; he characterized 90% as the “go out and get drunk” level of success.)  A person must visit each of the 10% of machines for which the automated installation failed, figure out what went wrong, and install the patch by hand. For a large corporate network with, say, 50,000 machines to be patched, that translates to 5,000 individual visits. A benchmark number for human support at the desktop is about $50 per visit. Thus, a required patch in a modern environment translates into a $250,000 expenditure. That’s not a trivial amount of money, and it makes the role of a system manager, who faces tight budgets and skeptical customers, even more challenging.

The costs aside, how close to 100% coverage is required to close a loophole? Informal comments from several CISOs suggest that Slammer incapacitated corporate networks with roughly 50,000 hosts by infecting only about two hundred machines. That’s 0.4%. With NIMDA, it was worse: one enterprise disconnected itself from the Internet for two weeks because it had two copies of NIMDA that were actually triggered. What can we do to make our systems less vulnerable and reduce both the probability of another Slammer incident and, more importantly, the harm that such an incident threatens? We can work together in the industry to improve the automated management of systems. Every 1% improvement in the performance of automated patch installation systems translates directly into $25,000 in cash savings for the required patches in our example. An improvement of 9%, from 90% to 99%, translates into a savings of $225,000 for each patch that must be distributed. Do that a few times, and, as Everett Dirksen noted, “pretty soon you’re talking about real money.”
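
Here is that arithmetic written out as a short Python sketch, using the benchmark figures from the example above (50,000 machines, 90% automated success, $50 per desk-side visit):

# Patch deployment cost at a given level of automation success.
machines = 50_000
auto_success = 0.90           # fraction of machines patched successfully by automation
cost_per_visit = 50           # benchmark cost of a human desk-side visit, in dollars

manual_visits = machines * (1 - auto_success)       # 5,000 machines need a visit
cost_per_patch = manual_visits * cost_per_visit     # $250,000 per required patch
print(f"Cost per patch at {auto_success:.0%} automation: ${cost_per_patch:,.0f}")

# Each 1% improvement saves 500 visits, or $25,000 per patch; going from 90% to 99% saves $225,000.
for success in (0.91, 0.99):
    saved = machines * (success - auto_success) * cost_per_visit
    print(f"At {success:.0%} automation: ${saved:,.0f} saved per patch")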

Improving the effectiveness of patch application requires that we improve both our software packaging and our patch distribution and execution automation. After we improve packaging and distribution, we can figure out a way to easily and quickly tell what components reside on each system, ideally by asking the system to tell us. Databases get out of date but the system itself usually won’t lie. We can build automated techniques to identify all of the patches from all relevant vendors that need to be applied to a given system — and then apply them. We can either simplify our system configurations, which has obvious benefits, or figure out ways to ensure that components are better behaved, which reduces the combinatorial complexity of applying and testing the applied patches.

The methods and practices of the security industry, particularly the high technology security world, have for years been derived from those developed for national security problems. These are problems for which the cost of failure is so enormous, as in the theft or misuse of nuclear weapons, that failure is not an option. For this class of problem, the commercial practice of balancing risk and cost is impossible, except in the vacuous limiting case in which cost is infinite. Now we are working through practical security management in a commercial environment, and we are beginning to get our hands around some of the quantitative aspects of the problems we face. If getting a patch out to all computers in our environment will cost us a quarter of a million dollars and we have X patches per year, then we have to weigh that cost against the cost of the harm suffered and the cleanup expense incurred if we don’t distribute all of the patches. That calculus tells us to spend some money, though not an arbitrarily large amount, on improving the quality of our automated patch distribution and application processes, with an objective of absolute 100% coverage for automatic updates. It tells us to work for a better system of quantification of threat severity. It tells us to spend effort on strengthening our incident response capabilities.