From the Editors: Insecurity through Obscurity

[This editorial was published originally in “Security & Privacy” Volume 4 Number 5 September/October 2006]

Settling on a design for a system of any sort involves finding a workable compromise among functionality, feasibility, and finance. Does it do enough of what the sponsor wants? Can it be implemented using understood and practical techniques? Is the projected cost reasonable when set against the anticipated revenue or savings?
In the case of security projects, functionality is generally stated in terms of immunity or resistance to attacks that seek to exploit known vulnerabilities. The first step in deciding whether to fund a security project is to assess whether its benefits outweigh the costs. This is easy to state but hard to achieve.

What are the benefits? Some set of exploits will be thwarted. But how likely would they be to occur if we did nothing? And how likely will they be to occur if we implement the proposed remedy? What is the cost incurred per incident to repair the damage if we do nothing? Armed with the answers to these often unanswerable questions, we can get some sort of quantitative handle on the benefits of implementation in dollars-and-cents terms.

What are the costs? Specification, design, implementation, deployment, and operation of the solution represent the most visible costs. What about the efficiency penalty that stems from the increased operational complexity the solution imposes? This is an opportunity cost: production you might have achieved if you hadn’t implemented the solution.
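To make these questions concrete, here is a small back-of-the-envelope sketch in Python of the benefit/cost comparison they imply. Every number in it is invented purely for illustration; real figures are exactly what we so often lack.

```python
# Hypothetical back-of-the-envelope comparison of a security project's
# benefits and costs. Every number here is invented for illustration.

incidents_per_year_without_fix = 4      # how often would exploits occur if we did nothing?
incidents_per_year_with_fix = 1         # ...and if we implement the proposed remedy?
cost_per_incident = 80_000              # repair cost per incident, in dollars

build_and_deploy_cost = 150_000         # specification, design, implementation, deployment
annual_operating_cost = 30_000          # running the solution
annual_opportunity_cost = 20_000        # production forgone to added operational complexity

annual_benefit = (incidents_per_year_without_fix - incidents_per_year_with_fix) * cost_per_incident
annual_cost = annual_operating_cost + annual_opportunity_cost

# Crude three-year view, ignoring discounting.
years = 3
net = years * (annual_benefit - annual_cost) - build_and_deploy_cost
print(f"Three-year net benefit: ${net:,}")  # positive means the project pays for itself
```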

In the current world of security practice, it’s far too common, when faced with vast unknowns about benefits, to fall back on one of two strategies: either spend extravagantly to protect against all possible threats or ignore threats too expensive to fix. Protection against all possible threats is an appropriate goal when securing nuclear weapons or similar assets for which failure is unacceptable, but for most other situations, a more pragmatic approach is indicated.

Unfortunately, as an industry, we’re afflicted with a near-complete lack of quantitative information about risks. Most of the entities that experience attacks and deal with the resultant losses are commercial enterprises concerned with maintaining their reputation for care and caution. They observe that disclosing factual data can assist their attackers and provoke anxiety in their clients. The lack of data-sharing arrangements has resulted in a near-complete absence of incident documentation standards, so even if organizations want to compare notes, they face a painful exercise in converting apples to oranges.

If our commercial entities have failed, is there a role for foundations or governments to act? Can we parse the problem into smaller pieces, solve them separately, and make progress that way? Other fields, notably medicine and public health, have addressed this issue more successfully than we have. What can we learn from their experiences? Doctors almost everywhere in the world are required to report the incidence of certain diseases and have been for many years. California’s SB 1386, which requires disclosure of computer security breaches, is a fascinating first step, but it’s just that—a first step. Has anyone looked closely at the public health incidence reporting standards and attempted to map them to the computer security domain? The US Federal Communications Commission (FCC) implemented telephone outage reporting requirements in 1991 after serious incidents and in 2004 increased their scope to include all the communications platforms it regulates. What did it learn from those efforts, and how can we apply them to our field?

The US Census Bureau, because it’s required to share much of the data that it gathers, has developed a relatively mature practice in anonymizing data. What can we learn from the Census Bureau that we can apply to security incident data sharing? Who is working on this? Is there adequate funding?

Conclusion

These are all encouraging steps, but they’re long in coming and limited in scope. Figuring out how to gather and share data might not be as glamorous as cracking a tough cipher or thwarting an exploit, but it does have great leverage.

From the Editors: The Impending Debate

[This editorial was published originally in “Security & Privacy” Volume 4 Number 2 March/April 2006]

There’s some scary stuff going on in the US right now. President Bush says that he has the authority to order, without a warrant, eavesdropping on telephone calls and emails from and to people who have been identified as terrorists. The question of whether the president has this authority will be resolved by a vigorous debate among the government’s legislative, executive, and judicial branches, accompanied, if history is any guide, by copious quantities of impassioned rhetoric and perhaps even the rending of garments and tearing of hair. This is as it should be.

The president’s assertion is not very far, in some ways, from Google’s claims that although its Gmail product examines users’ email for the purpose of presenting to them targeted advertisements, user privacy isn’t violated because no natural person will examine your email. The ability of systems to mine vast troves of data for information has now arrived, but policy has necessarily lagged behind. The clobbering of Darpa’s Total Information Awareness initiative (now renamed Terrorism Information Awareness; http://searchsecurity.techtarget.com/sDefinition/0,,sid14_gci874056,00.html) in 2004 was a lost opportunity to explore these topics in a policy debate, an opportunity we may now regain. Eavesdropping policy conceived in an era when leaf-node monitoring was the only thing possible isn’t necessarily the right one in this era of global terrorism. What the correct policy should be, however, requires deep thought and vigorous debate lest the law of unintended consequences take over.

Although our concerns in IEEE Security & Privacy are perhaps slightly less momentous, we are, by dint of our involvement with and expertise in the secure transmission and storage of information, particularly qualified to advise the participants in the political debate about the realities and risks associated with specific assumptions, such as what risks data mining actually presents. As individuals, we’ll be called on to inform and advise both the senior policymakers who will engage in this battle and our friends and neighbors who will watch it and worry about the outcome. It behooves us to do two things to prepare for this role: first, we should take the time now to inform ourselves of the technical facts, and second, we should analyze the architectural options and their implications.

Unlike classical law enforcement wiretapping technology (covered in depth in S&P’s November/December 2005 issue), which operates at the leaves of the communication interconnection tree, this surveillance involves operations at or close to the root. When monitoring information at the leaves, only information directed to the specific leaf node is subject to scrutiny. It’s difficult when monitoring at the root to see only communications involving specific players—monitoring at the root necessarily involves filtering out the communications not being monitored, something that involves looking at them. When examining a vast amount of irrelevant information, we haven’t yet demonstrated a clear ability to separate signal (terrorist communication, in this case) from noise (innocuous communication). By tracking down false leads, we waste expensive skilled labor, and might even taint innocent people with suspicion that could feed hysteria in some unfortunate future circumstance.

Who’s involved in the process of examining communications and what are the possible and likely outcomes of engaging in this activity? The security and privacy community has historically developed scenario analysis techniques in which we hypothesize several actors, both well- and ill-intentioned, and contemplate their actions toward one another as if they were playing a game. Assume your adversary makes his best possible move. Now assume you make your best possible response. And so on. In the case of examining communications at the root, we have at least four actors to consider.

One is the innocent communicator whom we’re trying to protect; another is the terrorist whom we’re trying to thwart. The third is the legitimate authority working to protect the innocent from the terrorist, and the fourth, whom we ignore at our peril, is the corrupted authority who, for some unknown reason, is tempted to abuse the information available to him to the detriment of the innocent. We could choose, in recognition of the exigencies of a time of conflict, to reduce our vigilance toward the corrupted authority, but history has taught us that to ignore this possibility puts us and our posterity in mortal peril.

Conclusion

Our community’s challenge in the coming debate is to participate effectively, for we occupy two roles at once. We are technical experts to whom participants turn for unbiased fact-based guidance and insight, and we are simultaneously concerned global citizens for whom this debate is meaningful and important. We must avoid the temptation to use our expertise to bias the debate, but we must also avoid being passive bystanders. We must engage thoughtfully and creatively. We owe this to our many countries, our colleagues, our neighbors, our friends, our families, and ourselves.

From the Editors: There Ain’t No Inside, There Ain’t No Outside …

[This editorial was published originally in “Security & Privacy” Volume 3 Number 5 September/October 2005]

There ain’t no good guys, there ain’t no bad guys,
There’s only you and me and we just disagree
—”We Just Disagree,” words and music by Jim Krueger

Although Jim Krueger might be right that there are no good guys or bad guys in a romantic disagreement, in computer security, there are definitely good guys and bad guys. What there isn’t, however, is an inside or an outside.

In the 1990s, before Web browsers emerged, corporations owned or licensed all of the information on a corporate computer display—it was inside. Being inside was a proxy for trusted, whereas being outside meant being untrusted … for everything. Security experts warned that these simple solutions were too coarse, and that the principles of least privilege and separation of concerns should be adopted, but everyone ignored this advice.

After the Web became pervasive, however, users could reach outside from the corporate network and its computer systems. It became easier and cheaper to share information with clients and suppliers. Corporations embraced these opportunities to speed up their businesses, increase their reach, and cut their costs. Economics drove businesses to implement these drives to efficiency with outsourcing, supply-chain management, and other business innovations, and governments supported them with deregulation. In the process, all this activity eliminated the distinction between inside and outside on corporate networks: “Oh, that application is hosted at a partner colo.” “Yeah, we outsourced our statement and confirm printing.” “That server is owned by a data provider—we let them station it in our machine room.”

Businesses increased their reliance on external partners, and the number of exceptions to the “only employees inside the firewall” rule grew rapidly. Nonetheless, we blithely carried on as if the “Tootsie Roll Pop” security model—hard crunchy outside, soft chewy inside—was still meaningful. That’s partly because we didn’t have mature alternatives (a few consultants and vendors notwithstanding) and because the threats were still manageable despite the progressive failure of our core approach. Moreover, we still don’t really have any idea how to secure our systems’ components so that they’re secure but still robust, flexible, and easy to use.

A few years ago, Stu Feldman of IBM Research observed that when working with one or two people, the problems are computer science, but when working with 1,000 people, the issues are sociology. (Stu claims he actually said “crowd control.”) Translated, this means that getting our system architectures adapted to an inside-less world involves more than just figuring out how to secure a database server or control access to an application host. It means educating the larger community of people who use and rely on our systems about the implications of architectural choices and the costs and timescales of system migrations. The recent press coverage of major corporations losing large volumes of data contributes to that educational process. One of the larger losses, that suffered by CardSystems, involved data held in the course of its work supporting the clearance of credit-card transactions. CardSystems didn’t own the data it held, and the hacking incident exposed the consequences of a flaw in its data-retention policies. If it hadn’t held on to information that it didn’t need and wasn’t authorized to retain, the loss of information suffered when its system was cracked would have had much less impact.

Although we might scorn the subsequent press coverage as superficial and sensational, it tells the public that data loss is happening and that we all better be concerned. Suddenly, nonspecialists are realizing that such arcane and abstract things as data-management policies and security architectures can and should matter to them.

Conclusion

So, how are we doing? Unfortunately, we’re still too focused on computer science topics and not enough on sociology ones. Ralph Gomory, back when he directed IBM Research, said that real problems, the ones you encounter when you rub shoulders with people out in the world, stimulate the most interesting research advances. What can you do to help? Get out of your office. Collaborate with someone who’s not a computer scientist or engineer. Think about the overall outcomes and interactions of technology, policy, and the behavior of people, and then act accordingly.

From the Editors: What’s in a Name?

[This editorial was published originally in “Security & Privacy” Volume 3 Number 2 March/April 2005]

“What’s in a name? That which we call a rose

By any other name would smell as sweet;”
—Romeo and Juliet, Act II, Scene ii

In ancient times, when the economy was agrarian and people almost never traveled more than a few miles from their places of birth, most people made do with a single personal name. Everyone you met generally knew you, and if there did happen to be two Percivals in town, people learned to distinguish between “tall Percival” and “short Percival.”

The development of travel and trade increased the number of different people you might meet in a lifetime and led to more complex names. By the Greek classical period, an individual’s name had become a three-part structure including a personal name, a patronymic, and a demotic, which identified the person’s deme—roughly, one’s village or clan.

This represented the end of the line in the evolution of names for several thousand years. During that time, people developed a range of concepts to enrich names with extra capabilities. Letters of introduction enabled travelers to enter society in a distant city almost as if they were locals. Renaissance banking developed the early ancestors of the letter of credit and the bank account, allowing money to be transferred from place to place without the attendant risk of physically carrying the gold. In response to these innovations, clever people invented novel ways to manage their names, for both legitimate and illegitimate purposes, giving us the alias, the doing business as, and the cover name. Americans invented personal reinvention, or at least made it a central cultural artifact, and developed a strong distaste for central management of the personal namespace.

Enter the computer

With the computer era came the user ID: first one, then two, and then infinity. With the Internet boom, we got retail e-commerce and the proliferation of user IDs and passwords. The venerable letter of introduction reemerged as an identity certificate, and the bank account evolved into dozens of different glittering creatures. While enabling online services to an increasingly mobile population, this explosion in user IDs created inconvenience and risk for people and institutions. As shopping and banking moved online, identity theft went high tech. We responded with two- and three-factor authentication, public key infrastructure, cryptographically strong authentication, and single-sign-on technologies such as Microsoft’s Passport and federated authentication from the Liberty Alliance.

We’re currently trapped between Scylla and Charybdis. On one side, civil libertarians warn that a centralized authentication service comprising a concentration of power and operational and systemic risk represents an unacceptable threat to a free society. On the other, we have a chaotic morass of idiosyncratic user ID and password implementations that inconvenience people and invite attack.

The King is dead! Long live the King!

With its controversial Passport technology, Microsoft attempted to address the visible need by offering a single user ID and password framework to sites across the Internet. With eBay’s recent defection, it’s increasingly clear that Passport isn’t winning large e-commerce sites. Ultimately, Passport failed commercially not because of competitors’ hostility or civil libertarians’ skepticism—or even because of the technical problems in the software—but rather because enterprises proved unwilling to cede management of their clients’ identities to a third party. This is an important lesson, but not a reason to give up on the effort to create a usable framework.
Who or what will step up and make the next attempt to meet the need? Did we learn enough from the debate about Passport to clearly identify the salient characteristics of what comes next? Have we made enough progress toward a consensus on the need for “a” solution that the next company up to bat will be willing to hazard the amount of treasure that Microsoft spent on Passport? Now is the time for a vigorous dialogue to get clarity. We aren’t likely again to see a comparable exercise of courage, however misguided, so it behooves us to reduce the risk for the next round of competitors.
A successful Internet identity service framework must admit multiple independent authorities. Some industries have a strong need to establish a common identity and will insist on controlling the credential. Some governments will decide to do likewise, whereas others will leave it to the private sector. But identity services shouldn’t be tied to any individual vendor, country, or technology. They should allow the dynamic assembly of sets of privileges, permitting participating systems to assign rights and augment verification requirements.
Thus, a level of proof sufficient for my ISP to permit me to send a social email could be overlaid with an extra layer by my bank before allowing me to transfer money. It should be possible to migrate my identity from one ISP to another without losing all of my privileges, although I might have to re-verify them. It should be possible to easily firewall segments of my identity from others so that losing control over one component doesn’t result in the loss of the others.
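To illustrate what such overlaying might look like, here is a purely hypothetical sketch in Python. The proof levels, services, and identity segments are all invented, and no existing identity framework is implied.

```python
# Purely hypothetical sketch of layered verification requirements and
# firewalled identity segments. All names are invented for illustration.

base_proof = {"password"}  # enough for the ISP to let me send a social email

# Each relying party overlays its own extra requirements on the base level.
requirements = {
    "isp-email": set(),                   # nothing beyond the base proof
    "bank-transfer": {"hardware-token"},  # the bank demands an extra factor
}

# Identity is held in separate segments so losing one doesn't expose the rest.
identity_segments = {
    "social": {"password"},
    "financial": {"password", "hardware-token"},
}

def permitted(segment: str, service: str) -> bool:
    """Does this identity segment satisfy the service's overlaid requirements?"""
    needed = base_proof | requirements[service]
    return needed <= identity_segments[segment]

print(permitted("social", "isp-email"))         # True
print(permitted("social", "bank-transfer"))     # False: the extra layer isn't satisfied
print(permitted("financial", "bank-transfer"))  # True
```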

Conclusion

This can’t be all that’s required, or we wouldn’t still be scratching our heads about it at this late date. It’s clear that there are thorny policy issues in addition to some very challenging technical questions. Getting to a workable Internet identity framework will take hard work, so let’s get going.

A Fine Tunnel

Some years ago I went skiing with some Italian friends.  I flew to their home in Pisa and we drove north along the west coast of the country to Sestriere, a lovely ski area in the Alps near the French border.

The road we took went through the area of Genoa.  This particular road is a high-speed limited-access highway, probably the A12 according to modern maps.  In this section the coast is very steep, almost cliffs running down to the compact city of Genoa.  One section of the road is particularly dramatic: an alternating sequence of bridges and tunnels through the steeply inclined terrain above Genoa.

Anyway, as we drove north we went through one of the longer tunnels.  As we were in the middle of this particular tunnel I saw a sign that gave me a peculiar Alice in Wonderland sensation.  The sign, a professionally executed one with all the hallmarks of the highway system, showed an outline of a cup of coffee, a little wisp of steam proceeding from its top.  With the picture of the cup of coffee was the text, “A fine tunnel.”

Yes, I reflected for a moment, it is indeed a fine tunnel, but why have such a sign?  Shortly thereafter I realized that I had read the sign in the wrong language.  It was not praising the qualities of the tunnel in English but rather alerting tired drivers to the fact that there was a place to get a cup of coffee at the end of the tunnel (read it aloud as ‘ah feen-eh toon-nel’ to hear how it would sound to Italians).

My host, the driver of the car, and I laughed at my initial reaction.  A fine tunnel, indeed!

From the Editors: Charge of the Light Brigade

[Originally published in the January/February 2008 issue (Volume 6 number 1) of IEEE Security & Privacy magazine.]

In 1970, the late Per Brinch Hansen wrote a seminal article (“The Nucleus of a Multiprogramming System”) that articulated and justified what today we call policy/mechanism separation. He introduced the concept in the context of an operating system’s design at a time when experts felt we lacked a clear understanding of what the ultimate shape of operating systems would be. The concept, like other powerful memes, was so compelling that it took on a life of its own and is now an article of faith in CS education—taught without reference to the original context.

The idea isn’t original to computer science—it has existed for thousands of years. In martial terms, it’s reflected in the popular paraphrase of Alfred, Lord Tennyson’s poem, “The Charge of the Light Brigade”: “ours is not to reason why; ours is but to do and die.” Separation of policy and mechanism has become an article of faith in our field because it’s so powerful at helping us distinguish between the things that we can decide now and the things that we might need to change later. The key is identifying the knobs and dials of potential policy choices during system design and implementing a flexible way to allow those choices to be bound late.
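As a concrete illustration of the principle (a minimal sketch of my own, not drawn from Brinch Hansen’s paper), the mechanism below enforces whatever decision it is handed, while the policy that makes the decision is a separate, swappable piece that can be bound as late as we like.

```python
# Minimal sketch of policy/mechanism separation: the mechanism enforces a
# decision; the policy that makes the decision is supplied separately and
# can be replaced without touching the mechanism. All names are invented.

from typing import Callable

Policy = Callable[[str], int]  # maps a user name to a disk quota in megabytes

def enforce_quota(user: str, requested_mb: int, policy: Policy) -> bool:
    """Mechanism: grant or deny a request according to whatever policy is bound."""
    return requested_mb <= policy(user)

# Two interchangeable policies -- the "knobs and dials" bound late.
def flat_policy(user: str) -> int:
    return 500

def generous_policy(user: str) -> int:
    return 5000 if user.startswith("admin") else 1000

print(enforce_quota("alice", 800, flat_policy))      # False
print(enforce_quota("alice", 800, generous_policy))  # True
```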

In this issue of IEEE Security & Privacy, the article “Risking Communications Security: Potential Hazards of the Protect America Act” (p. 24) explores some of the hazards associated with a blind application of this principle to large infrastructures such as the Internet and the telephone system. Although this analysis is conducted in terms of a specific US law, it raises universal questions faced by free societies when considering the tension between individual privacy rights and the collective “right” to security.

What we see at play here are three large policy objectives in conflict: first, allowing the security establishment to scrutinize communications they legitimately believe would-be terrorists could use; second, protecting the privacy of innocent people using communications networks; and third, safeguarding the communications systems themselves so that their continuing operational integrity isn’t jeopardized. The article cites several recent examples from around the world in which the introduction of ill-considered monitoring systems has led to disastrous unintended consequences.

To date, the public debate on the Protect America Act has focused on the first two issues: the trade-off between security and privacy. Whether a piece of legislation drafted in haste by one of the most divided US congresses in history could have found a wise balance will be learned only retrospectively. What the authors of “Risking Communications Security” clearly demonstrate is that the third policy objective certainly isn’t addressed in the law.

But that’s okay, you might be tempted to say: policy should be set independently from how it is implemented. As Tennyson illustrates, it’s a good theory—but in practice, the unintended consequences can be shocking. One of the preconditions to doing a good job of separating policy from mechanism is that the knobs and dials offered to policy writers can be implemented. As the article’s authors observe, the law assumes that communications networks can deliver high-quality information about the geographic location of end points. This isn’t easy and might not be possible for many modes of communication, particularly for voice over IP (VoIP), cell phones, and WiFi connections.

Leaving issues to be addressed later, as this law does, is a time-honored tradition in legislation. The Protect America Act expires in February 2008, so there will be at least one chance to renegotiate the terms. This means that this specific act won’t affect capital spending by infrastructure builders in the US until after the rewrite. It gives interested people in the US and elsewhere a chance to mitigate the systemic risk by influencing the next rewrite, assuming that a more rational political conversation emerges in the US Congress sometime in the next few years. This article should be a key input to that conversation.

Hornblower and Aubrey

How much did C. S. Forester’s successful Horatio Hornblower series have to do with the launching of Patrick O’Brian’s even more successful Aubrey/Maturin series?

Back in the late 1980s a friend introduced me to Patrick O’Brian’s Aubrey/Maturin novels. I started Master and Commander but did not get excited about the story and abandoned the book after a chapter or two. Some years later, in an airport, about to board a plane and desperate for something to read, I picked up a copy of The Surgeon’s Mate. This time I was hooked. I devoured the first seventeen novels over the next few years, and then hung around the bookstore door impatiently as the rest were published, snatching first editions of the final three novels practically from the hands of the bookbinders.

After O’Brian’s death in 2000 I despaired. No more stories of life aboard wooden ships. Finally, I decided to try C. S. Forester’s Horatio Hornblower stories. I had rejected the recommendation that I read Forester in my teens, partly because I thought the name Hornblower particularly silly and feared that the books might be satires or farces or worse. With low expectations I found a copy of Beat to Quarters (the US title of the first novel, published in the UK as The Happy Return). To my delight, the book was enjoyable.

The dynamic of Forester’s writing was quite different from O’Brian’s, of course. Forester was never the scholar of the era in terms of science, diplomacy, cuisine, and fashion that O’Brian proved himself to be, so the stories are not as rich and textured. Beyond that, Hornblower was a solitary creature, always alone in command, whereas Aubrey always had Maturin as a friend and confidant, thus providing the reader with a perspective that Forester could never give. We are aware of Hornblower’s internal agonies at times as he wrestles with decisions, but we never hear him articulate the issues or explain himself.

Ironically, it’s Aubrey who seems the more self-assured of the two fictional captains. I’m not sure if this is the result of the divergent styles of the two writers or is somehow a counterintuitive consequence of the insight into Aubrey’s mind provided by his conversations with Maturin.

Anyway, after reading the Hornblower stories I reflected on the relationship between the two series. There is some evidence, purely circumstantial, that the suggestion to O’Brian that he write the Aubrey/Maturin stories was triggered by Forester’s passing. In the rest of this blog post I’ll outline that evidence.

In the author’s note that introduces The Far Side of the World, the tenth Aubrey/Maturin novel, O’Brian writes, referring to himself in the third person, “Some ten or eleven years ago a respectable American publisher suggested that he should write a book about the Royal Navy of Nelson’s time …” O’Brian was already known as a good writer with a particular interest in the era, so it would have been natural to suggest that he try his hand at writing such books.

The chronology provides even stronger support for the hypothesis. Forester died in 1966, and the final Hornblower stories were published posthumously the next year. If the text of the 1984 author’s note had actually been penned a few years earlier, then “ten or eleven years ago” would point to the period between 1967 and 1970, when Master and Commander appeared for the first time.

So while I have seen no documentary evidence to prove that O’Brian’s opportunity was urged on him by a publisher mindful of the success of Forester’s Hornblower books, it is no great stretch to connect the easily available dots and conclude that O’Brian was asked to fill the gap left by Forester’s passing.

Patch Management – Bits, Bad Guys, and Bucks!

(This article was originally published in 2003 by Secure Business Quarterly, a now-defunct publication.  Not having an original copy handy and not being able to refer people to the original site, I have retrieved a copy from the Internet Archive Wayback Machine (dated 2006 in their archive).  The text of the original article is reproduced here for convenience.)

After the flames from Slammer’s attack were doused and the technology industry caught up on its lost sleep, we started asking questions. Why did this happen? Could we have prevented it? What can we do to keep such a thing from happening again?

These are questions we ask after every major security incident, of course. We quickly learned that the defect in SQL Server had been identified and patches prepared for various platforms more than six months before, so attention turned to system administrators. Further inquiry, however, shows that things are more complex.

Several complicating factors conspired to make successfully patching these systems problematic. First, there were several different patches out, none of which had been widely or well publicized. In addition, confusing version incompatibilities made patching some systems into a much larger endeavor, as chains of dependencies had to be unraveled and entire sets of patches applied and tested. And finally, to add insult to injury, at least one patch introduced a memory leak into SQL Server.

As if that weren’t enough, MSDE includes an invisible SQL Server.  MSDE ships as a component of Visual Studio, which made that product vulnerable even though it included neither an explicit SQL Server license nor any DBA visibility. That shouldn’t have added risk, except that some pieces of software were shipped with MSDE and other no-longer-needed parts of the development environment still included. As we all know, many software products are shipped with development artifacts intertwined with the production code because disk space is cheaper than keeping track of and subsequently removing all of the trash lying around in the development tree. And those development tools are really useful when tech support has to diagnose a problem.

To compound the challenge, patches in general can’t be trusted without testing. A typical large environment runs multiple versions of desktop operating systems, say NT4, 2000, XP Home, and XP Pro. If the patch addresses multiple issues across several versions of a common application, you’re talking about a product that has about fifty configuration permutations. Testing that many cases represents significant time and cost.

Finally, there’s the sheer volume of patches flowing from the vendor community. There’s no easy way for an administrator to tell whether a particular patch is ‘really serious,’ ‘really, really, serious,’ or ‘really, really, really, serious.’ The industry hasn’t yet figured out how to normalize all of the verbiage. Even so, knowing that a weakness exists doesn’t give any insight into how virulent a particular exploit of that weakness might prove to be. Slammer was remarkably virulent, but its patch went out to the systems community along with thousands of remedies for other weaknesses that haven’t been exploited nearly so effectively.

Unfortunately, an aggressive strategy of applying all patches is uneconomical under the industry’s current operating model. Let’s look at some numbers. These are benchmark numbers, broadly typical of costs and performance across the entire industry, not specific to any individual company. The state of automation in the desktop OS world has improved dramatically in the last ten years. A decade ago an upgrade to a large population of desktop machines required a human visit to each machine. Today the automated delivery of software is dramatically superior, but not where it ultimately needs to be. Let’s say, for the purpose of argument, that automated patch installation for a large network is 90% successful.  (A colleague suggests that today a more realistic number is 80%, “even assuming no restriction on network capacity and using the latest version of SMS”; he characterized 90% as the “go out and get drunk” level of success.)  A person must visit each of the 10% of machines for which the automated installation failed. This person must figure out what went wrong and install the patch by hand. For a large corporate network with, say, 50,000 machines to be patched, that translates to 5,000 individual visits. A benchmark number for human support at the desktop is about $50 per visit. Thus, a required patch in a modern environment translates into a $250,000 expenditure. That’s not a trivial amount of money, and it makes the role of a system manager, who faces tight budgets and skeptical customers, even more challenging.

The costs aside, how close to 100% is required to close a loophole? Informal comments from several CISOs suggest that Slammer incapacitated corporate networks with roughly 50,000 hosts by infecting only about two hundred machines. That’s 0.4%. With NIMDA, it was worse: one enterprise disconnected itself from the Internet for two weeks because it had two copies of NIMDA that were actually triggered. What can we do to make our systems less vulnerable and reduce both the probability of another Slammer incident and, more importantly, the harm that such an incident threatens? We can work together in the industry to improve the automated management of systems. Every 1% improvement in the performance of automated patch installation systems translates directly into $25,000 cash savings for the required patches in our example. An improvement of 9%, from 90% to 99%, translates into a savings of $225,000 for each patch that must be distributed. Do that a few times, and, as Everett Dirksen noted, “pretty soon you’re talking about real money.”
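The arithmetic above is simple enough to capture in a few lines. The sketch below, in Python and using the same illustrative benchmark figures from the text, shows how the per-patch cost of manual cleanup falls as the automated success rate improves.

```python
# The patch-cost arithmetic from the text: 50,000 machines, $50 per manual
# desk-side visit, and a given automated-installation success rate. Every
# machine the automation misses costs one human visit.

machines = 50_000
cost_per_visit = 50  # dollars, benchmark figure for desktop support

def manual_cleanup_cost(success_rate: float) -> int:
    """Dollar cost of hand-patching the machines that automation missed."""
    failed_machines = machines * (1 - success_rate)
    return round(failed_machines * cost_per_visit)

print(manual_cleanup_cost(0.90))  # 250000 dollars per required patch
print(manual_cleanup_cost(0.99))  # 25000 dollars per required patch
print(manual_cleanup_cost(0.90) - manual_cleanup_cost(0.99))  # 225000 dollars saved per patch
```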

Improving the effectiveness of patch application requires that we improve both our software packaging and our patch distribution and execution automation. After we improve packaging and distribution, we can figure out a way to easily and quickly tell what components reside on each system, ideally by asking the system to tell us. Databases get out of date but the system itself usually won’t lie. We can build automated techniques to identify all of the patches from all relevant vendors that need to be applied to a given system — and then apply them. We can either simplify our system configurations, which has obvious benefits, or figure out ways to ensure that components are better behaved, which reduces the combinatorial complexity of applying and testing the applied patches.
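Here is a hypothetical sketch of the inventory-driven approach described above: ask the system what components and versions it is running, then match that self-report against a catalog of applicable patches. The component names, versions, and patch identifiers are all invented.

```python
# Hypothetical sketch of matching a system's self-reported inventory against
# a patch catalog. Component names, versions, and patch ids are invented.

# What the system reports about itself, ideally obtained by asking it directly.
inventory = {"db-engine": "8.0.194", "os": "desktop-os-sp2", "embedded-db": "1.0"}

# Simplified patch catalog: patch id -> (component, versions the patch applies to).
catalog = {
    "patch-001": ("db-engine", {"8.0.194", "8.0.384"}),
    "patch-002": ("os", {"desktop-os-sp1", "desktop-os-sp2"}),
    "patch-003": ("embedded-db", {"2.0"}),  # not applicable to this machine
}

def needed_patches(inventory: dict, catalog: dict) -> list:
    """Return the patch ids that apply to the components this system reports."""
    return [
        patch_id
        for patch_id, (component, versions) in catalog.items()
        if inventory.get(component) in versions
    ]

print(needed_patches(inventory, catalog))  # ['patch-001', 'patch-002']
```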

The methods and practices of the security industry, particularly the high technology security world, have for years been derived from those developed for national security problems. These are problems for which the cost of failure is so enormous, as in the theft or misuse of nuclear weapons, that failure is not an option. For this class of problem, the commercial practice of balancing risk and cost is impossible, except in the vacuous limiting case in which cost is infinite. Now we are working through practical security management in a commercial environment, and we are beginning to get our hands around some of the quantitative aspects of the problems we face. If getting a patch out to all computers in our environment will cost us a quarter of a million dollars and we have X patches per year, then we have to weigh that cost against the cost of the harm suffered and the cleanup expense incurred if we don’t distribute all of the patches. This calculus tells us to spend some money, though not an arbitrarily large amount, on improving the quality of our automated patch distribution and application processes, with an objective of absolute 100% coverage for automatic updates. It tells us to work toward a better system of quantifying threat severity. It tells us to spend effort on strengthening our incident response capabilities.

2012 Five Borough Bike Tour – 6 May 2012

Last year I rode in the 2011 Five Borough Bike Tour and blogged about it.  The photo service that took pictures of riders got three very good pictures of me, which I purchased and published on my Picasa page, suitable for blackmailing me in the future :-).

I rode again in 2012 with the BronxWorks team (Tamara [unofficial captain], Jane, Declan, Julio, Josh, Cristina, and me).  We raised money for BronxWorks, a wonderful settlement house in the Bronx that runs programs to support homeless families with children.  Several of the riders on the BronxWorks team volunteer in programs at the organization’s facilities in the Bronx.  All of the riders, including me, raised money to support the organization’s activities.

(left to right) Marc, Tamara, Jane, Declan, Julio, Josh, and Cristina.

This year’s ride was on Sunday 6 May 2012.  The weather was cool and overcast in the morning, clearing and warming by mid-afternoon when I got home.  Conditions were perfect for the ride: cool enough to help riders dissipate the heat of their exertion, yet clear and dry enough to keep everyone safe.

Unofficial captain Tamara rendezvoused with teammates Julio, Jane, and me in the Upper West Side at 6:20 AM.  Registration materials this year had to be picked up in person at South Street Seaport, something that Tamara had organized, so she brought Jane and me our identification bibs and rider numbers, which we donned on the street corner.  We then rode down to the starting point in TriBeCa, arriving at about 7:10 AM for the 7:45 AM start.

Starting ceremonies began at about 7:30 AM with a series of dignitaries addressing the crowd.  The starting gun, well, actually the starting bursts of flame, came at 7:45 sharp and we were off.

The 2012 Five Boro Bike Tour starting line from the charity riders’ starting area.
2012 Five Boro Bike Tour starting location – Church and Leonard
2012 Five Boro Bike Tour starting gun

One of the benefits of riding for an organized charity is the starting position.  The over-30,000 riders in the Five Borough Bike Tour are organized into several tiers.  The VIP tier of several hundred riders starts at the very front.  Right behind the VIP tier is the charity group, another several hundred riders whose sponsoring organization, in our case BronxWorks, has arranged for them to start next.  After that is the vast majority of riders.

Last year there were numerous points on the tour where traffic bottlenecked and we were forced to pause for long periods of time, standing still or walking our bikes.  Last year’s bottlenecks included the first mile or two after the start, the mile of Sixth Avenue before we entered Central Park, and somewhere on the BQE where construction forced the route onto a narrow ramp.  I know that one of my friends, who rode last year in a non-charity group, spent over an hour after the starting gun before he got moving.

This year, by contrast, the bike tour operators implemented a collection of improvements to the starting process and the route that resulted in essentially no bottlenecks anywhere.  As a result our team finished fifteen minutes ahead of our finishing time from last year, despite two equipment mishaps.

The first mishap affected me in Central Park.  As we started a long descent I decided to shift from the low range to the high range on my 31-year-old Motobecane Jubile Sport.  The cable connecting the shift lever to the derailleur slipped loose and my chain popped off the gear and hung up around the axle of the crankshaft.  I pulled over and lost five minutes getting things sorted out.  I was soon back on the road.  I could still pedal fine, and I could shift with the rear derailleur, so I had a fine six-speed bike on which I could easily finish the ride.  At the same time I noticed a little vibration in my rear wheel that signaled that some of the spokes were loosening, a more alarming situation.

Aside – my road bike

At the BronxWorks rendezvous for the 2012 Five Boro Bike Tour, Marc’s 1981 Motobecane Jubile Sport.

Mine is a steel-framed road bike that I bought new in 1981 or 1982 and that I still ride to commute to work.  It’s a beautiful bike with gorgeous lines made possible by its steel construction and a lovely aquamarine paint job.  The old Motobecane company was a French maker of bicycles and motorscooters that went bankrupt in 1981, about the time that I bought the bike.  Their market niche in France was low-end, but in the US they were a premium brand catering to the upper end of the cycling crowd.  The Jubile Sport that I bought in 1981 (or 1982) when I was a grad student was a midrange road bike for the time.  It had very good components, though there were better ones, and a reasonably light frame, though there were lighter ones.  Today the bike is completely dated, but it remains an eye-catcher with its beautiful lines and color.

Rejoining the team

After crossing the Queensboro Bridge (aka the 59th Street Bridge aka the Ed Koch Bridge) I rejoined the team just outside the Astoria rest stop.  There one of my teammates, Declan, executed a miraculous set of repairs to my bike, restoring the front derailleur to functionality and sorting out the loose spokes in my rear wheel.  The result was a smoothly functioning bike that allowed me to finish my ride comfortably and without anxiety.

After the rest stop we resumed and rode across Queens and Brooklyn.  Along the way, somewhere in Brooklyn, Cristina ran over some debris on the road and got a flat on her bike’s rear wheel.  The whole BronxWorks team stopped and, working together, made quick work of changing out the inner tube and reinflating the tire.  We were spinning down the road again within five minutes of the flat.  There was something stimulating about addressing the flat as a team, even though most of us did nothing more than stand by and watch the action.

The rest of the ride to the Verrazano-Narrows Bridge and across to Staten Island, where the ride ended, was uneventful.  We took a break at the festival grounds at the end of the route and ate a box lunch provided as part of the charity rider package.  We then rode another three or four miles to the Staten Island Ferry terminus and boarded the John F. Kennedy, on which we rode back to Manhattan.

This year, rather than carrying our bikes onto the subway, we decided to ride up the west side bike path.  This was a bit tricky in the early stages, since the bike path is incomplete and confusing near Battery Park, but we were able to navigate it.  The final fifteen minutes of the ride were, as a result, the same as my ride on the days when I commute to work by bicycle, which was curiously comforting.

Tamara, our unofficial team captain, used a rather cool GPS device to track the ride and distributed a wonderful map to the riders this morning. I captured it in Google Maps and include it here.

Here are some statistics about the ride from Tamara’s application:

  • Duration: 3:19
  • Distance: 38.29 miles
  • Average speed: 11.5 mph (!!!)
  • Maximum speed: 22.3 mph
  • Pace: 5:11 minutes/mile
  • Elevation gain: 6844 feet
  • Elevation loss: 6909 feet

All in all, it was a lot of fun.  It was work, but except for the first seconds after my front derailleur mishap, I never doubted that I would be able to finish.  When I was done I was very tired but quite content that I’d done a good thing.

I would like particularly to thank my sixteen incredibly generous donors (David, Ron, Igor, Michael, William, Maxine, Trevor, Hal, Steve, Jon, Stu, Satish, David, Silvia, Amy, and Lucy).  Thanks to you, homeless Bronx children and their families will have access to a range of wonderful supportive programs.

The Kindle Update

So 2011 represents my second year of Kindle use, and it’s been quite an eventful year. In 2011 I adopted a policy of not buying dead-tree books any more. And, while I had intended to sustain my use of the Nook, it didn’t really work out and I’m not even sure where my Nook is any more. I still like the Nook’s business model better than the Kindle’s, but my momentum is with the Kindle.

I bought 60 books for the Kindle in 2011 and, as before, read some but not all. I have been reading my Kindle library on a wide range of devices: on my Kindle, of course, as well as on Kindle software for our iPad, our two Android tablets, my Android cellphone, my wife’s iPhone, on all of our Macs, and on the Chrome browser. This really makes it much more attractive for me to continue to acquire books for the Kindle than for any other medium because my library is available to essentially any device I end up using.

Title Author Read
Fight Club: A Novel Palahniuk, Chuck Yes
Loyal Character Dancer Xiaolong, Qiu Yes
Using Google App Engine Severance, Charles Some
Programming Google App Engine Sanderson, Dan Some
The Next 100 Years Friedman, George Yes
The Devil in the White City Larson, Erik
The Gun Chivers, C. J. Yes
The Innocents Abroad Twain, Mark Some
Unless It Moves the Human Heart Rosenblatt, Roger
Practical Chess Exercises Cheng, Ray Some
They Are Us Hamill, Pete Some
Alone Together Turkle, Sherry Some
The Second Self Turkle, Sherry
Anathem Stephenson, Neal Yes
The Mao Case Xiaolong, Qiu Yes
American Gods Gaiman, Neil
Real-time Control of Walking Donner, M.D.
A Short History of Nearly Everything Bryson, Bill Some
The Fifth Servant: A Novel Wishnia, Kenneth
All Your Base Are Belong to Us Goldberg, Harold
Quo Vadis Sienkiewicz, Henryk Yes
Berlin Noir by Philip Kerr | Summary & Study Guide BookRags.com Some
The Flaw of Averages Savage, Sam L. Some
The Age of Wonder Holmes, Richard
Drive Pink, Daniel H.
Nemesis Roth, Philip
The Quiet War McAuley, Paul J.
Symposium Plato
The Republic Plato
Among Others Walton, Jo Yes
Altered Carbon Morgan, Richard K.
Bullfighting: Stories Doyle, Roddy
Consider Phlebas Banks, Iain M. Yes
Germinal Zola, Emile
JavaScript: The Definitive Guide Flanagan, David Some
JavaScript: The Good Parts Crockford, Douglas Some
Onward Schultz, Howard, Joanne Gordon
Rule 34 (Halting State) Stross, Charles
Selected Stories of Philip K. Dick Dick, Philip K. Some
The Complete Stories of Evelyn Waugh Waugh, Evelyn
The Player of Games Banks, Iain M. Yes
The Quantum Story : A history in 40 moments Baggott, Jim Some
Uncle Tom’s Cabin Stowe, Harriet Beecher
Wireless Stross, Charles
Works of James Joyce Joyce, James Some
jQuery Cookbook (Animal Guide) Lindley, Cody Some
Studio Ghibli: The Films of Hayao Miyazaki and Isao Takahata Odell, Michelle Le Blanc Colin Some
Francis Galton: Pioneer of Heredity and Biometry Bulmer, Michael Some
The Great Stagnation Cowen, Tyler Yes
In the Garden of Beasts Larson, Erik Some
Debt: The First 5,000 Years Graeber, David Yes
Use of Weapons Banks, Iain M. Yes
Exploring Online Games: Cheating Massively Distributed Systems Hoglund, Greg, McGraw, Gary
The Children of the Sky Vinge, Vernor
Ready Player One Cline, Ernest
Food Rules: An Eater’s Manual Pollan, Michael
Embers Marai, Sandor
Reamde: A Novel Stephenson, Neal
The Unlikely Spy Silva, Daniel
Berlin Noir Kerr, Philip Yes

I had several interesting adventures with my Kindle library this year, some of which I’ll summarize here.

Earlier in the year my brother-in-law recommended the book “Berlin Noir” to me. It is a trio of meticulously researched police procedurals set in Berlin. The first two are set in the early years of the Nazi era, while the third is set a few years after the end of the war. They all feature Bernie Gunther, a German ex-policeman turned private detective. Bernie quit the police force in disgust when the Nazis took over. Bernie isn’t a holier-than-thou boy scout – he’s not above the odd bit of vigilante justice and he is definitely looking out for himself whenever he can. But he has standards, and he went out on his own when it became clear what was going on.

But I digress. After Gary told me about the books I went to the Kindle Store on my Kindle and ordered the book. It was delivered, at which point I realized that I’d been fooled. What I had bought was a study guide, like Cliff Notes, from a company called BookRags. I then looked for a Kindle edition of the book but did not find it. Some time later I did discover a Kindle edition and bought it. The Kindle edition is hard to find, however, and the obvious searches do not turn it up. And on the Kindle Store on the Kindle it was very easy to think I was buying the book when I was not. By the way, after finishing two of the three novels I browsed the study guide, which I found to be truly abominable. The glossary was full of inaccuracies and errors that indicated that the person who wrote it probably hadn’t read the book or had not read it carefully. Oh well.

Another adventure involved the reasons that I am now on my third Kindle device. The first Kindle, which was given to me as a Christmas present at the end of 2009, became a fixture of my life after a while. One day in 2010 I was flying to California on business. My seat, in coach, was close to the bathroom. At one point I got up to use the bathroom, leaving the Kindle on my seat. When I got back from the bathroom I found that the glass was cracked. Obviously someone waiting to use the bathroom had sat down on it and broken it. Oh well, when I got to California I got a new one at Best Buy and was reading again.

That Kindle lasted until March of 2011 when my wife and son and I went to Chile on vacation. My wife had taken to reading the New York Times on my Kindle while we traveled because it was the only way she could get the paper. She was walking with my son back from the lounge one day and accidentally dropped the Kindle into a decorative fountain in one of the lobbies. So I ordered a new one from Amazon and it was waiting at my apartment when we returned to New York. I was a bit crippled by the loss, but was able to keep reading on my laptop for the rest of the vacation.

The third, and most odd, adventure involved my own book. I have written a number of reviews of products on Amazon.com over the years and at one point in 2011 I wanted to find one to forward to a friend, so I searched for my own name. To my surprise I discovered that my book, which has been out of print since 1997 and only shows up as available used from non-Amazon sources, was listed as available as a Kindle book for an absurd price, over $80. Just to verify that it was my book, I bought a copy. It was, in fact. It looks like someone took the scan of the book that is available on Google Books and made a very low quality Kindle book out of it.

I wrote an email to Amazon protesting the offer of my book, whose copyright had reverted to me after the book went out of print. They sent me a form page instructing me to write them a paper letter asserting my claim to the copyright. I did so and after several weeks I got an email from one of their lawyers informing me that they had taken the book down and that they had fulfilled their obligations to me.

I checked, and they had not taken the book down, so I wrote her back, said that the book was not gone, and reiterated my request for an accounting of all the sales they had made of my book. I’m sure that at $80+ the only sale they had made was to me, but I wanted to see the accounting. They didn’t answer. A friend, who is a senior partner at a law firm specializing in intellectual property matters, wrote them a letter demanding an accounting, but they ignored this letter as well.

Sort of sad, since this behavior really trashed an admiration for Amazon that dated back over ten years.

[Update: Since first writing this entry and putting it up on my blog, my lawyer friend got a response from Amazon to his letter about my book. It seems that the content was submitted to them in error by Springer. They made only one sale, according to their response. So everything is cleared up and I am very happy to restore Amazon’s good guy status in my heart.]

Anyway, this year I gave a Kindle Fire to a good friend and he loves it. And at the holidays all of the parental generation of the extended family conspired together and gave Kindles to all of the children, a total of six shiny new Kindle Touch devices. My son loves his … I see him reading it regularly now, which encourages me that he may yet become a reader by choice.