Cyberassault on Estonia

[This editorial was published originally in “Security & Privacy,” Volume 5, Number 4, July/August 2007]

Estonia recently survived a massive distributed denial-of-service (DDoS) attack that came on the heels of the Estonian government’s relocation of a statue commemorating Russia’s 1940s wartime role. This action inflamed the feelings of the substantial Russian population in Estonia, as well as those of various elements in Russia itself.

Purple prose then boiled over worldwide, with apocalyptic announcements that a “cyberwar” had been unleashed on the Estonians. Were the attacks initiated by hot-headed nationalists or by a nation state? Accusations and denials have flown, but no nation state has claimed authorship.

It’s not really difficult to decide whether this was cyberwarfare or simple criminality. Current concepts of war require people in uniforms or a public declaration. There’s no evidence of either here. In addition, there’s no reason to believe that national resources were required to mount the attack. Michael Lesk’s piece on the Estonia attacks in this issue (see the Digital Protection department on p. 76) includes estimates that, at current botnet leasing prices, the entire attack could have been accomplished for US$100,000, a sum so small that any member of the upper middle class in Russia, or elsewhere, could have sponsored it.

Was there national agency? It’s highly doubtful that Russian President Vladimir Putin or anyone connected to him authorized the attacks. If any Russian leader had anything to say about the Estonians, it was more likely an intemperate outburst like Henry II’s exclamation about Thomas Becket, “Will no one rid me of this troublesome priest?”

We can learn from this, however: security matters, even for trivial computers. A few tens of thousands of even fairly negligible PCs, when attached by broadband connections to the Internet and commanded in concert, can overwhelm all modestly configured systems — and most substantial ones.
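
To get a feel for the arithmetic, consider a rough sketch; every figure in it is an illustrative assumption, not a measurement from the Estonia attacks:

```python
# Illustrative back-of-envelope: aggregate traffic from a modest botnet.
# All figures below are assumptions for the sake of the arithmetic,
# not measurements from the Estonia attacks.

bots = 50_000            # assumed botnet size: a few tens of thousands of PCs
uplink_mbps = 0.5        # assumed usable upstream bandwidth per broadband PC

aggregate_gbps = bots * uplink_mbps / 1_000
print(f"Aggregate flood: {aggregate_gbps:.0f} Gbit/s")        # 25 Gbit/s

# Compare with a hypothetical well-provisioned service uplink of 1 Gbit/s:
service_gbps = 1.0
print(f"Overload factor: {aggregate_gbps / service_gbps:.0f}x")  # 25x
```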

Engineering personal systems so that they can’t be turned into zombies is a task that requires real attention. In the meantime, the lack of quality-of-service facilities in our network infrastructure leaves networked services vulnerable to future botnet attacks. Several avenues are available to address the weaknesses in our current systems, and we should be exploring all of them. Faced with epidemic disease, financial panic, and other mass threats to the common good, we’re jointly and severally at risk and have a definite and legitimate interest in seeing to it that the lower limits of good behavior aren’t violated.
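
As a concrete illustration of the kind of per-source policing that quality-of-service facilities could provide, here is a minimal token-bucket sketch; the class, its parameters, and the rates are illustrative assumptions, not a description of any deployed mechanism:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter, the kind of per-source
    policing a QoS-aware network element might apply.
    Illustrative sketch only; parameters are arbitrary."""

    def __init__(self, rate_per_sec: float, burst: float):
        self.rate = rate_per_sec       # tokens added per second
        self.capacity = burst          # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed interval, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True                # within the allowed rate: forward
        return False                   # rate exceeded: drop or deprioritize

# Example: allow each source at most 100 requests/s, with bursts of 20.
limiter = TokenBucket(rate_per_sec=100, burst=20)
```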

From the Estonia attacks, we’ve also learned that some national military institutions are, at present, hard-pressed to defend their countries’ critical infrastructures and services. Historically, military responses to attacks have involved applying kinetic energy to the attacking forces or to the attackers’ infrastructure. But when the attacking force is tens or hundreds of thousands of civilian PCs hijacked by criminals, what is the appropriate response? Defense is left to the operators of the services and of the infrastructure, with the military relegated to an advisory role — something that both civilians and military must find uncomfortable. Of course, given the murky situations involved in cyberwar, we’ll probably never fully learn what the defense establishments could or did do.

Pundits have dismissed this incident, arguing that this is a cry of “wolf!” that should be ignored (see www.nytimes.com/2007/06/24/weekinreview/24schwartz.html). Although it’s true that we’re unlikely to be blinded to an invasion by the rebooting of our PCs, it’s naïve to suggest that our vulnerability to Internet disruptions has passed its peak. Cyberwar attacks, as demonstrated in 2003 by Slammer, have the potential to disable key infrastructures. To ignore that danger is criminally naïve. Nevertheless, all is not lost.

Conclusion

Events like this have been forecast for several years, and as of the latest reports, there were no surprises in this attack. The mobilization of global expertise to support Estonia’s network defense was heartening and will probably be instructive to study. Planners of information defenses and drafters of future cyberdefense treaties should be contemplating these events very carefully. This wasn’t the first such attack — and it won’t be the last.

Insecurity through Obscurity

[This editorial was published originally in “Security & Privacy,” Volume 4, Number 5, September/October 2006]

Settling on a design for a system of any sort involves finding a workable compromise among functionality, feasibility, and finance. Does it do enough of what the sponsor wants? Can it be implemented using understood and practical techniques? Is the projected cost reasonable when set against the anticipated revenue or savings?

In the case of security projects, functionality is generally stated in terms of immunity or resistance to attacks that seek to exploit known vulnerabilities. The first step in deciding whether to fund a security project is to assess whether its benefits outweigh the costs. This is easy to state but hard to achieve.

What are the benefits? Some set of exploits will be thwarted. But how likely would they be to occur if we did nothing? And how likely will they be to occur if we implement the proposed remedy? What is the cost incurred per incident to repair the damage if we do nothing? Armed with the answers to these often unanswerable questions, we can get some sort of quantitative handle on the benefits of implementation in dollars-and-cents terms.

What are the costs? Specification, design, implementation, deployment, and operation of the solution represent the most visible costs. What about the efficiency penalty that stems from the increased operational complexity the solution imposes? This is an opportunity cost: production you might have achieved had you not implemented the solution.
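
One conventional way to put these questions on a dollars-and-cents footing is an annualized-loss-expectancy comparison. A minimal sketch, with every figure invented for illustration:

```python
# Sketch of an annualized-loss-expectancy (ALE) comparison.
# Every number here is an invented placeholder; the structure,
# not the figures, is the point.

incident_cost = 250_000      # cost per incident to repair the damage
p_attack_no_action = 0.30    # annual likelihood of exploit if we do nothing
p_attack_with_fix = 0.05     # annual likelihood with the proposed remedy

ale_do_nothing = p_attack_no_action * incident_cost   # $75,000/year
ale_with_fix = p_attack_with_fix * incident_cost      # $12,500/year

build_and_run_cost = 40_000  # specification through operation, annualized
efficiency_penalty = 15_000  # opportunity cost of added operational complexity

net_benefit = (ale_do_nothing - ale_with_fix) \
              - (build_and_run_cost + efficiency_penalty)
print(f"Annual net benefit of the remedy: ${net_benefit:,.0f}")  # $7,500
```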

In the current world of security practice, it’s far too common, when faced with vast unknowns about benefits, to fall back on one of two strategies: either spend extravagantly to protect against all possible threats or ignore threats too expensive to fix. Protection against all possible threats is an appropriate goal when securing nuclear weapons or similar assets for which failure is unacceptable, but for most other situations, a more pragmatic approach is indicated.

Unfortunately, as an industry, we’re afflicted with a near-complete lack of quantitative information about risks. Most of the entities that experience attacks and deal with the resultant losses are commercial enterprises concerned with maintaining their reputation for care and caution. They observe that disclosing factual data can assist their attackers and provoke anxiety in their clients. The lack of data-sharing arrangements has resulted in a near-complete absence of incident documentation standards, so even if organizations want to compare notes, they face a painful exercise in converting apples to oranges.

If our commercial entities have failed, is there a role for foundations or governments to act? Can we parse the problem into smaller pieces, solve them separately, and make progress that way? Other fields, notably medicine and public health, have addressed this issue more successfully than we have. What can we learn from their experiences? Doctors almost everywhere in the world are required to report the incidence of certain diseases and have been for many years. California’s SB 1386, which requires disclosure of computer security breaches, is a fascinating first step, but it’s just that — a first step. Has anyone looked closely at the public health incidence reporting standards and attempted to map them to the computer security domain?

The US Federal Communications Commission (FCC) implemented telephone outage reporting requirements in 1991 after serious incidents and in 2004 expanded their scope to include all the communications platforms it regulates. What did it learn from those efforts, and how can we apply them to our field?

The US Census Bureau, because it’s required to share much of the data that it gathers, has developed a relatively mature practice in anonymizing data. What can we learn from the Census Bureau that we can apply to security incident data sharing? Who is working on this? Is there adequate funding?
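
As a toy illustration of what Census-style generalization and suppression might look like applied to incident data, consider the following sketch; the field names, records, and thresholds are all invented:

```python
# Toy anonymization pass over hypothetical incident records, in the
# spirit of the generalization and suppression techniques statistical
# agencies use. Field names, data, and thresholds are all invented.
from collections import Counter

records = [
    {"org_size": 1234, "sector": "banking",   "loss_usd": 48_000},
    {"org_size": 1710, "sector": "banking",   "loss_usd": 52_000},
    {"org_size":   95, "sector": "education", "loss_usd":  7_000},
]

def generalize(rec: dict) -> dict:
    """Coarsen quasi-identifiers so a record no longer singles out one firm."""
    size_band = "small" if rec["org_size"] < 500 else "medium-to-large"
    loss_band = round(rec["loss_usd"], -4)  # round losses to the nearest $10,000
    return {"org_size": size_band, "sector": rec["sector"], "loss_usd": loss_band}

released = [generalize(r) for r in records]

# Suppress any (size band, sector) group with fewer than 2 records,
# a crude analogue of the k-anonymity group-size threshold.
groups = Counter((r["org_size"], r["sector"]) for r in released)
released = [r for r in released if groups[(r["org_size"], r["sector"])] >= 2]
print(released)   # only the banking pair survives, each rounded to $50,000
```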

Conclusion

These are all encouraging steps, but they’ve been long in coming and are limited in scope. Figuring out how to gather and share data might not be as glamorous as cracking a tough cipher or thwarting an exploit, but it does have great leverage.
