The Internet is, by its very definition, an interconnected network of networks. The resilience of the interconnection system is fundamental to the resilience of the Internet itself. Thus far the Internet has coped well with disasters such as 9/11 and Hurricane Katrina, which had very significant local impact but scarcely affected the global Internet. Assorted technical problems in the interconnection system have caused a few hours of disruption, but no long-term effects.
But have we just been lucky? A major new report, just published by ENISA (the European Network and Information Security Agency), tries to answer this question.
The report was written by Chris Hall, with the assistance of Ross Anderson and Richard Clayton at Cambridge, and Panagiotis Trimintzios and Evangelos Ouzounis at ENISA. The full report runs to 238 pages; for the time-challenged there is a shorter 31-page executive summary, and a more ‘academic’ version of the latter will appear at this year’s Workshop on the Economics of Information Security (WEIS 2011).
Internet interconnectivity is a complex ecosystem with many interdependent layers. Its operation is governed by the collective self-interest of the Internet’s networks, but there is no central Network Operations Centre (NOC) staffed with technicians ready to leap into action when trouble occurs. The open and decentralised organisation that is the very essence of the ecosystem is essential to the success and resilience of the Internet. Yet there are a number of concerns.
First, the Internet is vulnerable to various kinds of common-mode technical failures, where systems are disrupted in many places simultaneously; service could be substantially disrupted by failures of other utilities, particularly the electricity supply; a flu pandemic could cause the people on whose work it depends to stay at home, just as demand for home working by others was peaking; and finally, because of its open nature, the Internet is at risk of intentionally disruptive attacks.
Second, there are concerns about the sustainability of the current business models. Internet service is cheap, and rapidly becoming cheaper, because the costs of service provision are mostly fixed costs; the marginal costs are low, so competition forces prices ever downwards. Some of the largest operators – the ‘Tier 1’ transit providers – are losing substantial amounts of money, and it is not clear how future capital investment will be financed. There is a risk that consolidation might reduce the current twenty-odd providers to a handful, at which point regulation may be needed to prevent monopoly pricing.
Third, dependability and economics interact in potentially pernicious ways. Most of the things that service providers can do to make the Internet more resilient, from having excess capacity to route filtering, benefit other providers much more than the firm that pays for them, leading to a potential ‘tragedy of the commons’. Similarly, security mechanisms that would help reduce the likelihood and the impact of malice, error and mischance are not implemented because no-one has found a way to roll them out that gives sufficiently incremental and sufficiently local benefit.
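To make the incentive problem concrete, here is a minimal sketch (in Python, using made-up registry data, private AS numbers and documentation prefixes purely for illustration) of the kind of check that route filtering performs: a provider accepts a customer’s announcement only if the announced prefix is one that customer is registered to originate. Building and maintaining the registry is a cost borne by the filtering provider, yet the benefit of rejecting a bogus announcement accrues mostly to the rest of the Internet.

    import ipaddress

    # Hypothetical registry: the prefixes each customer AS is allowed to originate.
    # (Private AS numbers and documentation prefixes are used as placeholders.)
    ALLOWED = {
        64500: [ipaddress.ip_network("192.0.2.0/24")],
        64501: [ipaddress.ip_network("198.51.100.0/24")],
    }

    def accept_announcement(origin_as: int, prefix: str) -> bool:
        # Accept the route only if it is a registered prefix (or a more specific
        # part of one) for the AS that claims to originate it.
        announced = ipaddress.ip_network(prefix)
        return any(announced.subnet_of(allowed)
                   for allowed in ALLOWED.get(origin_as, []))

    print(accept_announcement(64500, "192.0.2.0/24"))    # True: registered prefix
    print(accept_announcement(64501, "192.0.2.0/24"))    # False: wrong origin AS
    print(accept_announcement(64500, "203.0.113.0/24"))  # False: not registered at all

The check itself is simple; the hard part, as the report argues, is that keeping such registries accurate across thousands of networks is ongoing work for which the maintainer captures little of the benefit.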
Fourth, there is remarkably little reliable information about the size and shape of the Internet infrastructure or its daily operation. This hinders any attempt to assess its resilience in general and the analysis of the true impact of incidents in particular. The opacity also hinders research and development of improved protocols, systems and practices by making it hard to know what the issues really are and harder yet to test proposed solutions.
So there may be significant troubles ahead which could present a real threat to economic and social welfare and lead to pressure for regulators to act. Yet despite the origin of the Internet in DARPA-funded research, the more recent history of government interaction with the Internet has been unhappy. Various governments have made ham-fisted attempts to impose censorship or surveillance, while others have defended local telecommunications monopolies or have propped up other industries that were disrupted by the Internet. As a result, Internet Service Providers (ISPs), whose good will is essential for effective regulation, have little confidence in the likely effectiveness of state action, and many would expect it to make things worse.
Any policy makers should therefore proceed with caution. At this stage, there are four types of activity that can be useful at the European (and indeed the global) level.
The first is to understand failures better, so that all may learn the lessons. This means consistent, thorough investigation of major outages and the publication of the findings. It also means understanding the nature of success better, by supporting long-term measurement of network performance and by sustaining research in this area.
The second is to fund key research in topics such as inter-domain routing – with an emphasis not just on the design of security mechanisms, but also on traffic engineering, traffic redirection and prioritisation, especially during a crisis, and developing an understanding of how solutions can be deployed in the real world.
The third is to promote good practice. Diverse service provision can be encouraged by explicit terms in public sector contracts, and by auditing practices that draw attention to reliance on systems that lack diversity. The public sector might also promote the independent testing of equipment and protocols.
The fourth is public engagement. Greater transparency may help Internet users to be more discerning customers, creating incentives for improvement, and the public should be engaged in discussions of potentially controversial issues such as traffic prioritisation in an emergency. Finally, Public-Private Partnerships (PPPs) of relevant stakeholders (operators, vendors, public actors, etc.) are important if self-regulation is to be effective. And should more formal regulation become necessary in the future, policy makers who are already informed and engaged with industry will be able to make better decisions.
So if you’ve ever wondered how the Internet is glued together, and how it might come apart – or if you’re interested in learning about yet another area where computer security and economics interact – then this report will be fascinating reading.
Thank you for highlighting some of these issues. I’ve been worried about the lack of information about the internet for a while. The lack of basic facts makes it very difficult to understand the growth of data traffic, and the consequent arguments about usage capping and net neutrality, up to and including the calls for a redesign of the internet. Anything that can address the “Lack of Information” point is to be welcomed.
There is an interesting initiative in financial services to exchange anonymized data about operational risk.
“ORX was founded in 2002 with the primary objective of creating a platform for the secure and anonymised exchange of high-quality operational risk loss data.”
http://www.orx.org/about-orx
Would a similar approach work for the internet, as a way of getting ISPs to exchange information in an anonymized way? (Full disclosure: I work for the company that operates ORX.)
– Zygmunt
I’d like to point out that the ENISA white paper has used the definition of resilience from our Computer Networks Journal paper “Resilience and Survivability in Communication Networks: Strategies, Principles, and Survey of Disciplines” without attribution. It is the first paper available on https://wiki.ittc.ku.edu/resilinets/ResiliNets_Publications. Actually, we’ve been using it since 2006 in various publications; this paper is now the definitive source.
An addition to my last post: after checking, we’ve been using it since 2004, when David Hutchison at Lancaster and I came up with it in the context of the FP6 ANA project, and it became the basis of the ResiliNets initiative at The University of Kansas and Lancaster University and of the FP7 ResumeNet project. Our first published use of this definition is in the 2005 IFIP IWAN proceedings.
Sorry James, you’re quite right and it’s entirely our fault. One of the authors added that text during a revision cycle, but in the later editorial process we failed to put in the appropriate citation. We’ll correct this in the conference version of the report, which has been accepted for WEIS 2011, and in any version we subsequently send to an archival journal. Sorry again.
Thanks!
James
A good article with a complete summary of technical issues.
But what about the “not_so_hidden” ENISA intentions regarding control of the EU internet area? In these strange times of Anonymous retaliations, major filtering like HADOPI (in France), and so many things like that (yesterday: Europe thinks about a great Firewall), the complete picture should be shown!
@readers, please take a look at my special article (May 6) about ENISA concerns and possible intentions: http://translate.google.com/translate?hl=fr&sl=fr&tl=en&u=http%3A%2F%2Fsi-vis.blogspot.com
@RichardClayton: thank you, Sir!
> “…regulation may be needed to prevent monopoly pricing.”
In the real world, the actual effect of regulation is to enhance and maintain monopolies, which cannot exist without it.
This is what happens when people with an extreme statist bias encounter functioning anarchistic societies — they cannot believe that they exist, and if they do, their continuing existence must be a fluke.
The fact of the matter is that the Internet has been functioning without significant government intervention for more than 20 years, and there is no reason we should expect it to suddenly stop working. Focusing upon a few giant “Tier One” corporations’ losses obscures the larger picture of a healthy ecosystem with a myriad of smaller independent participants.
In the US, this statist impulse is manifested in calls for “Network Neutrality”…in the EU, it is manifested in calls for a “Schengen Firewall”.
Comment number four’s source URL is… AMAZING! Wow! Thank you! I have rarely found such a wealth of content on a single webpage.
For Mr. Richard Clayton. Thank you for a fine post! I bookmarked it over a year ago, yet it remains relevant and will continue to be, I am certain. ENISA is doing good work, soldiering on, bringing to light critical dependencies that are not covered elsewhere. Case in point: ENISA’s November 2011 report on maritime cybersecurity.
Regarding comment 8: You call it “statist bias”; I call it “working together for the common good”. I agree with you in some contexts, e.g. I hope the IPv6 conversion will not require any governmental prodding. However, the fact that Western Europe and the People’s Republic of China are so far ahead of us in the U.S.A. is curious.
Last thought regarding internet resiliency: ISC seems to be making an effort to address this recently, with Arborist
https://kb.isc.org/article/AA-00692/171/About-Arborist.html
* Arborist is not affiliated with Arbor Networks (the company). Sometimes I worry about ISC, though. I hope that the F-root server that they run for IANA (https://www.isc.org/community/f-root) is well cared for and vigilantly watched.
And a few years later, we read of China Telecom having used BGP hijacking more than once, in circumstances that suggest it may have become a tool of state intelligence: https://scholarcommons.usf.edu/mca/vol3/iss1/7/
And then of the use of BGP hijacking to conduct eight-figure ad fraud. It looks like reality is finally catching up with our warnings.
ENISA have moved the report; it can now be found here.