All posts by Richard Clayton

Random isn’t always useful

It’s common to think of random numbers as an essential building block in security systems. Cryptographic session keys are chosen at random, then shared with the remote party. Security protocols use “nonces” for “freshness”. Randomness can also slow down information-gathering attacks, although it is seldom a panacea there. However, as George Danezis and I recently explained in “Route Fingerprinting in Anonymous Communications”, randomness can lead to uniqueness — exactly the property you don’t want in an anonymity system.
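The route-fingerprinting effect is easy to see on the back of an envelope. The sketch below is my own toy calculation, not taken from the paper, and the function name is mine: if each client picks a random 3-node route from a pool of known nodes, a birthday-bound estimate shows that, for realistically small client populations, almost every client's chosen route is unique, and therefore identifying.

```python
from math import comb, exp

def all_routes_unique_prob(nodes: int, route_len: int, clients: int) -> float:
    """Birthday-bound estimate of the probability that every client's
    randomly chosen route (an unordered set of route_len nodes) differs
    from everyone else's, and so acts as a unique fingerprint."""
    n_fingerprints = comb(nodes, route_len)  # possible distinct routes
    # P(no collision) ~ exp(-k(k-1) / 2N) for k clients, N possible routes
    return exp(-clients * (clients - 1) / (2 * n_fingerprints))

# With 100 nodes and 3-hop routes there are comb(100, 3) = 161,700
# possible fingerprints, so even 50 clients almost certainly all differ.
p = all_routes_unique_prob(100, 3, 50)
```

With those numbers p comes out above 0.99: the very randomness that was meant to provide cover makes each client's behaviour distinctive.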

Security Theater at the Grand Coulee Dam

“Security theater” is the term that Bruce Schneier uses to describe systems that look very exciting and dramatic (and make people feel better) but entirely miss the point of delivering any actual real-world security. The world is full of systems like this, and since 9/11 they’ve been multiplying.

Bruce also recently ran a competition for a “movie plot” security threat — the winner described an operation to fly planes laden with explosives into Grand Coulee Dam.

As it happens, I was recently actually at Grand Coulee Dam as a tourist — one of the many places I visited as I filled in the time between the SRUTI and CEAS academic conferences. Because this is a Federal site, provision was made from the beginning for visitors to see how their tax dollars were spent, and you can go on tours of the “3rd Power House” (an extra part of the dam, added between 1966 and 1974, and housing six of the largest hydroelectric generators ever made).

Until 9/11 you could park on top of the dam itself and wander around on a self-guided tour. Now, since the site is of such immense economic significance, you have to park outside the site and go on guided tours of limited capacity. You walk in for about 800 yards (a big deal for Americans, I understand) and must then go through an airport-style metal detector. You are not allowed to take in backpacks or pointy things — you can, however, keep your shoes on. The tour is very interesting and I recommend it. You get to appreciate the huge scale of the place (the tiny-looking blue generators are actually 33 feet across!), and you go up close to one of the generators as it spins in front of you, powering most of the Northwest and a fair bit of California as well.

The security measures make some sense; although doubtless the place the bad guys would really like to damage is the control center and that isn’t on the tour. However….

… on the other side of the valley, a quarter of a mile from the dam itself, is a “visitor arrival center”. This contains a number of displays about the history of the dam and its construction, and if you have the time, there are films to watch as well. On summer nights they project a massive laser light show from there (a little tacky in places, but they run white water over the dam to project onto, which is deeply impressive). You don’t have to go through any security screening to get into the center. However — and here is the security theater I promised — you cannot take in any camera bags, backpacks etc!

No purses, backpacks, bags, fannypacks, camera cases or packages of any kind allowed in the visitor center.

What’s the threat here? I went to a dozen other visitor centers (in National Parks such as Yellowstone, Grand Teton, Glacier, Mt. Rainier and Crater Lake) that were generally far busier than this one. Terrorists don’t usually blow up museums, and if, deity forbid, they blew up this one, it’s only the laser lights that would go out.

Ignoring the “Great Firewall of China”

The Great Firewall of China is an important tool for the Chinese Government in their efforts to censor the Internet. It works, in part, by inspecting web traffic to determine whether or not particular words are present. If the Chinese Government does not approve of one of the words in a web page (or a web request), perhaps it says “f” “a” “l” “u” “n”, then the connection is closed and the web page will be unavailable — it has been censored.

This user-level effect has been known for some time… but up until now, no-one seems to have looked more closely into what is actually happening (or when they have, they have misunderstood the packet level events).

It turns out [caveat: in the specific cases we’ve closely examined, YMMV] that the keyword detection is not actually being done in large routers on the borders of the Chinese networks, but in nearby subsidiary machines. When these machines detect the keyword, they do not actually prevent the packet containing the keyword from passing through the main router (this would be horribly complicated to achieve while still allowing the router to run at the necessary speed). Instead, the subsidiary machines generate a series of TCP reset packets, which are sent to each end of the connection. When the resets arrive, the endpoints assume they are genuine requests from the other end to close the connection — and obey. Hence the censorship occurs.

However, because the original packets pass through the firewall unscathed, if both of the endpoints completely ignore the firewall’s reset packets, then the connection proceeds unhindered! We’ve done some real experiments on this — and it works just fine!! Think of it as the Harry Potter approach to the Great Firewall — just shut your eyes and walk onto Platform 9¾.

Ignoring resets is trivial to achieve by applying simple firewall rules… and has no significant effect on ordinary operation. If you want to be a little more clever you can examine the hop count (TTL) in the reset packets and determine whether the values are consistent with them arriving from the far end, or whether the value indicates they have come from the intervening censorship device. We would argue that there is much to commend examining TTL values when considering defences against denial-of-service attacks using reset packets. Having operating system vendors provide this functionality as standard would also be of practical use, because Chinese citizens would not need to run special firewall-busting code (which the authorities might attempt to outlaw) but just off-the-shelf software (which the authorities would necessarily have to tolerate).
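The TTL check described above can be sketched in a few lines. This is a simplified heuristic of my own, not code from the paper, and the names are mine: record the remaining TTL on packets that arrived from the genuine peer while the connection was being set up, and treat a reset whose TTL differs markedly from that baseline as likely to have been injected partway along the path.

```python
def reset_looks_injected(reset_ttl: int, peer_ttl: int, slack: int = 2) -> bool:
    """Heuristic check on an incoming TCP RST.
    peer_ttl:  remaining TTL observed on the peer's earlier packets
               (e.g. during the TCP handshake).
    reset_ttl: remaining TTL on the RST just received.
    A censorship device sits fewer hops away than the real endpoint, so
    its forged resets arrive with a noticeably different remaining TTL;
    `slack` tolerates small route changes."""
    return abs(reset_ttl - peer_ttl) > slack
```

A genuine close from the peer arrives with roughly the same TTL as its other packets and is accepted; a reset from a middlebox several hops closer would be flagged and dropped.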

There’s a little more to this story (but not much) and all is revealed in our academic paper (Clayton, Murdoch, Watson) which will be presented at the 6th Workshop on Privacy Enhancing Technologies being held here in Cambridge this week.

NB: There’s also rather more to censorship in China than just the “Great Firewall” keyword detecting system — some sites are blocked unconditionally, and it is necessary to use other techniques, such as proxies, to deal with that. However, these static blocks are far more expensive for the Chinese Government to maintain, and are inherently more fragile and less adaptive to change as content moves around. So there remains real value in exposing the inadequacy of the generic system.

The bottom line, though, is that a great deal of the effectiveness of the Great Firewall depends on systems agreeing that it should work… wasn’t there once a story about the Emperor’s New Clothes?

The Rising Tide: DDoS by Defective Designs and Defaults

Dedicated readers will recall my article about how I tracked down the “DDoS” attack on stratum 1 time servers by various D-Link devices. I’ve now had a paper accepted at the 2nd Workshop on Steps to Reducing Unwanted Traffic on the Internet (SRUTI’06) which runs in California in early July.

The paper (PDF version available here and HTML here) gives rather more details about the problems with the D-Link firmware. More significantly, it puts this incident into context as one of a number of problems suffered by stratum 1 time servers over the past few years AND shows that these time server problems are just one example of a number of incidents involving different types of system that have been “attacked” by defective designs or poorly chosen defaults.

My paper is fairly gloomy about the prospects for improvement. ISPs are unlikely to be interested in terminating customers who are running “reputable” systems which just happen to contribute to a DDoS on some remote system. There’s no evidence that system designers are learning from past mistakes — and the deskilling of program development means that ever more clueless people are involved. Economic and legal approaches don’t seem especially promising — it may have cost D-Link (and Netgear before them) real dollars, but I doubt that the cost has been high enough yet to scare other companies into auditing their systems before they too cause a similar problem.

As to the title… I suggest that if a classic, zombie-originated, DDoS attack is like directing a firehose onto a system; and if a “flash crowd” (or “slashdotting”) is like a flash flood; then the sort of “attack” that I describe is like a steadily rising tide, initially easy to ignore and not very significant, but it can still drown you just the same.

Hence it’s important to make sure that your security approach — be it dams and dikes, swimming costumes and life-jackets, or wetsuits and scuba gear (or of course their Internet anti-DDoS equivalents) — is suitable for dealing with all of these threats.

D-Link settles!

All the fuss about D-Link’s usage of the Danish-based stratum 1 time server seems to have had one good result. Poul-Henning Kamp’s web page has the following announcement this morning:

“D-Link and Poul-Henning Kamp announced today that they have amicably resolved their dispute regarding access to Mr. Kamp’s GPS.Dix.dk NTP Time Server site. D-Link’s existing products will have authorized access to Mr. Kamp’s server, but all new D-Link products will not use the GPS.Dix.dk NTP time server. D-Link is dedicated to remaining a good corporate and network citizen.”

which was nice.

Time will tell if D-Link has arranged their firmware to avoid sending undesirable traffic to other stratum 1 time servers as well, but at least the future well-being of Poul-Henning’s machine is assured.

When firmware attacks! (DDoS by D-Link)

Last October I was approached by Poul-Henning Kamp, a self-styled “Unix guru at large”, and one of the FreeBSD developers. One of his interests is precision timekeeping and he runs a stratum 1 timeserver which is located at DIX, the neutral Danish IX (Internet Exchange Point). Because it provides a valuable service (extremely accurate timing) to Danish ISPs, the charges for his hosting at DIX are waived.

Unfortunately, his NTP server has been coming under constant attack by a stream of Network Time Protocol (NTP) time request packets coming from random IP addresses all over the world. These were disrupting the gentle flow of traffic from the 2000 or so genuine systems that were “chiming” against his master system, and also consuming a very great deal of bandwidth. He was very interested in finding out the source of this denial of service attack — and making it stop!
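For readers unfamiliar with the protocol, each of the offending queries is tiny: a standard (S)NTP client request is just a 48-byte UDP payload whose first byte encodes the leap indicator, version and mode. The sketch below builds one such packet purely for illustration; it is not D-Link’s firmware code, merely the shape of the traffic involved.

```python
import struct

def sntp_client_request() -> bytes:
    """Build the minimal 48-byte SNTP mode-3 (client) request: the first
    byte packs LI=0, version=3, mode=3, and the remaining 47 bytes may
    legitimately all be zero."""
    first_byte = (0 << 6) | (3 << 3) | 3   # LI | VN | Mode = 0x1b
    return struct.pack('!B47x', first_byte)
```

Individually these packets are harmless; the problem is thousands of devices firing them, from hard-coded addresses, at a single stratum 1 server.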

Award winners

Congratulations to Steven J. Murdoch and George Danezis who were recently awarded the Computer Laboratory Lab Ring (the local alumni association) award for the “most notable publication” (that’s notable as in jolly good) for the past year, written by anyone in the whole lab.

Their paper, “Low cost traffic analysis of Tor”, was presented at the 2005 IEEE Symposium on Security and Privacy (Oakland 2005). It demonstrates a feasible attack, within the designers’ threat model, on the anonymity provided by Tor, the second-generation onion routing system.

George was recently back in Cambridge for a couple of days (he’s currently a post-doc visiting fellow at the Katholieke Universiteit Leuven) so we took a photo to commemorate the event (see below). As it happens, Steven will be leaving us for a while as well, to work as an intern at Microsoft Research for a few months… one is reminded of the old joke about the Scotsman coming south of the border and thereby increasing the average intelligence of both countries 🙂

George Danezis and Steven J. Murdoch, most notable publication 2006

Towards a market price for insecurity

There’s been a certain amount of research into the value of security holes in the past few years (for a starter bibliography see the “Economics of vulnerabilities” section on Ross Anderson’s “Economics and Security Resource Page”).

Both TippingPoint and iDefense, who currently run vulnerability markets for zero-day exploits, are somewhat coy about saying what they currently pay (and they both have frequent-contributor programmes to try to persuade people to stick with one buyer, which will distort the market).

The idea is that the firms will bid for the vulnerability, pay the finder (who will keep it quiet) and then work with the vendor to get the hole fixed. In the meantime the firm’s customers will get protection (maybe by a firewall rule) for the new threat — which should attract more customers, and will hopefully pay for buying the vulnerabilities in the first place. The rest of the world gets to hear about it when the vendor finally ships a fix in the form of patches.

It was reported that when TippingPoint came in (giving the impression that they’d be paying out various multiples of $1000) iDefense promptly indicated they’d be doubling what they paid… which one source indicated was usually around $300 to $1000. So competition seems to have affected the market; but the prices paid are still quite low.

However, last December eWEEK reported that some enterprising Russians were offering a 0-day exploit for the Microsoft WMF vulnerability for $4,000 (and it might not have been exclusive, they might sell it to several people).

And now — until the end of March — iDefense are offering an extra $10,000 on top of what they’d normally pay if, when Microsoft eventually issue a patch, they label the vulnerability as “critical” (viz: you could use it to construct a worm that runs without user interaction).

eWEEK have an interesting article on this, the quotes in which deserve some attention for the (non)grasp of economics that appears to be involved. First off they quote Microsoft as saying “We do not believe that offering compensation for vulnerability information is the best way [researchers] can help protect customers”. That’s an interesting viewpoint — perhaps they will be submitting a paper to support their view to WEIS 2006?

eWEEK say (they don’t have an exact quote) that Michael Sutton of iDefense “dismissed the notion that paying for vulnerabilities helps to push up the price for hackers who sell flaws on the illegal underground markets”. That suggests either a market in which communication of pricing information is extremely poor; or that Sutton has a new economic theory that will influence the Nobel committee!

In the same article, Peter Mell from NIST is quoted as saying it was “unfair” to concentrate on a single vendor (though I expect iDefense chose Microsoft for their market share and not by tossing a coin!). He was also apparently concerned about the influence on Bill Gates’ fortune, “A third party with a lot of money could cause stock price shifts if they want to”. That’s just “Stock Exchange Operations 101” so I think we can discount that as a specific worry (though WEIS 2005 attendees will of course recall that security holes do affect share prices).

Complexities in criminalising denial of service attacks

Last autumn I wrote a background paper on “Complexities in criminalising denial of service attacks” for the Internet Crime Forum (ICF) Legal subgroup. The idea was to give the lawyers some understanding of what DoS and DDoS attacks were all about, and how it can be hard to pin down concepts such as authorisation when one looks at how we use Internet resources today.

The Home Office has now brought forward the Police and Justice Bill, which contains amendments to Section 3 of the Computer Misuse Act 1990 to deal (they hope) with denial-of-service attacks. Thus events have overtaken the document, so there is little value in progressing it through the ICF procedures needed to make it an Official Publication. Hence I’ve made it available on my own website, to provide a background resource for those considering whether the Home Office have got it right!

EarthLink has just 31 challenge-response CAPTCHAs

EarthLink, the US ISP, provides its users with a number of spam blocking and filtering systems. One of these systems, deployed since 2003 or so, is called “Suspect Email Blocking” and is one of those tedious and ineffective “Challenge-Response” systems. They might have made sense once, but now they just send out their challenges to the third parties whose identities have been stolen by the spammers.

Since the spammers have been stealing my identity a LOT recently — and since EarthLink is failing to detect their emails as spam — I have received several hundred of these Challenge-Response emails 🙁 Effectively, EarthLink customers are dumping their spam-filtering costs onto me.

Well, I’m now mad as hell and not going to take it any more. So I’ve been responding to these challenges, and whenever possible I’ve been sending along a message that indicates the practical effect of the system. Of course this means that the spam will be delivered (and the forged email address will be whitelisted in future), which is hardly what is desired! Since this should be quite noticeable, if everyone were to spend a few minutes each day responding to the challenges then Challenge-Response systems would die out overnight! So please join in!!

However, responding is rather tedious (the idea, after all, is that the spammers won’t be able to afford to do it — though in practice they could keep sending their more profitable spam by using labour from the Third World). To avoid this tedium I’ve been working on automating my responses. However, the EarthLink web page on which you respond contains a visual CAPTCHA — specifically to prevent automatic responses to the challenges. Nevertheless, I got a lot slicker at answering the questions when I wrote some Perl and put up a little Tk widget to collect the answers to the CAPTCHAs.

Tk widget for EarthLink CAPTCHAs

The idea was to move on to some fancy image processing, since there’s been a lot of success at this (see here and here for starters)… However, that won’t be necessary. It turns out, nearly 300 challenges later, that EarthLink only have 31 CAPTCHAs in total… although since some turn up a great deal more rarely than others, it may be that there are a few more still to be collected!
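The post doesn't say exactly how the pool was counted, but since the images repeat byte-for-byte there is no need for image processing at all: hashing each downloaded image is enough. A minimal sketch, assuming you have the raw image bytes in hand (the helper name is mine):

```python
import hashlib
from collections import Counter

def tally_captchas(images):
    """Count how often each distinct CAPTCHA image turns up, keyed by
    the SHA-1 of its raw bytes: the number of distinct keys is the size
    of the pool, and the counts show which images are rare."""
    return Counter(hashlib.sha1(img).hexdigest() for img in images)
```

Running something like this over a few hundred challenge pages would reveal a small fixed pool (here, 31 entries) directly.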

[The 31 distinct EarthLink CAPTCHA images, numbered 01–31]

For rather more detail, and the current totals for each CAPTCHA (some have turned up nearly 30 times, some just once) please see the detailed account which I’ve placed on my own webspace.

By the way: If you’re an EarthLink user reading this — then please turn OFF “Suspect Email Blocking”! You’re just annoying everyone else 🙁