All posts by Steven J. Murdoch

About Steven J. Murdoch

I am Professor of Security Engineering and Royal Society University Research Fellow in the Information Security Research Group of the Department of Computer Science at University College London (UCL), and a member of the UCL Academic Centre of Excellence in Cyber Security Research. I am also a bye-fellow of Christ’s College, Cambridge, Innovation Security Architect at OneSpan, a member of the Tor Project, and a Fellow of the IET and BCS. I teach on the UCL MSc in Information Security. Further information and my papers on information security research are on my personal website. I also blog about information security research and policy on Bentham's Gaze.

The two faces of Privila

We have discussed the Privila network on Light Blue Touchpaper before. Richard explained how Privila solicit links and I described how to map the network. Since then, Privila’s behaviour has changed. Previously, their pages were dominated by adverts, but included articles written by unpaid interns. Now the articles have been dropped completely, leaving more room for the adverts.

This change would appear to harm Privila’s search rankings — the articles, carefully optimized to include desirable keywords, would no longer be indexed. However, when Google downloads the page, the articles re-appear and the adverts are gone. The web server appears to be configured to serve different pages depending on the “User-Agent” header in the HTTP request.

For example, here’s how soccerlove.com appears in Firefox, Netscape, Opera and Internet Explorer — lots of adverts, and no article:
Soccerlove (Firefox)

In contrast, by setting the browser’s user-agent to match that of Google’s spider, the page looks very different — a prominent article and no adverts:
Soccerlove (Google)

Curiously, the Windows Live Search and Yahoo! spiders are presented with an almost empty page: just a header, but neither adverts nor articles (see update 2). You can try this yourself by using the User Agent Switcher Firefox extension and a list of user-agent strings.
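A quick way to reproduce this kind of test from a script, rather than a browser extension, is to fetch the same URL twice with different User-Agent headers and compare the responses. The sketch below is illustrative only: the user-agent strings are examples, and the Privila sites have long since changed.

```python
import urllib.request

FIREFOX = ("Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.8.1.12) "
           "Gecko/20080201 Firefox/2.0.0.12")
GOOGLEBOT = "Googlebot/2.1 (+http://www.google.com/bot.html)"

def fetch(url: str, user_agent: str) -> str:
    """Fetch a page while presenting the given User-Agent header."""
    request = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8", errors="replace")

# A site that cloaks will serve noticeably different content to the
# two requests; an honest site will return (near-)identical pages.
# as_browser = fetch("http://www.soccerlove.com/", FIREFOX)
# as_spider = fetch("http://www.soccerlove.com/", GOOGLEBOT)
# print(len(as_browser), len(as_spider))
```

The network calls are left commented out, since the point is simply that the only variable between the two requests is the header.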

I expect the interns who wrote these articles will be displeased that their articles are hidden from view. Google will doubtless be interested too, since their webmaster guidelines recommend against such behaviour. BMW and Ricoh were delisted for similar reasons. Fortunately for Google, I’ve already shown how to build a complete list of Privila’s sites.

Update 1 (2008-03-08):
It looks like Google has removed the Privila sites from their index. For example, searches for soccerlove.com, ammancarpets.com, and canadianbattery.com all return zero results.

Update 2 (2008-03-11):
Privila appear to have fixed the problem that led to Yahoo! and Windows Live Search bots being presented with a blank page. Both of these spiders are being shown the same content as Google’s — the article with no adverts. Normal web browsers are still being sent adverts with no article.

Update 3 (2008-03-11):
Shortly after the publication of an article about Privila’s browser tricks on The Register, Privila have restored articles on the pages shown to normal web browsers. The pages presented to search engines are still not identical: they do not contain the adverts.

Relay attacks on card payment: vulnerabilities and defences

At this year’s Chaos Communication Congress (24C3), I presented some work I’ve been doing with Saar Drimer: implementing a smart card relay attack and demonstrating that it can be prevented by distance bounding protocols. My talk (abstract) was filmed and the video can be found below. For more information, we produced a webpage and the details can be found in our paper.

[ slides (PDF 9.6M) | video (BitTorrent — MPEG4, 106M) ]

Update 2008-01-15:
Liam Tung from ZDNet Australia has written an article on my talk: Bank card attack: Only Martians are safe.

Other highlights from the conference…

Index on Censorship: Shifting Borders

The latest issue of the journal “Index on Censorship” is dedicated to the topic of Internet censorship and features an article, “Shifting Borders”, by Ross Anderson and me. In it, we argue that it is wrong to claim that the Internet is free from barriers. They exist, and while they often align with national boundaries, they are, we hope, lower.

However, the changing nature of the end-to-end principle is increasing the significance of barriers that stem from industry structure — which companies host controversial information, where they do business, which markets they compete in, and which corporate partnerships are involved. The direction these take will have a significant impact on the scale of Internet censorship.

The rest of the journal is well worth reading, with authors including Xeni Jardin, David Weinberger and Jimmy Wales. I can especially recommend taking a look at Nart Villeneuve’s article, “Evasion Tactics”, also published on his blog. Unfortunately access to the full online version is restricted to subscribers.

Covert channel vulnerabilities in anonymity systems

My PhD thesis — “Covert channel vulnerabilities in anonymity systems” — has now been published:

The spread of wide-scale Internet surveillance has spurred interest in anonymity systems that protect users’ privacy by restricting unauthorised access to their identity. This requirement can be considered as a flow control policy in the well established field of multilevel secure systems. I apply previous research on covert channels (unintended means to communicate in violation of a security policy) to analyse several anonymity systems in an innovative way.

One application for anonymity systems is to prevent collusion in competitions. I show how covert channels may be exploited to violate these protections and construct defences against such attacks, drawing from previous covert channel research and collusion-resistant voting systems.

In the military context, for which multilevel secure systems were designed, covert channels are increasingly eliminated by physical separation of interconnected single-role computers. Prior work on the remaining network covert channels has been solely based on protocol specifications. I examine some protocol implementations and show how the use of several covert channels can be detected and how channels can be modified to resist detection.

I show how side channels (unintended information leakage) in anonymity networks may reveal the behaviour of users. While drawing on previous research on traffic analysis and covert channels, I avoid the traditional assumption of an omnipotent adversary. Rather, these attacks are feasible for an attacker with limited access to the network. The effectiveness of these techniques is demonstrated by experiments on a deployed anonymity network, Tor.

Finally, I introduce novel covert and side channels which exploit thermal effects. Changes in temperature can be remotely induced through CPU load and measured by their effects on crystal clock skew. Experiments show this to be an effective attack against Tor. This side channel may also be usable for geolocation and, as a covert channel, can cross supposedly infallible air-gap security boundaries.

This thesis demonstrates how theoretical models and generic methodologies relating to covert channels may be applied to find practical solutions to problems in real-world anonymity systems. These findings confirm the existing hypothesis that covert channel analysis, vulnerabilities and defences developed for multilevel secure systems apply equally well to anonymity systems.

Steven J. Murdoch, Covert channel vulnerabilities in anonymity systems, Technical report UCAM-CL-TR-706, University of Cambridge, Computer Laboratory, December 2007.

Privacy Enhancing Technologies Symposium (PETS 2008)

I am on the program committee for the Privacy Enhancing Technologies Symposium (previously the PET Workshop), which this year will be held in Leuven, Belgium, 23–25 July 2008. PETS is one of the leading venues for research in privacy, so if you have any relevant research, I thoroughly recommend submitting it.

In addition to the main paper session, a new feature this year is HotPETS, which gives the opportunity for short presentations on new and exciting ideas that are potentially not yet mature enough for publication. As usual, proposals for panels are also invited.

The deadline for submissions is 19 February 2008 (except for HotPETS, which is 11 April 2008). More details can be found in the Call For Papers.

Theme is back

Dan Cvrček has very kindly ported over the old Blix-based theme to be compatible with WordPress 2.3 (and also, hopefully, made it more maintainable). There are a few bugs to be ironed out (for example, the Authors and About pages don’t work yet), but these are being worked on. If you spot any other problems, please leave a comment on this post, or email lbt-admin @cl.cam.ac.uk.

Update 2007-11-28: Authors and About should now work.

WordPress cookie authentication vulnerability

In my previous post, I discussed how I analyzed the recent attack on Light Blue Touchpaper. What I did not disclose was how the attacker gained access in the first place. It turned out to incorporate a zero-day exploit, which is why I haven’t mentioned it until now.

As a first step, the attacker exploited an SQL injection vulnerability. When I noticed the intrusion, I upgraded WordPress, then restored the database and files from off-server backups. WordPress 2.3.1 was released less than a day before my upgrade, and was supposed to fix this vulnerability, so I presumed I would be safe.

I was therefore surprised when the attacker broke in again the following day (and created himself an administrator account). After further investigation, I discovered that he had logged into the “admin” account — nobody knows the password for this because I set it to a long random string. Neither I nor the other administrators ever used that account, so it couldn’t have been XSS or another cookie-stealing attack. How was this possible?

From examining the WordPress authentication code, I discovered that the password hashing was backwards! While the attacker couldn’t have obtained the password from the hash stored in the database, simply hashing the database entry a second time generated a valid admin cookie. On Monday I posted a vulnerability disclosure (assigned CVE-2007-6013) to the BugTraq and Full-Disclosure mailing lists, describing the problem in more detail.
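The disclosure gives the full details; as a minimal sketch, the vulnerable scheme stored md5(password) in the database and put md5(md5(password)) in the authentication cookie, so anyone who can read the stored hash (e.g. via SQL injection) can compute a valid cookie value without ever learning the password. The variable names below are mine, not WordPress’s, and the real cookie also carries the username:

```python
import hashlib

def md5_hex(data: str) -> str:
    return hashlib.md5(data.encode()).hexdigest()

password = "correct horse battery staple"   # never known to the attacker
stored_hash = md5_hex(password)             # leaked from the user database
forged_cookie_hash = md5_hex(stored_hash)   # hash of the hash

# On validation the server hashes the stored value once more and
# compares it to the cookie, so the forged value authenticates.
assert forged_cookie_hash == md5_hex(md5_hex(password))
```

The fix is to make the cookie value depend on a secret the database does not contain, so a database leak alone cannot be replayed as a credential.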

It is disappointing to see that people are still getting this type of thing wrong. In their 1978 summary, Morris and Thompson describe the importance of one-way hashing and password salting (neither of which WordPress does properly). The issue is currently being discussed on LWN.net and the wp-hackers mailing list. Hopefully some progress will be made at getting it right this time around.

Google as a password cracker

One of the steps used by the attacker who compromised Light Blue Touchpaper a few weeks ago was to create an account (which he promoted to administrator; more on that in a future post). I quickly disabled the account, but while doing forensics, I thought it would be interesting to find out the account password. WordPress stores raw MD5 hashes in the user database (despite my recommendation to use salting). As with any respectable hash function, it is believed to be computationally infeasible to discover the input of MD5 from an output. Instead, someone would have to try out all possible inputs until the correct output is discovered.

So, I wrote a trivial Python script which hashed all dictionary words, but that didn’t find the target (I also tried adding numbers to the end). Then I switched to a Russian dictionary (because the comments in the shell code the attacker installed were in Russian), but that didn’t work either. I could have found or written a better password cracker, which varies the case of letters and makes common substitutions (e.g. o → 0, a → 4), but that would have taken more time than I wanted to spend. I could also have improved efficiency with a rainbow table, but this needs a large database which I didn’t have.
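The script was along these lines (a reconstruction, not the original): hash each candidate word, optionally with digits appended, and compare against the unsalted MD5 hash from the database.

```python
import hashlib
from typing import Iterable, Optional

def crack_md5(target: str, words: Iterable[str]) -> Optional[str]:
    """Return the dictionary word (with an optional numeric suffix)
    whose unsalted MD5 hash matches target, or None if none match."""
    suffixes = [""] + [str(n) for n in range(100)]
    for word in words:
        for suffix in suffixes:
            candidate = word + suffix
            if hashlib.md5(candidate.encode()).hexdigest() == target:
                return candidate
    return None

# Typical use, with a system word list:
# with open("/usr/share/dict/words") as f:
#     print(crack_md5(target_hash, (line.strip() for line in f)))
```

A proper salt defeats exactly this attack: the same password in two databases would hash to different values, so one precomputed dictionary no longer covers them all.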

Instead, I asked Google. I found, for example, a genealogy page listing people with the surname “Anthony”, and an advert for a house, signing off “Please Call for showing. Thank you, Anthony”. And indeed, the MD5 hash of “Anthony” was the database entry for the attacker. I had discovered his password.

In both webpages, the target hash was in a URL. This makes a lot of sense — I’ve even written code which does the same. When I needed to store a file, indexed by a key, a simple option was to make the filename the key’s MD5 hash. This avoids the need to escape any potentially dangerous user input and is very resistant to accidental collisions. If there are too many entries to store in a single directory, creating a subdirectory for each hash prefix gives an even distribution of files. MD5 is quite fast, and while it’s unlikely to be the best option in all cases, it is an easy solution which works pretty well.
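As a sketch of that scheme (the function name and two-character prefix are my own choices, for illustration):

```python
import hashlib
from pathlib import Path

def path_for_key(root: Path, key: str, prefix_len: int = 2) -> Path:
    """Map an arbitrary key to a filesystem path via its MD5 hash.
    Hashing sidesteps escaping of untrusted key characters, and the
    hex-prefix subdirectory spreads files evenly across directories."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return root / digest[:prefix_len] / digest
```

Note the trade-off this post illustrates: if the hashed key is something guessable and the resulting filename ends up in a public URL, search engines that index the URL effectively publish a reverse-lookup table for the hash.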

Because of this technique, Google is acting as a hash pre-image finder and, more importantly, finding hashes of things that people have hashed before. Google is doing what it does best: storing large databases and searching them. I doubt, however, that they envisaged this use. 🙂

Upgrade and new theme

Regular readers may have noticed that Light Blue Touchpaper was down most of today. This was due to the blog being compromised through several WordPress vulnerabilities. I’ve now cleaned this up, restored from last night’s backups and upgraded WordPress. A downside is that our various customizations need substantial modification before working again, most notably the theme, which is based on Blix and has not been updated since WordPress 1.5. Email also will not work due to this bug. I am working on a fix to this and other problems, so please accept my apologies in the meantime.

Embassy email accounts breached by unencrypted passwords

When it rains, it pours. Following the fuss over the Storm worm impersonating Tor, today Wired and The Register are covering the story of Dan Egerstad, who intercepted embassy email account passwords by setting up five Tor exit nodes, then published the results online. People have sniffed passwords on Tor before, and one even published a live feed. However, the sensitivity of embassies as targets, and the initial mystery over how the passwords were snooped, helped drum up media interest.

That unencrypted traffic can be read by Tor exit nodes is an unavoidable fact: if the destination does not accept encrypted connections, there is nothing Tor can do to change this. The download page has a big warning, recommending that users adopt end-to-end encryption. In some cases this might not be possible, for example when browsing sites which do not support SSL, but for downloading email, not using encryption with Tor is inexcusable.

Looking at who owns the IP addresses of the compromised email accounts, I can see that they are mainly commercial ISPs, generally in the country where the embassy is located, so probably set up by the individual embassy and not subject to any server-imposed security policies. Even so, it is questionable whether such accounts should be used for official business, and it is not hard to find providers which support encrypted access.

The exceptions are Uzbekistan and Iran, whose servers are controlled by their respective Ministries of Foreign Affairs, so I’m surprised that secure access is not mandated (even my university requires this). I did note that the passwords of the Uzbek accounts are very good, so they might well be allocated centrally according to a reasonable password policy. In contrast, the Iranian passwords are simply the name of the embassy, so guessable not only for these accounts but for any others too.

In general, if you are sending confidential information over the Internet unencrypted you are at risk, and Tor does not change this fact, but it does move those risks around. Depending on the nature of the secrets, this could be for better or for worse. Without Tor, data can be intercepted near the server, near the client and also in the core of the Internet; with Tor, data is encrypted near the client but can be seen by the exit node.

Users of unknown Internet cafés or of poorly secured wireless are at risk of interception near the client. Sometimes there is motivation to snoop traffic there but not at the exit node. For example, people may be curious what websites their flatmates browse, but it is not interesting to know that an anonymous person is browsing a controversial site. This is why, at conferences, I tunnel my web browsing via Cambridge. I know that without end-to-end encryption my data can be intercepted, but the sysadmins at Cambridge have far less incentive to misbehave than some joker sitting behind me.

Tor has similar properties, but when it is used with unencrypted data the risks need to be carefully evaluated. When collecting email, be it over Tor, over wireless, or via any other untrustworthy medium, end-to-end encryption is essential. That embassies, which are supposed to be security conscious, do not appreciate this is disappointing to learn.

Although I am a member of the Tor project, the views expressed here are mine alone and not those of Tor.