All posts by Steven J. Murdoch

About Steven J. Murdoch

I am Professor of Security Engineering and Royal Society University Research Fellow in the Information Security Research Group of the Department of Computer Science at University College London (UCL), and a member of the UCL Academic Centre of Excellence in Cyber Security Research. I am also a bye-fellow of Christ’s College, Innovation Security Architect at OneSpan, Cambridge, a member of the Tor Project, and a Fellow of the IET and BCS. I teach on the UCL MSc in Information Security. Further information and my papers on information security research are on my personal website. I also blog about information security research and policy on Bentham's Gaze.

Analysis of the Storm Javascript exploits

On Monday I formally joined the Tor project and it certainly has been an interesting week. Yesterday, on both the Tor internal and public mailing lists, we received several reports of spam emails advertising Tor. Of course, this wasn’t anything to do with the Tor project and the included link was to an IP address (it varied across emails). On visiting this webpage (below), the user was invited to download tor.exe which was not Tor, but instead a trojan which if run would recruit a computer into the Storm (aka Peacomm and Nuwar) botnet, now believed to be the world’s largest supercomputer.

Spoofed Tor download site

Ben Laurie, amongst others, has pointed out that this attack shows that Tor must have a good reputation for it to be considered worthwhile to impersonate. So while dealing with this incident has been tedious, it could be considered a milestone in Tor’s progress. It has also generated some publicity on a few blogs. Tor has long promoted procedures for verifying the authenticity of downloads, and this attack justifies the need for such diligence.

One good piece of advice, often mentioned in relation to the Storm botnet, is that recipients of spam email should not click on the link. This is because there is malicious Javascript embedded in the webpage, intended to exploit web-browser vulnerabilities to install the trojan without the user even having to click on the download link. What I did not find much discussion of is how the exploit code actually worked.

Notably, the malware distribution site will send you different Javascript depending on the user-agent string sent by the browser. Some get Javascript tailored for vulnerabilities in that browser/OS combination while the rest just get the plain social-engineering text with a link to the trojan. I took a selection of popular user-agent strings, and investigated what I got back on sending them to one of the malware sites.
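The probing procedure can be sketched as follows. This is my own illustration rather than the tooling actually used in the investigation: the user-agent strings are a representative selection, and the `fetch` callback is a hypothetical stand-in for an HTTP request with a spoofed User-Agent header.

```python
import hashlib

# A selection of user-agent strings to probe with (illustrative, not the
# exact set used in the original investigation).
USER_AGENTS = [
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6",
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
    "Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en) AppleWebKit/419 Safari/419.3",
    "Opera/9.22 (Windows NT 5.1; U; en)",
]

def classify_responses(fetch, url, user_agents=USER_AGENTS):
    """Fetch `url` once per user-agent and group the agents by the
    SHA-1 hash of the page returned, revealing which browser/OS
    combinations are served which payload. `fetch(url, user_agent)`
    must return the response body as bytes."""
    groups = {}
    for ua in user_agents:
        digest = hashlib.sha1(fetch(url, ua)).hexdigest()
        groups.setdefault(digest, []).append(ua)
    return groups
```

Keeping the fetch function injectable means the classification logic can be exercised against saved copies of the pages, without repeatedly contacting a live malware site.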

Continue reading Analysis of the Storm Javascript exploits

Mapping the Privila network

Last week, Richard Clayton described his investigation of the Privila internship programme. Unlike link farms, Privila doesn’t link to its own websites. Instead, it apparently depends solely on the links made to each site before Privila took over the domain name, and on new ones solicited through spamming. This means that normal mapping techniques, which just follow links, will not uncover Privila sites. This might be one reason they took this approach, or perhaps it was just to avoid being penalized by search engines.

The mapping approach which I implemented, as suggested by Richard, was to exploit the fact that Privila authors typically write for several websites. So, starting with one seed site, you can find more by searching for the names of authors. I used the Yahoo search API to automate this process, since the Google API has been discontinued. From the new set of websites discovered, the list of authors is extracted, allowing yet more sites to be found. These steps are repeated until no new sites are discovered (effectively a breadth first search).
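The steps above amount to a breadth-first search over a bipartite site/author graph, which can be sketched as below. The two callbacks are hypothetical stand-ins: `authors_of` for extracting the author list from a site, and `sites_by_author` for querying a search API (Yahoo, in the original investigation) for sites an author writes for.

```python
from collections import deque

def map_network(seed_site, authors_of, sites_by_author):
    """Breadth-first search over the site/author graph, starting
    from one seed site. Returns the set of sites and the set of
    authors discovered once no new sites turn up."""
    sites, authors = {seed_site}, set()
    queue = deque([seed_site])
    while queue:
        site = queue.popleft()
        for author in authors_of(site):
            if author in authors:
                continue  # this author's sites were already fetched
            authors.add(author)
            for found in sites_by_author(author):
                if found not in sites:
                    sites.add(found)
                    queue.append(found)
    return sites, authors
```

Deduplication happens naturally here, since each site and each author is visited at most once.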

The end result was that starting from bustem.com, I found 294 further sites, with a total of 3 441 articles written by 124 authors (these numbers are lower than the ones in the previous post since duplicates have now been properly removed). There might be even more undiscovered sites, with a disjoint set of authors, but the current network is impressive in itself.

I have implemented an interactive Java applet visualization (using the Prefuse toolkit) so you can explore the network yourself. Both the source code, and the data used to construct the graph can also be downloaded.

Screenshot of PrivilaView applet

Electoral Commission releases e-voting and e-counting reports

Today, the Electoral Commission released their evaluation reports on the May 2007 e-voting and e-counting pilots held in England. Each of the pilot areas has a report from the Electoral Commission and the e-counting trials are additionally covered by technical reports from Ovum, the Electoral Commission’s consultants. Each of the changes piloted receives its own summary report: electronic counting, electronic voting, advanced voting and signing in polling stations. Finally, there are a set of key findings, both from the Electoral Commission and from Ovum.

Richard Clayton and I acted as election observers for the Bedford e-counting trial, on behalf of the Open Rights Group, and our discussion of the resulting report can be found in an earlier post. I also gave a talk on a few of the key points.

The Commission’s criticism of e-counting and e-voting was scathing; concerning the latter saying that the “security risk involved was significant and unacceptable.” They recommend against further trials until the problems identified are resolved. Quality assurance and planning were found to be inadequate, predominantly stemming from insufficient timescales. In the case of the six e-counting trials, three were abandoned, two were delayed, leaving only one that could be classed as a success. Poor transparency and value for money are also cited as problems. More worryingly, the Commission identify a failure to learn from the lessons of previous pilot programmes.

The reports covering the Bedford trials largely match my personal experience of the count and add some details which were not available to the election observers (in particular, explaining that the reason for some of the system shutdowns was to permit re-configuration of the OCR algorithms, and that due to delays at the printing contractor, no testing with actual ballot papers was performed). One difference is that the Ovum report was more generous than the Commission report regarding the candidate perceptions, saying “Apart from the issue of time, none of the stakeholders questioned the integrity of the system or the results achieved.” This discrepancy could be because the Ovum and Commission representatives left before the midnight call for a recount, by candidates who had lost confidence in the integrity of the results.

There is much more detail to the reports than I have been able to summarise here, so if you are interested in electronic elections, I suggest you read them yourselves.

The Open Rights Group has in general welcomed the Electoral Commission’s report, but feels that the inherent problems resulting from the use of computers in elections have not been fully addressed. The results of the report have also been covered by the media, such as the BBC: “Halt e-voting, says election body” and The Guardian: “Electronic voting not safe, warns election watchdog”.

Economics of Tor performance

Currently the performance of the Tor anonymity network is quite poor. This problem is frequently stated as a reason for people not using anonymizing proxies, so improving performance is a high priority for the developers. There are only about 1 000 Tor nodes and many are on slow Internet connections, so in aggregate there is about 1 Gbit/s shared between 100 000 or so users. One way to improve the experience of Tor users is to increase the number of Tor nodes (especially high-bandwidth ones). Some means to achieve this goal are discussed in Challenges in Deploying Low-Latency Anonymity, but here I want to explore what will happen when Tor’s total bandwidth increases.

If Tor’s bandwidth doubled tomorrow, the naïve hypothesis is that users would experience twice the throughput. Unfortunately this is not true, because it assumes that the number of users does not vary with bandwidth available. In fact, as the supply of the Tor network’s bandwidth increases, there will be a corresponding increase in the demand for bandwidth from Tor users. This fact will apply just as well for other networks, but for the purposes of this post, I’ll use Tor as an example. Simple economics shows that performance of Tor is controlled by how the number of users scales with available bandwidth, which can be represented by a demand curve.

I don’t claim this is a new insight; in fact between me starting this draft and now, Andreas Pfitzmann made a very similar observation while answering a question following the presentation of Performance Comparison of Low-Latency Anonymisation Services from a User Perspective at the PET Symposium. He said, as I recall, that the performance of the anonymity network is the slowest tolerable speed for people who care about their privacy. Despite this, I couldn’t find anyone who had written a succinct description anywhere, perhaps because it is too obvious. Equally, I have heard the naïve version stated occasionally, so I think it’s helpful to publish something people can point at. The rest of this post will discuss the consequences of modelling Tor user behaviour in this way, and the limitations of the technique.
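As a purely illustrative sketch (not a model from this post), the equilibrium can be computed numerically: per-user throughput settles at the point where the bandwidth consumed by the users willing to tolerate that speed exactly equals the supply. The demand curve below is an invented example, as are the bandwidth figures.

```python
def equilibrium_throughput(supply, demand, lo=1e-6, hi=None, iters=60):
    """Find per-user throughput t such that t * demand(t) = supply.
    `demand(t)` is the number of users who will use the network at
    per-user throughput t (assumed non-decreasing in t), so
    t * demand(t) is monotone and bisection converges."""
    if hi is None:
        hi = supply  # upper bound: one user getting everything
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mid * demand(mid) > supply:
            hi = mid  # over-subscribed: per-user throughput must fall
        else:
            lo = mid
    return lo

# Invented demand curve: number of users tolerating throughput t (kB/s)
demand = lambda t: 100_000 * t / (t + 50)
```

Running this with a supply of 125 000 kB/s (roughly 1 Gbit/s) and then double that shows the naïve hypothesis failing: doubling the bandwidth raises per-user throughput by well under a factor of two, because the extra capacity attracts extra users.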

Continue reading Economics of Tor performance

The role of software engineering in electronic elections

Many designs for trustworthy electronic elections use cryptography to assure participants that the result is accurate. However, it is a system’s software engineering that ensures a result is declared at all. Both good software engineering and cryptography are thus necessary, but so far cryptography has drawn more attention. In fact, the software engineering aspects could be just as challenging, because election systems have a number of properties which make them almost a pathological case for robust design, implementation, testing and deployment.

Currently deployed systems are lacking in both software robustness and cryptographic assurance — as evidenced by the English electronic election fiasco. Here, in some cases the result was late and in others the electronic count was abandoned due to system failures resulting from poor software engineering. However, even where a result was returned, the black-box nature of auditless electronic elections brought the accuracy of the count into doubt. In the few cases where cryptography was used it was poorly explained and didn’t help verify the result either.

End-to-end cryptographically assured elections have generated considerable research interest and the resulting systems, such as Punchscan and Prêt à Voter, allow voters to verify the result while maintaining their privacy (provided they understand the maths, that is — the rest of us will have to trust the cryptographers). These systems will permit an erroneous result to be detected after the election, whether caused by maliciousness or more mundane software flaws. However, should this occur, or if no result is returned at all, the election may need to fall back on paper backups or even be re-run — a highly disruptive and expensive failure.

Good software engineering is necessary but, in the case of voting systems, may be especially difficult to achieve. In fact, such systems have more in common with the software behind rocket launches than with conventional business productivity software. We should thus expect correspondingly high costs and, despite all this extra effort, that the occasional catastrophe will still occur. The remainder of this post will discuss why I think this is the case, and how manually-counted paper ballots circumvent many of these difficulties.

Continue reading The role of software engineering in electronic elections

Recent talks: Chip & PIN, traffic analysis, and voting

In the past couple of months, I’ve presented quite a few talks, and in the course of doing so, travelled a lot too (Belgium and Canada last month; America and Denmark still to come). I’ve now published my slides from these talks, which might also be of interest to Light Blue Touchpaper readers, so I’ll summarize the contents here.

Two of the talks were on Chip & PIN, the UK deployment of EMV. The first presentation — “Chip and Spin” — was for the Girton village Neighbourhood Watch meeting. Girton was hit by a spate of card-cloning, eventually traced back to a local garage, so they invited me to give a fairly non-technical overview of the problem. The slides served mainly as an introduction to a few video clips I showed, taken from TV programmes in which I participated. [slides (PDF 1.1M)]

The second Chip & PIN talk was to the COSIC research group at K.U. Leuven. Due to the different audience, this presentation — “EMV flaws and fixes: vulnerabilities in smart card payment systems” — was much more technical. I summarized the EMV protocol, described a number of weaknesses which leave EMV open to attack, along with corresponding defences. Finally, I discussed the more general problem with EMV — that customers are in a poor position to contest fraudulent transactions — and how this situation can be mitigated. [slides (PDF 1.4M)]

If you are interested in further details, much of the material from both of my Chip & PIN talks is discussed in papers from our group, such as “Chip and SPIN”, “The Man-in-the-Middle Defence” and “Keep Your Enemies Close: Distance bounding against smartcard relay attacks”.

Next I went to Ottawa for the PET Workshop (now renamed the PET Symposium). Here, I gave three talks. The first was for a panel session — “Ethics in Privacy Research”. Since this was a discussion, the slides aren’t particularly interesting but it will hopefully be the subject of an upcoming paper.

Then I gave a short talk at WOTE, on my experiences as an election observer. I summarized the conclusions of the Open Rights Group report (released the day before my talk) and added a few personal observations. Richard Clayton discussed the report in the previous post. [slides (PDF 195K)]

Finally, I presented the paper written by Piotr Zieliński and me — “Sampled Traffic Analysis by Internet-Exchange-Level Adversaries”, which I previously mentioned in a recent post. In the talk I gave a graphical summary of the paper’s key points, which I hope will aid in understanding the motivation of the paper and the traffic analysis method we developed. [slides (PDF 2.9M)]

Sampled Traffic Analysis by Internet-Exchange-Level Adversaries

Users of the Tor anonymous communication system are at risk of being tracked by an adversary who can monitor both the traffic entering and leaving the network. This weakness is well known to the designers and currently there is no known practical way to resist such attacks, while maintaining the low latency demanded by applications such as web browsing. For this reason, it seems intuitively clear that when selecting a path through the Tor network, it would be beneficial to select the nodes to be in different countries. Hopefully government-level adversaries will find it problematic to track cross-border connections as mutual legal assistance is slow, if it even works at all. Non-government adversaries might also find that their influence drops off at national boundaries.

Implementing secure IP-based geolocation is hard, but even if it were possible, the technique might not help and could perhaps even harm security. The PET Award nominated paper, “Location Diversity in Anonymity Networks“, by Nick Feamster and Roger Dingledine showed that international Internet connections cross a comparatively small number of tier-1 ISPs. Thus, by forcing one or more of these companies to co-operate, a large proportion of connections through an anonymity network could be traced.

The results of Feamster and Dingledine’s paper suggest that it may be better to bounce anonymity traffic around within a country, because it is less likely that there will be a single ISP monitoring incoming and outgoing traffic to several nodes. However, this only appears to be the case because they used BGP data to build a map of Autonomous Systems (ASes), which roughly correspond to ISPs. Actually, inter-ISP traffic (especially in Europe) might travel through an Internet eXchange (IX), a fact not apparent from BGP data. Our paper, “Sampled Traffic Analysis by Internet-Exchange-Level Adversaries“, by Steven J. Murdoch and Piotr Zieliński, examines the consequences of this observation.

Continue reading Sampled Traffic Analysis by Internet-Exchange-Level Adversaries

Results of global Internet filtering survey

At their conference in Oxford, the OpenNet Initiative have released the results from their first global Internet filtering survey. This announcement has been widely covered in the media.

Out of the 41 countries surveyed, 25 were found to impose filtering, though the topics blocked and extent of blocking varies dramatically.

Results can be seen on the filtering map and a URL checker. The full report, including detailed country and region summaries, will be published in the book “Access Denied: The Practice and Policy of Global Internet Filtering”.

Devote your day to democracy

The Open Rights Group are looking for volunteers to observe electronic voting/counting pilots, being tested in eleven areas around the UK during the May 3, 2007 elections. Richard and I have volunteered for the Bedford pilot, but there are still many other areas that need help. If you have the time to spare, find out the details and sign the pledge. You will need to be fast; the deadline for registering as an observer is April 4, 2007.

The e-voting areas are:

  • Rushmoor
  • Sheffield
  • Shrewsbury & Atcham
  • South Bucks
  • Swindon (near Wroughton, Draycot Foliat, Chisledon)

and the e-counting pilot areas are:

  • Bedford
  • Breckland
  • Dover
  • South Bucks
  • Stratford-upon-Avon
  • Warwick (near Leek Wootton, Old Milverton, Leamington)

One of the strongest objections against e-voting and e-counting is the lack of transparency. The source code for the voting computers is rarely open to audit, and even if it is, voters have no assurance that the device they are using has been loaded with the same software as was validated. To try to find out more about how the e-counting system will work, I sent a freedom of information request to Bedford council.

If you would like to find out more about e-voting and e-counting systems, you might like to consider making your own request, but remember that public bodies are permitted 20 working days (about a month) to reply, so there is not much time before the election. For general information on the Freedom of Information Act, see the guide book from the Campaign for Freedom of Information.

Financial Ombudsman on Chip & PIN infallibility

The Financial Ombudsman Service offers to adjudicate disputes between banks and their customers who claim to have been treated unfairly. We were forwarded a letter written by the Ombudsman concerning a complaint by a Halifax customer over unauthorised ATM withdrawals. I am not familiar with the details of this particular case, but the letter does give a good illustration of how the complaint procedure is stacked against customers.

The customer had requested further information from Halifax (the Firm) and the Financial Ombudsman Service (this Service) had replied:

However this Service has already been presented with the evidence you have requested from the Firm and I comment on it as follows. Although you have requested this information from the Firm yourself (and I consider that it is not obliged to provide it to you) I conclude that this will not make any difference, because this Service has already reviewed this information.

The right of parties in a dispute to see the evidence involved is a basic component of justice systems, but the Financial Ombudsman has clearly not heard of it; then again, they are funded by the banks. While the bank can have its own experts examine the evidence, the customer cannot do the same. Although the Financial Ombudsman Service can review the evidence, only giving it to the customer would allow them to pursue further investigation on their own.

The Firm has provided an ‘audit trail’ of the transactions disputed by you. This shows the location and times of the transactions and evidences that the card used was ‘CHIP’ read.

Without access to the audit trail and information concerning how it was produced, it is almost impossible for the customer to know the precise details of the transaction. Based solely on the letter, there are still a number of important unanswered questions. For example:

Was the card in question SDA or DDA?
SDA cards can be cloned to produce yes cards, which will accept any PIN and still work in offline transactions, where the terminal or ATM does not contact the bank. This type of fraud has been seen in France (pp. 5–10).
Was the ATM online or offline at the time of the transaction?
Although ATMs are generally online, if Chip & PIN terminals fail to dial up the bank they may continue to work offline and so accept SDA clones. Could this have happened with this ATM?
What was the application cryptogram presented in this transaction?
When a Chip & PIN card authorises a transaction, it produces an application cryptogram which allows the bank to verify that the card is legitimate. A yes card would not produce the correct application cryptogram.
What is the key for the card?
The application cryptogram is produced using a cryptographic key known only by the card and bank. With this and some other information the customer could confirm that the application cryptogram really came from his card. Since the card has long since been cancelled, releasing this key should not be a security risk. If the banks are not storing this information, how can they be sure that their systems are operating correctly?
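The verification the customer could perform, given the key, can be sketched as follows. This is a hypothetical illustration of the logic, not bank tooling: real EMV cards compute an 8-byte 3DES-based MAC (the application cryptogram) over fields such as the amount, date and unpredictable number, whereas HMAC-SHA256 stands in here because the Python standard library provides no DES. The verification principle is the same: anyone holding the card's key can recompute the MAC and compare.

```python
import hmac, hashlib

def verify_cryptogram(card_key, transaction_data, cryptogram):
    """Check that `cryptogram` was produced over `transaction_data`
    using `card_key`. HMAC-SHA256 truncated to 8 bytes stands in for
    the 3DES-based MAC a real EMV card would compute."""
    expected = hmac.new(card_key, transaction_data, hashlib.sha256).digest()[:8]
    return hmac.compare_digest(expected, cryptogram)
```

A yes card, lacking the key, cannot produce a cryptogram that passes such a check, which is why releasing the key of a long-cancelled card would let the customer test the audit trail for themselves.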

It seems unlikely that the Financial Ombudsman knew which of these events had occurred either; otherwise I would have expected them to say so in their letter.

As we have already advised you, since the advent of CHIP and PIN, this Service is not aware of any incidents where a card with a ‘CHIP’ has been successfully cloned by fraudsters so that it could be used by them successfully in a cash machine.

Besides the scenarios mentioned above, our demonstration for Watchdog showed how, even without cloning a card, a Chip & PIN terminal could be fooled into accepting a counterfeit. Assuming this ATM read the chip rather than the magnetic stripe, our attack would work just as well there. The situation surrounding this particular case might preclude a relay attack, but it is one of many possibilities that ought to be eliminated in a serious investigation.

Although you question The Firm’s security systems, I consider that the audit trail provided is in a format utilised by several major banks and therefore can be relied upon.

The format of the audit trail is no indication of whether the information it records is a true and complete representation of what actually happened, and it is almost ludicrous to suggest that it is. Even if it were, the fact that several banks use the format is no indication of its security. To actually establish these facts, external scrutiny is required and, without access to the bank’s systems, customers are in no position to arrange for this.

So the banking dispute resolution process works well for the banks, by reducing their litigation costs, but not for their customers. Customers who go to the Ombudsman risk being asked to prove their innocence without being given access to the information necessary to do so. Alternatively, they could go directly to the courts; while the bank might accuse them of not following proper procedures, if customers win there they can at least send in the bailiffs.