Category Archives: Security engineering

Bad security, good security, case studies, lessons learned

Wikileaks, security research and policy

A number of media organisations have been asking us about Wikileaks. Fifteen years ago we kicked off the study of censorship-resistant systems, which inspired the peer-to-peer movement; we help maintain Tor, which provides the anonymous communications infrastructure for Wikileaks; and we’ve a longstanding interest in information policy.

I have written before about governments’ love of building large databases of sensitive data to which hundreds of thousands of people need access to do their jobs – such as the NHS spine, which will give over 800,000 people access to our health records. The media are now making the link. Whether sensitive data are about health or about diplomacy, the only way forward is compartmentation. Medical records should be kept in the surgery or hospital where the care is given; and while an intelligence analyst dealing with Iraq might have access to cables on Iraq, Iran and Saudi Arabia, he should have no routine access to stuff on Korea or Brazil.
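To make the compartmentation point concrete, here’s a minimal sketch in Python (an illustration with hypothetical names, not any deployed system): each user is cleared for a set of compartments, and a query succeeds only if the record’s compartment is in that set. Compromising one analyst’s credentials then exposes a few compartments, not the whole database.

```python
# Compartmented access control: an illustrative sketch, not a real system.
documents = {
    "cable-1947": "IRAQ",     # each record is labelled with a compartment
    "cable-2210": "KOREA",
}
clearances = {
    "analyst_a": {"IRAQ", "IRAN", "SAUDI"},   # Middle East desk
    "analyst_b": {"KOREA"},                   # Korea desk
}

def can_read(user: str, doc_id: str) -> bool:
    """Grant access only if the document's compartment is in the user's clearance."""
    return documents[doc_id] in clearances.get(user, set())

assert can_read("analyst_a", "cable-1947")      # Iraq analyst reads an Iraq cable
assert not can_read("analyst_a", "cable-2210")  # but has no routine access to Korea
```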

So much for the security engineering; now to policy. No-one questions the US government’s right to try one of its soldiers for leaking the cables, or the right of the press to publish them now that they’re leaked. But why is Wikileaks treated as the leaker, rather than as a publisher?

This leads me to two related questions. First, does a next-generation censorship-resistant system need a more resilient technical platform, or more respectable institutions? And second, if technological change causes respectable old-media organisations such as the Guardian and the New York Times to go bust and be replaced by blogs, what happens to freedom of the press, and indeed to freedom of speech?

The Smart Card Detective: a hand-held EMV interceptor

During my MPhil at the Computer Lab (supervised by Markus Kuhn) I developed a card-sized device, named the Smart Card Detective (SCD for short), that can monitor Chip and PIN transactions. The main goal of the SCD was to offer a trusted display for anyone using credit cards, to avoid scams such as tampered terminals which show one amount on their screen but debit the card another (see this paper by Saar Drimer and Steven Murdoch). However, the final result is a more general device, which can be used to analyse and modify any part of an EMV transaction (EMV being the protocol used by Chip and PIN cards).

Using the SCD we have successfully shown how the relay attack can be mitigated by showing the real transaction amount on the trusted display. Moreover, we have tested the No PIN vulnerability (see the paper by Murdoch et al.) with the SCD. A report on this was broadcast on Canal+ (video now available here).
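To give a flavour of the processing involved, here is a minimal Python sketch (the SCD itself is a hardware device; this is an illustration only) of recovering the amount a terminal actually requests. In a GENERATE AC command the terminal sends a plain concatenation of values in the order specified by the card’s CDOL1, so an interceptor first parses the CDOL1 to find the offset of Amount, Authorised (tag 9F02), a BCD-encoded field, and can then show it on the trusted display before relaying the command.

```python
# Illustrative sketch of amount extraction from a GENERATE AC command.
# The data field is not TLV-encoded: it is a concatenation of values in
# the order given by the card's CDOL1, so we parse the CDOL1 first.

def parse_dol(dol: bytes):
    """Yield (tag, length) pairs from a Data Object List such as CDOL1."""
    i = 0
    while i < len(dol):
        tag = dol[i]
        i += 1
        if tag & 0x1F == 0x1F:      # low five bits set: tag continues into next byte
            tag = (tag << 8) | dol[i]
            i += 1
        length = dol[i]
        i += 1
        yield tag, length

def amount_from_cdol_data(cdol1: bytes, data: bytes):
    """Extract Amount, Authorised (tag 9F02, BCD-encoded) if the CDOL requests it."""
    offset = 0
    for tag, length in parse_dol(cdol1):
        if tag == 0x9F02:
            digits = data[offset:offset + length].hex()   # BCD: hex digits are the amount
            return int(digits) / 100                      # minor to major currency units
        offset += length
    return None

# Toy example: CDOL1 requests 9F02 (6 bytes) then 9F03 (6 bytes);
# the terminal's data encodes an amount of 150.00.
cdol1 = bytes.fromhex("9F02069F0306")
data = bytes.fromhex("000000015000" + "000000000000")
print(amount_from_cdol_data(cdol1, data))   # 150.0
```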

After the “Chip and PIN is broken” paper was published, some counter-arguments pointed to the difficulty of setting up the attack. The SCD also shows that such assumptions are often incorrect.

More details on the SCD are in my MPhil thesis, available here. Importantly, the software is open source and, along with the hardware schematics, can be found on the project’s page. The aim is to make the SCD a useful tool for EMV research, so that other problems can be found and fixed.

Thanks to Saar Drimer, Mike Bond, Steven Murdoch and Sergei Skorobogatov for their help with this project. Thanks also to Frank Stajano and Ross Anderson for their suggestions.

Passwords in the wild, part IV: the future

This is the fourth and final part in a series on password implementations at real websites, based on my paper at WEIS 2010 with Sören Preibusch.

Given the problems with passwords on the web outlined over the past few days, academics have for years searched for new technology to replace them. This thinking can at times be counter-productive: no silver bullet has yet materialised, and the search has distracted attention from fixing the most pressing problems with passwords. Currently, the trendiest proposed solution is to use federated identity protocols to greatly reduce the number of websites which must collect passwords (which, as we’ve argued, would be a very positive step). Much focus has been given to OpenID, yet it is still struggling to gain widespread adoption. OpenID was deployed at fewer than 3% of the websites we observed, with only Mixx and LiveJournal giving it much prominence.
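The architectural shift is easy to sketch: the website (relying party) never collects a password at all, but verifies a signed assertion from an identity provider the user already has an account with. The toy Python below is illustrative only; real protocols such as OpenID add discovery, nonces, and dynamically negotiated keys rather than the pre-shared key assumed here.

```python
# Toy federated login: illustrative only, not the OpenID protocol.
import hashlib
import hmac

IDP_KEY = b"key-negotiated-between-idp-and-relying-party"   # assumed pre-shared here

def idp_issue_assertion(user: str):
    """Identity provider: authenticate the user (elided), then sign an assertion."""
    sig = hmac.new(IDP_KEY, user.encode(), hashlib.sha256).hexdigest()
    return user, sig

def rp_verify(user: str, sig: str) -> bool:
    """Relying party: accept the login iff the assertion verifies; no password held."""
    expected = hmac.new(IDP_KEY, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

user, sig = idp_issue_assertion("alice")
assert rp_verify(user, sig)   # alice logs in without the site ever seeing a password
```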

Nevertheless, we optimistically feel that real changes will happen in the next few years, as password authentication on the web seems to be becoming unsustainable given the growing scale and interconnectivity of the websites that collect passwords. We actually think we are already in the early stages of a password revolution, just not of the type predicted by academia.

Continue reading Passwords in the wild, part IV: the future

Passwords in the wild, part III: password standards for the Web

This is the third part in a series on password implementations at real websites, based on my paper at WEIS 2010 with Sören Preibusch.

In our analysis of 150 password deployments online, we observed a surprising diversity of implementation choices. Whilst sites can be ranked by the overall security of their password scheme, there is a vast middle group in which sites make seemingly incongruous security decisions. We also found almost no evidence of commonality in implementations: examining the details of Web forms (variable names, etc.) and the format of automated emails, we saw little sign that sites re-use a common code base. This lack of consistency in technical choices suggests that standards and guidelines could improve security.

Numerous RFCs concern themselves with one-time passwords and other relatively sophisticated authentication protocols. Yet traditional password-based authentication remains the most prevalent authentication protocol on the Internet, as the International Telecommunication Union (itself a United Nations specialised agency for standardising telecommunications worldwide) observes in ITU-T Recommendation X.1151, “Guideline on secure password-based authentication protocol with key exchange”. Client PKI has not seen widespread adoption, and tokens or smart cards are prohibitively expensive or inconvenient for most websites. While passwords have many shortcomings, it is essential to deploy them as carefully and securely as possible, and formal standards and best-practice guidelines can help developers do so.

Continue reading Passwords in the wild, part III: password standards for the Web

Passwords in the wild, part II: failures in the market

This is the second part in a series on password implementations at real websites, based on my paper at WEIS 2010 with Sören Preibusch.

As we discussed yesterday, dubious practices abound within real sites’ password implementations. Password insecurity isn’t only due to random implementation mistakes, though. When we scored sites’ password implementations on a 10-point aggregate scale, it became clear that a wide spectrum of implementation quality exists. Many web authentication giants (Amazon, eBay, Facebook, Google, LiveJournal, Microsoft, MySpace, Yahoo!) scored near the top, joined by a few unlikely standouts (IKEA, CNBC). At the opposite end were a slew of lesser-known merchants and news websites. Exploring the factors which lead to better security confirms the basic tenets of security economics: sites with more at stake tend to do better. However, doing better isn’t enough. Given users’ well-documented tendency to re-use passwords, the varying levels of security may represent a serious market failure which is undermining the security of password-based authentication.

Continue reading Passwords in the wild, part II: failures in the market

Passwords in the wild, part I: the gap between theory and implementation

Sören Preibusch and I have finalised our in-depth report on password practices in the wild, The password thicket: technical and market failures in human authentication on the web, presented last month at WEIS 2010 in Boston. The motivation for our report was a lack of technical research into real password deployments. Passwords have been studied as an authentication mechanism quite intensively for the last 30 years, but we believe ours was the first large study of how Internet sites actually implement them. We studied 150 sites, including the most-visited sites overall plus a random sample of mid-level sites. We signed up for free accounts with each site and, using a mixture of scripting and patience, captured all visible aspects of password deployment, from enrolment and login to reset and attacks.

Our data (which is now publicly available) gives us an interesting picture of the current state of password deployment. Because the dataset is huge and the paper is quite lengthy, we’ll be discussing our findings and their implications from a series of different perspectives. Today, we’ll focus on the preventable mistakes. In the academic literature, it’s assumed that passwords will be encrypted during transmission, hashed before storage, and that attempts to guess usernames or passwords will be throttled. None of these is widely true in practice.
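For reference, here is a minimal sketch of those three assumptions using Python’s standard library (transport encryption, which TLS would provide, is not shown): passwords are salted and hashed with a slow key-derivation function before storage, and guessing attempts are throttled per account.

```python
# Minimal sketch of textbook password handling: salted slow hashing plus throttling.
import hashlib
import hmac
import os

ITERATIONS = 100_000    # slow the hash to resist offline guessing
MAX_ATTEMPTS = 10       # throttle online guessing per account

users = {}              # username -> (salt, hash); stands in for a database
failures = {}           # username -> consecutive failed attempts

def enrol(username: str, password: str) -> None:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    users[username] = (salt, digest)

def login(username: str, password: str) -> bool:
    if failures.get(username, 0) >= MAX_ATTEMPTS:
        raise RuntimeError("account locked: too many failed attempts")
    salt, stored = users[username]
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    if hmac.compare_digest(digest, stored):   # constant-time comparison
        failures[username] = 0
        return True
    failures[username] = failures.get(username, 0) + 1
    return False

enrol("alice", "correct horse battery staple")
assert login("alice", "correct horse battery staple")
assert not login("alice", "tr0ub4dor&3")
```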

Continue reading Passwords in the wild, part I: the gap between theory and implementation

Who controls the off switch?

We have a new paper on the strategic vulnerability created by the plan to replace Britain’s 47 million meters with smart meters that can be turned off remotely. The energy companies are demanding this facility so that customers who don’t pay their bills can be switched to prepayment tariffs without the hassle of getting court orders against them. If the Government buys this argument – and I’m not convinced it should – then the off switch had better be closely guarded. You don’t want the nation’s enemies to be able to turn off the lights remotely, and eliminating that risk could just conceivably be a little bit more complicated than you might at first think. (This paper follows on from our earlier paper On the security economics of electricity metering at WEIS 2010.)
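To make “closely guarded” concrete: one conceivable control (an illustration, not necessarily what our paper proposes) is shared control, where a meter acts on a remote-disconnect command only once a quorum of independent parties has signed it, so that no single compromised key can turn off the lights. A toy sketch in Python:

```python
# Toy quorum authorisation for a disconnect command: illustrative only.
import hashlib
import hmac

KEYS = {                          # hypothetical independent signing parties
    "energy_supplier": b"key-1",
    "grid_operator": b"key-2",
    "regulator": b"key-3",
}
QUORUM = 2                        # any two of the three must agree

def sign(party: str, command: bytes) -> str:
    return hmac.new(KEYS[party], command, hashlib.sha256).hexdigest()

def meter_accepts(command: bytes, signatures: dict) -> bool:
    """The meter executes the command only with a quorum of valid signatures."""
    valid = sum(
        1 for party, sig in signatures.items()
        if party in KEYS and hmac.compare_digest(sign(party, command), sig)
    )
    return valid >= QUORUM

cmd = b"DISCONNECT meter=12345678"
sigs = {p: sign(p, cmd) for p in ("energy_supplier", "grid_operator")}
assert meter_accepts(cmd, sigs)       # two parties agree: accepted
assert not meter_accepts(cmd, {})     # no signatures: refused
```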

Database state – latest!

Today sees the publication of a report by Professor Trisha Greenhalgh into the Summary Care Record (SCR). There is a summary of the report in the BMJ, which also has two discussion pieces: one by Sir Mark Walport of the Wellcome Trust arguing that the future of medical records is digital, and one by me which agrees but argues that as the SCR is unsafe and unlawful, it should be abandoned.

Two weeks ago I reported here how the coalition government planned to retain the SCR, despite pre-election promises from both its constituent parties to do away with it. These promises followed our Database State report last year, which demonstrated that many of the central systems built by the previous government contravened human-rights law. The government’s U-turn provoked considerable anger among doctors, NGOs and backbench MPs, prompting health minister Simon Burns to promise a review.

Professor Greenhalgh’s review, which was in fact completed before the election, finds that the SCR fails to do what it was supposed to. It isn’t used much; it doesn’t fit in with how doctors and nurses actually work; it makes consultations longer, not shorter; and the project was extremely badly managed. In fact, her report should be read by all serious students of software engineering; like the London Ambulance Service report almost twenty years ago, this document sets out in great detail what not to do.

For now, there is some press coverage in the Telegraph, the Mail, E-health Insider and Computerworld UK.

IEEE best paper award

Steven Murdoch, Saar Drimer, Mike Bond and I have just won the IEEE Security and Privacy Symposium’s Best Practical Paper award for our paper Chip and PIN is Broken. This was an unexpected pleasure, given the very strong competition this year (especially from this paper). We won this award once before, in 2008, for a paper on a similar topic.

Ross, Mike, Saar, Steven (photo by Joseph Bonneau)

Update (2010-05-28): The photo now includes the full team (original version)

Evaluating statistical attacks on personal knowledge questions

What is your mother’s maiden name? How about your pet’s name? Questions like these were a dark corner of security systems for quite some time. Most security researchers instinctively think they aren’t very secure, yet they have gained widespread deployment as a backup to password-based authentication when email-based identification isn’t available. Free webmail providers, for example, may have no other choice. Unfortunately, because most websites rely on email when passwords fail, and email providers rely on personal knowledge questions, most web authentication is no more secure than personal knowledge questions. This risk has received more attention recently, with high-profile compromises of Paris Hilton’s phone, Sarah Palin’s email, and Twitter’s corporate Google Documents all occurring due to guessed personal knowledge questions.

There’s finally been a surge of academic research into the area in the last five years. It’s been shown, for example, that the answers to these questions are easy to look up online, often found in public records, and easy for friends and acquaintances to guess. In joint work with Mike Just and Greg Matthews from the University of Edinburgh, published this week in the proceedings of Financial Cryptography 2010, we’ve examined the more basic question of how resistant the underlying answer distributions are to statistical guessing. Put another way, if an attacker does no target-specific work, but just guesses common answers for a large number of accounts using population-wide statistics, how well can she do?
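The attack model is easy to state precisely: if the attacker tries the β most common answers against every account, her expected success rate is simply the total probability mass of those β answers. A toy sketch in Python, with made-up data:

```python
# Expected success of statistical guessing: the probability mass of the
# attacker's top-beta guesses. The data here is invented for illustration.
from collections import Counter

def success_rate(answers, beta):
    """Fraction of accounts compromised by guessing the beta most common answers."""
    counts = Counter(answers)
    return sum(c for _, c in counts.most_common(beta)) / len(answers)

# A skewed toy distribution of pet names.
sample = ["max"] * 30 + ["bella"] * 20 + ["charlie"] * 10 + ["rex"] * 5 + ["zanzibar"]
print(success_rate(sample, 3))   # ~0.91 of accounts fall to just three guesses
```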

Continue reading Evaluating statistical attacks on personal knowledge questions