Category Archives: Academic papers

Financial Cryptography and Data Security 2011 — Call for Participation

Financial Cryptography and Data Security (FC 2011)
Bay Gardens Beach Resort, St. Lucia
February 28 — March 4, 2011

Financial Cryptography and Data Security is a major international forum for research, advanced development, education, exploration, and debate regarding information assurance, with a specific focus on commercial contexts. The conference covers all aspects of securing transactions and systems.

NB: Discounted hotel rate is available only until December 30, 2010

Topics include:

Anonymity and Privacy, Auctions and Audits, Authentication and Identification, Backup Authentication, Biometrics, Certification and Authorization, Cloud Computing Security, Commercial Cryptographic Applications, Transactions and Contracts, Data Outsourcing Security, Digital Cash and Payment Systems, Digital Incentive and Loyalty Systems, Digital Rights Management, Fraud Detection, Game Theoretic Approaches to Security, Identity Theft, Spam, Phishing and Social Engineering, Infrastructure Design, Legal and Regulatory Issues, Management and Operations, Microfinance and Micropayments, Mobile Internet Device Security, Monitoring, Reputation Systems, RFID-Based and Contactless Payment Systems, Risk Assessment and Management, Secure Banking and Financial Web Services, Securing Emerging Computational Paradigms, Security and Risk Perceptions and Judgments, Security Economics, Smartcards, Secure Tokens and Hardware, Trust Management, Underground-Market Economics, Usability, Virtual Economies, Voting Systems

Important Dates

Hotel room reduced rate cut-off: December 30, 2010
Reduced registration rate cut-off: January 21, 2011

Please send any questions to fc11general@ifca.ai


Research, public opinion and patient consent

Paul Thornton has brought to my attention some research that the Department of Health published quietly at the end of 2009 (and which undermines Departmental policy).

It is the Summary of Responses to the Consultation on the Additional Uses of Patient Data undertaken following campaigning by doctors, NGOs and others about the Secondary Uses Service (SUS). SUS keeps summaries of patient care episodes, some of them anonymised, and makes them available for secondary uses; the system’s advocates talk about research, although it is heavily used for health service management, clinical audit, answering parliamentary questions and so on. Most patients are quite unaware that tens of thousands of officials have access to their records, and the Database State report we wrote last year concluded that SUS is almost certainly illegal. (Human-rights and data-protection law require that sensitive data, including health data, be shared only with the consent of the data subject or using tightly restricted statutory powers whose effects are predictable to data subjects.)

The Department of Health’s consultation shows that most people oppose the secondary use of their health records without consent. The executive summary tries to spin this a bit, but the data from the report’s body show that public opinion remains settled on the issue, as it has been since the first opinion survey in 1997. We do see some signs of increasing sophistication: a quarter of patients now don’t believe that data can be anonymised completely, versus 15% who say that sharing is “OK if anonymised” (p 23). And the views of medical researchers and NHS administrators are completely different; see for example p 41. The size of this gap suggests the issue won’t get resolved any time soon – perhaps not until there’s an Alder-Hey-type incident that causes a public outcry and forces a reform of SUS.

Capsicum: practical capabilities for UNIX

Today, Jonathan Anderson, Ben Laurie, Kris Kennaway, and I presented Capsicum: practical capabilities for UNIX at the 19th USENIX Security Symposium in Washington, DC; the slides can be found on the Capsicum web site. We argue that capability design principles fill a gap left by discretionary access control (DAC) and mandatory access control (MAC) in operating systems when supporting security-critical and security-aware applications.

Capsicum responds to the trend of application compartmentalisation (sometimes called privilege separation) by providing strong and well-defined isolation primitives, and by facilitating rights delegation driven by the application (and eventually, the user). These facilities prove invaluable not just for traditional security-critical programs such as tcpdump and OpenSSH, but also for complex security-aware applications that map distributed security policies into local primitives, such as Google’s Chromium web browser, which implements the same-origin policy when sandboxing JavaScript execution.

Capsicum extends POSIX with a new capability mode for processes and a capability file descriptor type, as well as supporting primitives such as process descriptors. Capability mode denies access to global operating system namespaces, such as the file system and IPC namespaces: only delegated rights (typically via file descriptors or more refined capabilities) are available to sandboxes. We prototyped Capsicum on FreeBSD 9.x, and have extended a variety of applications, including Google’s Chromium web browser, to use Capsicum for sandboxing. Our paper discusses design trade-offs, both in Capsicum and in applications, and presents a performance analysis. Capsicum is available under a BSD license.
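To make the capability-mode workflow concrete, here is a minimal sketch of a self-sandboxing program. It is illustrative rather than code from the paper: the program opens a file descriptor while it still holds ambient authority, calls cap_enter() from the Capsicum API to enter capability mode, and from then on can use only the descriptors it already holds. The header name and failure details vary across FreeBSD versions, so treat those as assumptions.

    /* Minimal sketch of self-sandboxing with Capsicum (illustrative). */
    #include <sys/capability.h>   /* <sys/capsicum.h> on later FreeBSD */
    #include <err.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        char buf[64];
        ssize_t n;
        int fd;

        /* Acquire resources while we still hold ambient authority. */
        fd = open("/etc/motd", O_RDONLY);
        if (fd < 0)
            err(1, "open");

        /* Enter capability mode: global namespaces are now denied. */
        if (cap_enter() < 0)
            err(1, "cap_enter");

        /* The delegated descriptor still works... */
        n = read(fd, buf, sizeof(buf));
        if (n >= 0)
            printf("read %zd bytes via delegated descriptor\n", n);

        /* ...but fresh name lookups fail with ECAPMODE. */
        if (open("/etc/motd", O_RDONLY) < 0)
            warn("open in capability mode (expected failure)");

        return 0;
    }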

Capsicum is collaborative research between the University of Cambridge and Google, sponsored by Google, and will be a foundation for future work on application security, sandboxing, and security usability at Cambridge and Google. Capsicum has also been backported to FreeBSD 8.x, and Heradon Douglas at Google has an in-progress port to Linux.

We’re also pleased to report that the Capsicum paper won the Best Student Paper award at the conference!

Passwords in the wild, part IV: the future

This is the fourth and final part in a series on password implementations at real websites, based on my paper at WEIS 2010 with Sören Preibusch.

Given the problems with passwords on the web outlined over the past few days, academics have for years searched for new technology to replace passwords. This thinking can at times be counter-productive: no silver bullets have yet materialised, and the search has distracted attention from fixing the most pressing problems with passwords. Currently, the trendiest proposed solution is to use federated identity protocols to greatly reduce the number of websites which must collect passwords (which, as we’ve argued, would be a very positive step). Much focus has been given to OpenID, yet it is still struggling to gain widespread adoption. OpenID was deployed at fewer than 3% of the websites we observed, with only Mixx and LiveJournal giving it much prominence.

Nevertheless, we are optimistic that real changes will happen in the next few years, as password authentication on the web looks increasingly unsustainable given the growing scale and interconnectivity of the websites collecting passwords. We actually think we are already in the early stages of a password revolution, just not of the type predicted by academia.


Passwords in the wild, part III: password standards for the Web

This is the third part in a series on password implementations at real websites, based on my paper at WEIS 2010 with Sören Preibusch.

In our analysis of 150 password deployments online, we observed a surprising diversity of implementation choices. Whilst sites can be ranked by the overall security of their password scheme, there is a vast middle group in which sites make seemingly incongruous security decisions. We also found almost no evidence of commonality in implementations. Examining the details of Web forms (variable names, etc.) and the format of automated emails, we found little evidence that sites are re-using a common code base. This lack of consistency in technical choices suggests that standards and guidelines could improve security.

Numerous RFCs concern themselves with one-time passwords and other relatively sophisticated authentication protocols. Yet traditional password-based authentication remains the most prevalent authentication protocol on the Internet, as the International Telecommunication Union – itself a United Nations specialised agency for standardising telecommunications worldwide – observes in ITU-T Recommendation X.1151, “Guideline on secure password-based authentication protocol with key exchange”. Client PKI has not seen widespread adoption, and tokens or smartcards are prohibitively costly or inconvenient for most websites. While passwords have many shortcomings, it is essential to deploy them as carefully and securely as possible, and formal standards and best-practice guidelines are essential to help developers do so.


Passwords in the wild, part II: failures in the market

This is the second part in a series on password implementations at real websites, based on my paper at WEIS 2010 with Sören Preibusch.

As we discussed yesterday, dubious practices abound within real sites’ password implementations. Password insecurity isn’t only due to random implementation mistakes, though. When we scored sites’ password implementations on a 10-point aggregate scale, it became clear that a wide spectrum of implementation quality exists. Many web authentication giants (Amazon, eBay, Facebook, Google, LiveJournal, Microsoft, MySpace, Yahoo!) scored near the top, joined by a few unlikely standouts (IKEA, CNBC). At the opposite end were a slew of lesser-known merchants and news websites. Exploring the factors which lead to better security confirms the basic tenets of security economics: sites with more at stake tend to do better. However, doing better isn’t enough. Given users’ well-documented tendency to re-use passwords, the varying levels of security may represent a serious market failure which is undermining the security of password-based authentication.


Passwords in the wild, part I: the gap between theory and implementation

Sören Preibusch and I have finalised our in-depth report on password practices in the wild, The password thicket: technical and market failures in human authentication on the web, presented last month at WEIS 2010 in Boston. The motivation for our report was a lack of technical research into real password deployments. Passwords have been studied as an authentication mechanism quite intensively for the last 30 years, but we believe ours was the first large study of how Internet sites actually implement them. We studied 150 sites, including the most visited sites overall plus a random sample of mid-level sites. We signed up for free accounts with each site and, using a mixture of scripting and patience, captured all visible aspects of password deployment, from enrolment and login to reset and attacks.

Our data (which is now publicly available) gives us an interesting picture of the current state of password deployment. Because the dataset is huge and the paper is quite lengthy, we’ll be discussing our findings and their implications from a series of different perspectives. Today, we’ll focus on the preventable mistakes. In the academic literature, it’s assumed that passwords will be encrypted during transmission, hashed before storage, and that attempts to guess usernames or passwords will be throttled. None of these is widely true in practice.
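To illustrate the “hashed before storage” expectation concretely, here is a minimal sketch – not code from our paper, with an illustrative function name and salt size – of salted password hashing using OpenSSL’s SHA-256. A production site would do better still with an iterated scheme such as bcrypt or PBKDF2, but even this simple discipline defeats the bulk plaintext disclosures we describe in the paper.

    /* Illustrative sketch only: salted password hashing with OpenSSL
     * SHA-256. Real deployments should prefer an iterated scheme
     * such as bcrypt or PBKDF2. */
    #include <openssl/rand.h>
    #include <openssl/sha.h>
    #include <string.h>

    #define SALT_LEN 16

    /* Store (salt, digest) per account; never the plaintext password. */
    int
    hash_password(const char *password, unsigned char salt[SALT_LEN],
        unsigned char digest[SHA256_DIGEST_LENGTH])
    {
        SHA256_CTX ctx;

        /* A fresh random salt per account defeats precomputed tables. */
        if (RAND_bytes(salt, SALT_LEN) != 1)
            return -1;

        SHA256_Init(&ctx);
        SHA256_Update(&ctx, salt, SALT_LEN);
        SHA256_Update(&ctx, password, strlen(password));
        SHA256_Final(digest, &ctx);
        return 0;
    }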


Who controls the off switch?

We have a new paper on the strategic vulnerability created by the plan to replace Britain’s 47 million meters with smart meters that can be turned off remotely. The energy companies are demanding this facility so that customers who don’t pay their bills can be switched to prepayment tariffs without the hassle of getting court orders against them. If the Government buys this argument – and I’m not convinced it should – then the off switch had better be closely guarded. You don’t want the nation’s enemies to be able to turn off the lights remotely, and eliminating that risk could just conceivably be a little bit more complicated than you might at first think. (This paper follows on from our earlier paper On the security economics of electricity metering at WEIS 2010.)

Database state – latest!

Today sees the publication of a report by Professor Trisha Greenhalgh into the Summary Care Record (SCR). There is a summary of the report in the BMJ, which also has two discussion pieces: one by Sir Mark Walport of the Wellcome Trust arguing that the future of medical records is digital, and one by me which agrees but argues that as the SCR is unsafe and unlawful, it should be abandoned.

Two weeks ago I reported here how the coalition government planned to retain the SCR, despite pre-election promises from both its constituent parties to do away with it. These promises followed our Database State report last year, which demonstrated that many of the central systems built by the previous government contravened human-rights law. The government’s U-turn provoked considerable anger among doctors, NGOs and backbench MPs, prompting health minister Simon Burns to promise a review.

Professor Greenhalgh’s review, which was in fact completed before the election, finds that the SCR fails to do what it was supposed to. It isn’t used much; it doesn’t fit in with how doctors and nurses actually work; it doesn’t make consultations shorter but longer; and the project was extremely badly managed. In fact, her report should be read by all serious students of software engineering; like the London Ambulance Service report almost twenty years ago, this document sets out in great detail what not to do.

For now, there is some press coverage in the Telegraph, the Mail, E-health Insider and Computerworld UK.