Research, public opinion and patient consent

Paul Thornton has brought to my attention some research that the Department of Health published quietly at the end of 2009 (and which undermines Departmental policy).

It is the Summary of Responses to the Consultation on the Additional Uses of Patient Data undertaken following campaigning by doctors, NGOs and others about the Secondary Uses Service (SUS). SUS keeps summaries of patient care episodes, some of them anonymised, and makes them available for secondary uses; the system’s advocates talk about research, although it is heavily used for health service management, clinical audit, answering parliamentary questions and so on. Most patients are quite unaware that tens of thousands of officials have access to their records, and the Database State report we wrote last year concluded that SUS is almost certainly illegal. (Human-rights and data-protection law require that sensitive data, including health data, be shared only with the consent of the data subject or using tightly restricted statutory powers whose effects are predictable to data subjects.)

The Department of Health’s consultation shows that most people oppose the secondary use of their health records without consent. The executive summary tries to spin this a bit, but the data in the body of the report show that public opinion remains settled on the issue, as it has been since the first opinion survey in 1997. We do see some signs of increasing sophistication: a quarter of patients now don’t believe that data can be anonymised completely, versus 15% who say that sharing is “OK if anonymised” (p 23). And the views of medical researchers and NHS administrators are completely different; see for example p 41. The size of this gap suggests the issue won’t be resolved any time soon, perhaps not until an Alder Hey-type incident causes a public outcry and forces a reform of SUS.

Capsicum: practical capabilities for UNIX

Today, Jonathan Anderson, Ben Laurie, Kris Kennaway, and I presented Capsicum: practical capabilities for UNIX at the 19th USENIX Security Symposium in Washington, DC; the slides can be found on the Capsicum web site. We argue that capability design principles fill a gap left by discretionary access control (DAC) and mandatory access control (MAC) in operating systems when supporting security-critical and security-aware applications.

Capsicum responds to the trend of application compartmentalisation (sometimes called privilege separation) by providing strong and well-defined isolation primitives, and by facilitating rights delegation driven by the application (and eventually, the user). These facilities prove invaluable not just for traditional security-critical programs such as tcpdump and OpenSSH, but also for complex security-aware applications that map distributed security policies into local primitives, such as Google’s Chromium web browser, which implements the same-origin policy when sandboxing JavaScript execution.

Capsicum extends POSIX with a new capability mode for processes and a capability file descriptor type, as well as supporting primitives such as process descriptors. Capability mode denies access to global operating system namespaces, such as the file system and IPC namespaces: only delegated rights (typically held via file descriptors or more refined capabilities) are available to sandboxed processes. We prototyped Capsicum on FreeBSD 9.x, and have extended a variety of applications, including Google’s Chromium web browser, to use Capsicum for sandboxing. Our paper discusses design trade-offs, both in Capsicum and in the applications, and presents a performance analysis. Capsicum is available under a BSD license.
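To give a feel for the programming model, here is a minimal sketch of the pattern Capsicum supports: acquire resources first, refine the rights on their descriptors, then enter capability mode. It uses the Capsicum API as it later shipped in FreeBSD (sys/capsicum.h, cap_rights_limit(2), cap_enter(2)); the 2010 prototype exposed a slightly different interface, so treat this as illustrative rather than as code from the paper.

```c
/* Illustrative Capsicum sandbox (FreeBSD): delegate a descriptor,
 * restrict its rights, then give up access to global namespaces. */
#include <sys/capsicum.h>

#include <err.h>
#include <fcntl.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
	if (argc < 2)
		errx(1, "usage: %s file", argv[0]);

	/* Acquire the resource before sandboxing. */
	int fd = open(argv[1], O_RDONLY);
	if (fd < 0)
		err(1, "open %s", argv[1]);

	/* Refine the descriptor: reading and fstat only. */
	cap_rights_t rights;
	cap_rights_init(&rights, CAP_READ, CAP_FSTAT);
	if (cap_rights_limit(fd, &rights) < 0)
		err(1, "cap_rights_limit");

	/* Enter capability mode: global namespaces (file system, IPC,
	 * PIDs, ...) become inaccessible; only delegated descriptors
	 * remain usable. */
	if (cap_enter() < 0)
		err(1, "cap_enter");

	/* open("/etc/passwd", O_RDONLY) would now fail with ECAPMODE,
	 * but reading from the delegated descriptor still works. */
	char buf[4096];
	ssize_t n;
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		(void)write(STDOUT_FILENO, buf, (size_t)n);

	return 0;
}
```

In a full compartmentalised application the pattern is the same, just at larger scale: a privileged parent acquires and refines descriptors and delegates them to sandboxed worker processes, which Capsicum manages through process descriptors rather than PIDs.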

Capsicum is a collaboration between the University of Cambridge and Google, sponsored by Google, and will provide a foundation for future work on application security, sandboxing, and security usability at both institutions. Capsicum has also been backported to FreeBSD 8.x, and Heradon Douglas at Google has an in-progress port to Linux.

We’re also pleased to report that the Capsicum paper won the Best Student Paper award at the conference!

Passwords in the wild, part IV: the future

This is the fourth and final part in a series on password implementations at real websites, based on my paper at WEIS 2010 with Sören Preibusch.

Given the sort of problems with passwords on the web outlined over the past few days, it is no surprise that academics have spent years searching for new technology to replace them. This thinking can at times be counter-productive: no silver bullet has yet materialised, and the search has distracted attention from fixing the most pressing problems with passwords as they are actually deployed. Currently, the trendiest proposed solution is to use federated identity protocols to greatly reduce the number of websites which must collect passwords (which, as we’ve argued, would be a very positive step). Much attention has been given to OpenID, yet it is still struggling to gain widespread adoption: it was deployed at fewer than 3% of the websites we observed, with only Mixx and LiveJournal giving it much prominence.

Nevertheless, we are optimistic that real changes will happen in the next few years, as password authentication on the web looks increasingly unsustainable given the growing scale and interconnectivity of the websites collecting passwords. In fact, we think we are already in the early stages of a password revolution, just not of the type predicted by academia.


Passwords in the wild, part III: password standards for the Web

This is the third part in a series on password implementations at real websites, based on my paper at WEIS 2010 with Joseph Bonneau.

In our analysis of 150 password deployments online, we observed a surprising diversity of implementation choices. Whilst sites can be ranked by the overall security of their password scheme, there is a vast middle group in which sites make seemingly incongruous security decisions. We also found almost no evidence of commonality in implementations. Examining the details of Web forms (variable names, etc.) and the format of automated emails, we found little evidence that sites are re-using a common code base. This lack of consistency in technical choices suggests that standards and guidelines could improve security.

Numerous RFCs concern themselves with one-time passwords and other relatively sophisticated authentication protocols. Yet traditional password-based authentication remains the most prevalent authentication mechanism on the Internet, as the International Telecommunication Union (itself a United Nations specialised agency for standardising telecommunications worldwide) observes in ITU-T Recommendation X.1151, “Guideline on secure password-based authentication protocol with key exchange”. Client PKI has not seen widespread adoption, and tokens or smart cards are too costly or inconvenient for most websites. While passwords have many shortcomings, it is essential to deploy them as carefully and securely as possible, and formal standards and best-practice guidelines would help developers do so.


Passwords in the wild, part II: failures in the market

This is the second part in a series on password implementations at real websites, based on my paper at WEIS 2010 with Sören Preibusch.

As we discussed yesterday, dubious practices abound within real sites’ password implementations. Password insecurity isn’t only due to random implementation mistakes, though. When we scored sites’ password implementations on a 10-point aggregate scale, it became clear that a wide spectrum of implementation quality exists. Many web authentication giants (Amazon, eBay, Facebook, Google, LiveJournal, Microsoft, MySpace, Yahoo!) scored near the top, joined by a few unlikely standouts (IKEA, CNBC). At the opposite end were a slew of lesser-known merchants and news websites. Exploring the factors which lead to better security confirms the basic tenets of security economics: sites with more at stake tend to do better. However, doing better isn’t enough. Given users’ well-documented tendency to re-use passwords, the varying levels of security may represent a serious market failure which is undermining the security of password-based authentication.


Passwords in the wild, part I: the gap between theory and implementation

Sören Preibusch and I have finalised our in-depth report on password practices in the wild, The password thicket: technical and market failures in human authentication on the web, presented last month at WEIS 2010 in Boston. The motivation for our report was the lack of technical research into real password deployments. Passwords have been studied quite intensively as an authentication mechanism for the last 30 years, but we believe ours is the first large study of how Internet sites actually implement them. We studied 150 sites, including the most-visited sites overall plus a random sample of mid-level sites. We signed up for free accounts with each site and, using a mixture of scripting and patience, captured all visible aspects of password deployment, from enrolment and login to reset and attacks.

Our data (which are now publicly available) give an interesting picture of the current state of password deployment. Because the dataset is huge and the paper quite lengthy, we’ll be discussing our findings and their implications from a series of different perspectives. Today, we’ll focus on the preventable mistakes. The academic literature assumes that passwords will be encrypted during transmission and hashed before storage, and that attempts to guess usernames or passwords will be throttled. None of these is widely true in practice.
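To make the second of those assumptions concrete, here is a minimal sketch of what “hashed before storage” means: the server keeps only a random per-account salt and a slow, iterated hash, never the cleartext password. The choice of OpenSSL’s PBKDF2, the iteration count and the lengths below are our own illustrative assumptions, not a recommendation from the paper.

```c
/* Illustrative password storage: a random per-account salt plus a slow,
 * iterated hash (PBKDF2-HMAC-SHA256 via OpenSSL), never the cleartext. */
#include <openssl/evp.h>
#include <openssl/rand.h>

#include <string.h>

#define SALT_LEN   16
#define HASH_LEN   32
#define ITERATIONS 100000	/* slows down offline guessing */

/* On signup: fill salt[] and hash[] for storage; returns 0 on success. */
int
hash_new_password(const char *password,
    unsigned char salt[SALT_LEN], unsigned char hash[HASH_LEN])
{
	/* A fresh random salt per account defeats precomputed tables and
	 * hides identical passwords across accounts. */
	if (RAND_bytes(salt, SALT_LEN) != 1)
		return -1;

	return PKCS5_PBKDF2_HMAC(password, (int)strlen(password),
	    salt, SALT_LEN, ITERATIONS, EVP_sha256(),
	    HASH_LEN, hash) == 1 ? 0 : -1;
}

/* On login: re-derive with the stored salt and compare; 0 means match. */
int
check_password(const char *attempt,
    const unsigned char salt[SALT_LEN], const unsigned char stored[HASH_LEN])
{
	unsigned char derived[HASH_LEN];
	unsigned char diff = 0;

	if (PKCS5_PBKDF2_HMAC(attempt, (int)strlen(attempt),
	    salt, SALT_LEN, ITERATIONS, EVP_sha256(),
	    HASH_LEN, derived) != 1)
		return -1;

	/* Constant-time comparison: don't leak how many bytes match. */
	for (size_t i = 0; i < HASH_LEN; i++)
		diff |= derived[i] ^ stored[i];
	return diff == 0 ? 0 : -1;
}
```

The other two assumptions are deployment matters rather than code: encryption in transit means serving the login (and reset) forms over TLS, and throttling means rate-limiting or locking out repeated failed guesses against a username.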


Who controls the off switch?

We have a new paper on the strategic vulnerability created by the plan to replace Britain’s 47 million meters with smart meters that can be turned off remotely. The energy companies are demanding this facility so that customers who don’t pay their bills can be switched to prepayment tariffs without the hassle of getting court orders against them. If the Government buys this argument – and I’m not convinced it should – then the off switch had better be closely guarded. You don’t want the nation’s enemies to be able to turn off the lights remotely, and eliminating that risk could just conceivably be a little bit more complicated than you might at first think. (This paper follows on from our earlier paper On the security economics of electricity metering at WEIS 2010.)

Database state – latest!

Today sees the publication of a report by Professor Trisha Greenhalgh into the Summary Care Record (SCR). There is a summary of the report in the BMJ, which also has two discussion pieces: one by Sir Mark Walport of the Wellcome Trust arguing that the future of medical records is digital, and one by me which agrees but argues that as the SCR is unsafe and unlawful, it should be abandoned.

Two weeks ago I reported here how the coalition government planned to retain the SCR, despite pre-election promises from both its constituent parties to do away with it. These promises followed our Database State report last year, which demonstrated that many of the central systems built by the previous government contravened human-rights law. The government’s U-turn provoked considerable anger among doctors, NGOs and backbench MPs, prompting health minister Simon Burns to promise a review.

Professor Greenhalgh’s review, which was in fact completed before the election, finds that the SCR fails to do what it was supposed to. It isn’t used much; it doesn’t fit in with how doctors and nurses actually work; it makes consultations longer rather than shorter; and the project was extremely badly managed. In fact, her report should be read by all serious students of software engineering; like the London Ambulance Service report almost twenty years ago, this document sets out in great detail what not to do.

For now, there is some press coverage in the Telegraph, the Mail, E-health Insider and Computerworld UK.

Workshop on the economics of information security 2010

Here is a liveblog of WEIS, which is being held today and tomorrow at Harvard. There are 125 attendees: 59% academic, 15% government/NGO and 26% industry; by background, 47% CS, 35% econ/management and 18% policy/law. The paper acceptance rate was 24/72: 10 empirical papers, 8 theory and 6 on policy.

The workshop kicked off with a keynote talk from Tracey Vispoli of Chubb Insurance. In the early 2000s the insurance industry thought cyber would be big. It isn’t yet, but it is starting to grow rapidly. There is still little actuarial data, but the industry can shape behaviour by occupying the gap between risk aversion and risk tolerance, and its technical standards can make a difference (as with buildings, highways and so on). So far a big factor is the insurance response to breach-notification requirements: notification costs of $50-60 per compromised record mean that a 47m-record compromise like TJX is a loss of well over $2bn, and one you want to insure! So she expects a healthy supply-and-demand model for cyberinsurance in the coming years, which will help to shape standards, best practices and culture.

Questions: are there enough data to model? So far no company has enough; ideally we should bring data together from across the industry into one central shared point, and government has a role here, as with highways. Standards? Client prequalification is currently a fast-moving target; insurers’ competitive advantage is understanding the intersection between standards and pricing. Reinsurance? Sure, where a single event could affect multiple policies. Is there any role for insurance in the tension between auditability and security in the power industry (NERC)? Maybe, but legal penalties are in general uninsurable. How do we get insurers to come to WEIS? It would help if we had more specificity in our research papers: not just “breaches” but “breaches resulting in X” (the industry is not interested in national security, corporate espionage and other things that do not result in claims). Market evolution? She predicts the industry will follow its usual practice of lowballing a new market until losses mount, then cutting back coverage terms; for example, employment liability insurance grew rapidly over the last 20 years but became unprofitable because of class actions for discrimination and the like, so the industry cut coverage, which was OK as it helped shape best employment practice. Data sharing by the industry itself? Client confidentiality stops ad-hoc sharing, but it would be good to have a properly regulated central depository. Who’s the Ralph Nader of this? Broad reform might come from the FTC; it’s surprising the SEC hasn’t done anything (HIPAA and GLB are too industry-specific). Quantifiability of best practice? Not enough data. How much of the business is cyber? At present it’s 5% of Chubb’s insurance business, but expect 8-9% in 2010-11: rapid growth!

Future sessions will be covered in additional posts…