Category Archives: Privacy technology

Anonymous communication, data protection

Operational security failure

A shocking article appeared yesterday on the BMJ website. It recounts how auditors called 45 GP surgeries asking for personal information about 51 patients. In only one case were they asked to verify their identity; the attack succeeded against the other 50 patients.

This is an old problem. In 1996, when I was advising the BMA on clinical system safety and privacy, we trained the staff at one health authority to detect false-pretext phone calls, and they found 30 a week. We reported this to the Department of Health, hoping they’d introduce some operational security measures nationwide; instead the Department got furious at us for treading on their turf and ordered the HA to stop cooperating (the story’s told in my book). More recently I confronted the NHS chief executive, David Nicholson, and patient tsar Harry Cayton, with the issue at a conference early last year; they claimed there wasn’t a problem any more, now that people have all these computers.

What will it take to get the Department of Health to care about patient privacy? Lack of confidentiality already costs lives, albeit indirectly. Will it require a really high-profile fatality?

"Covert channel vulnerabilities in anonymity systems" wins best thesis award

My PhD thesis “Covert channel vulnerabilities in anonymity systems” has been awarded this year’s best thesis prize by the ERCIM security and trust management working group. The announcement can be found on the working group homepage, and I’ve been invited to give a talk at their upcoming workshop, STM 08, in Trondheim, Norway, 16–17 June 2008.

Update 2008-07-07: ERCIM have also published a press release.

Second edition

The second edition of my book “Security Engineering” came out three weeks ago. Wiley have now got round to sending me the final electronic version of the book, plus permission to put half a dozen of the chapters online. They’re now available for download here.

The chapters I’ve put online cover security psychology, banking systems, physical protection, APIs, search, social networking, elections and terrorism. That’s just a sample of how our field has grown outwards in the seven years since the first edition.

Enjoy!

Stealing Phorm Cookies

Last week I gave a talk at a “town hall meeting” about the Phorm targeted advertising system, organised by 80/20 Thinking. You can see my slides here, and eventually there will be some video here.

One of the issues I talked about was the possibility of stealing Phorm’s cookies, which I elaborate upon in this post. I have written about Phorm’s system before, and you can read a detailed technical explanation, but for the present, what it is necessary to know is that through some sleight-of-hand, users whose ISPs deploy Phorm will end up with tracking cookies stored on their machine, one for every website they visit, but with each containing an identical copy of their unique Phorm tracking number.

The Phorm system strips out these cookies when it can, but the website can access them anyway, either by using some straightforward JavaScript to read their value and POST it back, or by the simple expedient of embedding an https image (<img src="https://…">) within their page. The Phorm system will not be able to remove the cookie from an encrypted image request.
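
To make this concrete, here is a minimal browser-side sketch of both techniques (written as TypeScript, though in practice it would be plain JavaScript). The cookie name, the collection endpoint and the image path are placeholders of my own, not Phorm’s actual names.

```typescript
// Sketch of both techniques described above. The cookie name ("phorm_uid"),
// the collection endpoint and the image path are illustrative placeholders.

function readCookie(name: string): string | null {
  const match = document.cookie.match(new RegExp("(?:^|; )" + name + "=([^;]*)"));
  return match ? decodeURIComponent(match[1]) : null;
}

const uid = readCookie("phorm_uid");               // hypothetical cookie name
if (uid !== null) {
  // Technique 1: read the value with JavaScript and POST it back to the site.
  const req = new XMLHttpRequest();
  req.open("POST", "/collect-uid", true);          // illustrative endpoint
  req.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
  req.send("uid=" + encodeURIComponent(uid));
}

// Technique 2: embed an https image. The interception box cannot strip the
// cookie from an encrypted request, so the site's server receives it intact.
const img = new Image();
img.src = "https://" + location.host + "/pixel.png";   // illustrative path
```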

Once the website has obtained the Phorm cookie value, then in countries outside the European Union where such things are allowed (almost expected!), the unique tracking number can be combined with any other information the website holds about its visitor, and sold to the highest bidder, who can collate this data with anything else they know about the holder of the tracking number.

Of course, the website can do this already with any signup information that has been provided, but the only global tracking identifier it has is the visiting IP address, and most consumer ISPs give users new IP addresses every few hours or few days. In contrast, the Phorm tracking number will last until the user decides to delete all their cookies…

A twist on this was suggested by “Barrie” in one of the comments to my earlier post. If the remote website obtains an account at the visitor’s ISP (BT, Talk Talk or Virgin in the UK), then they can construct an advert request to the Phorm system, using the Phorm identifier of one of their visitors. By inspecting the advert they receive, they will learn what Phorm thinks will interest that visitor. They can then sell this information on, or serve up their own targeted advert. Essentially, they’re reverse engineering Phorm’s business model.

There are of course things that Phorm can do about these threats, by appropriate use of encryption and traffic analysis. Whether making an already complex system still more complex will assist in the transparency they say they are seeking is, in my view, problematic.

The Phorm “Webwise” System

Last week I spent several hours at Phorm learning how their advertising system works — this is the system that is to be deployed by the UK’s largest ISPs to pick apart your web browsing activities to try and determine what interests you.

The idea is that advertisers can be more picky in who they serve adverts to… you’ll get travel ads if you’ve been looking to go to Pamplona for the running of the bulls, car adverts if you’ve been checking out the prices of Fords (the intent is that Phorm’s method of distilling down the ten most common words on the page will allow them to distinguish between a Fiesta and a fiesta!)
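
For illustration, a page digest of that general kind might look like the sketch below. The word-length filter and the case folding are my own guesses at the sort of processing involved; this is not Phorm’s actual algorithm.

```typescript
// A rough sketch of a "ten most common words" page digest of the kind
// described above. Illustrative only; not Phorm's actual algorithm.

function topWords(pageText: string, n = 10): string[] {
  const counts = new Map<string, number>();
  for (const word of pageText.toLowerCase().match(/[a-z]+/g) ?? []) {
    if (word.length < 4) continue;                 // crude stop-word filter
    counts.set(word, (counts.get(word) ?? 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, n)
    .map(([word]) => word);
}

// Words such as "pamplona", "bulls" and "running" would tag the page as
// travel rather than motoring, whichever sense of "fiesta" appears on it.
console.log(topWords(document.body.innerText));
```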

I’ve now written up the extensive technical details that they provided (10 pages’ worth), which you can download from my website.

Much of the information was already known, albeit perhaps not all the minutiae. However, a number of new things were disclosed.

Phorm explained the process by which an initial web request is redirected three times (using HTTP 307 responses) within their system: first so that they can inspect cookies to determine whether the user has opted out, then so that they can set a unique identifier for the user (or collect it if it already exists), and finally to add a cookie that they forge to appear to come from someone else’s website. A number of very well-informed people on the UKCrypto mailing list have suggested that the last of these actions may be illegal under the Fraud Act 2006 and/or the Computer Misuse Act 1990.
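
To illustrate just the last of those steps, here is a minimal Node.js/TypeScript sketch of an in-path box answering in a website’s name and setting a cookie via a 307 redirect. The port, cookie name and value are my own placeholders, and the real Phorm redirect chain (which passes through Phorm’s own domain) is more involved than this.

```typescript
// Minimal sketch of an in-path box that is *not* the real website answering
// in its name with an HTTP 307, so the cookie it sets is stored by the
// browser under that site's domain. Names and values are illustrative only.

import * as http from "http";

http.createServer((req, res) => {
  // To the browser this reply appears to come from the host it asked for,
  // so the Set-Cookie is attributed to that site, not the interception box.
  res.writeHead(307, {
    "Location": `http://${req.headers.host}${req.url}`,   // retry the same URL
    "Set-Cookie": "webwise_uid=0123456789; path=/",       // hypothetical tracking cookie
  });
  res.end();
  // In the real system, the presence of the cookie on the retried request is
  // what tells the box to stop redirecting and pass the request through.
}).listen(8080);
```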

Phorm also explained that they inspect a website’s “robots.txt” file to determine whether the website owner has specified that search engine “spiders” and other automated processing systems should not examine the site. This goes a little way towards obtaining the permission of the website owner for intercepting their traffic — however, in my view, failing to prohibit the GoogleBot from indexing your page is rather different from permitting your page contents to be snooped upon, so that Phorm can turn a profit from profiling your visitors.
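
A check along those lines might look roughly like the following sketch; the parsing here is my own illustration rather than Phorm’s actual rule set.

```typescript
// Sketch of a robots.txt check of the kind described above: treat a site as
// off-limits if its robots.txt disallows everything for all user agents.

async function siteAllowsProfiling(host: string): Promise<boolean> {
  const res = await fetch(`http://${host}/robots.txt`);
  if (!res.ok) return true;                        // no robots.txt: nothing prohibited
  const lines = (await res.text()).split("\n").map(l => l.trim().toLowerCase());

  let appliesToAll = false;
  for (const line of lines) {
    if (line.startsWith("user-agent:")) {
      appliesToAll = line.includes("*");           // a block addressed to all agents
    } else if (appliesToAll && line.replace(/\s+/g, "") === "disallow:/") {
      return false;                                // whole site barred to spiders
    }
  }
  return true;
}
```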

Overall, I learnt nothing about the Phorm system that caused me to change my view that the system performs illegal interception as defined by s1 of the Regulation of Investigatory Powers Act 2000.

Phorm argue, with some justification, that their system does not permit them to identify individuals and that they meet and exceed all necessary Data Protection regulations — producing a system that is superior to other advertising platforms that profile Internet users.

Mayhap, but this is to mix up data protection and privacy.

The latter to me includes the important notion that other people, even people I’ll never meet and who will never meet me, don’t get to know what I do, they don’t get to learn what I’m interested in, and they don’t get to assume that targeting their advertisements will be welcomed.

If I spend my time checking out the details of a surprise visit to Spain, I don’t want the person I’m taking with me to glance at my laptop screen and see that it’s covered with travel adverts, mix up cause and effect, and think — even just for a moment — that it wasn’t my idea first!

Phorm says that of course I can opt out — and I will — but just because nothing bad happens to me doesn’t mean that deploying the system is acceptable.

Phorm assumes that their system “anonymises” and therefore cannot possibly do anyone any harm; they assume that their processing is generic and so it cannot be interception; they assume that their business processes give them the right to impersonate trusted websites and add tracking cookies under an assumed name; and they assume that if only people understood all the technical details they’d be happy.

Well now’s your chance to see all these technical details for yourself — I have, and I’m still not happy at all.

Update (2008-04-06):

Phorm have now quoted sections of this article on their own blog: http://blog.phorm.com/?p=12. Perhaps not surprisingly, they’ve quoted the paragraph that was favourable to their cause, and failed to mention all the paragraphs that followed that were sharply critical. They then fail (again, how can one be surprised?) to provide a link back to this article so that people can read it for themselves. Readers are left to draw their own conclusions.

Update (2008-04-07):

Phorm have now fixed a “tech glitch” (see comment #31) and link to my technical report. The material they quote comes from this blog article, but they point out that they link to the ORG blog, and that links to this blog article. So that’s all right then!

Securing Network Location Awareness with Authenticated DHCP

During April–June 2006, I was an intern at Microsoft Research, Cambridge. My project, supervised by Tuomas Aura and Michael Roe, was to improve the privacy and security of mobile computer users. A paper summarizing our work was published at SecureComm 2007, but I’ve only just released the paper online: “Securing Network Location Awareness with Authenticated DHCP”.

How a computer should behave depends on its network location. Existing security solutions, like firewalls, fail to adequately protect mobile users because their policies are static. This results in laptop computers being configured with fairly open policies, in order to facilitate applications appropriate for a trustworthy office LAN (e.g. file and printer sharing, collaboration applications, and custom servers). When the computer is taken home or roaming, this policy leaves an excessively large attack surface.

This static approach also harms user privacy. Modern applications broadcast a large number of identifiers which may leak privacy-sensitive information (name, employer, office location, job role); even randomly generated identifiers allow a user to be tracked. When roaming, a laptop should not broadcast identifiers unless necessary, and on moving location either pseudonymous identifiers should be re-used or anonymous ones generated.

Both of these goals require a computer to be able to identify which network it is on, even when an attacker is attempting to spoof this information. Our solution was to extend DHCP to include a network location identifier, authenticated by a public-key signature. I built a proof-of-concept implementation for the Microsoft Windows Server 2003 DHCP server, and the Vista DHCP client.
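
The core of the idea can be sketched as follows (in TypeScript, using Node’s crypto module rather than the actual Windows implementation). The field names, the client nonce and the key handling are my own illustration; see the paper for the actual protocol.

```typescript
// Conceptual sketch: the DHCP server attaches a signed network location
// identifier to its responses, and the client only trusts the location (and
// picks a policy) if the signature verifies. Illustrative fields only.

import { generateKeyPairSync, createSign, createVerify } from "crypto";

// In reality the key pair belongs to the network operator and the public key
// is provisioned to clients out of band (or via a certificate).
const { publicKey, privateKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

interface SignedLocation {
  networkId: string;   // e.g. "office-lan.example.com" (illustrative name)
  nonce: string;       // client-supplied, to prevent replay of old responses
  signature: string;
}

function signLocation(networkId: string, nonce: string): SignedLocation {
  const signer = createSign("sha256");
  signer.update(`${networkId}|${nonce}`);
  return { networkId, nonce, signature: signer.sign(privateKey, "base64") };
}

function verifyLocation(loc: SignedLocation, expectedNonce: string): boolean {
  if (loc.nonce !== expectedNonce) return false;          // stale or replayed
  const verifier = createVerify("sha256");
  verifier.update(`${loc.networkId}|${loc.nonce}`);
  return verifier.verify(publicKey, loc.signature, "base64");
}

// Client side: only relax the firewall policy once the location checks out.
const nonce = Date.now().toString();
const offer = signLocation("office-lan.example.com", nonce);   // from the DHCP server
console.log(verifyLocation(offer, nonce) ? "apply office policy" : "apply strict roaming policy");
```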

A scheme like this should ideally work on small, PKI-less home LANs while still permitting larger networks to aggregate multiple access points into one logical network. Achieving this requires some subtle naming and key management tricks. These techniques, and how to implement the protocols in a privacy-preserving manner, are described in our paper.

Covert channel vulnerabilities in anonymity systems

My PhD thesis — “Covert channel vulnerabilities in anonymity systems” — has now been published:

The spread of wide-scale Internet surveillance has spurred interest in anonymity systems that protect users’ privacy by restricting unauthorised access to their identity. This requirement can be considered as a flow control policy in the well established field of multilevel secure systems. I apply previous research on covert channels (unintended means to communicate in violation of a security policy) to analyse several anonymity systems in an innovative way.

One application for anonymity systems is to prevent collusion in competitions. I show how covert channels may be exploited to violate these protections and construct defences against such attacks, drawing from previous covert channel research and collusion-resistant voting systems.

In the military context, for which multilevel secure systems were designed, covert channels are increasingly eliminated by physical separation of interconnected single-role computers. Prior work on the remaining network covert channels has been solely based on protocol specifications. I examine some protocol implementations and show how the use of several covert channels can be detected and how channels can be modified to resist detection.

I show how side channels (unintended information leakage) in anonymity networks may reveal the behaviour of users. While drawing on previous research on traffic analysis and covert channels, I avoid the traditional assumption of an omnipotent adversary. Rather, these attacks are feasible for an attacker with limited access to the network. The effectiveness of these techniques is demonstrated by experiments on a deployed anonymity network, Tor.

Finally, I introduce novel covert and side channels which exploit thermal effects. Changes in temperature can be remotely induced through CPU load and measured by their effects on crystal clock skew. Experiments show this to be an effective attack against Tor. This side channel may also be usable for geolocation and, as a covert channel, can cross supposedly infallible air-gap security boundaries.

This thesis demonstrates how theoretical models and generic methodologies relating to covert channels may be applied to find practical solutions to problems in real-world anonymity systems. These findings confirm the existing hypothesis that covert channel analysis, vulnerabilities and defences developed for multilevel secure systems apply equally well to anonymity systems.

Steven J. Murdoch, Covert channel vulnerabilities in anonymity systems, Technical report UCAM-CL-TR-706, University of Cambridge, Computer Laboratory, December 2007.
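
As a small illustration of the clock-skew measurement the abstract refers to: collect pairs of (local time, remote timestamp), fit a line, and read the skew off the slope. The sketch below uses a simple least-squares fit, which stands in for the more careful estimators used in practice; it is not code from the thesis.

```typescript
// Estimate clock skew from (local time, remote timestamp) pairs by fitting a
// line; the slope's deviation from 1 is the skew, in parts per million (ppm).

interface Sample { local: number; remote: number; }   // both in seconds

function skewPpm(samples: Sample[]): number {
  const n = samples.length;
  const mx = samples.reduce((s, p) => s + p.local, 0) / n;
  const my = samples.reduce((s, p) => s + p.remote, 0) / n;
  let num = 0, den = 0;
  for (const p of samples) {
    num += (p.local - mx) * (p.remote - my);
    den += (p.local - mx) ** 2;
  }
  const slope = num / den;            // remote seconds per local second
  return (slope - 1) * 1e6;           // deviation from nominal rate, in ppm
}

// Synthetic example: a remote clock running 5 ppm fast relative to ours.
const samples: Sample[] = Array.from({ length: 100 }, (_, i) => ({
  local: i * 10,
  remote: i * 10 * (1 + 5e-6),
}));
console.log(skewPpm(samples).toFixed(2), "ppm");
```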

Privacy Enhancing Technologies Symposium (PETS 2008)

I am on the program committee for the Privacy Enhancing Technologies Symposium (previously the PET Workshop), which this year will be held in Leuven, Belgium, 23–25 July 2008. PETS is one of the leading venues for research in privacy, so if you have any relevant research, I can thoroughly recommend submitting it here.

In addition to the main paper session, a new feature this year is HotPETS, which gives the opportunity for short presentations on new and exciting ideas that are potentially not yet mature enough for publication. As usual, proposals for panels are also invited.

The deadline for submissions is 19 February 2008 (except for HotPETS, which is 11 April 2008). More details can be found in the Call For Papers.

Government security failure

In breaking news, the Chancellor of the Exchequer will announce at 1530 that HM Revenue and Customs has lost the data of 15 million child benefit recipients, and that the head of HMRC has resigned.

FIPR has been saying since last November’s publication of our report on Children’s Databases for the Information Commissioner that the proposed centralisation of public-sector data on the nation’s children was not only unsafe but illegal.

But that isn’t all. The Health Select Committee recently made a number of recommendations to improve safety and privacy of electronic medical records, and to give patients more rights to opt out. Ministers dismissed these recommendations, and a poll today shows doctors are so worried about confidentiality that many will opt out of using the new shared care record system.

The report of the Lords Science and Technology Committee into Personal Internet Security also pointed out a lot of government failings in preventing electronic crime – which ministers contemptuously dismissed. It’s surely clear by now that the whole public-sector computer-security establishment is no longer fit for purpose. The next government should replace CESG with a civilian agency staffed by competent people. Ministers need much better advice than they’re currently getting.

Developing …

(added later: coverage from the BBC, the Guardian, Channel 4, the Times, Computer Weekly and e-Health Insider; and here’s the ORG Blog)

Government ignores Personal Medical Security

The Government has just published their response to the Health Committee’s report on The Electronic Patient Record. This response is shocking but not surprising.

For example, on pages 6-7 the Department reject the committee’s recommendation that sealed-envelope data should be kept out of the secondary uses service (SUS). Sealed-envelope data is the stuff you don’t want shared, and SUS is the database that gives civil servants, medical researchers and others access to masses of health data. The Department’s justification (para 4, page 6) is not just an evasion but is simply untruthful: they claim that the design of SUS ‘ensures that patient confidentiality is protected’ when in fact it doesn’t. The data there are not pseudonymised (though the government says it’s setting up a research programme to look at this – report p 23). Already many organisations have access.

The Department also refuses to publish information about security evaluations, test results and breaches (p9) and reliability failures (p19). Their faith in security-by-obscurity is touching.

The biggest existing security problem in the NHS – that many staff carelessly give out data on the phone to anyone who asks for it – will be subject to ‘assessment’, which ‘will feed into the further implementation’. Yeah, I’m sure. But as for the recommendation that the NHS provide a substantial audit resource – as there is to detect careless and abusive disclosure from the police national computer – we just get a long-winded evasion (pp 10-11).

Finally, the fundamental changes to the NPfIT business process that would be needed to make the project work are rejected (pp 14-15): Sir Humphrey will maintain central control of IT and there will be no ‘catalogue’ of approved systems from which trusts can choose. And the proposals that the UK participate in open standards, along the lines of the more successful Swedish or Dutch model, draw just a long evasion (p 16). I fear the whole project will just continue on its slow slide towards becoming the biggest IT disaster ever.