Category Archives: Privacy technology

Anonymous communication, data protection

NHS Computer Project Failing

The House of Commons Health Select Committee has just published a Report on the Electronic Patient Record. It concludes that the NHS National Programme for IT (NPfIT), the 20-billion-pound project to rip out all the computers in the NHS and replace them with systems that store data in central server farms rather than in the surgery or hospital, is failing to meet its stated core objective – providing clinically rich, interoperable detailed care records. What’s more, privacy is at serious risk. Here is a comment from e-Health Insider.

For the last few years I’ve been using the London Ambulance Service disaster as the standard teaching example of how things go wrong in big software projects. It looks like I will have to refresh my notes for the Software Engineering course next month!

I’ve been warning about the safety and privacy risks of the Department of Health’s repeated attempts to centralise healthcare IT since 1995. Here is an analysis of patient privacy I wrote earlier this year, and here are my older writings on the security of clinical information systems. It doesn’t give me any great pleasure to be proved right, though.

Embassy email accounts breached by unencrypted passwords

When it rains, it pours. Following the fuss over the Storm worm impersonating Tor, today Wired and The Register are covering the story of Dan Egerstad, who intercepted embassy email account passwords by setting up 5 Tor exit nodes, then published the results online. People have sniffed passwords on Tor before, and one even published a live feed. However, the sensitivity of embassies as targets, and the initial mystery over how the passwords were snooped, helped drum up media interest.

That unencrypted traffic can be read by Tor exit nodes is an unavoidable fact – if the destination does not accept encrypted traffic then there is nothing Tor can do to change this. The download page has a big warning, recommending that users adopt end-to-end encryption. In some cases this might not be possible, for example when browsing sites which do not support SSL, but for downloading email, not using encryption with Tor is inexcusable.
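To make this concrete, here is a minimal sketch of collecting email over an encrypted connection in Python, so that a Tor exit node sees only ciphertext. The hostname and credentials are hypothetical placeholders, not a real configuration.

import imaplib

HOST = "mail.example.org"  # hypothetical mail server

# IMAP4_SSL wraps the whole session in TLS (port 993); with plain
# IMAP4 on port 143, the password would cross the exit node in
# cleartext, exactly as in the embassy case.
conn = imaplib.IMAP4_SSL(HOST, 993)
conn.login("user@example.org", "hypothetical-password")
conn.select("INBOX")
status, message_ids = conn.search(None, "ALL")
print(status, message_ids)
conn.logout()

POP3 users get the same protection from poplib.POP3_SSL; either way, the exit node sees only an encrypted stream.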

Looking at who owns the IP addresses serving the compromised email accounts, I can see that they belong mainly to commercial ISPs, generally in the country where the embassy is located, so the accounts were probably set up by the individual embassies and are not subject to any server-imposed security policies. Even so, it is questionable whether such accounts should be used for official business, and it is not hard to find providers which support encrypted access.

The exceptions are Uzbekistan and Iran, whose servers are controlled by their respective Ministries of Foreign Affairs, so I am surprised that secure access is not mandated (even my university requires this). I did note that the passwords of the Uzbek accounts are very good, so they might well be allocated centrally according to a reasonable password policy. In contrast, the Iranian passwords are simply the name of the embassy, so guessable not only for these accounts but for any others too.

In general, if you are sending confidential information over the Internet unencrypted you are at risk, and Tor does not change this fact, but it does move those risks around. Depending on the nature of the secrets, this could be for better or for worse. Without Tor, data can be intercepted near the server, near the client, and in the core of the Internet; with Tor, data is encrypted near the client, but can be seen by the exit node.

Users of unknown Internet cafés or of poorly secured wireless networks are at risk of interception near the client. Sometimes there is motivation to snoop traffic there but not at the exit node. For example, people may be curious about what websites their flatmates browse, but it is not interesting to know that an anonymous person is browsing a controversial site. This is why, at conferences, I tunnel my web browsing via Cambridge. I know that without end-to-end encryption my data can be intercepted, but the sysadmins at Cambridge have far less incentive to misbehave than some joker sitting behind me.

Tor has similar properties, but when it is used with unencrypted data the risks need to be carefully evaluated. When collecting email, be it over Tor, over wireless, or via any other untrustworthy medium, end-to-end encryption is essential. It is disappointing to learn that embassies, which are supposed to be security conscious, do not appreciate this.

Although I am a member of the Tor project, the views expressed here are mine alone and not those of Tor.

Economics of Tor performance

Currently the performance of the Tor anonymity network is quite poor. This problem is frequently cited as a reason for people not using anonymizing proxies, so improving performance is a high priority for its developers. There are only about 1 000 Tor nodes, and many are on slow Internet connections, so in aggregate there is about 1 Gbit/s shared between 100 000 or so users (roughly 10 kbit/s each). One way to improve the experience of Tor users is to increase the number of Tor nodes (especially high-bandwidth ones). Some means to achieve this goal are discussed in Challenges in Deploying Low-Latency Anonymity, but here I want to explore what will happen when Tor’s total bandwidth increases.

If Tor’s bandwidth doubled tomorrow, the naïve hypothesis is that users would experience twice the throughput. Unfortunately this is not true, because it assumes that the number of users does not vary with the bandwidth available. In fact, as the supply of the Tor network’s bandwidth increases, there will be a corresponding increase in the demand for bandwidth from Tor users. This applies just as well to other networks, but for the purposes of this post I’ll use Tor as an example. Simple economics shows that the performance of Tor is controlled by how the number of users scales with available bandwidth, which can be represented by a demand curve.
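As an illustration, here is a toy model of this equilibrium in Python. The demand curve and every number in it are invented for the example, not measured from the Tor network.

# Toy model: per-user throughput at equilibrium as supply grows.
# The demand curve below is invented for illustration, not measured.

def users(throughput_kbps):
    # Hypothetical demand curve: how many users tolerate a given
    # per-user throughput. More speed attracts more users.
    return 100_000 * (throughput_kbps / 10.0) ** 0.5

def equilibrium(total_bandwidth_kbps):
    # Find the per-user throughput t where supply meets demand,
    # i.e. users(t) * t == total_bandwidth_kbps, by bisection.
    lo, hi = 1e-6, total_bandwidth_kbps
    for _ in range(100):
        mid = (lo + hi) / 2
        if users(mid) * mid > total_bandwidth_kbps:
            hi = mid
        else:
            lo = mid
    return lo

for supply_gbps in (1, 2, 4):
    t = equilibrium(supply_gbps * 1_000_000)  # Gbit/s -> kbit/s
    print(f"{supply_gbps} Gbit/s total -> {t:.1f} kbit/s per user, "
          f"{users(t):,.0f} users")

With this (entirely hypothetical) demand curve, doubling the supply from 1 to 2 Gbit/s raises per-user throughput by only about 60%, from 10 to about 16 kbit/s, because the extra capacity also attracts extra users.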

I don’t claim this is a new insight; in fact, between my starting this draft and now, Andreas Pfitzmann made a very similar observation while answering a question following the presentation of Performance Comparison of Low-Latency Anonymisation Services from a User Perspective at the PET Symposium. He said, as I recall, that the performance of an anonymity network settles at the slowest speed tolerable to the people who care about their privacy. Despite this, I couldn’t find a succinct written description anywhere, perhaps because the point is too obvious. Equally, I have heard the naïve version stated occasionally, so I think it’s helpful to publish something people can point at. The rest of this post will discuss the consequences of modelling Tor user behaviour in this way, and the limitations of the technique.

Continue reading Economics of Tor performance

Recent talks: Chip & PIN, traffic analysis, and voting

In the past couple of months, I’ve presented quite a few talks, and in the course of doing so, travelled a lot too (Belgium and Canada last month; America and Denmark still to come). I’ve now published my slides from these talks, which might also be of interest to Light Blue Touchpaper readers, so I’ll summarize the contents here.

Two of the talks were on Chip & PIN, the UK deployment of EMV. The first presentation — “Chip and Spin” — was for the Girton village Neighbourhood Watch meeting. Girton was hit by a spate of card-cloning, eventually traced back to a local garage, so they invited me to give a fairly non-technical overview of the problem. The slides served mainly as an introduction to a few video clips I showed, taken from TV programmes in which I participated. [slides (PDF 1.1M)]

The second Chip & PIN talk was to the COSIC research group at K.U. Leuven. Due to the different audience, this presentation — “EMV flaws and fixes: vulnerabilities in smart card payment systems” — was much more technical. I summarized the EMV protocol, described a number of weaknesses which leave EMV open to attack, along with corresponding defences. Finally, I discussed the more general problem with EMV — that customers are in a poor position to contest fraudulent transactions — and how this situation can be mitigated. [slides (PDF 1.4M)]

If you are interested in further details, much of the material from both of my Chip & PIN talks is discussed in papers from our group, such as “Chip and SPIN”, “The Man-in-the-Middle Defence” and “Keep Your Enemies Close: Distance bounding against smartcard relay attacks”.

Next I went to Ottawa for the PET Workshop (now renamed the PET Symposium). Here, I gave three talks. The first was for a panel session — “Ethics in Privacy Research”. Since this was a discussion, the slides aren’t particularly interesting, but the topic will hopefully be the subject of an upcoming paper.

Then I gave a short talk at WOTE, on my experiences as an election observer. I summarized the conclusions of the Open Rights Group report (released the day before my talk) and added a few personal observations. Richard Clayton discussed the report in the previous post. [slides (PDF 195K)]

Finally, I presented the paper written by Piotr Zieliński and me — “Sampled Traffic Analysis by Internet-Exchange-Level Adversaries”, which I previously mentioned in a recent post. In the talk I gave a graphical summary of the paper’s key points, which I hope will aid in understanding the motivation of the paper and the traffic analysis method we developed. [slides (PDF 2.9M)]

Should there be a Best Practice for censorship?

A couple of weeks ago, right at the end of the Oxford Internet Institute conference on The Future of Free Expression on the Internet, the question was raised from the platform as to whether it might be possible to construct a Best Current Practice (BCP) framework for censorship.

If — the argument ran — IF countries were transparent about what they censored, IF there was no overblocking (the literature’s jargon for collateral damage), IF it was done under a formal (local) legal framework, IF there was the right of appeal to correct inadvertent errors, IF … and doubtless a whole raft more of “IFs” that a proper effort to develop a BCP would establish. IF… then perhaps censorship would be OK.

I spoke against the notion of a BCP from the audience at the time, and after some reflection I see no reason to change my mind.

There will be many more subtle arguments — much as there will be more IFs to consider — but I can immediately see two insurmountable objections.

The first is that a BCP will inevitably lead to far more censorship, but now with the apparent endorsement of a prestigious organisation: “The OpenNet Initiative says that blocking the political opposition’s websites is just fine!” Doubtless some of the IFs in the BCP will address open political processes, and universal human rights … but it will surely come down to quibbling about language: terrorist/freedom-fighter; assassination/murder; dissent/rebellion; opposition/traitor.

The second, and I think the most telling, objection is that it will reinforce the impression that censoring the Internet can actually be achieved, whereas the evidence piles up that it just isn’t possible. All of the schemes for blocking content can be evaded by those with technical knowledge (or access to the tools written by others with that knowledge). Proxies, VPNs, Tor, fragments, ignoring resets… the list of evasion technologies is endless.

One of the best ways of spreading data to multiple sites is to attempt to remove it, and every few years some organisation demonstrates this again. And although ad hoc replication doesn’t necessarily scale, there are plenty of schemes in the literature for doing it on an industrial scale.

It’s clichéd to trot out John Gilmore’s observation that “the Internet treats censorship as a defect and routes around it”, but over-familiarity with the phrase should not hide its underlying truth.

So, in my view, a BCP will merely be used by the wicked as a fig-leaf for their activity, and by the ignorant to prop up their belief that it’s actually possible to block the content they don’t believe should be visible. A BCP is a thoroughly bad idea, and should not be further considered.

Sampled Traffic Analysis by Internet-Exchange-Level Adversaries

Users of the Tor anonymous communication system are at risk of being tracked by an adversary who can monitor both the traffic entering and leaving the network. This weakness is well known to the designers, and currently there is no known practical way to resist such attacks while maintaining the low latency demanded by applications such as web browsing. For this reason, it seems intuitively clear that when selecting a path through the Tor network, it would be beneficial to select nodes in different countries. Hopefully, government-level adversaries will find it problematic to track cross-border connections, as mutual legal assistance is slow, if it works at all. Non-government adversaries might find that their influence drops off at national boundaries too.
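As a sketch of what such a strategy might look like, here are a few lines of Python which pick a three-hop circuit whose nodes all sit in different countries. The node names and country codes are invented for illustration; a real client would take them from the Tor directory and a GeoIP database.

import random

# Hypothetical node list: (name, country code) pairs.
NODES = [
    ("node-a", "DE"), ("node-b", "DE"), ("node-c", "US"),
    ("node-d", "NL"), ("node-e", "FR"), ("node-f", "US"),
]

def pick_diverse_circuit(nodes, hops=3):
    # Randomly sample `hops` nodes until all lie in distinct countries.
    while True:
        circuit = random.sample(nodes, hops)
        if len({country for _, country in circuit}) == hops:
            return circuit

print(pick_diverse_circuit(NODES))

As the rest of this post argues, country diversity of this kind may be a weaker defence than it first appears.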

Implementing secure IP-based geolocation is hard, but even if it were possible, the technique might not help and could perhaps even harm security. The PET Award-nominated paper “Location Diversity in Anonymity Networks”, by Nick Feamster and Roger Dingledine, showed that international Internet connections cross a comparatively small number of tier-1 ISPs. Thus, by forcing one or more of these companies to co-operate, a large proportion of connections through an anonymity network could be traced.

The results of Feamster and Dingledine’s paper suggest that it may be better to bounce anonymity traffic around within a country, because it is less likely that there will be a single ISP monitoring incoming and outgoing traffic to several nodes. However, this only appears to be the case because they used BGP data to build a map of Autonomous Systems (ASes), which roughly correspond to ISPs. Actually, inter-ISP traffic (especially in Europe) might travel through an Internet eXchange (IX), a fact not apparent from BGP data. Our paper, “Sampled Traffic Analysis by Internet-Exchange-Level Adversaries“, by Steven J. Murdoch and Piotr Zieliński, examines the consequences of this observation.

Continue reading Sampled Traffic Analysis by Internet-Exchange-Level Adversaries

What is the unit of amplification for DoS?

Roger Needham’s work warned us of the potential damage of Denial-of-Service attacks. Since then, protocol designers have tried to minimize the storage committed to unauthenticated partners, as well as to prevent ‘amplification effects’ that could help DoS adversaries: a single unit of network communication should lead to at most one unit of network communication in reply. This way an adversary cannot use network services to amplify his or her ability to flood other nodes. The key question that arises is: what is the unit of network communication?

My favorite authentication protocol, which incorporates state-of-the-art DoS prevention features, is JFKr (Just Fast Keying with responder anonymity). An RFC is also available for those keen to implement it. JFKr implements a signed ephemeral Diffie-Hellman exchange, with DoS-prevention cookies used by the responder to thwart storage-exhaustion DoS attacks directed against him.

The protocol goes a bit like this (full details in section 2.3 of the RFC):

Message 1, I->R: Ni, g^i

Message 2, R->I: Ni, Nr, g^r, GRPINFOr, HMAC{HKr}(Ni, Nr, g^r)

Message 3, I->R: Ni, Nr, g^i, g^r, HMAC{HKr}(Ni, Nr, g^r),
E{Ke}(IDi, sa, SIG{i}(Ni, Nr, g^i, g^r, GRPINFO)),
HMAC{Ka}('I', E{Ke}(IDi, sa, SIG{i}(Ni, Nr, g^i, g^r, GRPINFO)))

Message 4, R->I: E{Ke}(IDr, sa', SIG{r}(Ni, Nr, g^i, g^r)),
HMAC{Ka}('R', E{Ke}(IDr, sa', SIG{r}(Ni, Nr, g^i, g^r)))

Note that after message 2 is sent, 'R' (the responder) does not need to store anything, since all the data necessary to perform authentication will be sent back by 'I' in message 3. One is also inclined to think that there is no amplification effect that 'I' could benefit from for DoS, since a single message 1 generates a single reply, message 2. Is this the case?

We are implementing (at K.U. Leuven) a traffic analysis prevention system in which all data is transported in UDP packets of a fixed length of 140 bytes. As it happens, message 1 is slightly shorter than message 2, since it carries only a single nonce and no cookie. (The JFKi version of the protocol has an even longer message 2.) This means that we have to split the reply in message 2 over two packets, carrying 280 bytes in total.

A vanilla implementation of JFKr would allow the following DoS attack. The adversary sends many message 1 UDP packets, pretending to initiate authentication sessions with an unsuspecting server 'R' running JFKr. Furthermore, the adversary forges the source IP addresses of the UDP packets to make them appear to have been sent from the victim's IP address. Each attempt costs the adversary a UDP packet of 140 bytes. The server 'R' follows the protocol and replies with two UDP packets of 140 bytes each, sent to the victim's IP address. The adversary can of course perform this attack with many sessions or servers in parallel, and end up with double the number of packets, and double the volume of data, being sent to the victim.

What went wrong here? The assumption that one message (1) is answered with one message (2) is not sufficient (in our case, but also in general) to guarantee that the adversary cannot amplify a DoS attack. One has to count the actual bits and bytes on the wire before making this judgement.

How do we fix this? It is really easy: the responder 'R' requires the first message (1) to be at least as long, in bytes, as the reply message (2). This can be done by appending a nonce (J) to the end of the message:

Message 1, I->R: Ni, g^i, J

This forces the attacker to send, byte for byte, as much traffic as will hit the victim's computer.
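A minimal sketch of this responder-side check in Python, assuming the fixed 140-byte transport described above (the packet contents and the message 2 builder are placeholders, not the real JFKr encoding):

from typing import Optional

PACKET_LEN = 140            # fixed transport packet size (bytes)
REPLY_LEN = 2 * PACKET_LEN  # message 2 spans two packets

def handle_message1(datagram: bytes) -> Optional[bytes]:
    # Refuse to answer any message 1 that is shorter, on the wire,
    # than the reply it would trigger; otherwise a spoofed-source
    # request amplifies traffic towards the victim.
    if len(datagram) < REPLY_LEN:
        return None  # drop: replying would amplify
    return build_message2(datagram)

def build_message2(datagram: bytes) -> bytes:
    # Placeholder: a real responder would compute Nr, g^r and the
    # stateless cookie HMAC{HKr}(Ni, Nr, g^r) here.
    return b"\x00" * REPLY_LEN

The check deliberately counts bytes on the wire rather than protocol messages, which is exactly the unit the attack above exploited.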

e-Government Framework is Rather Broken

FIPR colleagues and I have written a response to the recent Cabinet Office consultation on the proposed Framework for e-Government. We’re not very impressed. Whitehall’s security advisers don’t seem to understand phishing; they protect information much less when its compromise could harm private citizens rather than government employees (which is foolish, given how terrorists and violent lobby groups operate nowadays); and as well as the inappropriate threat model, there are inappropriate policy models. Government departments that follow this advice are likely to build clunky, expensive, insecure systems.

23rd Chaos Communication Congress

The 23rd Chaos Communication Congress will be held later this month in Berlin, Germany, on 27–30 December. I will be attending to give a talk on Hot or Not: Revealing Hidden Services by their Clock Skew. Another contributor to this blog, George Danezis, will be talking on An Introduction to Traffic Analysis.

This will be my third time speaking at the CCC (I previously talked on Hidden Data in Internet Published Documents and The Convergence of Anti-Counterfeiting and Computer Security in 2004, then Covert channels in TCP/IP: attack and defence in 2005), and I’ve always had a great time, but this year looks to be the best yet. Here are a few highlights from the draft programme, although I am sure there are many great talks I have missed.

It’s looking like a great line-up, so I hope many of you can make it. See you there!

Health privacy … breaking news …

The Chief Medical Officer, Sir Liam Donaldson, has written a letter to all GPs and hospital medical directors telling them that if patients try to opt out of the central collection of their medical data, the Secretary of State must be told. This follows a campaign that I’ve been helping and that has attracted strong support – in the press, from GPs and from public opinion.

This letter orders GPs to break patient confidentiality – and apparently for the noble purpose of news management. I understand that at least one GP will be reporting Sir Liam to the General Medical Council. It is entirely up to the patient to decide whether to send an opt-out letter to their GP, to Ms Hewitt, or to both. It is not for a civil servant – even a very grand one like Sir Liam – to unilaterally override the wishes of those patients who decide to write to their GP but not to Ms Hewitt. (It’s also somewhat amusing as, only a month ago, officials were telling patients who tried to opt out that their GPs would decide whether to upload data.)

Developing …