
Make noise and whisper: a solution to relay attacks

About a month ago I presented at the Security Protocols Workshop a new idea for detecting relay attacks, co-developed with Frank Stajano.

The idea relies on having a trusted box (which we call the T-Box, as in the image below) between the physical interfaces of two communicating parties. The T-Box accepts two inputs (one from each party) and provides one output (seen by both parties). It ensures that neither party can determine the complete input of the other.

T-Box

Therefore, by connecting two instances of the T-Box together (as happens in a relay attack), the message from one end to the other (Alice and Bob in the image above) gets distorted twice as much as it would over a direct connection. That’s the basic idea.

One important question is how the T-Box should operate on its inputs so that we can detect a relay attack. In the paper we describe two example implementations based on a bi-directional channel (such as the one used between a smart card and a terminal). To help the reader understand these examples better and judge the usefulness of our idea, Mike Bond and I have created a Python simulation. It allows you to choose the type of T-Box implementation and a direct or relayed connection, as well as other parameters including the length of the anti-relay data stream and the detection threshold.
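To give a flavour of the detection principle (though not of the actual T-Box constructions in the paper, or of our full simulation), here is a minimal toy model in Python. It treats each pass through a T-Box as corrupting a fraction of the anti-relay bit stream, so a relayed session, which traverses two T-Boxes, shows roughly twice the error rate of a direct one, and a threshold between the two rates flags the attack. The distortion rate, stream length and threshold below are illustrative values, not parameters taken from the paper.

    # Toy model of the relay-detection principle: each pass through a T-Box
    # corrupts a fraction of the anti-relay bit stream, so a relayed session
    # (two T-Boxes in series) shows roughly twice the error rate of a direct
    # one. Illustrative only -- not the implementations from the paper.
    import random

    def t_box(bits, distortion=0.25):
        """Pass a bit stream through one T-Box: each bit is flipped
        independently with probability `distortion`."""
        return [b ^ 1 if random.random() < distortion else b for b in bits]

    def session(relayed, stream_len=1000, distortion=0.25):
        """Return the observed error rate for a direct or relayed session."""
        sent = [random.getrandbits(1) for _ in range(stream_len)]
        received = t_box(sent, distortion)
        if relayed:                   # relay attack: a second T-Box in series
            received = t_box(received, distortion)
        errors = sum(s != r for s, r in zip(sent, received))
        return errors / stream_len

    threshold = 0.3                   # sits between the 1-pass and 2-pass rates
    for relayed in (False, True):
        rate = session(relayed)
        verdict = "relay suspected" if rate > threshold else "looks direct"
        print(f"relayed={relayed}: error rate {rate:.2f} -> {verdict}")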

In these two implementations we have restricted ourselves to making the T-Box part of the communication channel. The advantage is that we don’t rely on any party to provide the T-Box, since it arises automatically from communicating over the physical channel. The disadvantage is that a more powerful attacker who can sample the line at twice the speed can overcome our T-Box solution.

Relay attacks can be used against many applications, including all smart-card-based payments. There are already several ideas for detecting relay attacks, including distance bounding. Our idea, however, brings a new approach to complement the existing methods, and we hope that in the future we can find a practical implementation of our solutions, or a good scenario for a physical T-Box, which would not be vulnerable to such a powerful attacker.

Resilience of the Internet Interconnection Ecosystem

The Internet is, by its very definition, an interconnected network of networks. The resilience of the way in which the interconnection system works is fundamental to the resilience of the Internet. Thus far the Internet has coped well with disasters such as 9/11 and Hurricane Katrina, which had very significant local impact but scarcely affected the global Internet. Assorted technical problems in the interconnection system have caused a few hours of disruption, but no long-term effects.

But have we just been lucky? A major new report, just published by ENISA (the European Network and Information Security Agency), tries to answer this question.

The report was written by Chris Hall, with the assistance of Ross Anderson and Richard Clayton at Cambridge and Panagiotis Trimintzios and Evangelos Ouzounis at ENISA. The full report runs to 238 pages, but for the time-challenged there’s a shorter 31 page executive summary and there will be a more ‘academic’ version of the latter at this year’s Workshop on the Economics of Information Security (WEIS 2011).

Securing and Trusting Internet Names (SATIN 2011)

The inaugural SATIN workshop was held at the National Physical Laboratory (NPL) on Monday/Tuesday this week. The workshop format was presentations of 15 minutes followed by 15 minutes of discussion, so all 49 registered attendees were able to contribute to the success of the event.

Many of the papers were about DNSSEC, but there were also papers on machine learning, traffic classification, the use of names by malware, and ideas for new types of naming system. There were also two invited talks: Roy Arends from Nominet (who kindly sponsored the event) gave an update on how the co.uk zone will be signed, and Rod Rasmussen from Internet Identity showed how passive DNS is helping in the fight against eCrime. All the papers and the presenters’ slides can be found on the workshop website.

The workshop will be run again (as SATIN 2012), probably on March 22/23 (the week before IETF goes to Paris). The CFP, giving the exact submission schedule, will appear in late August.

The PET Award: Nominations wanted for prestigious privacy award

The PET Award is presented annually to researchers who have made an outstanding contribution to the theory, design, implementation, or deployment of privacy enhancing technology. It is awarded at the annual Privacy Enhancing Technologies Symposium (PETS).

The PET Award carries a prize of 3000 USD thanks to the generous support of Microsoft. The crystal prize itself is offered by the Office of the Information and Privacy Commissioner of Ontario, Canada.

Any paper by any author written in the area of privacy enhancing technologies is eligible for nomination. However, the paper must have appeared in a refereed journal, conference, or workshop with proceedings published in the period from August 8, 2009 until April 15, 2011.

The complete award rules including eligibility requirements can be found under the award rules section of the PET Symposium website.

Anyone can nominate a paper by sending an email message containing the following to award-chair11@petsymposium.org:

  • Paper title
  • Author(s)
  • Author(s) contact information
  • Publication venue and full reference
  • Link to an available online version of the paper
  • A nomination statement of no more than 500 words.

All nominations must be submitted by April 15th, 2011. The Award Committee will select one or two winners among the nominations received. Winners must be present at the PET Symposium in order to receive the Award. This requirement can be waived only at the discretion of the PET Advisory board.

More information about the PET award (including past winners) is available at http://petsymposium.org/award/

More information about the 2011 PET Symposium is available at http://petsymposium.org/2011.

Pico: no more passwords!

Passwords are no longer acceptable as a security mechanism. The arrogant security people demand that users’ passwords be memorable, unguessable, high entropy, all different and never written down. With the proliferation of passwords and the ever-increasing brute-force capabilities of modern computers, passwords of adequate strength are too complicated for human memory, especially when one must remember dozens of them. These demands cannot all be satisfied simultaneously. Users are right to be pissed off.
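To put some illustrative numbers behind that complaint (the guessing rate and character sets below are assumptions for the sake of the example, not figures from the paper), here is a quick back-of-the-envelope calculation in Python:

    # Back-of-the-envelope password-strength figures; the guessing rate and
    # alphabets are illustrative assumptions, not measurements.
    import math

    def entropy_bits(alphabet_size, length):
        """Entropy of a truly random password over the given alphabet."""
        return length * math.log2(alphabet_size)

    guesses_per_second = 1e10          # assumed offline brute-force rate
    for alphabet, length in [(26, 8), (94, 8), (94, 14)]:
        bits = entropy_bits(alphabet, length)
        days = 2 ** bits / guesses_per_second / 86400
        print(f"{length} random chars from a {alphabet}-symbol alphabet: "
              f"{bits:.1f} bits, ~{days:,.1f} days to exhaust")

Under these assumptions, an eight-character random password over the full printable-ASCII alphabet survives about a week of offline guessing, while the fourteen-character one that would actually resist is exactly the kind of string nobody can memorise dozens of.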

A number of proposals have attempted to find better alternatives for the case of web authentication, partly because the web is the foremost culprit in the proliferation of passwords and partly because its clean interfaces make technical solutions tractable.

For the poor user, however, a password is a password, and it’s still a pain in the neck regardless of where it comes from. Users aren’t fed up with web passwords but with passwords altogether. In “Pico: no more passwords”, the position paper I’ll be presenting tomorrow morning at the Security Protocols Workshop, I propose a clean-slate design to get rid of passwords everywhere, not just online. A portable gadget called Pico transforms your credentials from “what you know” into “what you have”.

A few people have already provided interesting feedback on the pre-proceedings draft version of the paper. I look forward to an animated discussion of this controversial proposal tomorrow. Whenever I serve as help desk for my non-geek acquaintances and listen to what drives them crazy about computers I feel ashamed that, with passwords, we (the security people) impose on them such a contradictory and unsatisfiable set of requests. Maybe your gut reaction to Pico will be “it’ll never work”, but I believe we have a duty to come up with something more usable than passwords.

[UPDATE: the paper can also be downloaded from my own Cambridge web site, where the final version will appear in due course.]

Can we Fix Federated Authentication?

My paper Can We Fix the Security Economics of Federated Authentication? asks how we can deal with a world in which your mobile phone contains your credit cards, your driving license and even your car key. What happens when it gets stolen or infected?

Using one service to authenticate the users of another is an old dream but a terrible tar-pit. Recently it has become a game of pass-the-parcel: your newspaper authenticates you via your social networking site, which wants you to recover lost passwords by email, while your email provider wants to use your mobile phone and your phone company depends on your email account. The certification authorities on which online trust relies are open to coercion by governments – which would like us to use ID cards but are hopeless at making systems work. No-one even wants to answer the phone to help out a customer in distress. But as we move to a world of mobile wallets, in which your phone contains your credit cards and even your driving license, we’ll need a sound foundation that’s resilient to fraud and error, and usable by everyone. Where might this foundation be? I argue that there could be a quite surprising answer.

The paper describes some work I did on sabbatical at Google and will appear next week at the Security Protocols Workshop.

JPEG canaries: exposing on-the-fly recompression

Many photo-sharing websites decompress and recompress uploaded images to enforce particular compression parameters. This recompression degrades quality. Some web proxies also recompress images and videos, to give the impression of a faster connection.

In Towards copy-evident JPEG images (with Markus Kuhn, in Lecture Notes in Informatics), we present an algorithm for imperceptibly marking JPEG images so that the recompressed copies show a clearly-visible warning message. (Full page demonstration.)

Original image:

Original image

After recompression:

The image after recompression, with a visible warning message

(If you can’t see the message in the recompressed image, make sure your browser is rendering the images without scaling or filtering.)

Richard Clayton originally suggested the idea of trying to create an image which would show a warning when viewed via a recompressing proxy server. Here is a real-world demonstration using the Google WAP proxy.

Our marking technique is inspired by physical security printing, used to produce documents such as banknotes, tickets, academic transcripts and cheques. Photocopied versions will display a warning (e.g. ‘VOID’) or contain obvious distortions, as duplication turns imperceptible high-frequency patterns into more noticeable low-frequency signals.

Our algorithm works by adding a high-frequency pattern to the image with an amplitude carefully selected to cause maximum quantization error on recompression at a chosen target JPEG quality factor. The amplitude is modulated with a covert warning message, so that foreground message blocks experience maximum quantization error in the opposite direction to background message blocks. While the message is invisible in the marked original image, it becomes visible due to clipping in a recompressed copy.
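The Python fragment below illustrates only the quantization-error part of this trick, in a deliberately simplified form with made-up numbers (a single coefficient and an assumed quantization step, rather than real JPEG quantization tables or our actual marking algorithm): two coefficient values placed just on either side of a rounding boundary are nearly indistinguishable before recompression, but are pushed a full quantization step apart by it.

    # Simplified illustration of the quantization-error trick (not our actual
    # marking algorithm). The quantization step is an assumed value for the
    # chosen high-frequency coefficient at the target JPEG quality.
    q_step = 24.0     # assumed quantization step at the target quality
    eps = 1.0         # small offset, visually negligible in the original

    foreground = 1.5 * q_step + eps   # just above a rounding boundary
    background = 1.5 * q_step - eps   # just below the same boundary

    def requantize(coeff, step):
        """What a recompressing proxy does to a DCT coefficient."""
        return round(coeff / step) * step

    print("before recompression:", foreground, background)   # 37.0 vs 35.0
    print("after recompression: ",
          requantize(foreground, q_step),
          requantize(background, q_step))                    # 48.0 vs 24.0
    # A barely perceptible 2-unit difference between foreground and background
    # blocks becomes a full quantization step (24 units) after recompression,
    # which clipping then turns into a visible brightness difference.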

The challenge remains to extend our approach to mark video data, where rate control and adaptive quantization make the copied document’s properties less predictable. The result would be a digital video that would be severely degraded by recompression to a lower quality, making the algorithm useful for digital content protection.

A Merry Christmas to all Bankers

The bankers’ trade association has written to Cambridge University asking for the MPhil thesis of one of our research students, Omar Choudary, to be taken offline. They complain it contains too much detail of our No-PIN attack on Chip-and-PIN and thus “breaches the boundary of responsible disclosure”; they also complain about Omar’s post on the subject to this blog.

Needless to say, we’re not very impressed by this, and I made this clear in my response to the bankers. (I am embarrassed to see I accidentally left Mike Bond off the list of authors of the No-PIN vulnerability. Sorry, Mike!) There is one piece of Christmas cheer, though: the No-PIN attack no longer works against Barclays’ cards at a Barclays merchant. So at least they’ve started to fix the bug – even if it’s taken them a year. We’ll check and report on other banks later.

The bankers also fret that “future research, which may potentially be more damaging, may also be published in this level of detail”. Indeed. Omar is one of my coauthors on a new Chip-and-PIN paper that’s been accepted for Financial Cryptography 2011. So here is our Christmas present to the bankers: it means you all have to come to this conference to hear what we have to say!