TalkTalk's new blocking system

Back in January I visited TalkTalk along with Jim Killock of the Open Rights Group (ORG) to have their new Internet blocking system explained to us. The system was announced yesterday, and I’m now publishing my technical description of how it works (note that it was called “BrightFeed” when we saw it, but is now named “HomeSafe”).

Buried in all the detail of how the system works are two key points. The first is the notion that a centralised checking system (especially one that reveals its identity to the remote site) can reliably determine whether sites are malicious or not. This is problematic, and I doubt that malware distributors will see it as much of a challenge; although, on the other hand, perhaps by setting your browser's User Agent string to pretend to be the checking system you might become rather safer!
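To see why this is such an easy evasion, consider a sketch of server-side cloaking (the User Agent string below is made up; the scanner's real identifying string is not public). A malicious server that can recognise the scanner simply serves it a clean page and serves the exploit to everyone else:

```python
# Sketch of server-side cloaking against an identifiable scanner.
# "TalkTalk-HomeSafe/1.0" is a hypothetical User Agent string.
SCANNER_UA = "TalkTalk-HomeSafe/1.0"

CLEAN_PAGE = "<html><body>Nothing to see here.</body></html>"
EXPLOIT_PAGE = "<html><script src='exploit.js'></script></html>"

def choose_payload(user_agent: str) -> str:
    """Return a benign page to the scanner, the exploit to real visitors."""
    if SCANNER_UA in user_agent:
        return CLEAN_PAGE
    return EXPLOIT_PAGE
```

This is also why, perversely, pretending to be the checking system might make an ordinary browser safer: a cloaking server would serve it the clean page too.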

The second is that although the system is described as “opt in”, that only applies to whether or not the websites you visit might be blocked. What is not “opt in” is whether TalkTalk learns the details of the URLs that its customers visit: all of these sites will be visited by TalkTalk’s automated system, whether the customer has opted in or not. That may take some explaining if a remote site gave you a URL in confidence and is checking its logs to see who visits.

On their site, ORG have expressed an opinion as to whether the system can be operated lawfully, alongside TalkTalk’s own legal analysis. TalkTalk argue that the system’s purpose is to protect their network, which gives them a statutory exemption from wire-tapping legislation; whereas all the public relations material suggests it was developed to protect the users…

… in the end though, the system will be judged by its effectiveness, and in a world where less than 20% of new threats are detected, that may not be all that high.

Make noise and whisper: a solution to relay attacks

About a month ago I presented, at the Security Protocols Workshop, a new idea for detecting relay attacks, co-developed with Frank Stajano.

The idea relies on having a trusted box (which we call the T-Box, as in the image below) between the physical interfaces of two communicating parties. The T-Box accepts two inputs (one from each party) and provides one output (seen by both parties). It ensures that neither party can determine the complete input of the other.

T-Box

Therefore, by connecting two instances of a T-Box together (as happens in a relay attack), the message from one end to the other (Alice and Bob in the image above) gets distorted twice as much as it would over a direct connection. That’s the basic idea.

One important question is how the T-Box should operate on its inputs so that a relay attack can be detected. In the paper we describe two example implementations based on a bi-directional channel (as used, for example, between a smart card and a terminal). To help the reader understand these examples and judge the usefulness of our idea, Mike Bond and I have created a Python simulation. It allows you to choose the type of T-Box implementation, a direct or relayed connection, and other parameters including the length of the anti-relay data stream and the detection threshold.
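The “distorted twice” intuition can be seen in a toy model (this is a simplification for exposition only, not one of the paper’s two actual implementations): let the T-Box emit, for each bit position, a bit chosen at random from one of its two inputs, and let each party measure how often the shared output agrees with its own input stream.

```python
import random

def t_box(a_bits, b_bits, rng):
    # Toy T-Box: each output bit is drawn at random from one of the two inputs,
    # so neither party can recover the other's complete stream from the output.
    return [a if rng.random() < 0.5 else b for a, b in zip(a_bits, b_bits)]

def match_rate(xs, ys):
    return sum(x == y for x, y in zip(xs, ys)) / len(xs)

rng = random.Random(1)
n = 20_000
alice = [rng.randint(0, 1) for _ in range(n)]
bob = [rng.randint(0, 1) for _ in range(n)]
mallory = [rng.randint(0, 1) for _ in range(n)]  # the relaying attacker's input

# Direct connection: one T-Box mixes the two streams. Alice's own bit is
# chosen half the time, and Bob's bit matches hers by chance half the time,
# so the output agrees with her input about 75% of the time.
direct = t_box(alice, bob, rng)

# Relay: the stream passes through two T-Box instances, so it is mixed
# twice and agreement with Alice's input drops to about 62.5%.
relayed = t_box(t_box(alice, mallory, rng), bob, rng)
```

A detection threshold placed between the two rates, say 70% agreement, then separates the direct case from the relayed one.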

In these two implementations we restricted ourselves to making the T-Box part of the communication channel. The advantage is that we don’t rely on either party to provide the T-Box, since it is created automatically by communicating over the physical channel. The disadvantage is that a more powerful attacker can sample the line at twice the speed and defeat our T-Box solution.

The relay attack can be used against many applications, including all smart-card-based payments. There are already several ideas for detecting relay attacks, including distance bounding. Our idea brings a new approach alongside the existing methods, and we hope in future to find a practical implementation of our solutions, or a good scenario for using a physical T-Box, which should not be vulnerable even to a powerful attacker.

The Sony hack: passwords vs. financial details

Sometime last week, Sony discovered that up to 77 million accounts on its PlayStation Network had been compromised. Sony’s network was down for a week before they finally disclosed details yesterday. Unusually, there haven’t yet been any credible claims of responsibility for the hack, so we can only go on Sony’s official statements. The breach included names and addresses, passwords, answers to personal knowledge questions, and possibly payment details. The risks of leaking payment card numbers are well known, including fraudulent payment transactions and identity theft. Sony has responded by offering free credit checks for affected customers and notifying the major credit rating bureaus with a list of affected customers. This hasn’t been enough for many critics, including a US Senator.

Still, this is far more than Sony has done regarding the leaked passwords. The risks here are very real: hackers can attempt to re-use the compromised passwords (possibly after inverting hashes by brute force) at many other websites, including financial ones. There are no disclosure laws covering passwords, though, and Sony has done nothing, not even disclosing the key technical details of how the passwords were stored. The implications are very different if the passwords were stored in cleartext, hashed in a constant manner, or properly hashed and salted. Sony’s customers ought to know what really happened. Instead, towards the bottom of Sony’s FAQ, they trail off mid-sentence when discussing the leaked passwords:

Additionally, if you use the same user name or password for your PlayStation Network or Qriocity service account for other [no further text]

As we explored last summer, this is a serious market failure. Sony’s security breach has potentially compromised passwords at hundreds of other sites where its users re-use the same password and email address as credentials. This is a significant externality, but Sony bears no legal responsibility, and it shows. The options are never great once a breach has occurred, but Sony should at a minimum have promptly provided full details about their password storage, given clear instructions to users to change their passwords at other sites, and notified at least each account holder’s email provider so that a password reset could be forced. The legal framework surrounding password breaches must catch up with that for financial breaches.

Resilience of the Internet Interconnection Ecosystem

The Internet is, by its very definition, an interconnected network of networks. The resilience of the way in which the interconnection system works is fundamental to the resilience of the Internet. Thus far the Internet has coped well with disasters such as 9/11 and Hurricane Katrina, which had very significant local impact while scarcely affecting the global Internet. Assorted technical problems in the interconnection system have caused a few hours of disruption, but no long-term effects.

But have we just been lucky? A major new report, just published by ENISA (the European Network and Information Security Agency), tries to answer this question.

The report was written by Chris Hall, with the assistance of Ross Anderson and Richard Clayton at Cambridge and Panagiotis Trimintzios and Evangelos Ouzounis at ENISA. The full report runs to 238 pages, but for the time-challenged there is a shorter 31-page executive summary, and a more ‘academic’ version of the latter will appear at this year’s Workshop on the Economics of Information Security (WEIS 2011).

Securing and Trusting Internet Names (SATIN 2011)

The inaugural SATIN workshop was held at the National Physical Laboratory (NPL) on Monday and Tuesday this week. The workshop format was 15-minute presentations followed by 15 minutes of discussion, so all 49 registered attendees were able to contribute to the success of the event.

Many of the papers were about DNSSEC, but there were also papers on machine learning, traffic classification, the use of names by malware, and ideas for new types of naming system. There were also two invited talks: Roy Arends from Nominet (who kindly sponsored the event) gave an update on how the co.uk zone will be signed, and Rod Rasmussen from Internet Identity showed how passive DNS is helping in the fight against eCrime. All the papers, and the presenters’ slides, can be found on the workshop website.

The workshop will be run again (as SATIN 2012), probably on March 22/23 (the week before IETF goes to Paris). The CFP, giving the exact submission schedule, will appear in late August.

The PET Award: Nominations wanted for prestigious privacy award

The PET Award is presented annually to researchers who have made an outstanding contribution to the theory, design, implementation, or deployment of privacy enhancing technology. It is awarded at the annual Privacy Enhancing Technologies Symposium (PETS).

The PET Award carries a prize of 3000 USD thanks to the generous support of Microsoft. The crystal prize itself is offered by the Office of the Information and Privacy Commissioner of Ontario, Canada.

Any paper by any author written in the area of privacy enhancing technologies is eligible for nomination. However, the paper must have appeared in a refereed journal, conference, or workshop with proceedings published in the period from August 8, 2009 until April 15, 2011.

The complete award rules including eligibility requirements can be found under the award rules section of the PET Symposium website.

Anyone can nominate a paper by sending an email message containing the following to award-chair11@petsymposium.org.

  • Paper title
  • Author(s)
  • Author(s) contact information
  • Publication venue and full reference
  • Link to an available online version of the paper
  • A nomination statement of no more than 500 words.

All nominations must be submitted by April 15th, 2011. The Award Committee will select one or two winners among the nominations received. Winners must be present at the PET Symposium in order to receive the Award. This requirement can be waived only at the discretion of the PET Advisory board.

More information about the PET award (including past winners) is available at http://petsymposium.org/award/

More information about the 2011 PET Symposium is available at http://petsymposium.org/2011.

Pico: no more passwords!

Passwords are no longer acceptable as a security mechanism. We arrogant security people demand that users’ passwords be memorable, unguessable, high entropy, all different and never written down. With the proliferation of passwords and the ever-increasing brute-force capabilities of modern computers, passwords of adequate strength are too complicated for human memory, especially when one must remember dozens of them. These demands cannot all be satisfied simultaneously. Users are right to be pissed off.
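A back-of-the-envelope calculation shows the squeeze (the guessing rate below is an illustrative assumption, not a measurement): at ten billion guesses per second, a password with 40 bits of entropy falls in under a minute on average, while one with 80 bits holds out for millions of years; but 80 bits of entropy means something like a random 17-character lowercase string, for every account, from memory.

```python
def avg_seconds_to_crack(entropy_bits: float, guesses_per_second: float) -> float:
    # On average an offline attacker searches half the keyspace before hitting
    # the right password.
    return (2.0 ** entropy_bits) / 2.0 / guesses_per_second

SECONDS_PER_YEAR = 365.25 * 24 * 3600

# Assumed attacker speed: 10 billion guesses per second (hypothetical).
weak = avg_seconds_to_crack(40, 1e10)    # roughly a minute
strong = avg_seconds_to_crack(80, 1e10)  # millions of years
```

The gap between what machines can guess and what humans can memorise is the contradiction at the heart of the password regime.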

A number of proposals have attempted to find better alternatives for the case of web authentication, partly because the web is the foremost culprit in the proliferation of passwords and partly because its clean interfaces make technical solutions tractable.

For the poor user, however, a password is a password, and it’s still a pain in the neck regardless of where it comes from. Users aren’t fed up with web passwords but with passwords altogether. In “Pico: no more passwords”, the position paper I’ll be presenting tomorrow morning at the Security Protocols Workshop, I propose a clean-slate design to get rid of passwords everywhere, not just online. A portable gadget called Pico transforms your credentials from “what you know” into “what you have”.

A few people have already provided interesting feedback on the pre-proceedings draft version of the paper. I look forward to an animated discussion of this controversial proposal tomorrow. Whenever I serve as help desk for my non-geek acquaintances and listen to what drives them crazy about computers I feel ashamed that, with passwords, we (the security people) impose on them such a contradictory and unsatisfiable set of requests. Maybe your gut reaction to Pico will be “it’ll never work”, but I believe we have a duty to come up with something more usable than passwords.

[UPDATE: the paper can also be downloaded from my own Cambridge web site, where the final version will appear in due course.]

Can we Fix Federated Authentication?

My paper Can We Fix the Security Economics of Federated Authentication? asks how we can deal with a world in which your mobile phone contains your credit cards, your driving license and even your car key. What happens when it gets stolen or infected?

Using one service to authenticate the users of another is an old dream but a terrible tar-pit. Recently it has become a game of pass-the-parcel: your newspaper authenticates you via your social networking site, which wants you to recover lost passwords by email, while your email provider wants to use your mobile phone and your phone company depends on your email account. The certification authorities on which online trust relies are open to coercion by governments – which would like us to use ID cards but are hopeless at making systems work. No-one even wants to answer the phone to help out a customer in distress. But as we move to a world of mobile wallets, in which your phone contains your credit cards and even your driving license, we’ll need a sound foundation that’s resilient to fraud and error, and usable by everyone. Where might this foundation be? I argue that there could be a quite surprising answer.

The paper describes some work I did on sabbatical at Google and will appear next week at the Security Protocols Workshop.

Why the Cabinet Office's £27bn cyber crime cost estimate is meaningless

Today the UK Cabinet Office released a report written by Detica. The report concluded that the annual cost of cyber crime in the UK is £27bn. That’s less than the $1 trillion figure that AT&T’s Ed Amoroso gave in testimony before the US Congress in 2009, but it’s still a very large number, approximately 2% of UK GDP. If the total is accurate, then cyber crime is a very serious problem of the utmost national importance.

Unfortunately, much of the total cost is based on questionable calculations that are impossible for outsiders to verify. 60% of the total is ascribed to intellectual property theft (i.e., business secrets, not copied music and films) and espionage. The report does describe a methodology for how it arrived at these figures; however, several key details are lacking. To calculate the IP and espionage losses, the authors first calculated measures of each sector’s value to the economy, then qualitatively assessed how lucrative and feasible these attacks would be in each sector.

This is where the trouble arises. Based on these assessments, the authors assigned sector-specific probabilities of theft: one each for the best, worst and average cases. Unfortunately, these probabilities are not specified in the report, and no detailed rationale is given for their assignment. Are they based on surveys of firms that have fallen victim to these particular types of crime? Or are they numbers simply pulled from the air based on the authors’ hunches? It is impossible to determine from the report.
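Made-up numbers make the sensitivity problem obvious: the headline figure scales linearly with the unspecified probabilities, so a hunch that is off by a factor of two moves the estimate by billions of pounds. A sketch (all values hypothetical):

```python
def sector_loss(sector_value_gbp: float, p_theft: float) -> float:
    # Estimated annual loss = sector value at risk x assumed probability
    # of theft, which is the structure the report describes.
    return sector_value_gbp * p_theft

value = 50e9  # hypothetical sector value: £50bn

# Hypothetical assigned probabilities for best, average and worst cases.
best, average, worst = 0.05, 0.10, 0.20

estimates = [sector_loss(value, p) for p in (best, average, worst)]
# The estimate ranges from £2.5bn to £10bn on the same underlying data,
# driven entirely by the choice of probability.
```

Without knowing how the probabilities were chosen, the reader cannot tell whether the £27bn total is closer to the best case, the worst case, or neither.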

Measuring password re-use empirically

In the aftermath of Anonymous’ revenge hacking of HBGary over the weekend, some enterprising hackers used one of the stolen credentials and some social engineering to gain root access at rootkit.com, which has been down for a few days since. There isn’t much novel about the hack, but the dump of rootkit.com’s SQL databases provides another password dataset for research, though an order of magnitude smaller than the Gawker dataset, with just 81,000 hashed passwords.

More interestingly, due to the close proximity of the hacks, we can compare the passwords associated with email addresses registered at both Gawker and rootkit.com. This gives an interesting data point on the widely known problem of password re-use. This new data seems to indicate a significantly higher re-use rate than the few previously published estimates.
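The core of such a comparison is just a join on email address followed by a password comparison; in practice the leaked hashes must first be cracked, or hashed under a common scheme, before they are comparable. A toy sketch with made-up accounts:

```python
def reuse_rate(site_a: dict, site_b: dict) -> float:
    """Fraction of emails registered at both sites that use the same password.

    Each argument maps email address -> recovered (comparable) password.
    """
    shared = site_a.keys() & site_b.keys()
    if not shared:
        return 0.0
    same = sum(1 for email in shared if site_a[email] == site_b[email])
    return same / len(shared)

# Entirely fictional example data for illustration.
gawker = {"a@x.com": "hunter2", "b@x.com": "letmein", "c@x.com": "qwerty"}
rootkit = {"a@x.com": "hunter2", "b@x.com": "different", "d@x.com": "abc123"}
```

Note that this measures a lower bound on re-use: accounts whose hashes cannot be cracked at either site drop out of the comparison.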