The main reason passwords appear in headlines is large password breaches. What if we could fearlessly publish the scrambled passwords exactly as they are stored on servers, and still keep them secret even if they were “123456”, “password”, or “iloveyou”?
Category Archives: Authentication
Financial cryptography 2014
I will be trying to liveblog Financial Cryptography 2014. I just gave a keynote talk entitled “EMV – Why Payment Systems Fail”, summarising our last decade’s research on what goes wrong with Chip and PIN. There will be a paper on this out in a few months; meanwhile here are the slides and here is our page of papers on bank security.
The sessions of refereed papers will be blogged in comments to this post.
Anatomy of Passwords
Passwords have not really changed since they were first used. Let’s go down memory lane a bit and then analyse how password systems work and how they could be improved. You may say – forget passwords, one-time passwords (OTP) are the way forward. My next question would then be: if OTPs are so good, why do we still use them in combination with passwords?
Continue reading Anatomy of Passwords
Liveblogging SOUPS 2013
I’m at SOUPS 2013, including the Workshop on Risk Perception in IT Security and Privacy, and will be liveblogging them in followups to this post.
NSA Award for Best Scientific Cybersecurity Paper
Yesterday I received the NSA award for the Best Scientific Cybersecurity Paper of 2012 for my IEEE Oakland paper “The science of guessing.” I’m honored to have been recognised by the distinguished academic panel assembled by the NSA. I’d like to again thank Henry Watts, Elizabeth Zwicky, and everybody else at Yahoo! who helped me with this research while I interned there, as well as Richard Clayton and Ross Anderson for their support and supervision throughout.
On a personal note, I’d be remiss not to mention my conflicted feelings about winning the award given what we know about the NSA’s widespread collection of private communications and what remains unknown about oversight over the agency’s operations. Like many in the community of cryptographers and security engineers, I’m sad that we haven’t better informed the public about the inherent dangers and questionable utility of mass surveillance. And like many American citizens I’m ashamed we’ve let our politicians sneak the country down this path.
In accepting the award I don’t condone the NSA’s surveillance. Simply put, I don’t think a free society is compatible with an organisation like the NSA in its current form. Yet I’m glad I got the rare opportunity to visit with the NSA and I’m grateful for my hosts’ genuine hospitality. A large group of engineers turned up to hear my presentation, asked sharp questions, and clearly understood and cared about the privacy implications of studying password data. It affirmed my feeling that America’s core problems are in Washington and not in Fort Meade. Our focus must remain on winning the public debate around surveillance and developing privacy-enhancing technology. But I hope that this award program, established to increase engagement with academic researchers, can be a small but positive step.
Should we boycott John Lewis?
Last weekend, my wife and I were in Milton Keynes where we bought a cradle as a present for our new granddaughter. They had only the demo model in the shop, but sold us one to pick up from their store in Cambridge. So yesterday I went into John Lewis with the receipt, to be told by the official that as I couldn’t show the card with which the purchase was made, they needed photo-id. I told him that along with over a million others I’d resisted the previous government’s ID card proposals, the last government had lost the election, and I didn’t carry ID on principle. The response was the usual nonsense: that I should have read the terms and conditions (but when I studied the receipt later it said nothing about ID) and that he was just doing his job (but John Lewis prides itself on being employee-owned, so in theory at least he is a partner in the firm). I won’t be shopping there again anytime soon.
We get harassed more and more by security theatre, by snooping and by bullying. What’s the best way to push back? Why can businesses be so pointlessly annoying?
Perhaps John Lewis are consciously pro-Labour given their history as a co-op; but it’s not prudent to advertise that in a three-way marginal like Cambridge, let alone in the leafy southern suburbs where they make most of their money. Or perhaps it’s just incompetence. When my wife phoned later to complain, the customer services people apologised and said we should have been told when we bought the thing that we’d need to show ID. She offered to post the cradle to our daughter, but then rang back later to say they’d lost the order and would need our paperwork. So that’s another 30-mile round-trip to their depot. But if they’re incompetent, why should I trust them enough to buy their food?
I invite the chairman, Charlie Mayfield, to explain by means of a follow-up to this post whether this was policy or cockup. Will he continue to demand photo-id even from customers who have a principled objection? Will he tell us who in the firm imposed this policy, and show us the training material that was prepared to ensure that counter staff would explain it properly to customers?
How Certification Systems Fail: Lessons from the Ware Report
Research in the Security Group has uncovered various flaws in systems, despite them being certified as secure. Sometimes the certification criteria have been inadequate and sometimes the certification process has been subverted. Not only do these failures affect the owners of the system but when evidence of certification comes up in court, the impact can be much wider.
There’s a variety of approaches to certification, ranging from the extremely generic (such as Common Criteria) to the highly specific (such as EMV), but all are (at least partially) descendants of a report by Willis H. Ware – “Security Controls for Computer Systems”. There’s much that can be learned from this report, particularly the rationale for why certification systems are set up the way they are. The differences between how Ware envisaged certification and how certification is now performed are also informative, whether those differences are for good or for ill.
Along with Mike Bond and Ross Anderson, I have written an article for the “Lost Treasures” edition of IEEE Security & Privacy where we discuss what can be learned, about how today’s certifications work and should work, from the Ware report. In particular, we explore how the failure to follow the recommendations in the Ware report can explain why flaws in certified banking systems were not detected earlier. Our article, “How Certification Systems Fail: Lessons from the Ware Report” is available open-access in the version submitted to the IEEE. The edited version, as appearing in the print edition (IEEE Security & Privacy, volume 10, issue 6, pages 40–44, Nov‐Dec 2012. DOI:10.1109/MSP.2012.89) is only available to IEEE subscribers.
Dear ICO: disclose Sony's hash algorithm!
Today the UK Information Commissioner’s Office levied a record £250k fine against Sony over their 2011 PlayStation Network breach in which 77 million passwords were stolen. Sony stated that they hashed the passwords, but provided no details. I was hoping that investigators would reveal what hash algorithm Sony used, and in particular whether they salted and iterated the hash. Unfortunately, the ICO’s report failed to provide any such details:
The Commissioner is aware that the data controller made some efforts to protect account passwords, however the data controller failed to ensure that the Network Platform service provider kept up with technical developments. Therefore the means used would not, at the time of the attack, be deemed appropriate, given the technical resources available to the data controller.
Given how often I see password implementations use a single iteration of MD5 with no salt, I’d consider that to be the most likely interpretation. It’s inexcusable though for a 12-page report written at public expense to omit such basic technical details. As I said at the time of the Sony Breach, it’s important to update breach notification laws to require that password hashing details be disclosed in full. It makes a difference for users affected by the breach, and it might help motivate companies to get these basic security mechanics right.
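To make the distinction concrete, here is a minimal sketch in Python of the two ends of the spectrum: a single unsalted MD5 digest versus a salted, iterated hash. PBKDF2 appears purely as a stand-in, since we have no information about what construction Sony actually deployed, and the iteration count is illustrative.

```python
import hashlib
import os

password = b"123456"

# A single unsalted MD5 pass: identical passwords yield identical digests,
# and precomputed tables or cheap GPU guessing break it quickly.
weak = hashlib.md5(password).hexdigest()

# A salted, iterated construction (PBKDF2 here, purely as a stand-in):
# every account gets its own random salt, and the iteration count
# multiplies the attacker's cost per guess. 100,000 is illustrative.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

print(weak, strong.hex())
```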
Moore's Law won't kill passwords
Computers are getting exponentially faster, yet the human brain is constant! Surely password crackers will eventually beat human memory…
I’ve heard this fallacy repeated enough times, usually soon after the latest advance in hardware for password cracking hits the news, that I’d like to definitively debunk it. Password cracking is certainly getting faster. In my thesis I charted 20 years of password cracking improvements and found a roughly 1,000-fold increase in the number of guesses per second per unit cost that could be achieved, almost exactly a Moore’s Law-style doubling every two years. The good news though is that password hash functions can (and should) co-evolve to get proportionately costlier to evaluate over time. This is a classic arms race and keeping pace simply requires regularly increasing the number of iterations in a password hash. We can even improve against password cracking over time using memory-bound functions, because memory speeds aren’t increasing nearly as quickly and are harder to parallelise. The scrypt() key derivation function is a good implementation of a memory-bound password hash and every high security application should be using it or something similar.
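As a concrete illustration (not a statement about any particular deployment), Python’s standard library exposes scrypt, and a memory-hard hash might be invoked along these lines; the cost parameters shown are illustrative assumptions, not recommendations.

```python
import hashlib
import os

# hashlib.scrypt (Python 3.6+, OpenSSL 1.1+) is one readily available
# memory-hard password hash. n sets the CPU/memory cost, r the block size,
# p the parallelism; the values below are illustrative only and use
# roughly 32 MiB of memory per hash.
password = b"correct horse battery staple"
salt = os.urandom(16)

digest = hashlib.scrypt(password, salt=salt, n=2**15, r=8, p=1,
                        maxmem=64 * 1024 * 1024, dklen=32)
print(digest.hex())
```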
The downside of this arms race is that password hashing will never get any cheaper to deploy (even in inflation-adjusted terms). Hashing a password must be as slow and costly in real terms 20 years from now as it is today, or else security will be lower. Moore’s Law will never reduce the expense of running an authentication system because security depends on this expense. It also needs to be a non-negligible expense. Achieving any real security requires that password verification take on the order of hundreds of milliseconds or even whole seconds. Unfortunately this hasn’t been the experience of the past 20 years. MD5 was launched over 20 years ago and is still the most common implementation I see in the wild, though it’s gone from being relatively expensive to evaluate to extremely cheap. Moore’s Law has indeed broken MD5 as a password hash and no serious application should still use it. Human memory isn’t more of a problem today than it used to be, though. The problem is that we’ve chosen to let password verification become too cheap.
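One way to keep pace in practice is to periodically re-tune the work factor so that a single verification stays in the hundreds-of-milliseconds range on current hardware. Here is a rough calibration sketch using PBKDF2 from Python’s standard library; the 250 ms target and the starting iteration count are assumptions for illustration.

```python
import hashlib
import os
import time

# Rough calibration: keep doubling the PBKDF2 iteration count until one
# verification takes at least ~250 ms on the machine doing the hashing.
def calibrate(target_seconds=0.25, iterations=10_000):
    salt = os.urandom(16)
    while True:
        start = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"benchmark", salt, iterations)
        if time.perf_counter() - start >= target_seconds:
            return iterations
        iterations *= 2

# Re-run periodically and store the chosen count alongside each hash.
print(calibrate())
```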
Authentication is machine learning
Last week, I gave a talk at the Center for Information Technology Policy at Princeton. My goal was to expand my usual research talk on passwords with broader predictions about where authentication is going. From the reaction and discussion afterwards, one point I made stood out: authenticating humans is becoming a machine learning problem.
Problems with passwords are well-documented. They’re easy to guess, they can be sniffed in transit, stolen by malware, phished or leaked. This has led to loads of academic research seeking to replace passwords with something, anything, that fixes these “obvious” problems. There’s also a smaller sub-field of papers attempting to explain why passwords have survived. We’ve made the point well that network economics heavily favor passwords as the incumbent, but underestimated how effectively the risks of passwords can be managed in practice by good machine learning.
From my brief time at Google, my internship at Yahoo!, and conversations with other companies doing web authentication at scale, I’ve observed that as authentication systems develop they gradually merge with other abuse-fighting systems dealing with various forms of spam (email, account creation, link, etc.) and phishing. Authentication eventually loses its binary nature and becomes a fuzzy classification problem.
Continue reading Authentication is machine learning
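To illustrate the shift from a binary check to a risk score, here is a toy sketch of login classification using scikit-learn. The features, training data, and thresholds are all invented for illustration and bear no relation to any production system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy feature vectors for login attempts (all features and data invented):
# [correct_password, known_device, country_matches_history, failures_last_hour]
X = np.array([
    [1, 1, 1, 0],   # routine login from a familiar device
    [1, 0, 0, 5],   # right password, but odd device/location after failures
    [0, 0, 0, 8],   # looks like a guessing campaign
    [1, 1, 0, 0],   # travelling user on a known device
])
y = np.array([1, 0, 0, 1])  # 1 = legitimate, 0 = abusive

clf = LogisticRegression().fit(X, y)

# Rather than a hard accept/reject, the classifier yields a risk score,
# and the grey zone can trigger step-up authentication (thresholds invented).
p_legit = clf.predict_proba([[1, 0, 1, 1]])[0, 1]
action = "allow" if p_legit > 0.9 else ("challenge" if p_legit > 0.5 else "block")
print(p_legit, action)
```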