I’m liveblogging the Workshop on Security and Human Behaviour which is being held here in Cambridge. The participants’ papers are here and the programme is here. For background, see the liveblogs for SHB 2008-13 which are linked here and here. Blog posts summarising the talks at the workshop sessions will appear as followups below, and audio files will be here.
Post-Snowden: the economics of surveillance
After 9/11, we worked on the economics of security, in an attempt to bring back some rationality. Next followed the economics of privacy, which Alessandro Acquisti and others developed to explain why people interact with social media the way they do. A year after the Snowden revelations, it’s time to talk about the economics of surveillance.
In a new paper I discuss how information economics applies to the NSA and its allies, just as it applies to Google and Microsoft. The Snowden papers reveal that the modern world of signals intelligence exhibits strong network effects which cause surveillance platforms to behave much like operating systems or social networks. So while India used to be happy to buy warplanes from Russia (and it still does), it now shares intelligence with the NSA, which has the bigger network. Networks also tend to merge, so we see the convergence of intelligence with law enforcement everywhere, from PRISM to the UK Communications Data Bill.
There is an interesting cultural split in that while the IT industry understands network effects extremely well, the international relations community pays almost no attention to them. So it’s not just a matter of the left coast thinking Snowden a whistleblower and the right coast thinking him a traitor; there is a real gap in the underlying conceptual analysis.
That is a shame. The global surveillance network that’s currently being built by the NSA, GCHQ and their collaborator agencies in dozens of countries may become a new international institution, like the World Bank or the United Nations, but more influential and rather harder to govern. And just as Britain’s imperial network of telegraph and telephone cables survived the demise of empire, so the global surveillance network may survive America’s pre-eminence. Mr Obama might care to stop and wonder whether the amount of privacy he extends to a farmer in the Punjab today might be correlated with the amount of privacy the ruler of China will extend to his grandchildren in fifty years’ time. What goes around, comes around.
IEEE Security and Privacy 2014
I’m at the IEEE Symposium on Security and Privacy, known in the trade as “Oakland” even though it’s now moved to San Jose. It’s probably the leading event every year in information security. I will try to liveblog it in followups to this post.
The pre-play vulnerability in Chip and PIN
Today we have published a new paper: “Chip and Skim: cloning EMV cards with the pre-play attack”, presented at the 2014 IEEE Symposium on Security and Privacy. The paper analyses the EMV protocol, the leading smart card payment system with 1.62 billion cards in circulation, known as “Chip and PIN” in English-speaking countries. As a result of the Target data breach, banks in the US (which have lagged behind the rest of the world in Chip and PIN deployment) have accelerated their efforts to roll out Chip and PIN capable cards to their customers.
However, our paper shows that Chip and PIN, as currently implemented, still has serious vulnerabilities, which might leave customers at risk of fraud. Previously we have shown how cards can be used without knowing the correct PIN, and that card details can be intercepted as a result of flawed tamper-protection. Our new paper shows that it is possible to create clone chip cards which normal bank procedures will not be able to distinguish from the real card.
When a Chip and PIN transaction is performed, the terminal requests that the card produce an authentication code for the transaction. Part of this transaction is a number that is supposed to be random, so as to stop an authentication code being generated in advance. However, there are two ways in which the protection can be bypassed: the first requires that the Chip and PIN terminal has a poorly designed random number generator (which we have observed in the wild); the second requires that the Chip and PIN terminal or its communications back to the bank can be tampered with (which again, we have observed in the wild).
To carry out the attack, the criminal arranges that the targeted terminal will generate a particular “random” number in the future (either by predicting which number will be generated by a poorly designed random number generator, by tampering with the random number generator, or by tampering with the random number sent to the bank). Then the criminal gains temporary access to the card (for example by tampering with a Chip and PIN terminal) and requests authentication codes corresponding to the “random” number(s) that will later occur. Finally, the attacker loads the authentication codes on to the clone card, and uses this card in the targeted terminal. Because the authentication codes that the clone card provides match those which the real card would have provided, the bank cannot distinguish between the clone card and the real one.
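The two phases above can be sketched in a toy simulation. This is a minimal illustration only: the key, the MAC construction (HMAC-SHA256 here, rather than the 3DES-based MACs real EMV cards use), and the counter-based “random” number generator are all hypothetical stand-ins chosen to show why a predictable number lets codes be computed in advance.

```python
import hmac
import hashlib

# Hypothetical card secret; real EMV cards use issuer-derived 3DES keys.
CARD_KEY = b"card-secret-key"

def card_auth_code(unpredictable_number: bytes, amount: int) -> bytes:
    """The authentication code the card returns for a transaction,
    bound to the terminal's 'unpredictable' number and the amount."""
    msg = unpredictable_number + amount.to_bytes(4, "big")
    return hmac.new(CARD_KEY, msg, hashlib.sha256).digest()

def weak_terminal_un(counter: int) -> bytes:
    """A poorly designed terminal: the 'random' number is a counter,
    so future values are trivially predictable."""
    return counter.to_bytes(4, "big")

# Phase 1: with brief access to the real card, the attacker harvests
# codes for the numbers the targeted terminal will generate later.
precomputed = {
    weak_terminal_un(n): card_auth_code(weak_terminal_un(n), 5000)
    for n in range(100, 110)
}

# Phase 2: the clone card replays the stored code when the terminal
# happens to use the predicted number.
un = weak_terminal_un(105)
clone_response = precomputed[un]

# The bank sees a code identical to what the real card would produce,
# so it cannot tell the clone from the genuine card.
assert clone_response == card_auth_code(un, 5000)
```

The point of the sketch is that the authentication code depends only on the card key, the transaction data and the “unpredictable” number; once that number is predictable, advance harvesting defeats the freshness guarantee entirely.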
Because the transactions look legitimate, banks may refuse to refund victims of fraud. So in the paper we discuss how bank procedures could be improved to detect whether this attack has occurred. We also describe how the Chip and PIN system could be improved. As a result of our research, work has started on mitigating one of the vulnerabilities we identified; the certification requirements for random number generators in Chip and PIN terminals have been improved, though old terminals may still be vulnerable. Attacks making use of tampered random number generators or communications are more challenging to prevent and have yet to be addressed.
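One kind of bank-side detection is straightforward to illustrate: a terminal whose “unpredictable” numbers repeat far too often probably has a weak or tampered generator. The function name and threshold below are hypothetical, not part of any banking standard; it is a sketch of the sort of statistical sanity check the paper’s procedural improvements point towards.

```python
from collections import Counter

def un_looks_suspicious(recent_uns: list, threshold: float = 0.5) -> bool:
    """Flag a terminal if any single 'unpredictable' number accounts
    for more than `threshold` of its recent transactions."""
    counts = Counter(recent_uns)
    most_common_count = counts.most_common(1)[0][1]
    return most_common_count / len(recent_uns) > threshold

# A healthy terminal produces varied numbers...
good = [bytes([i % 251]) * 4 for i in range(100)]
# ...while a stuck or rigged generator keeps emitting the same one.
bad = [b"\x00\x00\x00\x01"] * 90 + good[:10]

assert not un_looks_suspicious(good)
assert un_looks_suspicious(bad)
```

A real deployment would of course need per-terminal baselines and would also test for sequential patterns, not just repeats, since a counter never repeats yet is perfectly predictable.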
Update (2014-05-20): There is press coverage of this paper in The Register, SC Magazine UK and Schneier on Security.
Update (2014-05-21): Also now covered in The Hacker News.
Ghosts of Banking Past
Bank names are so tricksy — they all have similar words in them… and so it’s common to see phishing feeds with slightly the wrong brand identified as being impersonated.
However, this story is about how something the other way around has happened, in that AnonGhost, a hacker group, believe that they’ve defaced “Yorkshire Bank, one of the largest United Kingdom bank” and there’s some boasting about this to be found at http://www.p0ison.com/ybs-bank-got-hacked-by-team-anonghost/.
However, it rather looks to me as if they’ve hacked an imitation bank instead! A rather less glorious exploit from the point of view of potential admirers.
Continue reading Ghosts of Banking Past
Financial cryptography 2014
I will be trying to liveblog Financial Cryptography 2014. I just gave a keynote talk entitled “EMV – Why Payment Systems Fail” summarising our last decade’s research on what goes wrong with Chip and PIN. There will be a paper on this out in a few months; meanwhile here are the slides and here’s our page of papers on bank security.
The sessions of refereed papers will be blogged in comments to this post.
WEIS 2014: last call for papers
Next year’s Workshop on the Economics of Information Security (WEIS 2014) will be at Penn State on June 23–24. Submissions are due a week from today, at the end of February. It will be fascinating to see what effects the Snowden revelations will have on the community’s thinking. Get writing!
Why dispute resolution is hard
Today we release a paper on security protocols and evidence which analyses why dispute resolution mechanisms in electronic systems often don’t work very well. On this blog we’ve noted many many problems with EMV (Chip and PIN), as well as other systems from curfew tags to digital tachographs. Time and again we find that electronic systems are truly awful for courts to deal with. Why?
The main reason, we observed, is that their dispute resolution aspects were never properly designed, built and tested. The firms that delivered the main production systems assumed, or hoped, that because some audit data were available, lawyers would be able to use them somehow.
As you’d expect, all sorts of things go wrong. We derive some principles, and show how these are also violated by new systems ranging from phone banking through overlay payments to Bitcoin. We also propose some enhancements to the EMV protocol which would make it easier to resolve disputes over Chip and PIN transactions.
Update (2014-03-07): This post was mentioned on Bruce Schneier’s blog, and there is some good discussion there.
Update (2014-03-03): The slides for the presentation at Financial Cryptography are now online.
Why bouncing droplets are a pretty good model of quantum mechanics
Today Robert Brady and I publish a paper that solves an outstanding problem in physics. We explain the beautiful bouncing droplet experiments of Yves Couder, Emmanuel Fort and their colleagues.
For years now, people interested in the foundations of physics have been intrigued by the fact that droplets bouncing on a vibrating tray of fluid can behave in many ways like quantum mechanical particles, with single-slit and double-slit diffraction, tunneling, Anderson localisation and quantised orbits.
In our new paper, Robert Brady and I explain why. The wave field surrounding the droplet is, to a good approximation, Lorentz covariant with the constant c being the speed of surface waves. This plus the inverse square force between bouncing droplets (which acts like the Coulomb force) gives rise to an analogue of the magnetic force, which can be observed clearly in the droplet data. There is also an analogue of the Schrödinger equation, and even of the Pauli exclusion principle.
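The Lorentz covariance claim can be stated compactly. Small-amplitude surface waves obey a wave equation whose form is preserved under the Lorentz transformation, with the wave speed c playing the role that the speed of light plays in special relativity (a standard property of the wave equation, sketched here rather than the paper’s full derivation):

```latex
\[
  \frac{1}{c^{2}}\frac{\partial^{2}\phi}{\partial t^{2}} - \nabla^{2}\phi = 0 ,
  \qquad
  x' = \gamma\,(x - vt), \quad
  t' = \gamma\!\left(t - \frac{vx}{c^{2}}\right), \quad
  \gamma = \left(1 - \frac{v^{2}}{c^{2}}\right)^{-1/2}
\]
```

Substituting the primed coordinates into the wave equation returns an equation of identical form, which is why the wave field surrounding the droplet supports relativistic analogues such as the magnetic-like force observed in the data.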
These results not only solve a fascinating puzzle, but might perhaps nudge more people to think about novel models of quantum foundations, about which we’ve written three previous papers.
Reading this may harm your computer
David Modic and I have just published a paper on The psychology of malware warnings. We’re constantly bombarded with warnings designed to cover someone else’s back, but what sort of text should we put in a warning if we actually want the user to pay attention to it?
To our surprise, social cues didn’t seem to work. What works best is to make the warning concrete; people ignore general warnings such as that a web page “might harm your computer” but do pay attention to a specific one such as that the page would “try to infect your computer with malware designed to steal your bank account and credit card details in order to defraud you”. There is also some effect from appeals to authority: people who trust their browser vendor will avoid a page “reported and confirmed by our security team to contain malware”.
We also analysed who turned off browser warnings, or would have if they’d known how: they were people who ignored warnings anyway, typically men who distrusted authority and either couldn’t understand the warnings or were IT experts.