All posts by Ross Anderson

Internet of Bad Things

A lot of people are starting to ask about the security and privacy implications of the “Internet of Things”. Once there’s software in everything, what will go wrong? We’ve seen a botnet recruiting CCTV cameras, and a former Director of GCHQ recently told a parliamentary committee that it might be convenient if a suspect’s car could be infected with malware that would cause it to continually report its GPS position. (The new Investigatory Powers Bill will give the police and the spooks the power to hack any device they want.)

So here is the video of a talk I gave on The Internet of Bad Things to the Virus Bulletin conference. As the devices around us become smarter they will become less loyal, and it’s not just about malware (whether written by cops or by crooks). We can expect all sorts of novel business models, many of them exploitative, as well as some downright dishonesty: the recent Volkswagen scandal won’t be the last.

But dealing with pervasive malware in everything will demand new approaches. Our approach to the Internet of Bad Things includes our new Cambridge Cybercrime Centre, which will let us monitor bad things online at the kind of scale that will be required.

Emerging, fascinating, and disruptive views of quantum mechanics

I have just spent a long weekend at Emergent Quantum Mechanics (EmQM15). This workshop is organised every couple of years by Gerhard Groessing and is the go-to place if you’re interested in whether quantum mechanics dooms us to a universe (or multiverse) that can be causal or local but not both, or whether we might just make sense of it after all. It’s held in Austria – the home not just of the main experimentalists working to close loopholes in the Bell tests, such as Anton Zeilinger, but of many of the physicists still looking for an underlying classical model from which quantum phenomena might emerge. The relevance to the LBT audience is that the security proofs of quantum cryptography, and the prospects for quantum computing, turn on this obscure area of science.

The two themes emergent from this year’s workshop are both relevant to these questions; they are weak measurement and emergent global correlation.

Weak measurement goes back to the 1980s and the thesis of Lev Vaidman. The idea is that you can probe the trajectory of a quantum mechanical particle by making many measurements of a weakly coupled observable between preselection and postselection operations. This has profound theoretical implications, as it means that the Heisenberg uncertainty limit can be stretched in carefully chosen circumstances; Masanao Ozawa has come up with a more rigorous version of the Heisenberg bound, and in fact gave one of the keynote talks two years ago. Now all of a sudden there are dozens of papers on weak measurement, exploring all sorts of scientific puzzles. This leads naturally to the question of whether weak measurement is any good for breaking quantum cryptosystems. After some discussion with Lev I’m convinced the answer is almost certainly no; extracting information about quantum states this way takes an exponential amount of work and lots of averaging, and works only in specific circumstances, so it’s easy for the designer to forestall. There is however a question around interdisciplinary proofs. Physicists have known about weak measurement since 1988 (even if few paid attention till a few years ago), yet no-one has rushed to tell the crypto community “Sorry, guys, when we said that nothing can break the Heisenberg bound, we kinda overlooked something.”
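The post doesn’t give the formulas, but for orientation, here is the standard weak-value expression from the 1988 Aharonov–Albert–Vaidman paper, together with Ozawa’s error–disturbance relation (my reading of the “more rigorous version of the Heisenberg bound” mentioned above).

```latex
% Weak value of an observable \hat{A}, measured between
% preselection |\psi\rangle and postselection |\phi\rangle:
\[
  A_w = \frac{\langle \phi \,|\, \hat{A} \,|\, \psi \rangle}{\langle \phi \,|\, \psi \rangle}
\]
% Ozawa's error-disturbance relation, where \varepsilon(A) is the
% measurement error on A, \eta(B) is the disturbance to B, and
% \sigma denotes the standard deviation in the measured state:
\[
  \varepsilon(A)\,\eta(B) + \varepsilon(A)\,\sigma(B) + \sigma(A)\,\eta(B)
  \;\ge\; \tfrac{1}{2}\,\bigl|\langle [\hat{A},\hat{B}] \rangle\bigr|
\]
```

When the pre- and postselected states are nearly orthogonal, the denominator becomes tiny, so the weak value can lie far outside the eigenvalue range of the observable. That is what lets the Heisenberg limit appear to stretch, and also why each run yields almost no information, making the heavy averaging mentioned above unavoidable.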

The second theme, emergent global correlation, may be of much more profound interest, to cryptographers and physicists alike.


Decepticon: interdisciplinary conference on deception research

I’m at Decepticon 2015 and will be liveblogging the talks in followups to this post. Up till now, research on deception has been spread around half a dozen different events, aimed at cognitive psychologists, forensic psychologists, law enforcement, cybercrime specialists and others. My colleague Sophie van der Zee decided to organise a single annual event to bring everyone together, and Decepticon is the result. With over 160 registrants for the first edition of the event (and late registrants turned away) it certainly seems to have hit a sweet spot.

Award-winning case history of the care.data health privacy scandal

Each year we divide our masters of public policy students into teams and get them to write case studies of public policy failures. The winning team this year wrote a case study of the care.data fiasco. The UK government collected personal health information on tens of millions of people who had had hospital treatment in England and then sold it off to researchers, drug companies and even marketing firms, with only a token gesture of anonymisation. In practice patients were easy to identify. The resulting scandal stalled plans to centralise GP data as well, at least for a while.

Congratulations to Lizzie Presser, Maia Hruskova, Helen Rowbottom and Jesse Kancir, who tell the story of how mismanagement, conflicts and miscommunication led to a failure of patient privacy on an industrial scale, and discuss the lessons that might be learned. Their case study has just appeared today in Technology Science, a new open-access journal for people studying conflicts that arise between technology and society. LBT readers will recall several posts reporting the problem, but it’s great to have a proper, peer-reviewed case study that we can give to future generations of students. (Incidentally, the previous year’s winning case study was on a related topic, the failure of the NHS National Programme for IT.)

Four cool new jobs

We’re advertising for four people to join the security group from October.

The first three are in our new cybercrime centre: two software engineers to develop new ways of finding the bad guys in the terabytes and (soon) petabytes of data we get on spam, phish and other bad stuff online, and a lawyer to explore and define the boundaries of how we share cybercrime data.

The fourth is in Security analysis of semiconductor memory. Could you help us come up with neat new ways of hacking chips? We’ve invented quite a few of these in the past, ranging from optical fault induction in particular to semi-invasive attacks in general. What’s next?

FCA view on unauthorised transactions

Yesterday the Financial Conduct Authority (the UK bank regulator) issued a report on Fair treatment for consumers who suffer unauthorised transactions. This is an issue in which we have an interest, as fraud victims regularly come to us after being turned away by their bank and by the financial ombudsman service. Yet the FCA have found that everything is hunky dory, and conclude “we do not believe that further thematic work is required at this stage”.

One of the things the FCA asked their consultants was whether there’s any evidence that claims are rejected on the sole basis that a PIN was used. The consultants didn’t want to rely on existing work but instead surveyed a nationally representative sample of 948 people and found that 16% had a transaction dispute in the last year. The disputes broke down as 37% mail order/telephone order (MOTO), 22% cancelled future-dated payments, 15% ATM cash withdrawals, 13% shop transactions and 13% lump sums taken from bank accounts. Of the customers who complained, 43% were offered their money back spontaneously and a further 41% asked for it; in the end a total of 68% got refunds after varying periods of time. In total 7% (15 victims) had their claim declined, most because the bank said the transaction was “authorised” or followed a “contract with merchant”, and two because chip and PIN was used (one of them an ATM transaction; the other customer admitted sharing their PIN). Twelve of these 15 considered the result unfair. These figures are entirely consistent with what we learn from the British Crime Survey and elsewhere: two million UK victims a year, of whom most get their money back but many don’t, and a hard core of perhaps a few tens of thousands who end up feeling that their bank has screwed them.
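To make the survey arithmetic easier to follow, here is a quick back-of-envelope script. The sample size and percentages are those quoted above; the exact denominators behind the later figures aren’t stated in this post, so the derived counts are rough estimates rather than the consultants’ own numbers.

```python
# Back-of-envelope arithmetic for the FCA consultants' survey, using the
# figures quoted above. Derived counts are rough estimates, since the
# exact denominators behind each percentage aren't stated in this post.

sample = 948
disputants = round(sample * 0.16)  # 16% had a transaction dispute: ~152

dispute_types = {                  # shares of disputants, as quoted
    "MOTO": 0.37,
    "cancelled future-dated payment": 0.22,
    "ATM cash": 0.15,
    "shop": 0.13,
    "lump sum from bank account": 0.13,
}

print(f"estimated disputants: {disputants}")
for kind, share in dispute_types.items():
    print(f"  {kind}: ~{round(disputants * share)}")

# Refund outcomes: 43% offered their money back spontaneously, a further
# 41% asked for it, and 68% were refunded in total; 15 victims (7%) had
# their claim declined, 12 of whom considered the result unfair.
print(f"offered a refund spontaneously: ~{round(disputants * 0.43)}")
print(f"asked for a refund:             ~{round(disputants * 0.41)}")
print(f"refunded in total:              ~{round(disputants * 0.68)}")
print("claims declined: 15 (12 considered the result unfair)")
```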

The case studies profiled in the consultants’ paper were of glowing happy people who got their money back; the 12 sad losers were not profiled, and the consultants concluded that “Customers might be being denied refunds on the sole basis that Chip and PIN were used … we found little evidence of this” (p 49) and went on to remark helpfully that some customers admitted sharing their PINs and felt OK lying about this. The FCA happily paraphrases this as “We also did not find any evidence of firms holding customers liable for unauthorised transactions solely on the basis that the PIN was used to make the transaction” (main report, p 13, 3.25).

According to recent news reports, the former head of the FCA, Martin Wheatley, was sacked by George Osborne for being too harsh on the banks.

Crypto Wars 2.0

Today we unveil a major report on whether law enforcement and intelligence agencies should have exceptional access to cryptographic keys and to our computer and communications data generally. David Cameron has called for this, as have US law enforcement leaders such as FBI Director James Comey.

This policy repeats a mistake of the 1990s. The Clinton administration tried for years to seize control of civilian cryptography, first with the Clipper Chip, and then with various proposals for ‘key escrow’ or ‘trusted third party encryption’. Back then, a group of experts on cryptography and computer security got together to explain why this was a bad idea. We have now reconvened in response to the attempt by Cameron and Comey to resuscitate the old dead horse of the 1990s.

Our report, Keys Under Doormats: Mandating insecurity by requiring government access to all data and communications, is timed to set the stage for a Wednesday hearing of the Senate Judiciary Committee at which Mr Comey will present his proposals. The reply to Comey will come from Peter Swire, who was on the other side twenty years ago (he was a Clinton staffer) and has written a briefing on the first crypto war here. Peter was recently on President Obama’s NSA review group. He argues that the real way to fix the problems complained of is to fix the mutual legal assistance process – which is also my own view.

Our report is also highly relevant to the new ‘Snoopers’ Charter’ that Home Secretary Theresa May has promised to put before parliament this fall. Mrs May has made clear she wants access to everything.

However, this is both wrong in principle and unworkable in practice. Building back doors into all computer and communication systems is against most of the principles of security engineering, and it is also against the principles of human rights. Our right to privacy, set out in Article 8 of the European Convention on Human Rights, can only be overridden by mechanisms that meet three tests. First, they must be set out in law, with sufficient clarity for their effects to be foreseeable; second, they must be proportionate; third, they must be necessary in a democratic society. As our report makes clear, universal exceptional access will fail all these tests by a mile.

For more, see the New York Times.

Thinking of selling your old phone? Watch out!

Today we unveil two papers describing serious and widespread vulnerabilities in Android mobile phones. The first presents a Security Analysis of Factory Resets. Now that hundreds of millions of people buy and sell smartphones secondhand and use them for everything from banking to dating, it’s important to be able to sanitize your phone. You need to clean it when you buy it, so you don’t get caught by malware; and even more so when you sell it, so you don’t give away your bank credentials or other personal information. So does the factory reset function actually work? We bought a couple of dozen second-hand Android phones and tested them to find out.

The news is not at all good. We were able to retrieve the Google master cookie from the great majority of phones, which means that we could have logged on to the previous owner’s gmail account. The reasons for failure are complex; new phones are generally better than old ones, and Google’s own brand phones are better than the OEM offerings. However the vendors need to do a fair bit of work, and users need to take a fair amount of care.
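The paper describes the actual recovery techniques; purely as an illustration of why a logical “reset” can leave data behind, here is a minimal sketch that scans a raw dump of a reset phone’s data partition for residual credential strings. The image filename and the token pattern are hypothetical placeholders, not what the paper used.

```python
# Minimal illustration (not the paper's methodology): search a raw dump
# of a "factory reset" phone's data partition for leftover credential
# strings. The filename and token pattern are hypothetical placeholders.
import re

IMAGE = "userdata_after_reset.img"    # hypothetical raw partition dump
TOKEN = rb"oauth[\x20-\x7e]{8,128}"   # illustrative pattern for residual tokens

def scan_image(path: str) -> list[tuple[int, bytes]]:
    # A sketch can afford to read the whole image into memory; a real
    # tool would stream it and parse the filesystem structures too.
    with open(path, "rb") as f:
        data = f.read()
    return [(m.start(), m.group()[:64]) for m in re.finditer(TOKEN, data)]

if __name__ == "__main__":
    for offset, token in scan_image(IMAGE):
        print(f"possible residual token at offset {offset:#x}: {token!r}")
```

The underlying point is that deleting files, or even quick-formatting a partition, doesn’t necessarily erase the flash blocks beneath, which is why data such as the master cookie can survive a reset.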

Attacks on a sold phone that could not be properly sanitized are one example of what we call a “user-not-present” attack. Another is when your phone is stolen. Many security software vendors offer a facility to lock or wipe your phone remotely when this happens, and it’s a standard feature with mobile antivirus products. Do these ‘solutions’ work?

You guessed it. Antivirus software that relies on a faulty factory reset can only go so far, and there’s only so much you can do with a user process. The AV vendors have struggled with a number of design tradeoffs, but the results are not that impressive. See Security Analysis of Consumer-Grade Anti-Theft Solutions Provided by Android Mobile Anti-Virus Apps for the gory details. These failings mean that staff at firms which handle lots of second-hand phones (whether lost, stolen, sold or given to charity) could launch some truly industrial-scale attacks. These papers appear today at the Mobile Security Technologies (MoST) workshop at IEEE Security and Privacy.