CFP: Learning from Authoritative Security Experiment Results (LASER 2016)

This year, I’m on the PC for LASER 2016: the Oakland-attached workshop on Learning from Authoritative Security Experiment Results. The LASER 2016 CFP is now online, with a focus on methodologies for computer security experimentation, new experimental approaches, unexpected results or failed experiments, and, more generally, consideration of how to standardise scientific approaches to security research. Please consider submitting a paper — especially if you are pushing the boundaries on how we conduct experiments in the field of computer-security research!

The deadline is 29 January 2016. A limited number of student scholarships will be available to help students attend.

Continue reading CFP: Learning from Authoritative Security Experiment Results (LASER 2016)

Snoopers’ Charter 2.0

This afternoon at 4.30 I have been invited to give evidence in Parliament to the Joint Select Committee on the Investigatory Powers Bill.

This follows evidence I gave on the technical aspects of the bill to the Science and Technology Committee on November 10th; see video and documents. Of particular interest may be comments by my Cambridge colleague Richard Clayton; an analysis by my UCL colleague George Danezis; the ORG wiki; and finally the text of the bill itself.

While the USA has reacted to the Snowden revelations by restraining the NSA in various ways, the UK reaction appears to be the opposite. Do we really want to follow countries like China, Russia and Kazakhstan, and take the risk that we’ll tip countries like Brazil and India into following our lead? If the Internet fragments into national islands, that will not only do grave harm to the world economy, but make life a lot harder for GCHQ too.

The emotional cost of cybercrime

We know more and more about the financial cost of cybercrime, but there has been very little work on its emotional cost. David Modic and I decided to investigate. We wanted to empirically test whether there are emotional repercussions of becoming a victim of fraud (Yes, there are). We wanted to compare emotional and financial impact across different categories of fraud and establish a ranking list (And we did). An interesting, although not surprising, finding was that in every tested category the victim’s perception of emotional impact outweighed the reported financial loss.

A victim may think that they will still be able to recover their money, if not their pride. That really depends on what type of fraud they facilitated. If it is auction fraud, then their chances of recovery are comparatively higher than in bank fraud – we found that 26% of our sample would attempt to recover funds lost in a fraudulent auction, and approximately half of them were reimbursed (look at this presentation). There is considerable evidence that banks are not very likely to believe someone claiming to be a victim of, say, identity theft and by extension bank fraud. Thus, when someone ends up out of pocket, they will likely also go through a process of secondary victimisation, where they are told they broke some small-print rule, like using the same PIN for two of their bank cards or not using the bank’s approved anti-virus software, and are thus not eligible for any refund and it is all their own fault, really.

You can find the article here or here. (It was published in IEEE Security & Privacy.)

This paper complements and extends our earlier work on the costs of cybercrime, where we show that the broader economic costs to society of cybercrime – such as loss of confidence in online shopping and banking – also greatly exceed the amounts that cybercriminals actually manage to steal.

Internet of Bad Things

A lot of people are starting to ask about the security and privacy implications of the “Internet of Things”. Once there’s software in everything, what will go wrong? We’ve seen a botnet recruiting CCTV cameras, and a former Director of GCHQ recently told a parliamentary committee that it might be convenient if a suspect’s car could be infected with malware that would cause it to continually report its GPS position. (The new Investigatory Powers Bill will give the police and the spooks the power to hack any device they want.)

So here is the video of a talk I gave on The Internet of Bad Things to the Virus Bulletin conference. As the devices around us become smarter they will become less loyal, and it’s not just about malware (whether written by cops or by crooks). We can expect all sorts of novel business models, many of them exploitative, as well as some downright dishonesty: the recent Volkswagen scandal won’t be the last.

But dealing with pervasive malware in everything will demand new approaches. Our approach to the Internet of Bad Things includes our new Cambridge Cybercrime Centre, which will let us monitor bad things online at the kind of scale that will be required.

Efficient multivariate statistical techniques for extracting secrets from electronic devices

That’s the title of my PhD thesis, supervised by Markus Kuhn, which has become available recently as CL tech report 878:
http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-878.html

In this thesis I give a detailed presentation of template attacks, which are considered the most powerful kind of side-channel attack, and present several methods for implementing and evaluating these attacks efficiently in different scenarios.

Among other things, these contributions may allow evaluation labs to perform their evaluations faster; they show that we can determine an 8-bit target value almost perfectly even when that value is manipulated by only a single LOAD instruction (possibly the best published results of this kind); and they show how to cope with differences across devices.

Some of the datasets used in my experiments, along with MATLAB scripts for reproducing my results, are available here:
http://www.cl.cam.ac.uk/research/security/datasets/grizzly/
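
For readers who want to try the basic technique on data like this, here is a minimal sketch of a classic template attack (illustrative only, and not the optimised methods from the thesis): build a mean vector for each candidate value and a pooled covariance matrix from the profiling traces, then pick the candidate whose multivariate Gaussian best explains the attack traces. The variable names and data layout below are assumptions; the Grizzly page documents the real formats.

```python
# Minimal sketch of a classic template attack (illustrative, not the thesis code).
# profiling_traces[k] is an (n_k, d) array of traces recorded while the device
# handled candidate value k; attack_traces is an (n_a, d) array for the unknown value.
import numpy as np

def build_templates(profiling_traces):
    """Return per-value mean vectors and a pooled covariance matrix."""
    means = {k: t.mean(axis=0) for k, t in profiling_traces.items()}
    d = next(iter(means.values())).shape[0]
    pooled = np.zeros((d, d))
    n_total = 0
    for k, t in profiling_traces.items():
        centred = t - means[k]
        pooled += centred.T @ centred
        n_total += t.shape[0]
    pooled /= (n_total - len(profiling_traces))
    return means, pooled

def guess_value(attack_traces, means, pooled_cov):
    """Return the candidate value with the highest joint Gaussian log-likelihood."""
    inv = np.linalg.inv(pooled_cov)
    scores = {}
    for k, mu in means.items():
        diff = attack_traces - mu
        # Sum of per-trace Mahalanobis terms; constants cancel across candidates.
        scores[k] = -0.5 * np.einsum('ij,jk,ik->', diff, inv, diff)
    return max(scores, key=scores.get)
```

In practice the traces are first compressed, for example by selecting points of interest or by a linear projection such as PCA or LDA, which is exactly where most of the efficiency questions addressed in the thesis arise.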

 

Ongoing badness in the RIPE database

A month ago I wrote about the presence of route objects for undelegated IPv4 address space within the RIPE database (strictly I should say RIPE NCC — the body that looks after this database).

The folks at RIPE NCC removed a number of these dubious route objects which had been entered by AS204224.

And they were put straight back again!

This continues to this day — it looks to me as if once the RIPE NCC staff go home for the evening the route objects are resurrected.

So for AS204224 (CJSC Mashzavod-Marketing-Servis) you can (at the moment of writing) find route objects for four /19s and two /21s that have creation times between 17:53 and 17:55 this evening (2 November). This afternoon (in RIPE NCC working hours) there were no such route objects.

As an aside: as well as AS204224 I see route objects for undelegated space (these are all more recent than my original blog article) from:

    AS200439 LLC Stadis, Ekaterinburg, Russia
    AS204135 LLC Transmir, Blagoveshensk, Russia
    AS204211 LLC Aspect, Novgorod, Russia

I’d like to give a detailed account of the creation and deletion of the AS204224 route objects, but I don’t believe that there’s a public archive of RIPE database snapshots (you can find the latest snapshot taken at about 03:45 each morning at ftp://ftp.ripe.net/ripe/dbase, but if you don’t download it that day then it’s gone!).

However, I have been collecting copies of the database for the past few days and the creation times for the route objects are:

    Thu 2015-10-29  18:03
    Fri 2015-10-30  15:01
    Sat 2015-10-31  17:54
    Sun 2015-11-01  18:31
    Mon 2015-11-02  17:53

There are two possible conclusions to draw from this. Perhaps the AS204224 people only come out at night and dutifully delete their route objects when the sun rises, before repeating the activity the following night (sounds like one of Grimm’s fairy tales, doesn’t it?).

The alternative, less magical, explanation is that the staff at RIPE NCC are playing “whack-a-mole” INSIDE THEIR OWN DATABASE! (And although they work weekends, they go home early on Friday afternoons!)
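
If you want to keep your own archive and watch for this sort of thing, here is a minimal sketch of fetching a day’s dump and listing the route objects with a given origin AS, along with their created: timestamps. It assumes the split file ripe.db.route.gz lives under the FTP directory mentioned above; check the server for the actual layout.

```python
# Minimal sketch: fetch the daily RIPE route-object dump and list the
# route objects with a given origin AS, together with their creation times.
# Assumes a split file "ripe.db.route.gz" under the FTP directory mentioned
# above; adjust the URL if the layout differs.
import gzip
import urllib.request

DUMP_URL = "ftp://ftp.ripe.net/ripe/dbase/split/ripe.db.route.gz"
TARGET_ORIGIN = "AS204224"

def parse_objects(text):
    """Yield RPSL objects as dicts of attribute -> first value (objects are blank-line separated)."""
    obj = {}
    for line in text.splitlines():
        if not line.strip():
            if obj:
                yield obj
                obj = {}
            continue
        if line.startswith(("%", "#")) or line[0].isspace():
            continue  # skip comments and continuation lines in this rough sketch
        key, _, value = line.partition(":")
        obj.setdefault(key.strip().lower(), value.strip())
    if obj:
        yield obj

with urllib.request.urlopen(DUMP_URL) as resp:
    raw = gzip.decompress(resp.read())

for obj in parse_objects(raw.decode("latin-1", errors="replace")):
    if obj.get("origin", "").upper() == TARGET_ORIGIN:
        print(obj.get("route", "?"), obj.get("created", "no created: attribute"))
```

Run something like this once a day, before the next snapshot replaces the current one, and you end up with the archive that doesn’t otherwise exist.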

Emerging, fascinating, and disruptive views of quantum mechanics

I have just spent a long weekend at Emergent Quantum Mechanics (EmQM15). This workshop is organised every couple of years by Gerhard Groessing and is the go-to place if you’re interested in whether quantum mechanics dooms us to a universe (or multiverse) that can be causal or local but not both, or whether we might just make sense of it after all. It’s held in Austria – the home not just of the main experimentalists working to close loopholes in the Bell tests, such as Anton Zeilinger, but of many of the physicists still looking for an underlying classical model from which quantum phenomena might emerge. The relevance to the LBT audience is that the security proofs of quantum cryptography, and the prospects for quantum computing, turn on this obscure area of science.

Two themes emerged from this year’s workshop, both relevant to these questions: weak measurement and emergent global correlation.

Weak measurement goes back to the 1980s and the thesis of Lev Vaidman. The idea is that you can probe the trajectory of a quantum-mechanical particle by making many measurements of a weakly coupled observable between preselection and postselection operations. This has profound theoretical implications, as it means that the Heisenberg uncertainty limit can be stretched in carefully chosen circumstances; Masanao Ozawa has come up with a more rigorous version of the Heisenberg bound, and in fact gave one of the keynote talks two years ago. Now all of a sudden there are dozens of papers on weak measurement, exploring all sorts of scientific puzzles. This leads naturally to the question of whether weak measurement is any good for breaking quantum cryptosystems. After some discussion with Lev I’m convinced the answer is almost certainly no; getting information about quantum states takes an exponential amount of work and a lot of averaging, and works only in specific circumstances, so it’s easy for the designer to forestall. There is, however, a question around interdisciplinary proofs. Physicists have known about weak measurement since 1988 (even if few paid attention till a few years ago), yet no-one has rushed to tell the crypto community “Sorry, guys, when we said that nothing can break the Heisenberg bound, we kinda overlooked something.”
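
For reference, the form of Ozawa’s refinement that is usually quoted (my summary, not something presented at the workshop) is

    \varepsilon(A)\,\eta(B) + \varepsilon(A)\,\sigma(B) + \sigma(A)\,\eta(B) \;\ge\; \tfrac{1}{2}\bigl|\langle[A,B]\rangle\bigr|

where \varepsilon(A) is the error of an approximate measurement of A, \eta(B) is the disturbance that measurement causes to B, and \sigma denotes the standard deviation in the initial state. The point is that the naive product \varepsilon(A)\eta(B) on its own need not obey the old textbook bound, which is what the recent error-disturbance experiments probe.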

The second theme, emergent global correlation, may be of much more profound interest, to cryptographers and physicists alike.

Continue reading Emerging, fascinating, and disruptive views of quantum mechanics

87% of Android devices insecure because manufacturers fail to provide security updates

We are presenting a paper at SPSM next week that shows that, on average over the last four years, 87% of Android devices are vulnerable to attack by malicious apps. This is because manufacturers have not provided regular security updates. Some manufacturers are much better than others, however, and our study shows that devices built by LG and Motorola, as well as those shipped under the Google Nexus brand, are much better than most. Users, corporate buyers and regulators can find further details on manufacturer performance at AndroidVulnerabilities.org.

We used data collected by our Device Analyzer app, which is available from the Google Play Store. The app collects data from volunteers around the globe and we have used data from over 20,000 devices in our study. As always, we are keen to recruit more contributors! We combined Device Analyzer data with information we collected on critical vulnerabilities affecting Android. We used this to develop the FUM score which can be used to compare the security provided by different manufacturers. Each manufacturer is given a score out of 10 based on: f, the proportion of devices free from known critical vulnerabilities; u, the proportion of devices updated to the most recent version; and m, the mean number of vulnerabilities the manufacturer has not fixed on any device.
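
To make the scoring concrete, here is a minimal sketch of a FUM-style calculation. The weights and the way m is squashed into the [0, 1] range below are illustrative assumptions for this post rather than the exact formula; the paper and AndroidVulnerabilities.org give the real definition.

```python
# Illustrative sketch of a FUM-style score out of 10.
# NOTE: the weights and the squashing of m into [0, 1] here are assumptions made
# for illustration; the paper defines the exact formula.
import math

def fum_score(f, u, m, w_f=4.0, w_u=3.0, w_m=3.0):
    """f: proportion of devices free from known critical vulnerabilities (0..1)
    u: proportion of devices running the most recent version (0..1)
    m: mean number of outstanding (unfixed) vulnerabilities per device
    Returns a score out of 10 (higher is better)."""
    m_component = 2.0 / (1.0 + math.exp(m))  # 1 when m == 0, falls towards 0 as m grows
    return w_f * f + w_u * u + w_m * m_component

# Example with made-up numbers: most devices patched, few outstanding bugs.
print(round(fum_score(f=0.8, u=0.6, m=0.5), 2))
```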

The problem with the lack of updates to Android devices is well known, and recently Google and Samsung have committed to shipping security updates every month. Our hope is that by quantifying the problem we can help people when choosing a device, and that this in turn will provide an incentive for other manufacturers and operators to deliver updates.

Google has done a good job at mitigating many of the risks, and we recommend users only install apps from Google’s Play Store since it performs additional safety checks on apps. Unfortunately Google can only do so much, and recent Android security problems have shown that this is not enough to protect users. Devices require updates from manufacturers, and the majority of devices aren’t getting them.

For further information, contact Daniel Thomas and Alastair Beresford via contact@androidvulnerabilities.org.