Category Archives: Cybercrime

Bad malware, worse reporting

The WannaCry malware that has infected some UK hospital computers should interest not just security researchers but also anyone interested in what drives fake news.

Some made errors of fact: the Daily Mail initially reported the ransom demand as 300 bitcoin, or £415,000, rather than $300 in bitcoin. Others made errors of logic: the Indy, for example, reported that “Up to 90 percent of NHS computers still run XP, released in 2001”, citing as its source a BMJ article which stated that 90% of trusts run this version of Windows. And some made errors of concurrency. After dinner I found inquiries from journalists about my fight with the Prime Minister. My what? Eventually I found that the Guardian had followed something Mrs May’s spokesman had said (“not aware of any evidence that patient data has been compromised”) with something I’d said a couple of hours earlier (“The NHS are saying that patient privacy hasn’t been compromised, but if significant numbers of hospitals have been negligently running unpatched computers for two months after the patch came out, how do they know?”). The Home Secretary later helpfully glossed the PM’s stonewall as “No patient data has been accessed or transferred in any way”, but left herself the get-out-of-jail card “that’s the information we’ve been given.”

Many papers caught the international political aspect: that the vulnerability was discovered by the NSA, kept secret rather than fixed (contrary to the advice of Obama’s NSA review group), then stolen, allegedly by the Russians, and published online by the Shadow Brokers. Scary stuff, eh? And we read of some surprising overreactions, such as the GP who switched off his networking as a precaution and found he couldn’t access any of his patients’ records.

As luck would have it, yesterday was the day that I gave my talk on entomology – the classification of software bugs and other security vulnerabilities – to my first-year security and software engineering class. So let’s try to look at it calmly as I’d expect of a student writing an assignment.

The first point is that there’s not really a lot of this malware. The NHS has over 200 hospitals, and the typical IT director is a senior clinician supported by technicians. Yet despite having their IT run by well-meaning amateurs, only 16 NHS organisations have been hit, according to the Register and Kaspersky – including several hospitals.

So the second point is that when the Indy says that “The NHS is a perfect combination of sensitive data and insecure storage. And there’s very little they can do about it” the answer is simple: in well over 90% of NHS organisations, the well-meaning amateurs managed perfectly well. What they did was to keep their systems patched up-to-date; simple hygiene, like washing your hands after going to the toilet.

The third takeaway is that it’s worth looking at the actual code. A UK researcher did so and discovered a kill switch: before doing anything else, the worm checked whether a particular unregistered domain name would answer, so once he registered that domain the spread of new infections stopped.
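To see how simple such a kill switch can be, here is a minimal Python sketch of the idea. The domain is a made-up placeholder, and the real worm implements the check in native Windows code rather than Python; the point is only that the worm stands down once the domain answers, so registering it shuts the spread off.

    import urllib.request

    # Made-up placeholder: the real kill-switch domain was a long
    # pseudo-random string that happened to be unregistered.
    KILL_SWITCH_URL = "http://example-killswitch-domain.invalid/"

    def kill_switch_active() -> bool:
        """Return True if the kill-switch domain answers an HTTP request."""
        try:
            urllib.request.urlopen(KILL_SWITCH_URL, timeout=5)
            return True    # domain resolves and responds: stand down
        except Exception:
            return False   # no answer: the worm would carry on

    if __name__ == "__main__":
        if kill_switch_active():
            print("kill switch triggered: do nothing")
        else:
            print("kill switch absent: worm would spread")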

Now I am just listening on the BBC morning news to a former deputy director of GCHQ, who first cautions against alarmist headlines and argues that everyone develops malware; that a patch had been issued by Microsoft halfway through March; that you can deal with ransomware by keeping decent backups; and that paying ransom will embolden the bad guys. However he claims that it’s clearly an organised criminal attack (when it could be one guy in his bedroom somewhere) and says that the NCSC should look at whether there is some countermeasure that everyone should have taken (for the answer see above).

So our fourth takeaway is that although the details matter, so do the economics of security. When something unexpected happens, you should not just get your head down and look at the code, but look up and observe people’s agendas. Politicians duck and weave; NHS managers blame the system rather than step up to the plate; the NHS as a whole turns every incident into a plea for more money; the spooks want to avoid responsibility for the abuse of their stolen cyberweaponz, but still big up the threat and get more influence for a part of their agency that’s presented as solely defensive. And we academics? Hey, we just want the students to pay attention to what we’re teaching them.

Hope this helps!

Video on Edge

John Brockman of Edge interviewed me in London in March. The video of the interview, and a transcript, are now available on the Edge website. Edge runs big interviews with several dozen scientists a year, with particular interest in people who do cross-disciplinary work. For me, the interaction of economics, psychology and engineering is one of the things that makes security so fascinating, as well as the creativity driven by adversarial behaviour.

The topics covered include the last thirty years of progress (or lack of it) in information security, from the early beginnings, through the crypto wars and crime moving online, to the economics of security. We talked about how cryptography can help less developed countries; about managing complexity in big projects; about how network effects lead firms to design insecure products; about whether big data can undermine democracy by empowering elites; and about how in a future world of intelligent things, security may become more about safety than anything else. Finally I talk about our current big project, the Cambridge Cybercrime Centre.

John runs a literary agency, and he’s worked on books by many of the scientists who feature on his site. This makes me wonder: on what topic should I write my next book?

1000 days of UDP amplification DDoS attacks


We presented “1000 days of UDP amplification DDoS attacks” at APWG’s eCrime 2017 conference last week in Scottsdale Arizona. The paper is here, and the slides from Daniel Thomas’s talk are here.

Distributed Denial of Service (DDoS) attacks employing reflected UDP amplification are regularly used to disrupt networks and systems. The amplification allows one rented server to generate significant volumes of data, while the reflection hides the identity of the attacker. Consequently this is an attractive, low-risk strategy for criminals bent on vandalism and extortion. Despite this, many of these criminals have been arrested.

These reflected UDP amplification attacks work by spoofing the source IP address on UDP packets sent from networks that negligently fail to implement BCP38/SAVE. The attacker puts the victim’s address on the requests they send to the reflectors; since UDP (unlike TCP) does not validate the source address, the much larger responses go to the intended victim rather than back to the attacker. There are many protocols that can be exploited in this way, including DNS and NTP.
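To see why amplification makes this attractive, consider the bandwidth amplification factor: the ratio of response bytes to request bytes. A tiny illustrative Python sketch follows; the byte counts are round, made-up figures for flavour, not measurements from the paper.

    def amplification_factor(request_bytes: int, response_bytes: int) -> float:
        """How much larger the reflected response is than the spoofed request."""
        return response_bytes / request_bytes

    # Round, made-up figures for illustration only.
    examples = {
        "DNS ANY query to an open resolver": (60, 3000),
        "NTP monlist request":               (200, 40000),
    }

    for name, (req, resp) in examples.items():
        print(f"{name}: roughly {amplification_factor(req, resp):.0f}x")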

To measure the use of this strategy we analysed the results of running a network of honeypot UDP reflectors from July 2014 onwards. We explored the life cycle of attacks that use our honeypots, from the scanning phase used to detect our honeypot machines, through to their use in attacks. We see a median of 1450 malicious scanners per day across all UDP protocols, and have recorded details of 5.18 million subsequent attacks involving in excess of 3.31 trillion packets. We investigated the length of attacks and found that most are very short, but some last for days.
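For flavour, here is what the listening side of a UDP honeypot can look like in a few lines of Python. This is a simplified sketch, not our deployment code: in particular it sends nothing back, whereas a real sensor has to answer scanners convincingly while making sure it can never be used to amplify traffic towards a victim.

    import socket
    import time

    def run_udp_sensor(port: int = 1900) -> None:
        """Log every UDP datagram arriving on the given port."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", port))
        while True:
            payload, (src_ip, src_port) = sock.recvfrom(4096)
            # The "source" of an attack packet is the spoofed victim address,
            # so a high packet rate from one address indicates an attack,
            # while isolated probes from many addresses look like scanning.
            print(f"{time.time():.0f} udp/{port} {src_ip}:{src_port} {len(payload)} bytes")

    if __name__ == "__main__":
        run_udp_sensor()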

To estimate the total number of attacks that occurred, including those our honeypots did not observe, we used a capture-recapture statistical technique. From this we estimated that our honeypots can see between 85.1% and 96.6% of UDP reflection attacks over our measurement period.
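The simplest form of capture-recapture is the Lincoln–Petersen estimator: if two independent observers see n1 and n2 attacks respectively, and m attacks are seen by both, the total number of attacks is estimated as n1*n2/m. Here is a sketch with made-up numbers; the analysis in the paper is more careful about the independence assumptions.

    def lincoln_petersen(seen_by_a: set, seen_by_b: set) -> float:
        """Estimate total population size from two overlapping samples.

        N_hat = (n1 * n2) / m, where m is the size of the overlap.
        """
        m = len(seen_by_a & seen_by_b)
        if m == 0:
            raise ValueError("no overlap: estimator is undefined")
        return len(seen_by_a) * len(seen_by_b) / m

    # Made-up example: attack identifiers seen by two independent sensor groups.
    group_a = {f"attack-{i}" for i in range(0, 900)}      # 900 attacks seen
    group_b = {f"attack-{i}" for i in range(150, 1000)}   # 850 attacks seen
    print(round(lincoln_petersen(group_a, group_b)))      # estimates ~1020 attacks in total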

We observe wide variation in the number of attacks per day over the course of the measurement period as attacks using different protocols went in and out of fashion.

This work is ongoing and data from our honeypot network is available to researchers through the Cambridge Cybercrime Centre.

Also, if you want to help stop these attacks being possible you could help CAIDA by running their spoofer prober software that checks which ISPs are negligently failing to implement BCP38/SAVE.

Configuring Zeus

We presented “Configuring Zeus: A case study of online crime target selection and knowledge transmission” at APWG’s eCrime 2017 conference this past week in Scottsdale Arizona. The paper is here, and the slides from Richard Clayton’s talk are here.

Zeus (sometimes called Zbot) is a family of credential-stealing malware which was widely deployed from 2007 to 2012 or so. It belongs to a class of malware dubbed ‘man-in-the-browser’ (a play on a ‘man-in-the-middle attack’) in that it runs on end-user machines where it can intercept web browser traffic to extract login credentials or to manipulate the page content displayed to the user.

It has been used to attack large numbers of sites, mainly banks — its extreme flexibility is achieved with ‘configuration files’ that indicate which websites are to be targeted, which user-submitted fields are to be collected, what webpage rewriting (so-called ‘webinjects’) is required, and where the results are to be sent.
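To give a flavour of what one target entry in such a file has to convey, here is an illustrative sketch as a Python dictionary. This is not real Zeus syntax, and the URLs and field names are invented for the example.

    # Illustrative only: not real Zeus configuration syntax; all names invented.
    target_entry = {
        "url_pattern": "https://onlinebanking.example.com/login*",     # which website to target
        "capture_fields": ["username", "password", "memorable_word"],  # fields to collect
        "webinject": {
            # HTML spliced into the page the victim sees, e.g. an extra
            # field asking for information the real site never requests.
            "insert_after": '<input name="password"',
            "html": '<label>Card PIN</label><input name="pin">',
        },
        "drop_zone": "https://collector.example.net/gate.php",  # where stolen data is sent
    }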

The complexity of these files seems to have restricted the number of websites actually targeted. In a paper presented at WEIS 2014, Tajalizadehkhoob et al. examined a large number of configuration files, described this lack of development, and measured a substantial overlap in the content of different files. As a result, the authors suggested that offenders were not developing configuration files from scratch but were selling, sharing or stealing them.
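One simple way to quantify that kind of overlap is the Jaccard similarity between the sets of target URLs in two configuration files. The WEIS paper's methodology is more sophisticated, but a short sketch conveys the idea.

    def jaccard(a: set, b: set) -> float:
        """Jaccard similarity |A & B| / |A | B|: 1.0 means identical sets."""
        return len(a & b) / len(a | b) if (a or b) else 0.0

    # Invented target lists from two hypothetical configuration files.
    config_1 = {"bank-a.example/login", "bank-b.example/auth", "bank-c.example/signin"}
    config_2 = {"bank-a.example/login", "bank-b.example/auth", "bank-d.example/login"}
    print(f"overlap: {jaccard(config_1, config_2):.2f}")   # 0.50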

We decided to test out this conjecture by seeking out messages about Zeus configuration files on underground forums (many of these have been scraped, leaked or confiscated by law enforcement) — and this paper describes how we found evidence to support all three mechanisms: selling, sharing and stealing.

The paper also gives an account of the history of Zeus, with illustrations from the messages that were uncovered, along with clear evidence that the release by security researchers of tools to decrypt configuration files was closely followed on the forums, and that these tools assisted offenders when it came to stealing configuration files from others.

The University is Hiring

We’re looking for a Chief Information Security Officer. This isn’t a research post here at the lab, but across the yard in University Information Services, where they manage our networks and our administrative systems. There will be opportunities to work with security researchers like us, but the main task is protecting Cambridge from all sorts of online bad actors. If you would like to be in the thick of it, and you know what you’re doing, here’s how you can apply.

Security Economics MOOC

In two weeks’ time we’re starting an open course in security economics. I’m teaching this together with Rainer Boehme, Tyler Moore, Michel van Eeten, Carlos Ganan, Sophie van der Zee and David Modic.

Over the past fifteen years, we’ve come to realise that many information security failures arise from poor incentives. If Alice guards a system while Bob pays the cost of failure, things can be expected to go wrong. Security economics is now an important research topic: you can’t design secure systems involving multiple principals if you can’t get the incentives right. And it goes way beyond computer science. Without understanding how incentives play out, you can’t expect to make decent policy on cybercrime, on consumer protection or indeed on protecting critical national infrastructure.

We first did the course last year as a paid-for course with EdX. Our agreement with them was that they’d charge for it the first time, to recoup the production costs, and thereafter it would be free.

So here it is as a free course. Spread the word!

Yet another Android side channel: input stealing for fun and profit

At PETS 2016 we presented a new side-channel attack in our paper Don’t Interrupt Me While I Type: Inferring Text Entered Through Gesture Typing on Android Keyboards. This was part of Laurent Simon‘s thesis, and won him the runner-up prize for the best student paper award.

We found that software on your smartphone can infer words you type in other apps by monitoring the aggregate number of context switches and the number of hardware interrupts. These are readable by permissionless apps within the virtual procfs filesystem (mounted under /proc). Three previous research groups had found that other files under procfs support side channels. But the files they used contained information about individual apps – e.g. the file /proc/uid_stat/victimapp/tcp_snd contains the number of bytes sent by “victimapp”. These files are no longer readable in the latest Android version.

We found that the “global” files – those that contain aggregate information about the system – also leak. So a curious app can monitor these global files as a user types on the phone and try to work out the words. We looked at smartphone keyboards that support “gesture typing”: a novel input mechanism democratized by SwiftKey, whereby a user drags their finger from letter to letter to enter words.
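On Linux (and hence Android) the aggregate counters live in /proc/stat: the ctxt line gives the total number of context switches since boot, and the first field of the intr line gives the total number of interrupts. The sketch below shows the kind of sampling loop a curious app could run. It is simplified for illustration; a real attack samples far faster and feeds the per-interval deltas to a classifier trained on gesture traces.

    import time

    def read_counters(path: str = "/proc/stat") -> tuple:
        """Return (total context switches, total interrupts) from /proc/stat."""
        ctxt = intr = 0
        with open(path) as f:
            for line in f:
                if line.startswith("ctxt "):
                    ctxt = int(line.split()[1])
                elif line.startswith("intr "):
                    intr = int(line.split()[1])   # first field is the grand total
        return ctxt, intr

    # Simplified sampling loop: print per-interval deltas while the user types.
    prev = read_counters()
    for _ in range(100):
        time.sleep(0.01)                  # a real attack samples much faster
        cur = read_counters()
        print(cur[0] - prev[0], cur[1] - prev[1])
        prev = cur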

This work shows once again how difficult it is to prevent side channels: they come up in all sorts of interesting and unexpected ways. Fortunately, we think there is an easy fix: Google should simply disable access to all procfs files, rather than just the files that leak information about individual apps. Meanwhile, if you’re developing apps for privacy or anonymity, you should be aware that these risks exist.

Inaugural Cybercrime Conference

The Cambridge Cloud Cybercrime Centre is organising an inaugural one-day conference on cybercrime on Thursday, 14th July 2016.

In future years we intend to focus on research that has been carried out using datasets provided by the Cybercrime Centre, but for this first year we have a stellar group of invited speakers who are at the forefront of their fields:

  • Adam Bossler, Associate Professor, Department of Criminal Justice and Criminology, Georgia Southern University, USA
  • Alice Hutchings, Post-doc Criminologist, Computer Laboratory, University of Cambridge, UK
  • David S. Wall, Professor of Criminology, University of Leeds, UK
  • Maciej Korczynski, Post-Doctoral Researcher, Delft University of Technology, The Netherlands
  • Michael Levi, Professor of Criminology, Cardiff University, UK
  • Mike Hulett, Head of Operations, National Cyber Crime Unit, National Crime Agency, UK
  • Nicolas Christin, Assistant Research Professor of Electrical and Computer Engineering, Carnegie Mellon University, USA
  • Richard Clayton, Director, Cambridge Cloud Cybercrime Centre, University of Cambridge, UK
  • Ross Anderson, Professor of Security Engineering, Computer Laboratory, University of Cambridge, UK
  • Tyler Moore, Tandy Assistant Professor of Cyber Security & Information Assurance, University of Tulsa, USA

They will present various aspects of cybercrime from the point of view of criminology, security economics, cybersecurity governance and policing.

This one-day event, to be held in the Faculty of Law, University of Cambridge, will follow immediately after (and will be in the same venue as) the “Ninth International Conference on Evidence Based Policing” organised by the Institute of Criminology, which runs on the 12th and 13th July 2016.

For more details see here.