Category Archives: Academic papers

Security Engineering: Third Edition

I’m writing a third edition of my best-selling book Security Engineering. The chapters will be available online for review and feedback as I write them.

Today I put online a chapter on Who is the Opponent, which draws together what we learned from Snowden and others about the capabilities of state actors with what we’ve learned about cybercrime actors from running the Cambridge Cybercrime Centre. Isn’t it odd that almost six years after Snowden, nobody’s tried to pull together what we learned into a coherent summary?

There’s also a chapter on Surveillance or Privacy which looks at policy. What’s the privacy landscape now, and what might we expect from the tussles over data retention, government backdoors and censorship more generally?

There’s also a preface to the third edition.

As the chapters come out for review, they will appear on my book page, so you can give me comments and feedback as I write them. This collaborative authorship approach is inspired by the late David MacKay. I’d suggest you bookmark my book page and come back every couple of weeks for the latest instalment!

Does security advice discriminate against women?

Security systems are often designed by geeks who assume that the users will also be geeks, and the same goes for the advice that users are given when things start to go wrong. For example, banks reacted to the growth of phishing in 2006 by advising their customers to parse URLs. That’s fine for geeks but most people don’t do that, and in particular most women don’t do that. So in the second edition of my Security Engineering book, I asked (in chapter 2, section 2.3.4, pp 27-28): “Is it unlawful sex discrimination for a bank to expect its customers to detect phishing attacks by parsing URLs?”
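To make concrete what “parsing URLs” actually demands of a customer, here is a minimal sketch in Python of the mechanical check involved: extract the hostname and decide whether it belongs to the bank’s real domain. The bank domain and the example URLs are hypothetical, and real phishing detection is much messier (homoglyphs, punycode, open redirectors), which is rather the point.

```python
# A minimal sketch of the check banks expected customers to do in their heads.
# The bank domain and example URLs are hypothetical.
from urllib.parse import urlparse

BANK_DOMAIN = "examplebank.com"  # the bank's real registrable domain (assumed)

def looks_like_my_bank(url: str) -> bool:
    """True only if the URL's host is the bank's domain or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    return host == BANK_DOMAIN or host.endswith("." + BANK_DOMAIN)

print(looks_like_my_bank("https://www.examplebank.com/login"))              # True
print(looks_like_my_bank("https://www.examplebank.com.secure-login.ru/"))   # False - deceptive prefix
print(looks_like_my_bank("https://examplebank.secure-login.ru/"))           # False
```

Even this simplified check requires knowing which part of a hostname is the registrable domain and reading the URL right to left, which is precisely the skill the advice assumed everyone had.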

Tyler Moore and I then ran the experiment, and Tyler presented the results at the first Workshop on Security and Human Behaviour that June. We recruited 132 volunteers between the ages of 18 and 30 (77 female, 55 male) and tested them to see whether they could spot phishing websites, as well as for systematising quotient (SQ) and empathising quotient (EQ). These measures were developed by Simon Baron-Cohen in his work on Asperger’s; most men have SQ > EQ while for most women EQ > SQ. The ability to parse URLs is correlated with SQ-EQ and independently with gender. A significant minority of women did badly at URL parsing. We didn’t get round to publishing the full paper at the time, but we’ve mentioned the results in various talks and lectures.

We have now uploaded the original paper, How brain type influences online safety. Given the growing interest in gender HCI, we hope that our study might spur people to do research in the gender aspects of security as well. It certainly seems like an open goal!

Could a gaming app steal your bank PIN?

Have you ever wondered whether one app on your phone could spy on what you’re typing into another? We have. Five years ago we showed that you could use the camera to measure the phone’s motion during typing and use that to recover PINs. Then three years ago we showed that you could use interrupt timing to recover text entered using gesture typing. So what other attacks are possible?

Our latest paper shows that one of the apps on the phone can simply record the sound from its microphones and work out from that what you’ve been typing.

Your phone’s screen can be thought of as a drum – a membrane supported at the edges. It makes slightly different sounds depending on where you tap it. Modern phones and tablets typically have two microphones, so you can also measure the time difference of arrival of the sounds. The upshot is that an app can recover PIN codes and short words given a few measurements, and in some cases even long and complex words. We evaluate the new attack against previous ones and show that the accuracy is sometimes even better, especially against larger devices such as tablets.
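The paper’s classifier is more involved, but the core signal-processing idea can be sketched as follows: estimate the time difference of arrival of a tap between the two microphone channels by cross-correlating them. The sample rate and the synthetic “tap” below are assumptions for illustration only, not data from the study.

```python
# Sketch of the core idea only (not the paper's actual pipeline): estimate the
# time difference of arrival (TDoA) of a tap between two microphone channels
# by cross-correlation. The signals and sample rate here are synthetic.
import numpy as np

def estimate_tdoa(mic_a: np.ndarray, mic_b: np.ndarray, sample_rate: int) -> float:
    """Estimated delay (in seconds) of the tap at mic_b relative to mic_a."""
    a = mic_a - mic_a.mean()
    b = mic_b - mic_b.mean()
    corr = np.correlate(b, a, mode="full")       # cross-correlation over all lags
    lag = int(np.argmax(corr)) - (len(a) - 1)    # lag (in samples) at the peak
    return lag / sample_rate

# Synthetic example: the same decaying "ping" arrives 5 samples later at mic_b.
rate = 44_100
tap = np.exp(-np.arange(200) / 30.0) * np.sin(np.arange(200))
mic_a = np.concatenate([np.zeros(100), tap, np.zeros(100)])
mic_b = np.concatenate([np.zeros(105), tap, np.zeros(95)])
print(estimate_tdoa(mic_a, mic_b, rate))   # ~ 5 / 44100 s, i.e. about 0.11 ms
```

With two microphones a few centimetres apart, sub-millisecond differences like this, together with the spectral differences caused by the “drum”, are what make taps at different screen positions distinguishable.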

This paper is based on Ilia Shumailov’s MPhil thesis project.

Struck by a Thunderbolt

At the Network and Distributed Systems Security Symposium in San Diego today we’re presenting Thunderclap, which describes a set of new vulnerabilities involving the security of computer peripherals and the open-source research platform used to discover them. This is joint work with Colin Rothwell, Brett Gutstein, Allison Pearce, Peter Neumann, Simon Moore and Robert Watson.

We look at the security of input/output devices that use the Thunderbolt interface, which is available via USB-C ports in many modern laptops. Our work also covers PCI Express (PCIe) peripherals which are found in desktops and servers.

Such ports offer very privileged, low-level, direct memory access (DMA), which gives peripherals much more privilege than regular USB devices. If no defences are used on the host, an attacker has unrestricted memory access, and can completely take control of a target computer: they can steal passwords, banking logins, encryption keys, browser sessions and private files, and they can also inject malicious software that can run anywhere in the system.

We studied the defences of existing systems in the face of malicious DMA-enabled peripheral devices and found them to be very weak.

The primary defence is a component called the Input-Output Memory Management Unit (IOMMU), which, in principle, can allow devices to access only the memory needed to do their job and nothing else. However, we found existing operating systems do not use the IOMMU effectively.

To begin with, most systems don’t enable the IOMMU at all. Windows 7, Windows 8, and Windows 10 Home and Pro don’t support the IOMMU. Windows 10 Enterprise can optionally use it, but in a very limited way that leaves most of the system undefended. Linux and FreeBSD do support the IOMMU, but this support is not enabled by default in most distributions. MacOS is the only OS we studied that uses the IOMMU out of the box.
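For readers who want to check their own Linux machine, here is a minimal sketch; it relies on one convenient indicator rather than being an exhaustive test. When the kernel is actually using an IOMMU, devices are assigned to groups under /sys/kernel/iommu_groups, so an empty or missing directory suggests DMA remapping is not in effect.

```python
# Minimal sketch (Linux only): when the kernel uses an IOMMU, devices appear
# in groups under /sys/kernel/iommu_groups; an empty or missing directory
# suggests DMA remapping is not in effect. This is a heuristic, not a proof.
import os

def iommu_groups_present() -> bool:
    try:
        return len(os.listdir("/sys/kernel/iommu_groups")) > 0
    except FileNotFoundError:
        return False

if __name__ == "__main__":
    print("IOMMU groups present:", iommu_groups_present())
```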

This state of affairs is not good, and our investigations revealed significant further vulnerabilities even when the IOMMU is enabled.

We built a fake network card that is capable of interacting with the operating system in the same way as a real one, including announcing itself correctly, causing drivers to attach, and sending and receiving network packets. To do this, we extracted a software model of an Intel E1000 from the QEMU full-system emulator and ran it on an FPGA. Because this is a software model, we can easily add malicious behaviour to find and exploit vulnerabilities.

We found the attack surface available to a network card was much richer and more nuanced than was previously thought. By examining the memory it was given access to while sending and receiving packets, our device was able to read traffic from networks that it wasn’t supposed to. This included VPN plaintext and traffic from Unix domain sockets that should never leave the machine.

On MacOS and FreeBSD, our network card was able to start arbitrary programs as the system administrator, and on Linux it had access to sensitive kernel data structures. Additionally, on MacOS devices are not protected from one another, so a network card is allowed to read the display contents and keystrokes from a USB keyboard.

Worst of all, on Linux we could completely bypass the enabled IOMMU, simply by setting a few option fields in the messages that our malicious network card sent.

Such attacks are very plausible in practice. The combination of power, video, and peripheral-device DMA over Thunderbolt 3 ports facilitates the creation of malicious charging stations or displays that function correctly but simultaneously take control of connected machines.

We’ve been collaborating with vendors about these vulnerabilities since 2016, and a number of mitigations have been shipped. We have also been working with vendors, helping them to use our Thunderclap tools to explore this vulnerability space and audit their systems for problems.

Apple fixed the specific vulnerability we used to get administrator access in macOS 10.12.4 in 2016, although the more general scope of such attacks remains relevant. More recently, new laptops that ship with Windows 10 version 1803 or later have a feature called Kernel DMA Protection for Thunderbolt 3, which at least enables the IOMMU for Thunderbolt devices (but not PCI Express ones). Since this feature requires firmware support, older laptops that shipped before 1803 remain vulnerable. Recently, Intel committed patches to Linux to enable the IOMMU for Thunderbolt devices and to disable the ATS feature that allowed our IOMMU bypass. These are part of Linux kernel 5.0, which is currently in the release process.
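On Linux, the kernel also exposes the Thunderbolt security level through sysfs, which gives a quick (if partial) view of how a machine treats newly attached Thunderbolt devices; the sketch below simply reads it out. Whether these files exist at all depends on kernel version and hardware, so treat this as an illustration rather than a definitive audit.

```python
# Sketch (Linux only): print the Thunderbolt security level for each domain,
# as exposed by the kernel's Thunderbolt sysfs interface. "none" means devices
# are connected automatically; "user" and "secure" require authorisation.
import glob

for path in glob.glob("/sys/bus/thunderbolt/devices/domain*/security"):
    with open(path) as f:
        print(path, "->", f.read().strip())
```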

One major laptop vendor told us they would like to study these vulnerabilities in more detail before adding Thunderbolt to new product lines.

More generally, since this is a new space of many vulnerabilities, rather than a specific example, we believe all operating systems are vulnerable to similar attacks, and that more substantial design changes will be needed to remedy these problems. We noticed similarities between the vulnerability surface available to malicious peripherals in the face of IOMMU protections and that of the kernel system call interface, long a source of operating system vulnerabilities. The kernel system call interface has been subjected to much scrutiny, security analysis, and code hardening over the years, which must now be applied to the interface between peripherals and the IOMMU.

As well as asking vendors to improve the security of their systems, we advise users to update their systems and to be cautious about attaching unfamiliar USB-C devices to their machines – especially those in public places.

We have placed more background on our work and a list of FAQs on our website, thunderclap.io. There, we have also open sourced the Thunderclap research platform to allow other researchers to reproduce and extend our work, and to aid vendors in performing security evaluation of their products.

Thunderclap: Exploring Vulnerabilities in Operating System IOMMU Protection via DMA from Untrustworthy Peripherals, by A. Theodore Markettos, Colin Rothwell, Brett F. Gutstein, Allison Pearce, Peter G. Neumann, Simon W. Moore and Robert N. M. Watson. Proceedings of the Network and Distributed Systems Security Symposium (NDSS), 24–27 February 2019, San Diego, USA.

Visualizing Diffusion of Stolen Bitcoins

In previous work we have shown how stolen bitcoins can be traced if we simply apply existing law. If bitcoins are “mixed”, that is to say if multiple actors pool together their coins in one transaction to obfuscate which coins belong to whom, then the precedent in Clayton’s Case says that FIFO ordering must be used to track which fragments of coin are tainted. If the first input satoshi (the atomic unit of Bitcoin) was stolen, then the first output satoshi should be marked stolen, and so on.
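As a concrete illustration of the FIFO rule, here is a minimal sketch of taint propagation across a single transaction. It ignores transaction fees, so it assumes the inputs and outputs sum to the same amount, and the values are made up.

```python
# Minimal sketch of FIFO taint propagation across one transaction, as described
# above: inputs are consumed in order, outputs are filled in order, and taint
# follows the satoshis positionally. Fees are ignored, so inputs and outputs
# are assumed to sum to the same amount; all values here are made up.
from typing import List, Tuple

def fifo_taint(inputs: List[Tuple[int, bool]],      # (satoshis, is_tainted) per input
               outputs: List[int]                   # satoshis per output
               ) -> List[List[Tuple[int, bool]]]:
    """For each output, return a list of (satoshi_count, tainted) runs."""
    result: List[List[Tuple[int, bool]]] = [[] for _ in outputs]
    i = 0                                            # current input index
    remaining = inputs[0][0] if inputs else 0        # satoshis left in current input
    for out_idx, out_value in enumerate(outputs):
        need = out_value
        while need > 0:
            take = min(need, remaining)
            result[out_idx].append((take, inputs[i][1]))
            need -= take
            remaining -= take
            if remaining == 0:
                if i + 1 < len(inputs):
                    i += 1
                    remaining = inputs[i][0]
                else:
                    break                            # outputs exceed inputs; out of scope here
    return result

# First input (60 satoshis) is stolen, second (40) is clean; two outputs of 50.
# FIFO says: first output wholly tainted, second output is 10 tainted + 40 clean.
print(fifo_taint([(60, True), (40, False)], [50, 50]))
# -> [[(50, True)], [(10, True), (40, False)]]
```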

This led us to design Taintchain, a system for tracing stolen coins through the Bitcoin network. However, we quickly discovered a problem: while it was now possible to trace coins, it was harder to spot patterns. A decent way of visualizing the data is important for making sense of the patterns of splits and joins that are used to obfuscate bitcoin transactions. We therefore designed a visualization tool that interactively expands the taint graph based on user input. We first devised a way to represent transactions and their associated taints as a temporal graph. Then, given the sheer number of hops that some satoshis go through and the high outdegree of some transactions, we moved to generating the graph on the fly, subject to limits on maximum hop length and outdegree.
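The kind of bounded, on-the-fly expansion described above can be sketched roughly as follows: a breadth-first walk forward from a seed transaction that stops at a maximum hop depth and truncates the fan-out of high-outdegree transactions. The adjacency dictionary and the limits are illustrative stand-ins, not the tool’s actual data model.

```python
# Sketch of bounded on-the-fly expansion: breadth-first walk forward from a
# seed transaction, stopping at max_hops and truncating nodes whose outdegree
# exceeds max_outdegree. The adjacency dict stands in for whatever backend
# supplies "transactions spending this one's outputs".
from collections import deque
from typing import Dict, List, Set, Tuple

def expand_taint_graph(successors: Dict[str, List[str]], seed: str,
                       max_hops: int = 4, max_outdegree: int = 8) -> Set[Tuple[str, str]]:
    """Return the set of edges (tx, next_tx) reached within the limits."""
    edges: Set[Tuple[str, str]] = set()
    seen = {seed}
    queue = deque([(seed, 0)])
    while queue:
        tx, depth = queue.popleft()
        if depth >= max_hops:
            continue
        for child in successors.get(tx, [])[:max_outdegree]:   # truncate fan-out for display
            edges.add((tx, child))
            if child not in seen:
                seen.add(child)
                queue.append((child, depth + 1))
    return edges

# Tiny illustrative graph: tx0 fans out to three transactions, one of which splits again.
graph = {"tx0": ["tx1", "tx2", "tx3"], "tx2": ["tx4", "tx5"]}
print(sorted(expand_taint_graph(graph, "tx0", max_hops=2, max_outdegree=2)))
# -> [('tx0', 'tx1'), ('tx0', 'tx2'), ('tx2', 'tx4'), ('tx2', 'tx5')]
```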

Using this tool, we were able to spot many of the common tricks used by bitcoin launderers. A summary of our findings can be found in the short paper here.

Symposium on Post-Bitcoin Cryptocurrencies

I am at the Symposium on Post-Bitcoin Cryptocurrencies in Vienna and will try to liveblog the talks in follow-ups to this post.

The introduction was by Bernhard Haslhofer of AIT, who maintains the graphsense.info toolkit and runs the Titanium project on bitcoin forensics jointly with Rainer Boehme of Innsbruck. Rainer then presented an economic analysis arguing that criminal transactions were pretty well the only logical app for bitcoin as it’s permissionless and trustless; if you have access to the courts then there are better ways of doing things. However in the post-bitcoin world of ICOs and smart contracts, it’s not just the anti-money-laundering agencies who need to understand cryptocurrency but the securities regulators and the tax collectors. Yet there is a real policy tension. Governments hype blockchains; Austria uses them to auction sovereign bonds. Yet the only way in for the citizen is through the swamp. How can the swamp be drained?

Privacy for Tigers

As mobile phone masts went up across the world’s jungles, savannas and mountains, so did poaching. Wildlife crime syndicates can not only coordinate better but can mine growing public data sets, often of geotagged images. Privacy matters for tigers, for snow leopards, for elephants and rhinos – and even for tortoises and sharks. Animal data protection laws, where they exist at all, are oblivious to these new threats, and no-one seems to have started to think seriously about information security.

So we have been doing some work on this, and presented some initial ideas via an invited talk at Usenix Security in August. A video of the talk is now online.

The most serious poaching threats involve insiders: game guards who go over to the dark side, corrupt officials, and (now) the compromise of data and tools assembled for scientific and conservation purposes. Aggregation of data makes things worse; I might not care too much about a single geotagged photo, but a corpus of thousands of such photos tells a poacher where to set his traps. Cool new AI tools for recognising individual animals can make his work even easier. So people developing systems to help in the conservation mission need to start paying attention to computer security. Compartmentation is necessary, but there are hundreds of conservancies and game reserves, many of which are mutually mistrustful; there is no central authority at Fort Meade to manage classifications and clearances. Data sharing is haphazard and poorly understood, and the limits of open data are only now starting to be recognised. What sort of policies do we need to support, and what sort of tools do we need to create?

This is joint work with Tanya Berger-Wolf of Wildbook, one of the wildlife data aggregation sites, which is currently redeveloping its core systems to incorporate and test the ideas we describe. We are also working to spread the word to both conservators and online service firms.