Category Archives: Politics

Talk in Oxford at 5pm today on the ethics and economics of privacy in a world of Big Data

Today at 5pm I’ll be giving the Bellwether Lecture at the Oxford Internet Institute. My topic is “Big Conflicts: the ethics and economics of privacy in a world of Big Data”.

I’ll be discussing a recent Nuffield Council on Bioethics report of which I was one of the authors. In it, we asked what medical ethics should look like in a world of ‘Big Data’ and pervasive genomics. It will take the law some time to catch up with what’s going on, so how should researchers behave meanwhile, so that the people whose data we use don’t get annoyed or surprised, and so that we can defend our actions if challenged? We came up with four principles, which I’ll discuss. I’ll also talk about how they might apply more generally, for example to my own field of security research.

Can we have medical privacy, cloud computing and genomics all at the same time?

Today sees the publication of a report I helped to write for the Nuffield Council on Bioethics on what happens to medical ethics in a world of cloud-based medical records and pervasive genomics.

The information we gave to our doctors in private, to help them treat us, is now collected and treated as an industrial raw material, and there has been scandal after scandal. From failures of anonymisation, through unethical sales, to the care.data catastrophe, things just seem to get worse. Where is it all going, and what must a medical data user do to behave ethically?

We put forward four principles. First, respect persons; do not treat their confidential data as if it were coal or bauxite. Second, respect established human-rights and data-protection law, rather than trying to find ways round it. Third, consult the people who’ll be affected, or who have morally relevant interests. And fourth, tell them what you’ve done – including errors and security breaches.

Our report, “The collection, linking and use of data in biomedical research and health care: ethical issues”, took over a year to write. Our working group drew members from medicine, academia, the insurance industry and the drug companies. We had lots of arguments. But it taught us a lot, and we hope it will lead to a more informed debate on some very important issues. And since medicine is the canary in the mine, we hope that the privacy lessons can be of value elsewhere – from consumer data to law enforcement and human rights.

To freeze or not to freeze

We think we may have discovered a better polygraph.

Telling truth from lies is an ancient problem; some psychologists believe that it helped drive the evolution of intelligence, as hominids who were better at cheating, or detecting cheating by others, left more offspring. Yet despite thousands of years of practice, most people are pretty bad at lie detection, and can tell lies from truth only about 55% of the time – not much better than random.
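To put that 55% figure in perspective, here is a minimal sketch (my own illustration, with made-up trial numbers, not data from any study cited here) of how weakly it separates from chance: under pure guessing, scoring 55 or better out of 100 truth/lie judgements happens almost a fifth of the time.

```python
from math import comb

# Probability of getting at least k correct out of n binary
# truth/lie judgements by flipping a coin (p = 0.5).
n, k = 100, 55
p_by_chance = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"P(>= {k}/{n} correct by guessing) = {p_by_chance:.3f}")  # about 0.18
```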

Since the 1920s, law enforcement and intelligence agencies have used the polygraph, which measures the physiological stresses that result from anxiety. This is slightly better, but not much; a skilled examiner may be able to tell truth from lies 60% of the time. It is also easy for an examiner with a preconceived view of the suspect’s innocence or guilt to use the polygraph as a prop for intimidation, and so extract supporting “evidence”. Other technologies, from EEG to fMRI, have been tried, and the best that can be said is that it’s a complicated subject. The last resort of the desperate or incompetent is torture, under which the interviewee will tell the interviewer whatever he wants to hear in order to stop the pain. The recent Feinstein committee inquiry into the use of torture by the CIA found that it was not just a stain on America’s values but also ineffective.

Sophie van der Zee decided to see if datamining people’s body movements might help. She put 90 pairs of volunteers in motion-capture suits and got them to interview each other; half the interviewees were told to lie. Her first analysis of the data looked at whether deception could be detected from mimicry (it can, but not much better than with the conventional polygraph), and served to debug the technology.

After she joined us in Cambridge we had another look at the data, and tried analysing it using a number of techniques, some suggested by Ronald Poppe. We found that total body motion is a reliable indicator of guilt, working about 75% of the time. Put simply, guilty people fidget more; and this turns out to be fairly independent of cultural background, cognitive load and anxiety – the factors that confound most other deception-detection technologies. We believe we can improve that to over 80% by analysing individual limb data, and by using effective questioning techniques (as our method detects truth slightly more dependably than lies).
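To make the idea concrete, here is a minimal sketch of the core measurement – not our actual analysis pipeline: sum the frame-to-frame displacement of every tracked joint over the interview, then compare the total against a threshold fitted on labelled interviews. The 23-joint skeleton, the frame count, the step sizes and the threshold are all illustrative assumptions.

```python
import numpy as np

def total_body_motion(positions):
    """Total distance moved by all joints over a recording.

    positions: array of shape (frames, joints, 3), joint
    coordinates in metres from a motion-capture suit.
    """
    steps = np.diff(positions, axis=0)           # per-frame displacement vectors
    return np.linalg.norm(steps, axis=2).sum()   # sum of per-joint movement magnitudes

def looks_deceptive(positions, threshold):
    """Flag an interview as deceptive if total motion exceeds a
    threshold learned from labelled truthful/deceptive interviews."""
    return total_body_motion(positions) > threshold

# Toy demonstration on synthetic data: a 'fidgety' subject moves with
# three times the step size of a calm one (23 joints, 3000 frames).
rng = np.random.default_rng(0)
calm = np.cumsum(rng.normal(0, 0.001, (3000, 23, 3)), axis=0)
fidgety = np.cumsum(rng.normal(0, 0.003, (3000, 23, 3)), axis=0)
print(total_body_motion(calm), total_body_motion(fidgety))
print(looks_deceptive(fidgety, threshold=200.0))  # True with this toy threshold
```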

Our paper is appearing at HICSS, the traditional venue for deception-detection technology. Our task for 2015 will be to redevelop this for low-cost commodity hardware and test it in a variety of environments. Of course, a guilty man can always just freeze, but that will rather give the game away; we suspect it might be quite hard to fidget deliberately at exactly the same level as you do when you’re not feeling guilty. (See also press coverage.)

Our Christmas message for troublemakers: how to do anonymity in the real world

On the 5th of December I gave a talk at a journalists’ conference on what tradecraft means in the post-Snowden world. How can a journalist, or for that matter an MP or an academic, protect a whistleblower from being identified even when MI5 and GCHQ start trying to figure out who in Whitehall you’ve been talking to? The video of my talk is now online here. There is also a TV interview I did later, which can be found here, while the other conference talks are here.

Enjoy!

Ross

Curfew tags – the gory details

In previous posts I told the story of how Britain’s curfew tagging system can fail. Some prisoners are released early provided they wear a tag to enforce a curfew, which typically means that they have to stay home from 7pm to 7am; some petty offenders get a curfew instead of a prison sentence; and some people accused of serious crimes are tagged while on bail. In dozens of cases, curfewees had been accused of tampering with their tags, but had denied doing so. In a series of these cases, colleagues and I were engaged as experts, but each time we demanded tags for testing, the prosecution was withdrawn and the case collapsed. In the most famous case, three men accused of terrorist offences were released; although one has since absconded, the other two are now free in the UK.

This year, a case finally came to trial. Our client, to whom we must refer simply as “Special Z”, was accused of tag tampering, which he denied vigorously. I was instructed as an expert along with my colleague Dr James Dean of Materials Science. Here is my expert report, together with James’s report and addendum, as well as a video of a tag being removed using much less than the amount of force required by the system specification.

The judge was not ready to set a precedent that could have thrown the UK tagging system into chaos. However, I understand our client has now been released on other grounds. Although the court did order us to hand back all the tags, and fragments of broken tags, so as to protect G4S’s intellectual property, it did not make a secrecy order on our expert reports. We publish them here in the hope that they might provide useful guidance to defendants in similar cases in the future, and to policymakers when tagging contracts come up for renewal, whether in the UK or overseas.

Spooks behaving badly

Like many in the tech world, I was appalled to see how the security and intelligence agencies’ spin doctors managed to blame Facebook for Lee Rigby’s murder. It may have been a convenient way of diverting attention from the many failings of MI5, MI6 and GCHQ documented by the Intelligence and Security Committee in its report yesterday, but it will be seriously counterproductive. So I wrote an op-ed in the Guardian.

Britain spends less on fighting online crime than Facebook does, and only about a fifth of what either Google or Microsoft spends (declaration of interest: I spent three months at Google on sabbatical in 2011, working with the click-fraud team and on the mobile wallet). The spooks’ approach reminds me of how Pfizer dealt with Viagra spam: it hired lawyers to write angry letters to Google. If they’d hired a geek who could talk to the abuse teams constructively, they’d have achieved an awful lot more.

The likely outcome of GCHQ’s posturing and MI5’s blame avoidance will be to drive tech companies to route all the agencies’ requests past their lawyers. This will lead to huge delays. GCHQ has already complained in the Telegraph that it still hasn’t got all the murderers’ Facebook traffic; this is no doubt because the US Department of Justice is sitting on a backlog of requests for mutual legal assistance, the channel through which such requests must flow. Congress won’t give the Department enough money for this, and is content to play chicken with the Obama administration over the issue. If GCHQ really cares, it could always pay the Department of Justice to clear the backlog. The fact that all the affected government departments and agencies use this issue for posturing, rather than tackling the real problems, should tell you something.

Largest ever civil government IT disaster

Last year I taught a systems course to students on the university’s Master of Public Policy (MPP) course (this is like an MBA, but for civil servants). For their project work, I divided them into teams of three or four and got them to write a case history of a public-sector IT project that went wrong.

The class prize was won by Oliver Campion-Awwad, Alexander Hayton, Leila Smith and Mark Vuaran for The National Programme for IT in the NHS – A Case History. It’s now online, not just to acknowledge their excellent work and to inspire future MPP students, but also as a resource for people interested in what goes wrong with large public-sector IT projects, and how to do better in future.

Regular readers of this blog will recall a series of posts on this topic and related ones; yet despite the huge losses the government doesn’t seem to have learned much at all.

There is more information on our MPP course here, while my teaching materials are available here. With luck, the next generation of civil servants won’t be quite as clueless.

Privacy with technology: where do we go from here?

As part of the Royal Society Summer Science Exhibition 2014, I spoke at the panel session “Privacy with technology: where do we go from here?”, along with Ross Anderson and Bashar Nuseibeh, with Jon Crowcroft as chair.

The audio recording is available and some notes from the session are below.

The session started with brief presentations from each of the panel members. Ross spoke on the economics of surveillance and in particular network effects, the topic of his paper at WEIS 2014.

Bashar discussed the difficulties of requirements engineering, as eloquently described by Billy Connolly. These challenges are particularly acute when it comes to designing for privacy requirements, especially for wearable devices with their limited ability to communicate with users.

I described issues around surveillance on the Internet, whether by governments targeting human rights workers or advertisers targeting pregnant customers. I discussed how anonymous communication tools, such as Tor, can help defend against such surveillance.


First Global Deception Conference

Global Deception Conference, Oxford, 17–19 July 2014

Conference introduction

This deception conference, part of Hostility and Violence, was organized by Inter-Disciplinary.Net, which runs about 75 conferences a year and was set up by Rob Fisher in 1999 to facilitate international dialogue between disciplines. Conferences are organized on a range of topics, such as gaming, empathy, cyber cultures, violence, and communication and conflict. And it is not just the range of conferences that is interdisciplinary; so is each conference itself. During our deception conference we approached deception from very different angles: from optical illusions in art and architecture, via literary hoaxes, fiction and spy novels, to the role of the media in creating false beliefs in society, ending with a more experimental approach to detecting deception. Even a magic trick was part of the (informal) program, and somehow I ended up being the magician’s assistant. You can find my notes and abstracts below.

Finally, if you also have an interest in experimental deception research with high practical applicability, then we have good news. Aldert Vrij, Ross Anderson and I are hosting a deception conference to bring together deception researchers and law-enforcement people from all over the world. This event will take place at Cambridge University on August 22–24, 2015.

Session 1 – Hoaxes

John Laurence Busch: Deceit without, deceit within: The British Government behavior in the secret race to claim steam-powered superiority at sea. Lord Liverpool became Prime Minister in 1812 and wanted to catch up with the Americans on steam-powered ships. The problem, however, was that the Royal Navy did not know how to build such vessels, so in 1820 it joined forces with the British Post, which wanted steam-powered boats to deliver mail to Ireland more quickly. The Post was glad of the Navy’s collaboration, but the Navy was being deceptive: it concealed from the Post, the public and other countries that it did not know how to build these vessels and was hoping to learn from the project. The deception succeeded and, importantly, it also masked from the French and the Americans that the British Navy was working on steam vessels to catch up with the US. So the Navy was hiding something questionable (military activity) behind something innocent (the mail): a deceptive public face.

Catelijne Coopmans & Brian Rappert: Revealing deception and its discontents: Scrutinizing belief and skepticism about the moon landing. The moon landing in the 60s is a possible deceptive situation in which the stakes are high and is of high symbolic value. A 2001 documentary by Fox “Conspiracy theory: Did we land on the moon or not?” The documentary bases their suspicions mainly on photographic and visual evidence, such as showing shadows where they shouldn’t be, a “c” shape on a stone, a flag moving in a breeze and pictures with exactly the same background but with different foregrounds. As a response, several people have explained these inconsistencies (e.g., the C was a hair). The current authors focus more on the paradoxes that surround and maybe even fuel these conspiracy theories, such as disclosure vs. non-disclosure, secrecy that fuels suspicion. Like the US governments secrecy around Area 51. Can you trust and at the same time not trust the visual proof of the moan landing presented by NASA? Although the quality of the pictures was really bad, the framing was really well done. Apollo 11 tried to debunk this conspiracy theory by showing a picture of the flag currently still standing on the moon. But then, that could be photoshopped…

Discussion: how can you trust a visual image, especially one used to prove something, when we live in a world where technology makes it possible to fake anything to a high standard?

EMV: Why Payment Systems Fail

In the latest edition of Communications of the ACM, Ross Anderson and I have an article in the Inside Risks column: “EMV: Why Payment Systems Fail” (DOI 10.1145/2602321).

Now that US banks are deploying credit and debit cards with chips supporting the EMV protocol, our article explores what lessons the US should learn from the UK experience of having chip cards since 2006. We address questions like whether EMV would have prevented the Target data breach (it wouldn’t have), whether Chip and PIN is safer for customers than Chip and Signature (it isn’t), whether EMV cards can be cloned (in some cases, they can) and whether EMV will protect against online fraud (it won’t).

While the EMV specification is the same across the world, the way each country uses it varies substantially. Even individual banks within a country may make different implementation choices which have an impact on security. The US will prove to be an especially interesting case study because some banks will be choosing Chip and PIN (as the UK has done) while others will choose Chip and Signature (as Singapore did). The US will thus act as a natural experiment on the question of whether Chip and PIN or Chip and Signature is better, and from whose perspective.

The US is also distinctive in that the major tussle over payment card security is over the “interchange” fees paid by merchants to the banks which issue the cards. Interchange fees are about an order of magnitude higher than losses due to fraud, so while security is one consideration in choosing between sets of EMV features, the question of who pays how much in fees is a more important factor (even if the decision is later claimed to be justified by security). We’re already seeing the results of this fight in the courts and through legislation.

EMV is coming to the US, so it is important that banks, customers, merchants and regulators know the likely consequences and how to manage the risks, learning from the lessons of the UK and elsewhere. Discussion of these and further issues can be found in our article.