Category Archives: Academic papers

Is this research ethical?

The Economist features face recognition on its front page, reporting that deep neural networks can now tell whether you’re straight or gay better than humans can, just by looking at your face. The research they cite is a preprint, available here.

Its authors, Kosinski and Wang, downloaded thousands of photos from a dating site, ran them through a standard feature-extraction program, then classified gay vs straight using a standard statistical classifier, which they found could tell the men seeking men from the men seeking women. My students pretty well instantly called this out as selection bias; if gay men consider boyish faces to be cuter, then they will upload their most boyish photo. The paper’s authors suggest their finding may support a theory that sexuality is influenced by fetal testosterone levels, but when you don’t control for such biases your results may say more about social norms than about phenotypes.
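
To make concrete what “standard feature extraction plus a standard statistical classifier” means in practice, here is a minimal sketch of that kind of pipeline in Python. It is not the authors’ code: extract_face_embedding is a placeholder for whatever pretrained face model one might use, and the classifier is an ordinary logistic regression fitted with scikit-learn.

```python
# Minimal sketch of a "face features + standard classifier" pipeline.
# extract_face_embedding() is a placeholder, NOT the paper's actual extractor.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def extract_face_embedding(photo_path: str) -> np.ndarray:
    """Placeholder: run a pretrained face model and return its feature vector."""
    raise NotImplementedError("plug in a real feature extractor here")

def evaluate(photo_paths, labels):
    # Stack one embedding per photo into a feature matrix.
    X = np.vstack([extract_face_embedding(p) for p in photo_paths])
    y = np.asarray(labels)  # e.g. 1 = men seeking men, 0 = men seeking women
    clf = LogisticRegression(max_iter=1000)
    # Cross-validated AUC of a plain linear classifier on the embeddings.
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
```

Note that a high score from such a pipeline tells you only that the two groups of photos are separable; it cannot by itself distinguish a physiological signal from differences in which photos people choose to upload.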

Quite apart from the scientific value of the research, which is perhaps best assessed by specialists, I’m concerned with the ethics and privacy aspects. I am surprised that the paper doesn’t report having been through ethical review; the authors consider that photos on a dating website are public information and appear to assume that privacy issues simply do not arise.

Yet UK courts decided, in Campbell v Mirror, that privacy could be violated even by photos taken on the public street, and European courts have come to similar conclusions in I v Finland and elsewhere. For example, a Catholic woman is entitled to object to the use of her medical record in research on abortifacients and contraceptives even if the proposed use is fully anonymised and presents no privacy risk whatsoever. The dating site users would be similarly entitled to object to their photos being used in research to which they might have an ethical objection, even if they could not be identified from their photos. There are surely going to be people who object to research in any nature vs nurture debate, especially on a charged topic such as sexuality. And the whole point of the Economist’s coverage is that face-recognition technology is now good enough to work at population scale.

What do LBT readers think?

History of the Crypto Wars in Britain

Back in March I gave an invited talk to the Cambridge University Ethics in Mathematics Society on the Crypto Wars. They have just put the video online here.

We spent much of the 1990s pushing back against attempts by the intelligence agencies to seize control of cryptography. From the Clipper Chip through the regulation of trusted third parties to export control, the agencies tried one trick after another to make us all less secure online, claiming that thanks to cryptography the world of intelligence was “going dark”. Quite the opposite was true; with communications moving online, with people starting to carry mobile phones everywhere, and with our communications and traffic data mostly handled by big firms who respond to warrants, law enforcement has never had it so good. Twenty years ago it cost over a thousand pounds a day to follow a suspect around, and weeks of work to map his contacts; Ed Snowden told us how nowadays an officer can get your location history with one click and your address book with another. In fact, searches through the contact patterns of whole populations are now routine.

The checks and balances that we thought had been built into the RIP Act in 2000 after all our lobbying during the 1990s turned out to be ineffective. GCHQ simply broke the law and, after Snowden exposed them, Parliament passed the IP Act to declare that what they did was all right now. The Act allows the Home Secretary to give secret orders to tech companies to do anything they physically can to facilitate surveillance, thereby delighting our foreign competitors. And Brexit means the government thinks it can ignore the European Court of Justice, which has already ruled against some of the Act’s provisions. (Or perhaps Theresa May chose a hard Brexit because she doesn’t want the pesky court in the way.)

Yet we now see the Home Secretary, along with law enforcement officials on both sides of the Atlantic, repeating the old nonsense that decent people don’t need privacy. Why doesn’t she just sign the technical capability notices she deems necessary and serve them?

In these fraught times it might be useful to recall how we got here. My talk to the Ethics in Mathematics Society was a personal memoir; there are many links on my web page to relevant documents.

AlphaBay and Hansa Market takedowns

Yesterday the FBI announced the takedown of the AlphaBay marketplace, a hidden service facilitating the sale of drugs, as well as other illicit products and services. The takedown had actually occurred weeks earlier, and had been staged to look like an exit scam, in which the operators take off with the money.

What was particularly interesting about the FBI’s takedown was that it was coordinated with the activities of the Dutch police, who had previously taken over the Hansa Market, another leading black market. As the investigators were by then controlling this marketplace, they were able to monitor the activities of traders who had been using AlphaBay and then moved to Hansa Market.

I’ve been interested in online black markets for some time, particularly those that relate to the stolen data economy. In fact, last year Professor Thomas Holt and I published a paper outlining a number of intervention approaches, including disrupting the actual marketplaces where trade takes place.

Among our numerous suggestions are three that have been used, in combination, by this international police effort. We suggest that law enforcement promote distrust, which they did by making AlphaBay appear to have been an exit scam. We also suggest that law enforcement take over and take down marketplaces. Neither of these police approaches is new, and we point to previous examples where this has happened. In our conclusion, we stated:

Multiple interventions coordinated across different guardians, nationally and internationally, incorporating different bodies (investigative, regulatory, strategic, non-government organisations and the private sector) that have ownership of the crime prevention problem may reduce duplication of effort, as well as provide a more systematic approach with the greatest disruption effect.

The Hansa Market and AlphaBay approach demonstrates how this can be achieved. By co-ordinating their approaches and working together, the two police forces are likely to have achieved a much greater disruptive effect than if they had acted alone. It’s likely we’ll see arrests of traders and further disruption to the online drug trade.

Work by Soska and Christin found that after the Silk Road takedown, more online black markets emerged and evolved. I think this evolution will continue, but perhaps marketplace administrators will have to work harder to earn the trust of their users.

Testing the usability of offline mobile payments

Last September we spent some time in Nairobi figuring out whether we could make offline phone payments usable. Phone payments have greatly improved the lives of millions of poor people in countries like Kenya and Bangladesh, who previously didn’t have bank accounts at all but who can now send and receive money using their phones. That’s great for the 80% who have mobile phone coverage, but what about the others?

Last year I described how we designed and built a prototype system to support offline payments, with the help of a grant from the Bill and Melinda Gates Foundation, and took it to Africa to test it. Offline payments require both the sender and the receiver to enter some extra digits to ensure that the payer and the payee agree on who’s paying whom how much. We worked as hard as we could to minimise the number of digits and to integrate them into the familiar transaction flow. Would this be good enough?
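
To give a feel for what those extra digits are doing, here is a minimal sketch, not the DigiTally design itself: each side derives a short decimal code from a shared key over the payer, payee and amount, and the payee accepts the payment only if the digits read out by the payer match. The key, phone numbers and digit count below are illustrative assumptions.

```python
# Minimal sketch of short confirmation digits that bind payer, payee and amount
# (an illustration only, not the DigiTally protocol).
import hashlib
import hmac

def confirmation_digits(shared_key: bytes, payer: str, payee: str,
                        amount_cents: int, n_digits: int = 4) -> str:
    """Derive n_digits decimal digits that both phones can compute offline."""
    msg = f"{payer}|{payee}|{amount_cents}".encode()
    mac = hmac.new(shared_key, msg, hashlib.sha256).digest()
    # Truncate the MAC and reduce it to the desired number of digits.
    code = int.from_bytes(mac[:8], "big") % (10 ** n_digits)
    return f"{code:0{n_digits}d}"

# Both handsets compute the same four digits for the same transaction.
print(confirmation_digits(b"demo-shared-key", "+254700000001",
                          "+254700000002", 5000))
```

Every digit added makes an undetected mismatch ten times less likely, but also adds to the burden on users, which is exactly the usability trade-off the field study set out to measure.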

Our paper setting out the results was accepted to the Symposium on Usable Privacy and Security (SOUPS), the leading security usability event. This has now started and the paper’s online; the lead author, Khaled Baqer, will be presenting it tomorrow. As we noted last year, the DigiTally pilot was a success. For the data and the detailed analysis, please see our paper:

DigiTally: Piloting Offline Payments for Phones, Khaled Baqer, Ross Anderson, Jeunese Adrienne Payne, Lorna Mutegi, Joseph Sevilla, 13th Symposium on Usable Privacy & Security (SOUPS 2017), pp 131–143

Camouflage or scary monsters: deceiving others about risk

I have just been at the Cambridge Risk and Uncertainty Conference which brings together people who educate the public about risks. They include public-health doctors trying to get people to eat better and exercise more, statisticians trying to keep governments honest about crime statistics, and climatologists trying to educate us about global warming – an eclectic and interesting bunch.

Most of the people in this community see their role as dispelling ignorance, or motivating the slothful. Yet in most of the cases we discussed, the public get risk wrong because powerful interests make a serious effort to scare them about some of life’s little hazards, or to reassure them about others. When this is put to the risk communication folks in a question – whether after a talk or in the corridor – they readily admit they’re up against a torrent of misleading marketing. But they don’t see what they’re doing as adversarial, and I strongly suspect that many risk interventions are less effective as a result.

In my talk (slides) I set this out as simply and starkly as I could. We spend too much on terrorism, because both the terrorists and the governments who’re supposed to protect us from them big up the threat; we spend too little on cybercrime, because everyone from the crooks through the police and the banks to the computer industry has their own reason to talk down the threat. I mentioned recent cases such as WannaCry as examples of how institutions communicate risk in self-serving, misleading ways. I discussed our own study of browser warnings, which suggests that people at least subconsciously know that most of the warnings they see are written to benefit others rather than them; they tune out all but the most specific.

What struck me with some force when preparing my talk, though, is that there’s just nobody in academia who takes a holistic view of adversarial risk communication. Many people look at some small part of the problem, from David Rios’ game-theoretic analysis of adversarial risk through John Mueller’s studies of terrorism risk and Alessandro Acquisti’s behavioural economics of privacy, through to criminologists who study pathways into crime and psychologists who study deception. Of all these, the literature on deception might be the most relevant, though we should also look at politics, propaganda, and studies of why people stubbornly persist in their beliefs – including the excellent work by Bénabou and Tirole on the value people place on belief. Perhaps the professionals whose job comes closest to adversarial risk communication are political spin doctors. So when should we talk about new facts, and when should we talk about who’s deceiving you and why?

Given the current concern over populism and the role of social media in the Brexit and Trump votes, it might be time for a more careful cross-disciplinary study of how we can change people’s minds about risk in the presence of smart and persistent adversaries. We know, for example, that a college education makes people much less susceptible to propaganda and marketing; but what is the science behind designing interventions that are quicker and cheaper in specific circumstances?

Second Annual Cybercrime Conference

The Cambridge Cybercrime Centre is organising another one day conference on cybercrime on Thursday, 13th July 2017.

In future years we intend to focus on research that has been carried out using datasets provided by the Cybercrime Centre, but just as last year (details here, liveblog here) we have a stellar group of invited speakers who are at the forefront of their fields.

They will present various aspects of cybercrime from the point of view of criminology, policy, security economics, law and policing.

This one day event, to be held in the Faculty of Law, University of Cambridge, will follow immediately after (and will be in the same venue as) the “Tenth International Conference on Evidence Based Policing” organised by the Institute of Criminology, which runs on the 11th and 12th July 2017.

Full details (and information about booking) are here.

When safety and security become one

What happens when your car starts getting monthly upgrades like your phone and your laptop? It’s starting to happen, and the changes will be profound. We’ll be able to improve car safety as we learn from accidents, and fixing a flaw won’t mean spending billions on a recall. But if you’re writing navigation code today that will go in the 2020 Land Rover, how will you be able to ship safety and security patches in 2030? In 2040? In 2050? At present we struggle to keep software patched for three years; we have no idea how to do it for 30.

Our latest paper reports a project that Éireann Leverett, Richard Clayton and I undertook for the European Commission into what happens to safety in this brave new world. Europe is the world’s lead safety regulator for about a dozen industry sectors, of which we studied three: road transport, medical devices and the electricity industry.

Up till now, we’ve known how to make two kinds of fairly secure system. There’s the software in your phone or laptop which is complex and exposed to online attack, so has to be patched regularly as vulnerabilities are discovered. It’s typically abandoned after a few years as patching too many versions of software costs too much. The other kind is the software in safety-critical machinery which has tended to be stable, simple and thoroughly tested, and not exposed to the big bad Internet. As these two worlds collide, there will be some rather large waves.

Regulators who only thought in terms of safety will have to start thinking of security too. Safety engineers will have to learn adversarial thinking. Security engineers will have to think much more about ease of safe use. Educators will have to start teaching these subjects together. (I just expanded my introductory course on software engineering into one on software and security engineering.) And the policy debate will change too; people might vote for the FBI to have a golden master key to unlock your iPhone and read your private messages, but they might be less likely to vote them a master key to take over your car or your pacemaker.

Researchers and software developers will have to think seriously about how we can keep on patching the software in durable goods such as vehicles for thirty or forty years. It’s not acceptable to recycle cars after seven years, as greedy carmakers might hope; the embedded carbon cost of a car is about equal to its lifetime fuel burn, and reducing average mileage from 200,000 to 70,000 would treble the car industry’s CO2 emissions. So we’re going to have to learn how to make software sustainable. How do we do that?
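
As a back-of-envelope check on that claim (my own arithmetic, not a figure from the report): if a car’s embedded manufacturing carbon roughly equals its lifetime fuel burn, then cutting its life from 200,000 miles to 70,000 means building nearly three times as many cars to deliver the same mileage.

```python
# Rough arithmetic behind the carbon argument above (illustrative assumptions).
miles_long, miles_short = 200_000, 70_000      # lifetime mileage per car
cars_ratio = miles_long / miles_short          # cars built for the same total mileage
print(f"cars needed: x{cars_ratio:.1f}")       # ~2.9x, so manufacturing CO2 nearly trebles

fuel = miles_long                              # fuel CO2 for 200,000 miles, 1 unit per mile
embedded = miles_long                          # assume embedded CO2 equals lifetime fuel burn
total_long = embedded + fuel                   # one long-lived car
total_short = cars_ratio * embedded + fuel     # ~2.9 short-lived cars, same mileage
print(f"total CO2: x{total_short / total_long:.2f}")  # ~1.9x overall
```

On these assumptions the industry’s manufacturing emissions nearly treble, and total emissions for the same mileage roughly double.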

Our paper is here; there’s a short video here and a longer video here. The full report is available from the EU here.