Category Archives: Security economics

Social-science angles of security

What Goes Around Comes Around

What Goes Around Comes Around is a chapter I wrote for a book by EPIC. What are America’s long-term national policy interests (and ours for that matter) in surveillance and privacy? The election of a president with a very short-term view makes this ever more important.

While Britain was top dog in the 19th century, we gave the world both technology (steamships, railways, telegraphs) and values (the abolition of slavery and child labour, not to mention universal education). America has given us the motor car, the Internet, and a rules-based international trading system – and may have one generation left in which to make a difference.

Lessig taught us that code is law. Similarly, architecture is policy. The architecture of the Internet, and the moral norms embedded in it, will be a huge part of America’s legacy, and the network effects that dominate the information industries could give that architecture great longevity.

So if America re-engineers the Internet so that US firms can microtarget foreign customers cheaply, so that US telcos can extract rents from foreign firms via service quality, and so that the NSA can more easily spy on people in places like Pakistan and Yemen, then in 50 years’ time the Chinese will use it to manipulate, tax and snoop on Americans. In 100 years’ time it might be India in pole position, and in 200 years the United States of Africa.

My book chapter explores this topic. What do the architecture of the Internet, and the network effects of the information industries, mean for politics in the longer term, and for human rights? Although the chapter appeared in 2015, I forgot to put it online at the time. So here it is now.

Is the City force corrupt, or just clueless?

This week brought an announcement from a banking association that “identity fraud” is soaring to new levels, with 89,000 cases reported in the first six months of 2017 and 56% of all fraud reported by its members now classed as “identity fraud”.

So what is “identity fraud”? The announcement helpfully clarifies the concept:

“The vast majority of identity fraud happens when a fraudster pretends to be an innocent individual to buy a product or take out a loan in their name. Often victims do not even realise that they have been targeted until a bill arrives for something they did not buy or they experience problems with their credit rating. To carry out this kind of fraud successfully, fraudsters need access to their victim’s personal information such as name, date of birth, address, their bank and who they hold accounts with. Fraudsters get hold of this in a variety of ways, from stealing mail through to hacking; obtaining data on the ‘dark web’; exploiting personal information on social media, or through ‘social engineering’ where innocent parties are persuaded to give up personal information to someone pretending to be from their bank, the police or a trusted retailer.”

Now back when I worked in banking, if someone went to Barclays, pretended to be me, borrowed £10,000 and legged it, that was “impersonation”, and it was the bank’s money that had been stolen, not my identity. How did things change?

The members of this association are banks and credit card issuers. In their narrative, those impersonated are treated as targets, when the targets are actually those banks on whom the impersonation is practised. This is a precursor to refusing bank customers a “remedy” for “their loss” because “they failed to protect themselves.”
Now “dishonestly making a false representation” is an offence under s2 Fraud Act 2006. Yet what is the police response?

The Head of the City of London Police’s Economic Crime Directorate does not see the banks’ narrative as dishonest. Instead he goes along with it: “It has become normal for people to publish personal details about themselves on social media and on other online platforms which makes it easier than ever for a fraudster to steal someone’s identity.” He continues: “Be careful who you give your information to, always consider whether it is necessary to part with those details.” This is reinforced with a link to a police website with supposedly scary statistics: 55% of people use open public wifi and 40% of people don’t have antivirus software (like many security researchers, I’m guilty on both counts). This police website has a quote from the Head’s own boss, a Commander who is the National Police Coordinator for Economic Crime.

How are we to rate their conduct? Given that the costs of the City force’s Dedicated Card and Payment Crime Unit are borne by the banks, perhaps they feel obliged to sing from the banks’ hymn sheet. Just as the Macpherson report criticised the Met for being institutionally racist, we might perhaps describe the City force as institutionally corrupt. There is a wide literature on regulatory capture, and many other examples of regulators keen to do the banks’ bidding. And it’s not just the City force. There are disgraceful examples of the Metropolitan Police Commissioner and GCHQ endorsing the banks’ false narrative. However people are starting to notice, including the National Audit Office.

Or perhaps the police are just clueless?

History of the Crypto Wars in Britain

Back in March I gave an invited talk to the Cambridge University Ethics in Mathematics Society on the Crypto Wars. They have just put the video online here.

We spent much of the 1990s pushing back against attempts by the intelligence agencies to seize control of cryptography. From the Clipper Chip through the regulation of trusted third parties to export control, the agencies tried one trick after another to make us all less secure online, claiming that thanks to cryptography the world of intelligence was “going dark”. Quite the opposite was true; with communications moving online, with people starting to carry mobile phones everywhere, and with our communications and traffic data mostly handled by big firms who respond to warrants, law enforcement has never had it so good. Twenty years ago it cost over a thousand pounds a day to follow a suspect around, and weeks of work to map his contacts; Ed Snowden told us how nowadays an officer can get your location history with one click and your address book with another. In fact, searches through the contact patterns of whole populations are now routine.

The checks and balances that we thought had been built in to the RIP Act in 2000 after all our lobbying during the 1990s turned out to be ineffective. GCHQ simply broke the law and, after Snowden exposed them, Parliament passed the IP Act to declare that what they did was all right now. The Act allows the Home Secretary to give secret orders to tech companies to do anything they physically can to facilitate surveillance, thereby delighting our foreign competitors. And Brexit means the government thinks it can ignore the European Court of Justice, which has already ruled against some of the Act’s provisions. (Or perhaps Theresa May chose a hard Brexit because she doesn’t want the pesky court in the way.)

Yet we now see the Home Secretary, along with law enforcement officials on both sides of the Atlantic, repeating the old nonsense about decent people not needing privacy. Why doesn’t she just sign the technical capability notices she deems necessary and serve them?

In these fraught times it might be useful to recall how we got here. My talk to the Ethics in Mathematics Society was a personal memoir; there are many links on my web page to relevant documents.

National Audit Office confirms that police, banks, Home Office pass the buck on fraud

The National Audit Office has found as follows:

“For too long, as a low value but high volume crime, online fraud has been overlooked by government, law enforcement and industry. It is now the most commonly experienced crime in England and Wales and demands an urgent response. While the Department is not solely responsible for reducing and preventing online fraud, it is the only body that can oversee the system and lead change. The launch of the Joint Fraud Taskforce in February 2016 was a positive step, but there is still much work to be done. At this stage it is hard to judge that the response to online fraud is proportionate, efficient or effective.”

Our regular readers will recall that over ten years ago the government got the banks to agree with the police that fraud would be reported to the bank first. This ensured that the police and the government could boast of falling fraud figures, while the banks could direct such fraud investigations as did happen. This was roundly criticised by the Science and Technology Committee (here and here) but the government held firm. Over the succeeding decade, dissident criminologists started pointing out that fraud was not falling, just going online like everything else, and the online stuff was being ignored. Successive governments just didn’t want to know; for most of the period in question the Home Secretary was one Theresa May, who so impressed her party by “cutting crime” even though she’d cut 20,000 police jobs that she got a promotion.

But pigeons come home to roost eventually, and over the last two years the Office for National Statistics has been moving to more honest crime figures. The NAO report bears close study by anyone interested in cybercrime, in crime generally, and in how politicians game the crime figures. It makes clear that the Home Office doesn’t know what’s going on (or doesn’t really want to) and hopes that other people (such as banks and the IT industry) will solve the problem.

Government has made one or two token gestures such as setting up Action Fraud, and the NAO piously hopes that the latest such (the Joint Fraud Taskforce) could be beefed up to do some good.

I’m afraid that the NAO’s recommendations are less impressive. Let me give an example. The main online fraud bothering Cambridge University relates to bogus accommodation; about fifty times a year, a new employee or research student turns up to find that the apartment they rented doesn’t exist. This is an organised scam, run by crooks in Germany, that affects students elsewhere in the UK (mostly in London) and is netting £5-10m a year. The cybercrime guy in the Cambridgeshire Constabulary can’t do anything about this as only the National Crime Agency in London is allowed to talk to the German police; but he can’t talk to the NCA directly. He has to go through the Regional Organised Crime Unit in Bedford, who don’t care. The NCA would rather do sexier stuff; they seem to have planned to take over the Serious Fraud Office, as that was in the Conservative manifesto for this year’s election.

Every time we look at why some scam persists, it’s down to the institutional economics – to the way that government and the police forces have arranged their targets, their responsibilities and their reporting lines so as to make problems into somebody else’s problems. The same applies in the private sector; if you complain about fraud on your bank account the bank may simply reply that as their systems are secure, it’s your fault. If they record it at all, it may be as a fraud you attempted to commit against them. And it’s remarkable how high a proportion of people prosecuted under the Computer Misuse Act appear to have annoyed authority, for example by hacking police websites. Why do we civilians not get protected with this level of enthusiasm?

Many people have lobbied for change; LBT readers will recall numerous articles over the last ten years. Which? made a supercomplaint to the Payment Services Regulator, and got the usual bland non-reassurance. Other members of the old establishment were less courteous; the Commissioner of the Met said that fraud was the victims’ fault and GCHQ agreed. Such attitudes hit the poor and minorities the hardest.

The NAO is just as reluctant to engage. At p34 it says of the Home Office “The Department … has to influence partners to take responsibility in the absence of more formal legal or contractual levers.” But we already have the Payment Services Regulations; the FCA explained in response to the Tesco Bank hack that the banks it regulates should make fraud victims good. And it has always been the common-law position that in the absence of gross negligence a banker could not debit his customer’s account without the customer’s mandate. What’s lacking is enforcement. Nobody, from the Home Office through the FCA to the NAO, seems to want to face down the banks. Rather than insisting that they obey the law, the Home Office will spend another £500,000 on a publicity campaign, no doubt to tell us that it’s all our fault really.

Regulatory capture

Today’s newspapers report that the cladding on the Grenfell Tower, which appears to have been a major factor in the dreadful loss of life there, was banned in Germany and permitted in America only for low-rise buildings. It would have cost only £2 more per square metre to use fire-resistant cladding instead.

The tactical way of looking at this is whether the landlords or the builders were negligent, or even guilty of manslaughter, for taking such a risk in order to save £5000 on an £8m renovation job. The strategic approach is to ask why British regulators are so easily bullied by the industries they are supposed to police. There is a whole literature on regulatory capture but Britain seems particularly prone to it.

Regular readers of this blog will recall many cases of British regulators providing the appearance of safety, privacy and security rather than the reality. The Information Commissioner is supposed to regulate privacy but backs away from confronting powerful interests such as the tabloid press or the Department of Health. The Financial Ombudsman Service is supposed to protect customers but mostly sides with the banks instead; the new Payment Systems Regulator seems no better. The MHRA is supposed to regulate the safety of medical devices, yet resists doing anything about infusion pumps, which kill as many people as cars do.

Attempts to fix individual regulators are frustrated by lobbyists, or even by fear of lobbyists. For example, my colleague Harold Thimbleby has done great work on documenting the hazards of infusion pumps; yet when he applied to be a non-executive director of the MHRA he was not even shortlisted. I asked a civil servant who was once responsible for recommending such appointments to the Secretary of State why ministers never seemed to appoint people like Harold who might make a real difference. He replied wearily that ministers would never dream of that as “the drug companies would make too much of a fuss”.

In the wake of this tragedy there are both tactical and strategic questions of blame. Tactically, who decided that it was OK to use flammable cladding on high-rise buildings, when other countries came to a different conclusion? Should organisations be fined, should people be fired, and should anyone go to prison? That’s now a matter for the public inquiry, the police and the courts.

Strategically, why are British regulators so cosy with the industries they regulate, and what can be done about that? My starting point is that the appointment of regulators should no longer be in the gift of ministers. I propose that regulatory appointments be moved from the Cabinet Office to an independent commission, like the Judicial Appointments Commission, but with a statutory duty to hire the people most likely to challenge groupthink and keep the regulator effective. That is a political matter – a matter for all of us.

Camouflage or scary monsters: deceiving others about risk

I have just been at the Cambridge Risk and Uncertainty Conference which brings together people who educate the public about risks. They include public-health doctors trying to get people to eat better and exercise more, statisticians trying to keep governments honest about crime statistics, and climatologists trying to educate us about global warming – an eclectic and interesting bunch.

Most of the people in this community see their role as dispelling ignorance, or motivating the slothful. Yet in most of the cases we discussed, the public get risk wrong because powerful interests make a serious effort to scare them about some of life’s little hazards, or to reassure them about others. When this is put to the risk communication folks in a question – whether after a talk or in the corridor – they readily admit they’re up against a torrent of misleading marketing. But they don’t see what they’re doing as adversarial, and I strongly suspect that many risk interventions are less effective as a result.

In my talk (slides) I set this out as simply and starkly as I could. We spend too much on terrorism, because both the terrorists and the governments who’re supposed to protect us from them big up the threat; we spend too little on cybercrime, because everyone from the crooks through the police and the banks to the computer industry has their own reason to talk down the threat. I mentioned recent cases such as WannaCry as examples of how institutions communicate risk in self-serving, misleading ways. I discussed our own study of browser warnings, which suggests that people at least subconsciously know that most of the warnings they see are written to benefit others rather than them; they tune out all but the most specific.

What struck me with some force when preparing my talk, though, is that there’s just nobody in academia who takes a holistic view of adversarial risk communication. Many people look at some small part of the problem, from David Rios’ game-theoretic analysis of adversarial risk through John Mueller’s studies of terrorism risk and Alessandro Acquisti’s behavioural economics of privacy, through to criminologists who study pathways into crime and psychologists who study deception. Of all these, the literature on deception might be the most relevant, though we should also look at politics, propaganda, and studies of why people stubbornly persist in their beliefs – including the excellent work by Bénabou and Tirole on the value people place on belief. Perhaps the professionals whose job comes closest to adversarial risk communication are political spin doctors. So when should we talk about new facts, and when should we talk about who’s deceiving you and why?

Given the current concern over populism and the role of social media in the Brexit and Trump votes, it might be time for a more careful cross-disciplinary study of how we can change people’s minds about risk in the presence of smart and persistent adversaries. We know, for example, that a college education makes people much less susceptible to propaganda and marketing; but what is the science behind designing interventions that are quicker and cheaper in specific circumstances?

Second Annual Cybercrime Conference

The Cambridge Cybercrime Centre is organising another one day conference on cybercrime on Thursday, 13th July 2017.

In future years we intend to focus on research that has been carried out using datasets provided by the Cybercrime Centre, but just as last year (details here, liveblog here) we have a stellar group of invited speakers who are at the forefront of their fields:

They will present various aspects of cybercrime from the point of view of criminology, policy, security economics, law and policing.

This one day event, to be held in the Faculty of Law, University of Cambridge will follow immediately after (and will be in the same venue as) the “Tenth International Conference on Evidence Based Policing” organised by the Institute of Criminology which runs on the 11th and 12th July 2017.

Full details (and information about booking) are here.