Category Archives: Security psychology

National Audit Office confirms that police, banks, Home Office pass the buck on fraud

The National Audit Office has found as follows:

“For too long, as a low value but high volume crime, online fraud has been overlooked by government, law enforcement and industry. It is now the most commonly experienced crime in England and Wales and demands an urgent response. While the Department is not solely responsible for reducing and preventing online fraud, it is the only body that can oversee the system and lead change. The launch of the Joint Fraud Taskforce in February 2016 was a positive step, but there is still much work to be done. At this stage it is hard to judge that the response to online fraud is proportionate, efficient or effective.”

Our regular readers will recall that over ten years ago the government got the banks to agree with the police that fraud would be reported to the bank first. This ensured that the police and the government could boast of falling fraud figures, while the banks could direct such fraud investigations as did happen. This was roundly criticized by the Science and Technology Committee (here and here) but the government held firm. Over the succeeding decade, dissident criminologists started pointing out that fraud was not falling, just going online like everything else, and the online stuff was being ignored. Successive governments just didn’t want to know; for most of the period in question the Home Secretary was one Theresa May, who so impressed her party by “cutting crime” even though she’d cut 20,000 police jobs that she got a promotion.

But pigeons come home to roost eventually, and over the last two years the Office for National Statistics has been moving to more honest crime figures. The NAO report bears close study by anyone interested in cybercrime, in crime generally, and in how politicians game the crime figures. It makes clear that the Home Office doesn’t know what’s going on (or doesn’t really want to) and hopes that other people (such as banks and the IT industry) will solve the problem.

Government has made one or two token gestures such as setting up Action Fraud, and the NAO piously hopes that the latest such (the Joint Fraud Taskforce) could be beefed up to do some good.

I’m afraid that the NAO’s recommendations are less impressive. Let me give an example. The main online fraud bothering Cambridge University relates to bogus accommodation; about fifty times a year, a new employee or research student turns up to find that the apartment they rented doesn’t exist. This is an organised scam, run by crooks in Germany, that affects students elsewhere in the UK (mostly in London) and is netting £5-10m a year. The cybercrime guy in the Cambridgeshire Constabulary can’t do anything about this as only the National Crime Agency in London is allowed to talk to the German police; but he can’t talk to the NCA directly. He has to go through the Regional Organised Crime Unit in Bedford, who don’t care. The NCA would rather do sexier stuff; they seem to have planned to take over the Serious Fraud Office, as that was in the Conservative manifesto for this year’s election.

Every time we look at why some scam persists, it’s down to the institutional economics – to the way that government and the police forces have arranged their targets, their responsibilities and their reporting lines so as to make problems into somebody else’s problems. The same applies in the private sector; if you complain about fraud on your bank account the bank may simply reply that as their systems are secure, it’s your fault. If they record it at all, it may be as a fraud you attempted to commit against them. And it’s remarkable how high a proportion of people prosecuted under the Computer Misuse Act appear to have annoyed authority, for example by hacking police websites. Why do we civilians not get protected with this level of enthusiasm?

Many people have lobbied for change; LBT readers will recall numerous articles over the last ten years. Which? made a supercomplaint to the Payment Services Regulator, and got the usual bland non-reassurance. Other members of the old establishment were less courteous; the Commissioner of the Met said that fraud was the victims’ fault and GCHQ agreed. Such attitudes hit the poor and minorities the hardest.

The NAO is just as reluctant to engage. At p34 it says of the Home Office “The Department … has to influence partners to take responsibility in the absence of more formal legal or contractual levers.” But we already have the Payment Services Regulations; the FCA explained in response to the Tesco Bank hack that the banks it regulates should make good fraud victims’ losses. And it has always been the common-law position that in the absence of gross negligence a banker could not debit his customer’s account without the customer’s mandate. What’s lacking is enforcement. Nobody, from the Home Office through the FCA to the NAO, seems to want to face down the banks. Rather than insisting that they obey the law, the Home Office will spend another £500,000 on a publicity campaign, no doubt to tell us that it’s all our fault really.

Regulatory capture

Today’s newspapers report that the cladding on the Grenfell Tower, which appears to have been a major factor in the dreadful loss of life there, was banned in Germany and permitted in America only for low-rise buildings. It would have cost only £2 more per square metre to use fire-resistant cladding instead.

The tactical way of looking at this is whether the landlords or the builders were negligent, or even guilty of manslaughter, for taking such a risk in order to save £5000 on an £8m renovation job. The strategic approach is to ask why British regulators are so easily bullied by the industries they are supposed to police. There is a whole literature on regulatory capture but Britain seems particularly prone to it.

Regular readers of this blog will recall many cases of British regulators providing the appearance of safety, privacy and security rather than the reality. The Information Commissioner is supposed to regulate privacy but backs away from confronting powerful interests such as the tabloid press or the Department of Health. The Financial Ombudsman Service is supposed to protect customers but mostly sides with the banks instead; the new Payment Systems Regulator seems no better. The MHRA is supposed to regulate the safety of medical devices, yet resists doing anything about infusion pumps, which kill as many people as cars do.

Attempts to fix individual regulators are frustrated by lobbyists, or even by fear of lobbyists. For example, my colleague Harold Thimbleby has done great work on documenting the hazards of infusion pumps; yet when he applied to be a non-executive director of the MHRA he was not even shortlisted. I asked a civil servant who was once responsible for recommending such appointments to the Secretary of State why ministers never seemed to appoint people like Harold who might make a real difference. He replied wearily that ministers would never dream of that as “the drug companies would make too much of a fuss”.

In the wake of this tragedy there are both tactical and strategic questions of blame. Tactically, who decided that it was OK to use flammable cladding on high-rise buildings, when other countries came to a different conclusion? Should organisations be fined, should people be fired, and should anyone go to prison? That’s now a matter for the public inquiry, the police and the courts.

Strategically, why are British regulators so cosy with the industries they regulate, and what can be done about that? My starting point is that the appointment of regulators should no longer be in the gift of ministers. I propose that regulatory appointments be moved from the Cabinet Office to an independent commission, like the Judicial Appointments Commission, but with a statutory duty to hire the people most likely to challenge groupthink and keep the regulator effective. That is a political matter – a matter for all of us.

Camouflage or scary monsters: deceiving others about risk

I have just been at the Cambridge Risk and Uncertainty Conference which brings together people who educate the public about risks. They include public-health doctors trying to get people to eat better and exercise more, statisticians trying to keep governments honest about crime statistics, and climatologists trying to educate us about global warming – an eclectic and interesting bunch.

Most of the people in this community see their role as dispelling ignorance, or motivating the slothful. Yet in most of the cases we discussed, the public get risk wrong because powerful interests make a serious effort to scare them about some of life’s little hazards, or to reassure them about others. When this is put to the risk communication folks in a question – whether after a talk or in the corridor – they readily admit they’re up against a torrent of misleading marketing. But they don’t see what they’re doing as adversarial, and I strongly suspect that many risk interventions are less effective as a result.

In my talk (slides) I set this out as simply and starkly as I could. We spend too much on terrorism, because both the terrorists and the governments who’re supposed to protect us from them big up the threat; we spend too little on cybercrime, because everyone from the crooks through the police and the banks to the computer industry has their own reason to talk down the threat. I mentioned recent cases such as Wannacry as examples of how institutions communicate risk in self-serving, misleading ways. I discussed our own study of browser warnings, which suggests that people at least subconsciously know that most of the warnings they see are written to benefit others rather than them; they tune out all but the most specific.

What struck me with some force when preparing my talk, though, is that there’s just nobody in academia who takes a holistic view of adversarial risk communication. Many people look at some small part of the problem, from David Rios’ game-theoretic analysis of adversarial risk through John Mueller’s studies of terrorism risk and Alessandro Acquisti’s behavioural economics of privacy, through to criminologists who study pathways into crime and psychologists who study deception. Of all these, the literature on deception might be the most relevant, though we should also look at politics, propaganda, and studies of why people stubbornly persist in their beliefs – including the excellent work by Bénabou and Tirole on the value people place on belief. Perhaps the professionals whose job comes closest to adversarial risk communication are political spin doctors. So when should we talk about new facts, and when should we talk about who’s deceiving you and why?

Given the current concern over populism and the role of social media in the Brexit and Trump votes, it might be time for a more careful cross-disciplinary study of how we can change people’s minds about risk in the presence of smart and persistent adversaries. We know, for example, that a college education makes people much less susceptible to propaganda and marketing; but what is the science behind designing interventions that are quicker and cheaper in specific circumstances?

Bad malware, worse reporting

The Wannacry malware that has infected some UK hospital computers should interest not just security researchers but also people interested in what drives fake news.

Some made errors of fact: the Daily Mail initially reported the ransom demand as 300 bitcoin, or £415,000, rather than $300 in bitcoin. Others made errors of logic: the Indy, for example, reported that “Up to 90 percent of NHS computers still run XP, released in 2001”, citing as its source a BMJ article which stated that 90% of trusts run this version of Windows. And some made errors of concurrency. After dinner I found inquiries from journalists about my fight with the Prime Minister. My what? Eventually I found that the Guardian had followed something Mrs May’s spokesman had said (“not aware of any evidence that patient data has been compromised”) with something I’d said a couple of hours earlier (“The NHS are saying that patient privacy hasn’t been compromised, but if significant numbers of hospitals have been negligently running unpatched computers for two months after the patch came out, how do they know?”). The Home Secretary later helpfully glossed the PM’s stonewall as “No patient data has been accessed or transferred in any way” while leaving the get-out-of-jail card “that’s the information we’ve been given.”
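The scale of the Mail’s mistake is easy to check with a little arithmetic; the exchange rates below are rough assumptions for May 2017, not figures taken from any of the coverage.

```python
# Back-of-the-envelope check of the misreported ransom figure.
# The exchange rates are rough assumptions (approximate May 2017 values),
# used purely for illustration.
btc_price_gbp = 1400.0    # assumed price of one bitcoin, in pounds
gbp_per_usd = 0.78        # assumed pounds per US dollar

misreported = 300 * btc_price_gbp   # "300 bitcoin"
actual = 300 * gbp_per_usd          # "$300, paid in bitcoin"

print(f"Misreported demand: about £{misreported:,.0f}")   # ~£420,000
print(f"Actual demand:      about £{actual:,.0f}")        # ~£234
print(f"Overstated by roughly a factor of {misreported / actual:,.0f}")
```

On those assumed rates the headline figure overstates the demand by three orders of magnitude.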

Many papers caught the international political aspect: that the vulnerability was discovered by the NSA, kept secret rather than fixed (contrary to the advice of Obama’s NSA review group), then stolen from the NSA and leaked by the Shadow Brokers. Scary stuff, eh? And we read of some surprising overreactions, such as the GP who switched off his networking as a precaution and found he couldn’t access any of his patients’ records.

As luck would have it, yesterday was the day that I gave my talk on entomology – the classification of software bugs and other security vulnerabilities – to my first-year security and software engineering class. So let’s try to look at it calmly as I’d expect of a student writing an assignment.

The first point is that there’s really not a lot of this malware. The NHS has over 200 hospitals, and the typical IT director is a senior clinician supported by technicians. Yet despite having their IT run by well-meaning amateurs, only 16 NHS organisations – including several hospitals – have been hit, according to the Register and Kaspersky.

So the second point is that when the Indy says that “The NHS is a perfect combination of sensitive data and insecure storage. And there’s very little they can do about it” the answer is simple: in well over 90% of NHS organisations, the well-meaning amateurs managed perfectly well. What they did was to keep their systems patched up-to-date; simple hygiene, like washing your hands after going to the toilet.

The third takeaway is that it’s worth looking at the actual code. A UK researcher did so and discovered a kill switch.
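For readers who haven’t followed the details: the kill switch reportedly worked by having the malware try to contact a long, unregistered domain before doing anything else, and stand down if the request succeeded, which is why registering that domain acted as a global off-switch. Here is a minimal Python sketch of that pattern; the domain below is a placeholder, and the sketch illustrates the reported logic rather than reproducing the malware’s actual code.

```python
import urllib.request

# Minimal sketch of the reported Wannacry kill-switch logic, not the malware's
# actual code. The domain is a placeholder; the real sample queried a long,
# apparently random domain that nobody had registered.
KILL_SWITCH_URL = "http://example-killswitch-domain.invalid/"

def kill_switch_live(url: str = KILL_SWITCH_URL, timeout: float = 5.0) -> bool:
    """Return True if the kill-switch domain answers, i.e. the code should stop."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True    # domain resolves and responds: stand down
    except Exception:
        return False   # lookup or connection fails: the real malware carried on

if __name__ == "__main__":
    if kill_switch_live():
        print("Kill-switch domain is live: exit without doing anything.")
    else:
        print("Kill-switch domain unreachable: the real malware would proceed.")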

Now I am just listening on the BBC morning news to a former deputy director of GCHQ, who first cautions against alarmist headlines and argues that everyone develops malware; that a patch had been issued by Microsoft halfway through March; that you can deal with ransomware by keeping decent backups; and that paying ransom will embolden the bad guys. However, he claims that it’s clearly an organised criminal attack (when it could be one guy in his bedroom somewhere) and says that the NCSC should look at whether there is some countermeasure that everyone should have taken (for the answer, see above).

So our fourth takeaway is that although the details matter, so do the economics of security. When something unexpected happens, you should not just get your head down and look at the code, but look up and observe people’s agendas. Politicians duck and weave; NHS managers blame the system rather than step up to the plate; the NHS as a whole turns every incident into a plea for more money; the spooks want to avoid responsibility for the abuse of their stolen cyberweaponz, but still big up the threat and get more influence for a part of their agency that’s presented as solely defensive. And we academics? Hey, we just want the students to pay attention to what we’re teaching them.

Hope this helps!

The University is Hiring

We’re looking for a Chief Information Security Officer. This isn’t a research post here at the lab, but across the yard in University Information Services, where they manage our networks and our administrative systems. There will be opportunities to work with security researchers like us, but the main task is protecting Cambridge from all sorts of online bad actors. If you would like to be in the thick of it, and you know what you’re doing, here’s how you can apply.

Security Economics MOOC

In two weeks’ time we’re starting an open course in security economics. I’m teaching this together with Rainer Boehme, Tyler Moore, Michel van Eeten, Carlos Ganan, Sophie van der Zee and David Modic.

Over the past fifteen years, we’ve come to realise that many information security failures arise from poor incentives. If Alice guards a system while Bob pays the cost of failure, things can be expected to go wrong. Security economics is now an important research topic: you can’t design secure systems involving multiple principals if you can’t get the incentives right. And it goes way beyond computer science. Without understanding how incentives play out, you can’t expect to make decent policy on cybercrime, on consumer protection or indeed on protecting critical national infrastructure.
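To make the Alice-and-Bob point concrete, here is a toy calculation; the figures are made-up assumptions for illustration, not material from the course. Alice picks her security spend to minimise her own costs, and her choice changes completely depending on whether she or Bob bears the expected loss from a breach.

```python
# Toy model of misaligned incentives; all figures are made up for illustration.
# Alice chooses a security spend; spending more lowers the breach probability,
# but she only counts the expected loss if she is the one who bears it.
BREACH_COST = 100_000   # cost of a breach, borne by Alice or by Bob

def breach_probability(spend: int) -> float:
    """Assumed relationship: each 10,000 spent halves the breach probability."""
    return 0.2 * 0.5 ** (spend / 10_000)

def alices_cost(spend: int, alice_bears_loss: bool) -> float:
    expected_loss = breach_probability(spend) * BREACH_COST
    return spend + (expected_loss if alice_bears_loss else 0.0)

candidate_spends = range(0, 60_001, 5_000)
for alice_bears_loss in (True, False):
    best = min(candidate_spends, key=lambda s: alices_cost(s, alice_bears_loss))
    payer = "Alice" if alice_bears_loss else "Bob"
    print(f"If {payer} pays for breaches, Alice's cost-minimising spend is £{best:,}")
```

When Bob bears the loss, Alice’s rational spend drops to zero; that, in one line of arithmetic, is why the incentives matter.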

We first did the course last year as a paid-for course with EdX. Our agreement with them was that they’d charge for it the first time, to recoup the production costs, and thereafter it would be free.

So here it is as a free course. Spread the word!

Might Brexit make us more dishonest?

When Lying Feels the Right Thing to Do reports three studies we did on what made people less or more likely to submit fraudulent insurance claims. Our first study found that people were more likely to cheat when rejected; the other two showed that rejected claimants were just as likely to cheat when this didn’t lead to financial gain, but that they felt more strongly when there was no money involved.

Our research was conducted as part of a broader research programme to investigate the deterrence of deception; our goal was to understand how to design better websites. However we can’t help wondering whether it might shine some light on the UK’s recent political turmoil. The Brexit campaigners were minorities of both main political parties and their anti-EU rhetoric had been rejected by the political mainstream for years; they had ideological rather than selfish motives. They ran a blatantly deceptive campaign, persisting in obvious untruths but abandoning them promptly after winning the vote. Rejection is not the only known factor in situational deception; it’s known, for example, that people with unmet goals are more likely to cheat than people who are simply doing their best, and that one bad apple can have a cascading effect. But it still makes you think.

The outcome and aftermath of the referendum have left many people feeling rejected, from remain voters, through people who will lose financially, to foreign residents of the UK. Our research shows that feelings of rejection can increase cheating by 15-30%; perhaps this might have measurable effects in some sectors. How one might disentangle this from the broader effects of diminished social solidarity, and from politicians simply setting a bad example, could be an interesting problem for social scientists.

GCHQ helps banks dump fraud losses on customers

We recently reported that the Commissioner of the Met, Sir Bernard Hogan-Howe, said that banks should not refund fraud victims as this would just make people careless with their passwords and antivirus. The banks’ desire to blame fraud victims if they can, to avoid refunding them, is rational enough, but for a police chief to support them was disgraceful. Thirty years ago, a chief constable might have said that rape victims had themselves to blame for wearing nice clothes; if he were to say that nowadays, he’d be sacked. Hogan-Howe’s view of bank fraud is just as uninformed, and just as offensive to victims.

Our spooky friends at Cheltenham have joined the party. The Register reports a story in the Financial Times (behind a paywall) which says GCHQ believes that “companies must do more to try and encourage their customers to improve their cyber security standards. Customers using outdated software – sometimes riddled with vulnerabilities that hackers can exploit – are a weak link in the UK’s cyber defences.” There is no mention of the banks’ own outdated technology, or of GCHQ’s role in keeping consumer software vulnerable.

The elegant scribblers at the Financial Times are under the impression that “At present, banks routinely cover the cost of fraud, regardless of blame.” So they clearly are not regular readers of Light Blue Touchpaper.

The spooks are slightly more cautious; according to the FT, GCHQ “has told the private sector it will not take responsibility for regulatory failings”. I’m sure the banks will heave a big sigh of relief that their cosy relationship with the police, the ombudsman and the FCA will not be disturbed.

We will have to change our security-economics teaching material so we don’t just talk about the case where “Alice guards a system and Bob pays the costs of failure”, but also this new case where “Alice guards a system, and bribes the government to compel Bob to pay the costs of failure.” Now we know how Hogan-Howe is paid off; the banks pay for his Dedicated Card and Payment Crime Unit. But how are they paying off GCHQ, and what else are they getting as part of the deal?