Category Archives: Security economics

Social-science angles of security

Evidence based policing (of booters)

“Booters” (they usually call themselves “stressers” in a vain attempt to appear legitimate) are denial-of-service-for-hire websites where anyone can purchase small-scale attacks that will take down a home Internet connection, a high school (perhaps there’s an upcoming maths test?) or a poorly defended business website. Prices vary, but for around $20.00 you can purchase as many 10-minute attacks as you wish to send for the next month! In pretty much every jurisdiction, booters are illegal to run and illegal to use, and there have been a series of law-enforcement takedowns over the years, notably in the US, UK, Israel and the Netherlands.

On Wednesday December 14th, in by far the biggest operation to date, the FBI announced the arrest of six booter operators and the seizure of 49 (misreported as 48) booter domain names. Those domains now display a “WEBSITE SEIZED” splash page.

[Image: FBI website seizure splash page]

The seizures were “evidence based”, in that the FBI specifically targeted the most active booters by drawing on one of the datasets collected by the Cambridge Cybercrime Centre, which includes the attack data that booters report about themselves.

The Online Safety Bill: Reboot it, or Shoot it?

Yesterday I took part in a panel discussion organised by the Adam Smith Institute on the Online Safety Bill. This sprawling legislative monster has outlasted not just six Secretaries of State for Culture, Media and Sport, but two Prime Ministers. It’s due to slither back to Parliament in November, so we wrote a Policy Brief that explains what it tries to do and some of the things it gets wrong.

Some of the bill’s many proposals command wide support – for example, that online services should enable users to contact them effectively to report illegal material, which should be removed quickly. At present, only copyright owners and the police seem to be able to get the attention of the major platforms; ordinary people, including young people, should also be able to report unlawful things and have them taken down quickly. Here, the UK government intends to bind only large platforms like Facebook and Twitter. We propose extending the duty to gaming platforms too. Kids just aren’t on Facebook any more.

The Bill also tries to reignite the crypto wars by empowering Ofcom to require services to use “accredited technology” (read: software written by GCHQ contractors) to scan your WhatsApp messages. The idea that you can catch violent criminals such as child abusers and terrorists by bulk text scanning is entirely implausible; the error rates are so high that the police would be swamped with false positives. Quite apart from that, bulk intercept has always been illegal in Britain, and would also contravene the European Convention on Human Rights, to which we are still a signatory despite Brexit. This power to mandate client-side scanning has to be scrapped, a move that quite a few MPs already support.
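
The base-rate arithmetic is worth spelling out. Here is a back-of-the-envelope sketch in Python; every number in it is an illustrative assumption of ours, not a figure from the Bill or from any real scanning system:

```python
# Back-of-the-envelope base-rate arithmetic for bulk message scanning.
# All numbers are illustrative assumptions, not real figures.
daily_messages = 10_000_000_000  # messages scanned per day (assumed)
prevalence = 1e-7                # fraction actually illegal (assumed)
tpr = 0.99                       # true-positive rate (assumed, optimistic)
fpr = 0.001                      # false-positive rate (assumed, optimistic)

true_alerts = daily_messages * prevalence * tpr
false_alerts = daily_messages * (1 - prevalence) * fpr
precision = true_alerts / (true_alerts + false_alerts)

print(f"true alerts/day:  {true_alerts:,.0f}")   # ~990
print(f"false alerts/day: {false_alerts:,.0f}")  # ~10,000,000
print(f"precision: {precision:.4%}")             # ~0.01%
```

Even with these optimistic error rates, roughly ten thousand false alerts arrive for every true one; it is the rarity of the target behaviour, not the quality of the scanner, that dominates the outcome.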

But what should we do instead about illegal images of minors, and about violent online political extremism? More local policing would be better; we explain why. This is informed by our work on the link between violent extremism and misogyny, as well as our analysis of a similar proposal in the EU. So it is welcome that the government is hiring more police officers. What’s needed now is a greater focus on family violence, which is the root cause of most child abuse, rather than using child abuse as an excuse to increase the central agencies’ surveillance powers and budgets.

In our Policy Brief, we also discuss content moderation, and suggest that it be guided by the principle of minimising cruelty. One of the other panellists, Graham Smith, discussed the legal difficulties of regulating speech and made a strong case that restrictions (such as copyright, libel, incitement and harassment) should be set out in primary legislation rather than farmed out to private firms, as at present, or to a regulator, as the Bill proposes. Given that most of the bad stuff is illegal already, why not make a start by enforcing the laws we already have, as they do in Germany? British policing efforts online range from the pathetic to the outrageous. It looks like Parliament will have some interesting decisions to take when the bill comes back.

Talking Trojan: Analyzing an Industry-Wide Disclosure

Talking Trojan: Analyzing an Industry-Wide Disclosure tells the story of what happened after we discovered the Trojan Source vulnerability, which broke almost all computer languages, and the Bad Characters vulnerability, which broke almost all large NLP tools. This provided a unique opportunity to measure software maintenance in action. Who patched quickly, reluctantly, or not at all? Who paid bug bounties, and who dodged liability? What parts of the disclosure ecosystem work well, which are limping along, and which are broken?

Security papers typically describe a vulnerability but say little about how it was disclosed and patched. And while disclosing one vulnerability to a single vendor can be hard enough, modern supply chains multiply the number of affected parties, leading to an exponential increase in the complexity of the disclosure. One vendor will want an in-house web form, another will use an outsourced bug bounty platform, still others will prefer emails, and *nix OS maintainers will use a very particular PGP mailing list. Governments sort-of want to assist with disclosures but prefer to use yet another platform. Many open-source projects lack an embargoed disclosure process, but it is often in the interest of commercial operating system maintainers to write embargoed patches – if you can get hold of the right people.

A vulnerability that affected many different products at the same time and in similar ways gave us a unique chance to observe the finite-impulse response of this whole complex system. Our observations reveal a number of weaknesses, such as a potentially dangerous misalignment of incentives between commercially sponsored bug bounty programs and multi-vendor coordinated disclosure platforms. We suggest tangible changes that could strengthen coordinated disclosure globally.

We also hope to inspire other researchers to publish the mechanics of individual disclosures, so that we can continue to measure and improve the critical ecosystem on which we rely as our main defence against growing supply chain threats. In the meantime, our paper can be found here, and will appear in SCORED ’22 this November.

The Dynamics of Industry-wide Disclosure

Last year, we disclosed two related vulnerabilities that broke a wide range of systems. In our Bad Characters paper, we showed how to use Unicode tricks – such as homoglyphs and bidi characters – to mislead NLP systems. Our Trojan Source paper showed how similar tricks could be used to make source code look one way to a human reviewer, and another way to a compiler, opening up a wide range of supply-chain attacks on critical software. Prior to publication, we disclosed our findings to four suppliers of large NLP systems, and nineteen suppliers of software development tools. So how did industry respond?
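
To make the two tricks concrete, here is a minimal Python sketch; it is our illustration for this post, not an artifact from either paper:

```python
import unicodedata

# Bad Characters: Cyrillic 'а' (U+0430) renders identically to Latin 'a',
# so these strings look alike on screen but compare unequal, which is
# enough to derail tokenisers, filters and classifiers.
print("paypal" == "p\u0430ypal")  # False

# Trojan Source: bidi control characters make the displayed order of
# source code diverge from the logical order the compiler sees. One
# mitigation is simply to flag such characters in source files.
BIDI_CONTROLS = set("\u202a\u202b\u202c\u202d\u202e\u2066\u2067\u2068\u2069")

def find_bidi_controls(source: str):
    """Yield (line, column, name) for every bidi control character."""
    for line_no, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ch in BIDI_CONTROLS:
                yield line_no, col, unicodedata.name(ch)

# U+202E (RIGHT-TO-LEFT OVERRIDE) makes what follows render reversed, so
# live code can masquerade as part of a comment to a human reviewer.
snippet = "x = check() \u202e# harmless-looking comment"
print(list(find_bidi_controls(snippet)))  # [(1, 13, 'RIGHT-TO-LEFT OVERRIDE')]
```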

We were invited to give this year’s keynote talk at LangSec, and the video is now available. In it we describe not just the Bad Characters and Trojan Source vulnerabilities, but the large natural experiment created by their disclosure. The Trojan Source vulnerability affected most compilers, interpreters, code editors and code repositories; this enabled us to compare responses by firms versus nonprofits, and by firms that managed their own response versus those that outsourced it. The interaction between bug bounty programs, government disclosure assistance, peer review and press coverage was interesting. Most of the affected development teams took action, though some required a bit of prodding.

The response by the NLP maintainers was much less enthusiastic. By the time we gave this talk, only Google had done anything – though we now hear that Microsoft is also working on a fix. The reasons for this responsibility gap need to be understood better. They may include differences in culture between C coders and data scientists; the greater costs and delays in the build-test-deploy cycle for large ML models; and the relative lack of press interest in attacks on ML systems. If many of our critical systems start to include ML components that are less maintainable, will the ML end up being the weakest link?

European Commission prefers breaking privacy to protecting kids

Today, May 11, EU Commissioner Ylva Johansson announced a new law to combat online child sex abuse. This has an overt purpose, and a covert purpose.

The overt purpose is to pressure tech companies to take down illegal material, and material that might possibly be illegal, more quickly. A new agency is to be set up in The Hague, modelled on and linked to Europol, to maintain an official database of illegal child sex-abuse images. National authorities will report abuse to this new agency, which will then require hosting providers and others to take suspect material down. The new law goes into great detail about the design of the takedown process, the forms to be used, and the redress that content providers will have if innocuous material is taken down by mistake. There are similar provisions for blocking URLs; censorship orders can be issued to ISPs in Member States.

The first problem is that this approach does not work. In our 2016 paper, Taking Down Websites to Prevent Crime, we analysed the takedown industry and found that private firms are much better at taking down websites than the police. We found that the specialist contractors who take down phishing websites for banks would typically take six hours to remove an offending website, while the Internet Watch Foundation – which has a legal monopoly on taking down child-abuse material in the UK – would often take six weeks.

We have a reasonably good understanding of why this is the case. Taking down websites means interacting with a great variety of registrars and hosting companies worldwide, and they have different ways of working. One firm expects an encrypted email; another wants you to open a ticket; yet another needs you to phone their call centre during Peking business hours and speak Mandarin. The specialist contractors have figured all this out, and have got good at it. However, police forces want to use their own forms, and expect everyone to follow police procedure. Once you’re outside your jurisdiction, this doesn’t work. Police forces also focus on process more than outcome; they have difficulty hiring and retaining staff to do detailed technical clerical work; and they’re not much good at dealing with foreigners.

Our takedown work was funded by the Home Office, and we recommended that they run a randomised controlled trial where they order a subset of UK police forces to use specialist contractors to take down criminal websites. We’re still waiting, six years later. And there’s nothing in UK law that would stop them running such a trial, or that would stop a Chief Constable outsourcing the work.

So it’s really stupid for the European Commission to mandate centralised takedown by a police agency for the whole of Europe. This will make everything really hard to fix once they find out that it doesn’t work, and it becomes obvious that child abuse websites stay up longer, causing real harm.

Oh, and the covert purpose? That is to enable the new agency to undermine end-to-end encryption by mandating client-side scanning. This is not evident on the face of the bill, but is clear from the impact assessment, which praises Apple’s 2021 proposal. Colleagues and I already wrote about that in detail, so I will not repeat the arguments here. I will merely note that Europol coordinates the exploitation of communications systems by law enforcement agencies, and the Dutch National High-Tech Crime Unit has developed world-class skills at exploiting mobile phones and chat services. The most recent case of continent-wide bulk interception was EncroChat; although reporting restrictions prevent me telling the story of that, there have been multiple similar cases in recent years.

So there we have it: an attack on cryptography, designed to circumvent EU laws against bulk surveillance by using a populist appeal to child protection, appears likely to harm children instead.

Security engineering course

This week sees the start of a course on security engineering that Sam Ainsworth and I are teaching. It’s based on the third edition of my Security Engineering book, and is a first cut at a ‘film of the book’.

Each week we will put two lectures online, and here are the first two. Lecture 1 discusses our adversaries, from nation states through cyber-crooks to personal abuse, and the vulnerability life cycle that underlies the ecosystem of attacks. Lecture 2 abstracts this empirical experience into more formal threat models and security policies.

Although our course is designed for masters students and fourth-year undergrads in Edinburgh, we’re making the lectures available to everyone. I’ll link the rest of the videos in follow-ups here, and eventually on the book’s web page.

WEIS 2022 call for papers

The 2022 Workshop on the Economics of Information Security will be held in Tulsa, Oklahoma, on 21-22 June 2022. Paper submissions are due by 28 February 2022. After two virtual events we’re eager to get back to meeting in person if we possibly can.

The program chairs for 2022 are Sadia Afroz and Laura Brandimarte, and here is the call for papers.

We originally set this as 20-21, being unaware that Juneteenth (June 19) falls on a Sunday in 2022 and is observed as a federal holiday in the USA on Monday June 20. Sorry about that.

Anyway, we hope to see lots of you in Tulsa!

Bugs in our pockets?

In August, Apple announced a system to check all our iPhones for illegal images, then delayed its launch after widespread pushback. Yet some governments continue to press for just such a surveillance system, and the EU is due to announce a new child protection law at the start of December.

Now, in Bugs in our Pockets: The Risks of Client-Side Scanning, colleagues and I take a long hard look at the options for mass surveillance via software embedded in people’s devices, as opposed to the current practice of monitoring our communications. Client-side scanning, as the agencies’ new wet dream is called, has a range of possible missions. While Apple and the FBI talked about finding still images of sex abuse, the EU was talking last year about videos and text too, and of targeting terrorism once the argument had been won on child protection. It can also use a number of possible technologies; in addition to the perceptual hash functions in the Apple proposal, there’s talk of machine-learning models. And, as a leaked EU internal report made clear, the preferred outcome for governments may be a mix of client-side and server-side scanning.
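
For readers who haven’t met perceptual hashing, here is a toy ‘average hash’ in Python. It illustrates only the general technique; it is not Apple’s NeuralHash nor any algorithm named in the EU documents, and it assumes the Pillow imaging library is installed:

```python
# Toy perceptual hash ("average hash"), for illustration only.
# Requires Pillow: pip install Pillow
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size greyscale; bit i is 1 iff pixel i > mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | int(p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of bits on which two hashes differ."""
    return bin(a ^ b).count("1")

# A scanner would hash each photo and flag anything within a small
# Hamming distance of an entry on the blocklist; that threshold is
# exactly where the false-positive / false-negative trade-off lives.
```

Similar images produce nearby hashes, so matching is done by Hamming distance rather than equality; the choice of threshold trades false positives against false negatives.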

In our report, we provide a detailed analysis of scanning capabilities at both the client and the server, the trade-offs between false positives and false negatives, and the side effects – such as the ways in which adding scanning systems to citizens’ devices will open them up to new types of attack.

We did not set out to praise Apple’s proposal, but we ended up concluding that it was probably about the best that could be done. Even so, it did not come close to providing a system that a rational person might consider trustworthy.

Even if the engineering on the phone were perfect, a scanner brings within the user’s trust perimeter all those involved in targeting it – in deciding which photos go on the naughty list, or how to train any machine-learning models that riffle through your texts or watch your videos. Even if it starts out trained on images of child abuse that all agree are illegal, it’s easy for both insiders and outsiders to manipulate images to create both false negatives and false positives; the toy sketch below shows how little it takes.

The more we look at the detail, the less attractive such a system becomes. The measures required to limit the obvious abuses so constrain the design space that you end up with something that could not be very effective as a policing tool; and if the European institutions were to mandate its use – and there have already been some legislative skirmishes – they would open up their citizens to quite a range of avoidable harms. And that’s before you stop to remember that the European Court of Justice struck down the Data Retention Directive on the grounds that such bulk surveillance, without warrant or suspicion, was a grossly disproportionate infringement on privacy, even in the fight against terrorism. A client-side scanning mandate would invite the same fate.
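
On the manipulation point, a self-contained toy shows how cheap false negatives can be. The pixel values are made up and the hash is the simple average hash sketched earlier, not any deployed system; real perceptual hashes are more robust, but gradient-based attacks on them follow the same logic:

```python
# Toy 4x4 greyscale "image" as a flat list of 0-255 pixel values. One
# pixel sits just below the mean; nudging it by 2 flips a hash bit while
# leaving the picture visually indistinguishable.
def average_hash(pixels):
    mean = sum(pixels) / len(pixels)
    return [int(p > mean) for p in pixels]

original = [10, 200, 10, 200,
            10, 200, 10, 200,
            10, 200, 10, 200,
            10, 200, 10, 98]   # 98 is just under the mean (~98.6)

tweaked = original[:]
tweaked[-1] += 2               # an imperceptible one-pixel change

h1, h2 = average_hash(original), average_hash(tweaked)
print(sum(a != b for a, b in zip(h1, h2)), "hash bit(s) flipped")  # 1
```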

But ‘if you build it, they will come’. If device vendors are compelled to install remote surveillance, the demands will start to roll in. Who could possibly be so cold-hearted as to argue against the system being extended to search for missing children? Then President Xi will want to know who has photos of the Dalai Lama, or of men standing in front of tanks; and copyright lawyers will get court orders blocking whatever they claim infringes their clients’ rights. Our phones, which have grown into extensions of our intimate private space, will be ours no more; they will be private no more; and we will all be less secure.