Category Archives: Security economics

Social-science angles of security

Symposium on Post-Bitcoin Cryptocurrencies

I am at the Symposium on Post-Bitcoin Cryptocurrencies in Vienna and will try to liveblog the talks in follow-ups to this post.

The introduction was by Bernhard Haslhofer of AIT, who maintains the graphsense.info toolkit and runs the Titanium project on bitcoin forensics jointly with Rainer Boehme of Innsbruck. Rainer then presented an economic analysis arguing that criminal transactions are pretty much the only logical application for bitcoin, as it's permissionless and trustless; if you have access to the courts, there are better ways of doing things. However, in the post-bitcoin world of ICOs and smart contracts, it's not just the anti-money-laundering agencies who need to understand cryptocurrency, but also the securities regulators and the tax collectors. This creates a real policy tension: governments hype blockchains, and Austria even uses them to auction sovereign bonds, yet the only way in for the citizen is through the swamp. How can the swamp be drained?

Privacy for Tigers

As mobile phone masts went up across the world’s jungles, savannas and mountains, so did poaching. Wildlife crime syndicates can not only coordinate better but can mine growing public data sets, often of geotagged images. Privacy matters for tigers, for snow leopards, for elephants and rhinos – and even for tortoises and sharks. Animal data protection laws, where they exist at all, are oblivious to these new threats, and no-one seems to have started to think seriously about information security.

So we have been doing some work on this, and presented some initial ideas in an invited talk at USENIX Security in August. A video of the talk is now online.

The most serious poaching threats involve insiders: game guards who go over to the dark side, corrupt officials, and (now) the compromise of data and tools assembled for scientific and conservation purposes. Aggregation of data makes things worse; I might not care too much about a single geotagged photo, but a corpus of thousands of such photos tells a poacher where to set his traps. Cool new AI tools for recognising individual animals can make his work even easier. So people developing systems to help in the conservation mission need to start paying attention to computer security. Compartmentation is necessary, but there are hundreds of conservancies and game reserves, many of which are mutually mistrustful; there is no central authority at Fort Meade to manage classifications and clearances. Data sharing is haphazard and poorly understood, and the limits of open data are only now starting to be recognised. What sort of policies do we need to support, and what sort of tools do we need to create?
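To make the aggregation threat concrete: a geotag is just an EXIF field in the image file, and stripping it before photos are pooled or published is straightforward. Below is a minimal sketch, assuming the Pillow imaging library; the directory names are hypothetical, not from any real conservation system.

    # Sketch: remove GPS metadata from photos before sharing (assumes Pillow).
    # Directory names are illustrative only.
    from pathlib import Path
    from PIL import Image

    GPS_IFD_TAG = 0x8825  # EXIF tag pointing at the GPSInfo block

    def strip_gps(src: Path, dst: Path) -> None:
        """Re-save an image with its GPS EXIF block removed."""
        with Image.open(src) as im:
            exif = im.getexif()
            exif.pop(GPS_IFD_TAG, None)  # drop location data if present
            im.save(dst, exif=exif)      # note: re-saving a JPEG recompresses it

    out_dir = Path("sanitised")
    out_dir.mkdir(exist_ok=True)
    for photo in Path("camera_trap_uploads").glob("*.jpg"):
        strip_gps(photo, out_dir / photo.name)

Of course, stripping metadata at the point of upload is only a partial fix: location can still leak through captions, filenames, database records or recognisable landmarks in the image itself, which is why the compartmentation and sharing policies matter.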

This is joint work with Tanya Berger-Wolf of Wildbook, one of the wildlife data aggregation sites, which is currently redeveloping its core systems to incorporate and test the ideas we describe. We are also working to spread the word to both conservationists and online service firms.

How Protocols Evolve

Over the last thirty years or so, we’ve seen security protocols evolving in different ways, at different speeds, and at different levels in the stack. Today’s TLS is much more complex than the early SSL of the mid-1990s; the EMV card-payment protocols we now use at ATMs are much more complex than the ISO 8583 protocols used in the eighties when ATM networking was being developed; and there are similar stories for GSM/3G/4G, SSH and much else.

How do we make sense of all this?

Our paper Reconciling Multiple Objectives – Politics or Markets? tries to answer that question. It was particularly inspired by Jan Groenewegen’s model of innovation, according to which the rate of change depends on the granularity of change. Can a new protocol be adopted by individuals, or does it need companies to adopt it en masse for internal use, or does it need to spread through a whole ecosystem, or – the hardest case of all – does it require a change in culture, norms or values?

Security engineers tend to neglect such “soft” aspects of engineering, and we probably shouldn’t. So we sketch a model of the innovation stack for security and draw a few lessons.

Perhaps the most overlooked need in security engineering, particularly in the early stages of a system’s evolution, is recourse. Just as early ATM and point-of-sale system operators often turned away fraud victims claiming “Our systems are secure so it must have been your fault”, so nowadays people who suffer abuse on social media can find that there’s nowhere to turn. A prudent engineer should anticipate disputes, and give some thought in advance to how they should be resolved.

Reconciling Multiple Objectives appeared at Security Protocols 2017. I forgot to put the accepted version online and in the repository after the proceedings were published in late 2017. Sorry about that. Fortunately the REF rule that papers must be made open access within three months doesn’t apply to conference proceedings that form a book series; it may be of value to others to know this!

Google doesn’t seem to believe booters are illegal

Google has a number of restrictions on what can be advertised on their advert serving platforms. They don't allow adverts for services that "cause damage, harm, or injury" and they don't allow adverts for services that "are designed to enable dishonest behavior".

Google don’t seem to have an explicit policy that says you cannot advertise a criminal enterprise : perhaps they think that is too obvious to state.

Nevertheless, the policies they have written down might lead you to believe that advertising "booter" (or, as they sometimes style themselves to appear more legitimate, "stresser") services would not be allowed. These are websites that allow anyone with a spare $5.00 or so to purchase distributed denial of service (DDoS) attacks.

Booters are mainly used by online game players to cheat, by knocking some of their opponents offline, or by pupils who take down the school website to postpone an online test, or just because they feel like it. But you can purchase an attack on any Internet system, for any reason you like.

These booter sites are quite clearly illegal: there have been recent arrests in Israel and the Netherlands, and in the UK Adam Mudd got two years (reduced to 21 months on appeal) for running a booter service. In the USA a New Mexico man recently got a fifteen-year sentence for merely purchasing attacks from these sites (and for firearms charges as well).

However, Google doesn’t seem to mind booter websites advertising their wares on their platform. This advert was served up a couple of weeks back:

[Image: advert for a booter service]

I complained using Google’s web form — after all, they serve up lots of adverts and their robots may not spot all the wickedness. That’s why they have reporting channels to allow them to correct mistakes. Nothing happened until I reached out to a Google employee (who spends a chunk of his time defending Google from DDoS attacks) and then finally the advert disappeared.

Last week another booter advert appeared:

[Image: another booter advert]

But another complaint also made no difference, and this time my contact failed to have any impact either, so at the time of writing the advert is still there.

It seems to me that, for Google, income is currently more important than enforcing policies.

Responsible vulnerability disclosure in Europe

There is a report out today from the European economics think-tank CEPS on how responsible vulnerability disclosure might be harmonised across Europe. I was one of the advisers to this effort, which involved not just academics and NGOs but also industry.

It was inspired in part by earlier work reported here on standardisation and certification in the Internet of Things. What happens to car safety standards once cars get patched once a month, like phones and laptops? The answer is not just that safety becomes a moving target, rather than a matter of pre-market testing; we also need a regime whereby accidents, hazards, vulnerabilities and security breaches get reported. That will mean responsible disclosure not just to OEMs and component vendors, but also to safety regulators, standards bodies, traffic police, insurers and accident victims. If we get it right, we could have a learning system that becomes steadily safer and more secure. But we could also get it badly wrong.

Getting it right will involve significant organisational and legal changes, which we discussed in our earlier report and which we carry forward here. We didn’t get everything we wanted; for example, large software vendors wouldn’t support our recommendation to extend the EU Product Liability Directive to services. Nonetheless, we made some progress, so today’s report can be seen as a second step on the road.

Third Annual Cybercrime Conference

The Cambridge Cybercrime Centre is organising another one-day conference on cybercrime, on Thursday 12th July 2018.

We have a stellar group of invited speakers who are at the forefront of their fields; they will present various aspects of cybercrime from the points of view of criminology, policy, security economics, law and industry.

This one-day event, to be held in the Faculty of Law, University of Cambridge, will follow immediately after (and will be in the same venue as) the “11th International Conference on Evidence Based Policing” organised by the Institute of Criminology, which runs on the 10th and 11th July 2018.

Full details (and information about booking) are here.