Category Archives: Security economics

Social-science angles of security

Why dispute resolution is hard

Today we release a paper on security protocols and evidence which analyses why dispute resolution mechanisms in electronic systems often don’t work very well. On this blog we’ve noted many, many problems with EMV (Chip and PIN), as well as other systems from curfew tags to digital tachographs. Time and again we find that electronic systems are truly awful for courts to deal with. Why?

The main reason, we observed, is that their dispute resolution aspects were never properly designed, built and tested. The firms that delivered the main production systems assumed, or hoped, that because some audit data were available, lawyers would be able to use them somehow.

As you’d expect, all sorts of things go wrong. We derive some principles, and show how these are also violated by new systems ranging from phone banking through overlay payments to Bitcoin. We also propose some enhancements to the EMV protocol which would make it easier to resolve disputes over Chip and PIN transactions.

Update (2014-03-07): This post was mentioned on Bruce Schneier’s blog, and there is some good discussion there.

Update (2014-03-03): The slides for the presentation at Financial Cryptography are now online.

Opting out of the latest NHS data grab

The next three weeks will see a leaflet drop on over 20 million households. NHS England plans to start uploading your GP records in March or April to a central system, from which they will be sold to a wide range of medical and other research organisations. European data-protection and human-rights laws demand that we be able to opt out of such things, so the Information Commissioner has told the NHS to inform you of your right to opt out.

Needless to say, their official leaflet is designed to cause as few people to opt out as possible. It should really have been drafted like this. (There’s a copy of the official leaflet at the MedConfidential.org website.) But even if it had been, the process still wouldn’t meet the consent requirements of human-rights law, as the leaflet won’t be sent to every patient. One of your housemates could throw it away as junk before you see it, and if you’ve opted out of junk mail you won’t get a leaflet at all.

Yet if you don’t opt out in the next few weeks your data will be uploaded to central systems and you will not be able to get it deleted, ever. If you don’t opt your kids out in the next few weeks, the same will happen to their data, and they will not be able to get it deleted even if they decide they prefer privacy once they come of age. If you opted out of the Summary Care Record in 2009, that doesn’t count; despite a ministerial assurance to the contrary, you now need to opt out all over again. For further information see the website of GP Neil Bhatia (who drafted our more truthful leaflet) and previous LBT posts on medical privacy.

Reading this may harm your computer

David Modic and I have just published a paper on The psychology of malware warnings. We’re constantly bombarded with warnings designed to cover someone else’s back, but what sort of text should we put in a warning if we actually want the user to pay attention to it?

To our surprise, social cues didn’t seem to work. What works best is to make the warning concrete; people ignore general warnings such as that a web page “might harm your computer” but do pay attention to a specific one such as that the page would “try to infect your computer with malware designed to steal your bank account and credit card details in order to defraud you”. There is also some effect from appeals to authority: people who trust their browser vendor will avoid a page “reported and confirmed by our security team to contain malware”.

We also analysed who turned off browser warnings, or would have if they’d known how: they were people who ignored warnings anyway, typically men who distrusted authority and either couldn’t understand the warnings or were IT experts.

Crypto festival video

We had a crypto festival in London in November at which a number of cryptographers and crypto policy folks got together with over 1000 mostly young attendees to talk about what might be done in response to the Snowden revelations.

Here is a video of the session in which I spoke. The first speaker was Annie Machon (at 02.35) talking of her experience of life on the run from MI5, and on what we might do to protect journalists’ sources in the future. I’m at 23.55 talking about what’s changed for governments, corporates, researchers and others. Nick Pickles of Big Brother Watch follows at 45.45 talking on what can be done in terms of practical politics; it turned out that only two of us in the auditorium had met our MPs over the Comms Data Bill. The final speaker, Smari McCarthy, comes on at 56.45, calling for lots more encryption. The audience discussion starts at 1:12:00.

WEIS 2014 call for papers

Next year’s Workshop on the Economics of Information Security (WEIS 2014) will be at Penn State on June 23–24. The call for papers is now out; papers are due by the end of February.

It will be fascinating to see what effects the Snowden revelations will have on the community’s thinking about security standards and regulation, the incentives for information sharing and cooperation, and the economics of privacy, to mention only three of the workshop’s usual topics. Now that we’ve had six months to think things through, how do you think the security game has changed, and why? Do we need minor changes to our models, or major ones? Are we in the same policy world, or a quite different one? WEIS could be particularly interesting next year. Get writing!

Your medical records – now on sale

Your medical records are now officially on sale. American drug companies have now learned that the MedRed BT Health Cloud will provide public access to 50 million de-identified patient records from the UK.

David Cameron announced in 2011 that every NHS patient would be a research patient, with their records opened up to private healthcare firms. He promised that our records would be anonymised and we’d have a right to opt out. I pointed out that anonymisation doesn’t work very well (as did the Royal Society) but the Information Commissioner predictably went along with the charade (and lobbyists are busy fixing up the new data protection regulation in Brussels to leave huge loopholes for health service management and research). The government duly started to compel the upload of GP data, to join the hospital data it already has. During the launch of a medical confidentiality campaign the health secretary promised to respect existing opt-outs but has now reneged on his promise.

The data being put online by BT appear to be the data it already manages from the Secondary Uses Service, which is mostly populated by records of finished consultant episodes from hospitals. These are pseudonymised by removing names and addresses, but they still carry patient postcodes and dates of birth; patient views on this were ignored. I wonder whether US purchasers will get these data items, and whether patients will be able to opt out of SUS. Campaigners have sent freedom of information requests to hundreds of hospitals to find out, so we should know soon enough.
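To see why removing names and addresses offers so little protection when postcodes and dates of birth remain, here is a minimal illustrative sketch in Python. All records below are fabricated, and nothing here comes from SUS; the point is that the (postcode, date of birth) pair is close to unique, so anyone holding a second dataset with names attached can link the “de-identified” records straight back to individuals.

    # Illustrative only: fabricated records showing how a (postcode, DOB)
    # quasi-identifier links a pseudonymised dataset back to named people.

    pseudonymised = [
        # (postcode, date_of_birth, hospital_episode)
        ("CB3 0FD", "1954-06-01", "finished consultant episode: cardiology"),
        ("CB3 0FD", "1987-11-23", "finished consultant episode: oncology"),
    ]

    named_dataset = [
        # e.g. an electoral roll or marketing list with names attached
        ("Alice Example", "CB3 0FD", "1954-06-01"),
        ("Bob Example", "CB3 0FD", "1987-11-23"),
    ]

    # Index the named dataset by the quasi-identifier...
    index = {(pc, dob): name for name, pc, dob in named_dataset}

    # ...and the "de-identified" records re-identify immediately.
    for pc, dob, episode in pseudonymised:
        name = index.get((pc, dob))
        if name:
            print(f"{name}: {episode}")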

We’re hiring again

We have a vacancy for a postdoc to work on the economics of cybercrime for two years from January. It might suit someone with a PhD in economics or criminology and an interest in online crime; or a PhD in computer science with an interest in security and economics.

Security economics has grown rapidly in the last decade; security in global systems is usually an equilibrium that emerges from the selfish actions of many independent actors, and security failures often follow from perverse incentives. To understand better what works and what doesn’t, we need both theoretical models and empirical data. We have access to various large-scale sources of data relating to cybercrime – email spam, malware samples, DNS traffic, phishing URL feeds – and some or all of this data could be used in this research. We’re very open-minded about what work might be done on this project; possible topics include victim analysis, malware analysis, spam data mining, data visualisation, measuring attacks, how security scales (or fails to), and how cybercrime data could be shared better.

This is an international project involving colleagues at CMU, SMU and the NCFTA.

A Study of Whois Privacy and Proxy Service Abuse

ICANN have now published a draft for public comment of “A Study of Whois Privacy and Proxy Service Abuse”. I am the primary author of this report — the work being done whilst I was collaborating with the National Physical Laboratory (NPL) under EPSRC Grant EP/H018298/1.

This particular study was originally proposed by ICANN in 2010, one of several that were to examine the impact of domain registrants using privacy services (where the name of a domain registrant is published, but contact details are kept private) and proxy services (where even the domain licensee’s name is not made available on the public database).
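To make the distinction concrete, here is a hypothetical sketch in Python; the field names are invented for illustration, and real Whois records are far less uniform.

    # Hypothetical sketch of the privacy/proxy distinction drawn above.
    # Field names are invented; real Whois records are far less uniform.

    def classify_registration(record):
        """Return 'proxy', 'privacy' or 'direct' for a parsed Whois record."""
        if not record["registrant_name_published"]:
            return "proxy"    # even the licensee's name is withheld
        if not record["contact_details_published"]:
            return "privacy"  # name published, contact details kept private
        return "direct"

    print(classify_registration(
        {"registrant_name_published": True, "contact_details_published": False}
    ))  # prints: privacy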

ICANN wanted to know whether a significant percentage of the domain names used to conduct illegal or harmful Internet activities are registered via privacy or proxy services to obscure the perpetrator’s identity. No surprises in our results: they are!

However, it’s more interesting to ask whether this percentage is somewhat higher than the usage of privacy or proxy services for entirely lawful and harmless Internet activities. This turned out NOT to be the case — for example, banks use privacy and proxy services almost as often as the registrants of domains used in the hosting of child sexual abuse images; and the registrants of domains used to host (legal) adult pornography use privacy and proxy services more often than most (but not all) of the different types of malicious activity that we studied.

It’s also relevant to consider what other methods might be chosen by those involved in criminal activity to obscure their identities, because in the event of changes to privacy and proxy services, it is likely that they will turn to these alternatives.

Accordingly, we determined experimentally whether a significant percentage of the domain names we examined had been registered with incorrect Whois contact information – specifically, whether or not we could reach the domain registrant using a phone number from the Whois information. We asked each registrant a single question in their native language: “Did you register this domain?”

We got somewhat variable results from our phone survey — but the pattern becomes clear if we consider whether there was any a priori hope at all of ringing up the domain registrant.

If we sum up the likelihoods:

  • uses privacy or proxy service
  • no (apparently valid) phone number in whois
  • number is apparently valid, but fails to connect
  • number reaches someone other than the registrant

then we find that for legal and harmless activities the probability of a phone call not being possible ranges between 24% (legal pharmacies on the LegitScript list) and 62% (owners of lawful websites into which someone has broken to install phishing pages). For malicious activities the probability of failure is 88% or more, except for typosquatting (a civil matter rather than a criminal one), which sits at 68% (some of the typosquatters want to hide, some do not).
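To make the arithmetic concrete, here is a minimal sketch in Python (not the study’s code). The four failure modes above are treated as mutually exclusive, so the chance that no call is possible is simply their sum; only the 24% and 62% totals come from the report, and the split across the four modes is invented for illustration.

    # Sketch of how the per-category failure figures combine: the four
    # failure modes are mutually exclusive, so the probability that no
    # call to the registrant is possible is just their sum.

    FAILURE_MODES = [
        "uses privacy or proxy service",
        "no (apparently valid) phone number in whois",
        "number is apparently valid, but fails to connect",
        "number reaches someone other than the registrant",
    ]

    def p_no_call_possible(mode_probs):
        """Sum the probabilities of the mutually exclusive failure modes."""
        assert len(mode_probs) == len(FAILURE_MODES)
        return sum(mode_probs)

    # Only the totals (24% and 62%) come from the report; the per-mode
    # split below is invented for illustration.
    legal_pharmacies = [0.05, 0.08, 0.06, 0.05]  # sums to 0.24
    phished_websites = [0.20, 0.22, 0.12, 0.08]  # sums to 0.62

    print(f"legal pharmacies: {p_no_call_possible(legal_pharmacies):.0%}")
    print(f"phished websites: {p_no_call_possible(phished_websites):.0%}")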

There’s lots of detail and supporting statistics in the report… and an executive summary for the time-challenged. It will provide real data, rather than just speculative anecdotes, to inform the debate around reforming Whois — and the difficulties of doing so.

We’re hiring

We have a vacancy for a postdoc to work on the psychology of cybercrime and deception for two years from October. It might suit someone with a PhD in psychology or behavioural economics with a specialisation in deception, fraud or online crime; or a PhD in computer science with a strong interest in psychology, usability and security.

This is part of a cross-disciplinary project involving colleagues at Portsmouth, Newcastle and UCL. It will build on work we’ve been doing in the psychology of security over the past few years.

Why privacy regulators are ineffective: an anthropologist’s view

Privacy activists have complained for years that the Information Commissioner is useless, and compared him with captured regulators like the FSA and the Financial Ombudsman. However, I’ve come across a paper by a well-known anthropologist that gives a different take on the problem.

Alan Fiske did fieldwork among a tribe in northern Nigeria that has different boundaries for which activities are regulated by communal sharing, authority, tit-for-tat or monetary exchange. For example, labour within the village is always communal; you expect your neighbours to help you fix your house, and you later help them fix theirs. (This exasperated colonialists who couldn’t get the locals to work for cash; the locals for their part imagined that Europeans must present their children with an itemised bill for child-rearing when they reached adulthood.) He has since written several papers on how many of the tensions in human society arise on the boundaries of these domains of sharing, authority, tit-for-tat and the market. The boundaries can vary by culture, by generation and by politics; libertarians are happy to buy and sell organs for transplant, whereas many people prefer communal sharing, while radical socialists object to some routine market transactions. Indeed, regulatory preferences may drive political views.

So far so good. Where it gets interesting is his extensive discussion of taboo transactions across a variety of cultures, and the institutions created to mitigate the discomfort that people feel when something affects more than one sphere of regulation: from extreme cases such as selling a child into slavery so you can feed your other children, through bride-price and blood money, to such everyday things as alimony and deconsecrating a cemetery for development. It turns out there’s a hierarchy of spheres, with sharing generally taking precedence over authority and authority over tit-for-tat, and market pricing following along last. This ordering makes “downhill” transactions easier. Alimony works (you once loved me, so pay me money!) but buying love doesn’t.