All posts by Ross Anderson

Tracing stolen bitcoin

A new Computerphile video explains how we’ve worked out a much better way to track stolen bitcoin. Previous attempts to do this had got entangled in the problem of dealing with transactions that split bitcoin into change, or that consolidate smaller sums into larger ones, and with mining fees. The answer comes from an unexpected direction: a legal precedent from 1816. We discussed the technical details last week at the Security Protocols Workshop; a preprint of our paper is here.

Previous attempts to track tainted coins had used either the “poison” or the “haircut” method. Suppose I open a new address and pay into it three stolen bitcoin followed by seven freshly-mined ones. Then under poison, the output is ten stolen bitcoin, while under haircut it’s ten bitcoin that are marked 30% stolen. After thousands of blocks, poison tainting will blacklist millions of addresses, while with haircut the taint gets diffused, so neither is very effective at tracking stolen property. Bitcoin due-diligence services supplant haircut taint tracking with AI/ML, but the results are still not satisfactory.
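To make the difference concrete, here is a minimal sketch (my own illustration, not the code from our paper) of how the two policies score the example above, where three stolen bitcoin are merged with seven clean ones:

```python
# Illustrative sketch of poison vs haircut tainting (not the paper's code).
# Each input is (amount_in_btc, fraction_stolen).

def poison_taint(inputs):
    """Poison: if any input is tainted at all, the whole output is tainted."""
    total = sum(amount for amount, _ in inputs)
    tainted = any(fraction > 0 for _, fraction in inputs)
    return total, 1.0 if tainted else 0.0

def haircut_taint(inputs):
    """Haircut: the output carries the value-weighted average taint."""
    total = sum(amount for amount, _ in inputs)
    stolen_value = sum(amount * fraction for amount, fraction in inputs)
    return total, stolen_value / total

inputs = [(3, 1.0), (7, 0.0)]          # 3 stolen BTC, then 7 clean BTC
print(poison_taint(inputs))            # (10, 1.0): ten "stolen" bitcoin
print(haircut_taint(inputs))           # (10, 0.3): ten bitcoin, 30% stolen
```

Iterate either rule over thousands of blocks and you get the failure modes described above: poison taint spreads to almost everything, while haircut taint is diluted towards zero.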

We discovered that, back in 1816, the High Court had to tackle this problem in Clayton’s case, which involved the assets and liabilities of a bank that had gone bust. The court ruled that money must be tracked through accounts on a first-in, first-out (FIFO) basis: the first penny into an account goes to satisfy the first withdrawal, and so on.

Ilia Shumailov has written software that applies FIFO tainting to the blockchain and the results are impressive, with a massive improvement in precision. What’s more, FIFO taint tracking is lossless, unlike haircut; so in addition to tracking a stolen coin forward to find where it’s gone, you can start with any UTXO and trace it backwards to see its entire ancestry. It’s not just good law; it’s good computer science too.
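For readers who want to see the mechanics, here is a minimal sketch of the FIFO rule (my own illustration, not Ilia Shumailov’s software): input value is treated as an ordered queue of slices, and each output consumes slices from the front of the queue.

```python
# Minimal sketch of FIFO tainting (illustrative; not the actual software).
# Inputs arrive as ordered (value, label) slices; outputs consume them
# first-in, first-out, so every unit of output traces back to a specific
# unit of input. Assumes inputs cover outputs exactly (fees ignored).

def fifo_allocate(input_slices, output_values):
    """Allocate ordered input slices to outputs on a FIFO basis."""
    queue = list(input_slices)                # oldest value first
    outputs = []
    for needed in output_values:
        taken = []
        while needed > 0:
            value, label = queue.pop(0)
            used = min(value, needed)
            taken.append((used, label))
            if value > used:                  # split the slice, keep the rest
                queue.insert(0, (value - used, label))
            needed -= used
        outputs.append(taken)
    return outputs

# 3 stolen BTC arrive first, then 7 clean; the address then makes
# a payment of 5 BTC and keeps 5 BTC as change.
slices = [(3, "stolen"), (7, "clean")]
for out in fifo_allocate(slices, [5, 5]):
    print(out)
# [(3, 'stolen'), (2, 'clean')]   <- the payment carries all the taint
# [(5, 'clean')]                  <- the change is wholly clean
```

Because each output is a definite list of input slices rather than a percentage, the allocation can be run in reverse just as easily, which is what makes backward tracing of a UTXO’s ancestry possible.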

We plan to make this software public, so that everybody can use it and everybody can see where the bad bitcoins are going.

I’m giving a further talk on Tuesday at a financial-risk conference in Paris.

We will make you like our research

This is the title of a paper that appeared today in PLOS One. It describes a tool we developed initially to assess the gullibility of cybercrime victims, and which we now present as a general-purpose psychometric of individual susceptibility to persuasion. An early version was described three years ago here and here. Since then we have developed it significantly and used it in experiments on cybercrime victims, Facebook users and IT security officers.

We investigated the effects on persuasion of a subject’s need for cognition, need for consistency, sensation seeking, self-control, consideration of future consequences, need for uniqueness, risk preferences and social influence. The strongest factor was consideration of future consequences, or “premeditation” for short.

We offer a full psychometric test, STP-II, with 54 items spanning 10 subscales, and a shorter STP-II-B with 30 items that measures the first-order factors but omits the second-order constructs for brevity. The scale is here with the B items marked, and here is a live instance of the survey for you to play with. Once you complete it, there’s an on-the-fly interpretation at the end. You don’t have to give your name, and we don’t record your IP address.

We invite everyone to use our STP-II scale – not just in security contexts, but also in consumer and marketing psychology and anywhere else it might possibly be helpful. Do let us know what you find!

Making security sustainable

Making security sustainable is a piece I wrote for Communications of the ACM; it has just appeared in the Privacy and Security column of their March issue. Now that software is appearing in durable goods, such as cars and medical devices, that can kill us, software engineering will have to come of age.

The notion that software engineers are not responsible for things that go wrong will be laid to rest for good, and we will have to work out how to develop and maintain code that will go on working dependably for decades in environments that change and evolve. And as security becomes ever more about safety rather than just privacy, we will have sharper policy debates about surveillance, competition, and consumer protection.

Perhaps the biggest challenge will be durability. At present we have a hard time patching a phone that’s three years old. Yet the average age of a UK car at scrappage is about 14 years, and rising all the time; cars used to last 100,000 miles in the 1980s but now keep going for nearer 200,000. As the embedded carbon cost of a car is about equal to that of the fuel it will burn over its lifetime, we just can’t afford to scrap cars after five years, as we do laptops.

For durable safety-critical goods that incorporate software, the long-term software maintenance cost may become the limiting factor. Two things follow. First, software sustainability will be a big research challenge for computer scientists. Second, it will also be a major business opportunity for firms who can cut the cost.

This paper follows on from our earlier work for the European Commission on what happens to safety regulation in the future Internet of Things.

What Goes Around Comes Around

What Goes Around Comes Around is a chapter I wrote for a book by EPIC. What are America’s long-term national policy interests (and ours for that matter) in surveillance and privacy? The election of a president with a very short-term view makes this ever more important.

While Britain was top dog in the 19th century, we gave the world both technology (steamships, railways, telegraphs) and values (the abolition of slavery and child labour, not to mention universal education). America has given us the motor car, the Internet, and a rules-based international trading system – and has perhaps one generation left in which to make a difference.

Lessig taught us that code is law. Similarly, architecture is policy. The architecture of the Internet, and the moral norms embedded in it, will be a huge part of America’s legacy, and the network effects that dominate the information industries could give that architecture great longevity.

So if America re-engineers the Internet so that US firms can microtarget foreign customers cheaply, so that US telcos can extract rents from foreign firms via service quality, and so that the NSA can more easily spy on people in places like Pakistan and Yemen, then in 50 years’ time the Chinese will use it to manipulate, tax and snoop on Americans. In 100 years’ time it might be India in pole position, and in 200 years the United States of Africa.

My book chapter explores this topic. What do the architecture of the Internet, and the network effects of the information industries, mean for politics in the longer term, and for human rights? Although the chapter appeared in 2015, I forgot to put it online at the time. So here it is now.

End of privacy rights in the UK public sector?

There has already been serious controversy about the “Henry VIII” powers in the Brexit Bill, which will enable ministers to rewrite laws at their discretion as we leave the EU. Now Theresa May’s government has sneaked a new “Framework for data processing in government” into the Lords committee stage of the new Data Protection Bill (see pages 99-101, which are pp 111–3 of the pdf). It will enable ministers to promulgate a Henry VIII privacy regulation with quite extraordinary properties.

It will cover all data held by any public body, including the NHS (175(1)); it will be outside the jurisdiction of the ICO (178(5)) and of any tribunal (178(2)), including judicial review (175(4), 176(7)); and it will sit outside wider human-rights law (178(2,3,4)) and international jurisdictions – although ministers are supposed to change it if they notice that it breaks any international treaty obligation (177(4)).

In fact it will be changeable on a whim by ministers (175(4)), have no effective parliamentary oversight (175(6)), and apply retroactively (178(3)). It will also provide an automatic statutory defence for any data processing in any government decision taken to any tribunal or court (178(4)).

Ministers have had frequent fights in the past over personal data in the public sector, most frequently over medical records, which they have sold, again and again, to drug companies and others in defiance not just of UK law, EU law and human-rights law, but of the express wishes of patients, articulated by opting out of data “sharing”. In fact, we have to thank MedConfidential for being the first to notice the latest data grab. Their briefing gives more details and sets out the amendments we need to press for in Parliament. This is not the only awful thing about the bill by any means; its section 164 will be terrible news for journalists. This is one of those times when you need to write to your MP. Please do it now!

Is this research ethical?

The Economist features face recognition on its front page, reporting that deep neural networks can now tell whether you’re straight or gay better than humans can just by looking at your face. The research they cite is a preprint, available here.

Its authors, Kosinski and Wang, downloaded thousands of photos from a dating site, ran them through a standard feature-extraction program, then classified gay vs straight using a standard statistical classifier, which they found could tell the men seeking men from the men seeking women. My students pretty well instantly called this out as selection bias: if gay men consider boyish faces to be cuter, then they will upload their most boyish photo. The authors suggest their finding may support a theory that sexuality is influenced by fetal testosterone levels, but when you don’t control for such biases, your results may say more about social norms than about phenotypes.
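To illustrate the students’ point, here is a small simulation (my construction, not the paper’s analysis): two groups draw the underlying facial feature from the same distribution, but one group uploads the most flattering of five candidate photos. A standard classifier then separates the groups well above chance, purely from the selection policy.

```python
# Simulated selection bias (illustrative; not Kosinski and Wang's data).
# Both groups have an IDENTICAL underlying "boyishness" distribution,
# but group 1 uploads the maximum of five draft photos.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

group0 = rng.normal(size=n)                   # uploads a random photo
group1 = rng.normal(size=(n, 5)).max(axis=1)  # uploads its "best" of five

X = np.concatenate([group0, group1]).reshape(-1, 1)
y = np.concatenate([np.zeros(n), np.ones(n)])

clf = LogisticRegression().fit(X, y)
print("accuracy:", clf.score(X, y))  # well above 0.5, from selection alone
```

A classifier trained on such data learns the uploading behaviour, not the phenotype, which is exactly why uncontrolled selection biases of this kind can masquerade as a biological signal.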

Quite apart from the scientific value of the research, which is perhaps best assessed by specialists, I’m concerned with the ethics and privacy aspects. I am surprised that the paper doesn’t report having been through ethical review; the authors consider that photos on a dating website are public information and appear to assume that privacy issues simply do not arise.

Yet UK courts decided, in Campbell v Mirror, that privacy could be violated even by photos taken on the public street, and European courts have come to similar conclusions in I v Finland and elsewhere. For example, a Catholic woman is entitled to object to the use of her medical record in research on abortifacients and contraceptives even if the proposed use is fully anonymised and presents no privacy risk whatsoever. The dating site users would be similarly entitled to object to their photos being used in research to which they might have an ethical objection, even if they could not be identified from their photos. There are surely going to be people who object to research in any nature vs nurture debate, especially on a charged topic such as sexuality. And the whole point of the Economist’s coverage is that face-recognition technology is now good enough to work at population scale.

What do LBT readers think?

Is the City force corrupt, or just clueless?

This week brought an announcement from a banking association that “identity fraud” is soaring to new levels, with 89,000 cases reported in the first six months of 2017 and 56% of all fraud reported by its members now classed as “identity fraud”.

So what is “identity fraud”? The announcement helpfully clarifies the concept:

“The vast majority of identity fraud happens when a fraudster pretends to be an innocent individual to buy a product or take out a loan in their name. Often victims do not even realise that they have been targeted until a bill arrives for something they did not buy or they experience problems with their credit rating. To carry out this kind of fraud successfully, fraudsters need access to their victim’s personal information such as name, date of birth, address, their bank and who they hold accounts with. Fraudsters get hold of this in a variety of ways, from stealing mail through to hacking; obtaining data on the ‘dark web’; exploiting personal information on social media, or through ‘social engineering’ where innocent parties are persuaded to give up personal information to someone pretending to be from their bank, the police or a trusted retailer.”

Now back when I worked in banking, if someone went to Barclays, pretended to be me, borrowed £10,000 and legged it, that was “impersonation”, and it was the bank’s money that had been stolen, not my identity. How did things change?

The members of this association are banks and credit card issuers. In their narrative, those impersonated are treated as targets, when the targets are actually the banks on which the impersonation is practised. This is a precursor to refusing bank customers a “remedy” for “their loss” because “they failed to protect themselves.”

Now “dishonestly making a false representation” is an offence under s2 Fraud Act 2006. Yet what is the police response?

The Head of the City of London Police’s Economic Crime Directorate does not see the banks’ narrative as dishonest. Instead he goes along with it: “It has become normal for people to publish personal details about themselves on social media and on other online platforms which makes it easier than ever for a fraudster to steal someone’s identity.” He continues: “Be careful who you give your information to, always consider whether it is necessary to part with those details.” This is reinforced with a link to a police website with supposedly scary statistics: 55% of people use open public wifi and 40% of people don’t have antivirus software (like many security researchers, I’m guilty on both counts). This police website has a quote from the Head’s own boss, a Commander who is the National Police Coordinator for Economic Crime.

How are we to rate their conduct? Given that the costs of the City force’s Dedicated Card and Payment Crime Unit are borne by the banks, perhaps they feel obliged to sing from the banks’ hymn sheet. Just as the Macpherson report criticised the Met for being institutionally racist, we might perhaps describe the City force as institutionally corrupt. There is a wide literature on regulatory capture, and many other examples of regulators keen to do the banks’ bidding. And it’s not just the City force: there are disgraceful examples of the Metropolitan Police Commissioner and GCHQ endorsing the banks’ false narrative. However, people are starting to notice, including the National Audit Office.

Or perhaps the police are just clueless?