Category Archives: Legal issues

Security-related legislation, government initiatives, court cases

How Certification Systems Fail: Lessons from the Ware Report

Research in the Security Group has uncovered various flaws in systems that had been certified as secure. Sometimes the certification criteria were inadequate, and sometimes the certification process was subverted. These failures affect not only the owners of the systems: when evidence of certification comes up in court, the impact can be much wider.

There’s a variety of approaches to certification, ranging from the extremely generic (such as Common Criteria) to the highly specific (such as EMV), but all are (at least partially) descendants of a report by Willis H. Ware – “Security Controls for Computer Systems”. There’s much that can be learned from this report, particularly the rationale for why certification systems are set up the way they are. The differences between how Ware envisaged certification and how certification is now performed are also informative, whether those differences are for good or for ill.

Along with Mike Bond and Ross Anderson, I have written an article for the “Lost Treasures” edition of IEEE Security & Privacy in which we discuss what the Ware report can teach us about how today’s certification systems work, and how they should work. In particular, we explore how the failure to follow the recommendations in the Ware report helps explain why flaws in certified banking systems were not detected earlier. Our article, “How Certification Systems Fail: Lessons from the Ware Report”, is available open-access in the version submitted to the IEEE. The edited version, as it appears in the print edition (IEEE Security & Privacy, volume 10, issue 6, pages 40–44, Nov–Dec 2012. DOI:10.1109/MSP.2012.89), is only available to IEEE subscribers.

Dear ICO: disclose Sony's hash algorithm!

Today the UK Information Commissioner’s Office levied a record £250k fine against Sony over their 2011 Playstation Network breach in which 77 million passwords were stolen. Sony stated that they hashed the passwords, but provided no details. I was hoping that investigators would reveal what hash algorithm Sony used, and in particular if they salted and iterated the hash. Unfortunately, the ICO’s report failed to provide any such details:

The Commissioner is aware that the data controller made some efforts to protect account passwords, however the data controller failed to ensure that the Network Platform service provider kept up with technical developments. Therefore the means used would not, at the time of the attack, be deemed appropriate, given the technical resources available to the data controller.

Given how often I see password implementations use a single iteration of MD5 with no salt, I’d consider that to be the most likely interpretation. It’s inexcusable though for a 12-page report written at public expense to omit such basic technical details. As I said at the time of the Sony Breach, it’s important to update breach notification laws to require that password hashing details be disclosed in full. It makes a difference for users affected by the breach, and it might help motivate companies to get these basic security mechanics right.
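The ICO report doesn’t say what Sony actually did, but for illustration, here is a minimal sketch of what “salted and iterated” hashing means, using PBKDF2 from Python’s standard library (the function names and parameters are mine, not Sony’s):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None, iterations: int = 100_000):
    """Return (salt, iterations, digest) for storage alongside the account."""
    if salt is None:
        salt = os.urandom(16)  # a per-user random salt defeats precomputed tables
    # Iterating the hash (here via PBKDF2-HMAC-SHA256) slows down offline
    # brute force if the password database is ever stolen.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

By contrast, a single unsalted hash lets an attacker test each candidate password against every account at once, which is why the distinction matters to the 77 million users affected.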

Privacy considered harmful?

The government has once again returned to the vision of giving each of us an electronic health record, shared throughout the NHS. This is about the fourth attempt in twenty years, yet the ferocity of the latest push has taken doctors by surprise.

Seventeen years ago, I was advising the BMA on safety and privacy, and we explained patiently why this was a bad idea. The next government went ahead anyway, which led predictably to the disaster of NPfIT. Nonetheless, enough central systems were got working to seriously undermine privacy. Colleagues and I wrote the Database State report on the dangers of such systems; it was adopted as Lib Dem policy, and aspects were adopted by the Conservatives too. That did lead to the abandonment of the ContactPoint children’s database, but there was a rapid U-turn on health privacy after the election.

The big pharma lobbyists got their way after health IT lobbyist Tim Kelsey was appointed as Cameron’s privacy tsar, and it’s all been downhill from there. The minister says we have an opt-out; but no-one seems to have told him that GPs will in future be compelled to upload a lot of information about us through a system called GPES if they want to be paid (they had an opt-out, but it’s being withdrawn from April). And you can’t even register under a false name any more unless you use a stolen passport.

Yet more banking industry censorship

Yesterday, banking security vendor Thales sent this DMCA takedown request to John Young who runs the excellent Cryptome archive. Thales want him to remove an equipment manual that has been online since 2003 and which was valuable raw material in research we did on API security.

Banks use hardware security modules (HSMs) to manage the cryptographic keys and PINs used to authenticate bank card transactions. These used to be thought to be secure. But their application programming interfaces (APIs) had become unmanageably complex, and in the early 2000s Mike Bond, Jolyon Clulow and I found that by sending sequences of commands to the machine that its designers hadn’t anticipated, it was often possible to break the device spectacularly. This became a thriving field of security research.
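To give a flavour of this class of attack, here is a toy model I have made up for illustration (it is not any real HSM’s API). Keys live outside the device wrapped under a master key, tagged with a type such as “KEK” (key-encrypting key) or “DATA”, and the key-export command fails to check that the wrapping key really is a KEK. Two individually legitimate commands then compose into a key-stealing sequence:

```python
import os
from hashlib import sha256

def xor16(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class ToyHSM:
    """Toy model: wrapped key = key XOR pad(master, type). Not a real cipher."""

    def __init__(self):
        self.master = os.urandom(16)  # never leaves the device

    def _pad(self, ktype: str) -> bytes:
        return sha256(self.master + ktype.encode()).digest()[:16]

    def create_key(self, ktype: str):
        """Generate a key inside the HSM; return it wrapped, with its type tag."""
        return ktype, xor16(os.urandom(16), self._pad(ktype))

    def export_key(self, target, wrapper):
        """Re-wrap `target` under `wrapper` for transfer to another device.
        BUG: should insist that `wrapper` has type KEK, but accepts any type."""
        t_type, t_blob = target
        w_type, w_blob = wrapper
        t_clear = xor16(t_blob, self._pad(t_type))
        w_clear = xor16(w_blob, self._pad(w_type))
        return xor16(t_clear, w_clear)

    def decrypt_data(self, key, ciphertext):
        """Decrypt bulk data under a DATA key -- a routine operation."""
        k_type, k_blob = key
        assert k_type == "DATA", "not a data key"
        return xor16(ciphertext, xor16(k_blob, self._pad(k_type)))

hsm = ToyHSM()
pin_key = hsm.create_key("KEK")    # a key we should never see in clear
data_key = hsm.create_key("DATA")  # anyone may request a data key
blob = hsm.export_key(pin_key, data_key)   # wraps the PIN key under a DATA key
stolen = hsm.decrypt_data(data_key, blob)  # ...which the HSM happily unwraps
```

Neither command is wrong on its own; the sequence is. The real attacks (on key types, check values, PIN translation and so on) are more intricate, but they have exactly this shape.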

But while API security has been a goldmine for security researchers, it’s been an embarrassment for the industry, in which Thales is one of two dominant players. Hence the attempt to close down our mine. As you’d expect, the smaller firms in the industry, such as Utimaco, would prefer HSM APIs to be open (indeed, Utimaco sent two senior people to a Dagstuhl workshop on APIs that we held a couple of months ago). Even more ironically, Thales’s HSM business used to be the Cambridge startup nCipher, which helped our research by giving us samples of their competitors’ products to break.

If this case ever comes to court, the judge might perhaps consider the Lexmark case. Lexmark sued Static Control Components (SCC) under the DMCA in order to curtail competition. The court found this abusive and threw out the case. I am not a lawyer, and John Young must clearly take advice. However, this particular case of internet censorship serves no public interest (as with previous attempts by the banking industry to censor security research).

Identifying file sharers — the US approach

Last Friday’s successful appeal in the Golden Eye case will mean that significantly more UK-based broadband users will shortly be receiving letters that say that they appear to have been participating in file sharing activity of pornographic films. Recipients of these letters could do worse than to start by consulting this guide as to what to do next.

Although I acted as an expert witness in the original hearing, I was not involved in the appeal, since it was not concerned with technical matters but with whether Golden Eye could pursue claims for damages on behalf of third-party copyright holders (the court says that they may now do so).

Subsequent to the original hearing, I assisted Consumer Focus by producing an expert report on how evidence in file sharing cases should be collected and processed. I wrote about this here in July.

In September, at the request of Consumer Focus, I attended a presentation given by Ms Marianne Grant, Senior Vice President of the Motion Picture Association of America (MPAA) in which she outlined the way in which rights holders in the United States were proposing to monitor unauthorised file sharing of copyright material.

I had a number of concerns about these proposals and I wrote to Consumer Focus to set these out. I have now noted (somewhat belatedly, hence this holiday season blog post) that Consumer Focus have made this letter available online, along with their own letter to the MPAA.

So 2013 looks like being “interesting times” for Internet traceability — with letters going out in bulk to UK consumers from Golden Eye, and the US “six strikes” process forecast to roll out early next year (albeit it was forecast to start in November 2012, July 2012 and many dates before that, so we shall see).

Will the Information Commissioner be consistent?

This afternoon, the Information Commissioner will unveil a code of practice for data anonymisation. His office is under pressure; as I described back in August, Big Pharma wants all our medical records and has persuaded the Prime Minister it should have access so long as our names and addresses are removed. The theory is that a scientist doing research into cardiology (for example) could have access to the anonymised records of all heart patients.

The ICO’s blog suggests that he will consider data to be anonymous, and thus no longer private, if they cannot be re-identified by reference to any other data already in the public domain. But this is trickier than you might think. For example, Tim Gowers just revealed on his excellent blog that he had an ablation procedure for atrial fibrillation a couple of weeks ago. So if our researcher can search for all males aged 45–54 who had such a procedure on November 6th 2012, he can pull Tim’s record, including everything that Tim intended to keep private. Even with a central cardiology register, it’s hard to think of a practical mechanism that could block Tim’s record as soon as he made that blog post. And now that researchers are starting to carry round millions of people’s records on their laptops, protecting privacy is getting really hard.
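The arithmetic of re-identification is easy to demonstrate. In this made-up miniature dataset (all records invented, names already removed), the handful of facts available from a single public blog post picks out exactly one record:

```python
# Invented records: no real patient data.
records = [
    {"patient": "A", "sex": "M", "age_band": "45-54",
     "procedure": "ablation",    "date": "2012-11-06"},
    {"patient": "B", "sex": "M", "age_band": "45-54",
     "procedure": "angioplasty", "date": "2012-11-06"},
    {"patient": "C", "sex": "F", "age_band": "35-44",
     "procedure": "ablation",    "date": "2012-11-08"},
]

# Every term in the query is public knowledge from the blog post.
matches = [r for r in records
           if r["sex"] == "M" and r["age_band"] == "45-54"
           and r["procedure"] == "ablation" and r["date"] == "2012-11-06"]
# a unique match: the 'anonymised' record has been re-identified
```

With millions of records rather than three, a few quasi-identifiers in combination still routinely narrow the candidates down to one.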

In his role as data protection regulator, the Commissioner has been eager to disregard the risk of re-identification of private information. Yet Maurice Frankel of the Campaign for Freedom of Information has pointed out to me that he regularly applies a very different rule in Freedom of Information cases, including one involving the University of Cambridge. There, he refused a freedom of information request about university dismissals on the grounds that “friends, former colleagues, or acquaintances of a dismissed person may, through their contact with that person, know something of the circumstances of that person’s departure” (see para 30).

So I will be curious to see this afternoon whether the Commissioner places greater value on the consistency of his legal rulings, or their convenience to the powerful.

Who will screen the screeners?

Last time I flew through Luton airport it was a Sunday morning, and I went up to screening with a copy of the Sunday Times in my hand; it’s non-metallic after all. The guard by the portal asked me to put it in the tray with my bag and jacket, and I did so. But when the tray came out, the newspaper wasn’t there. I approached the guard and complained. He tried to dismiss me but I was politely insistent. He spoke to the lady sitting at the screen; she picked up something with a guilty look sideways at me, and a few seconds later my paper came down the rollers. As I left the screening area, there were two women police constables, and I wondered whether I should report the attempted theft of a newspaper. As my flight was leaving in less than an hour, I walked on by. But who will screen the screeners?

This morning I once more flew through Luton, and I started to suspect it wouldn’t be the airport’s management. This time the guard took exception to the size of the clear plastic bag holding my toothpaste, mouthwash and deodorant, showing me with glee that it was half a centimetre wider than the official outline on a card he had right to hand. I should mention that I was using a Sainsbury’s freezer bag, a standard item in our kitchen which we’ve used for travel for years. No matter; the guard ordered me to buy an approved one for a pound from a slot machine placed conveniently beside the belt. (And we thought Ryanair’s threat to charge us a pound to use the loo was just a marketing gimmick.) But what sort of signal do you give to low-wage security staff if the airport merely sees security as an excuse to shake down the public? And after I got through to the lounge and tried to go online, I found that the old Openzone service (which charged by the minute) is no longer on offer; instead Luton Airport now demands five pounds for an hour’s access. So I’m writing this blog post from Amsterdam, and next time I’ll probably fly from Stansted.

Perhaps one of these days I’ll write a paper on “Why Security Usability is Hard”. Meanwhile, if anyone reading this is near Amsterdam on Monday, may I recommend the Amsterdam Privacy Conference? Many interesting people will be talking about the ways in which governments bother us. (I’m talking about how the UK government is trying to nobble the Data Protection Regulation in order to undermine health privacy.)

Chip and Skim: cloning EMV cards with the pre-play attack

November last, on the Eurostar back from Paris, something struck me as I looked at the logs of ATM withdrawals disputed by Alex Gambin, a customer of HSBC in Malta. Comparing four grainy log pages on a tiny phone screen, I had to scroll away from the transaction data to see the page numbers, so I couldn’t take in the big picture in one go. Instead I differentiated pages using the EMV Unpredictable Number field – a 32-bit field that’s supposed to be unique to each transaction. I soon got muddled up… it turned out that the unpredictable numbers… well… weren’t. Each shared 17 bits, and the remaining 15 looked at first glance like a counter. The numbers are tabulated as follows:

F1246E04
F1241354
F1244328
F1247348
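You can check the shared-prefix claim mechanically from these four values: ORing together the XOR-differences shows which bit positions ever vary across the transactions.

```python
# The four unpredictable numbers from the disputed-transaction logs:
uns = [0xF1246E04, 0xF1241354, 0xF1244328, 0xF1247348]

# OR together each value's XOR-difference from the first; any bit set
# in `diff` varied across the four transactions.
diff = 0
for u in uns[1:]:
    diff |= u ^ uns[0]

shared_bits = 32 - diff.bit_length()   # length of the constant high prefix
varying_bits = 32 - shared_bits        # the 'counter-like' low bits
```

For these logs, shared_bits comes out at 17, leaving 15 bits that vary.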

And with that, the ball started rolling on an exciting direction of research that’s kept us busy for the last nine months. You see, an EMV payment card authenticates itself with a MAC of transaction data, for which the freshly generated component is the unpredictable number (UN). If you can predict the UN, you can record everything you need from momentary access to a chip card to play it back and impersonate the card at a future date and location. You can as good as clone the chip. It’s called a “pre-play” attack. Just like most vulnerabilities we find these days, some in industry already knew about it but covered it up; we have indications that the crooks know about it too, and we believe it explains a good portion of the unsolved phantom withdrawal cases reported to us, for which we until recently had no explanation.

Mike Bond, Omar Choudary, Steven J. Murdoch, Sergei Skorobogatov, and Ross Anderson wrote a paper on the research; we discovered that the significance of these numbers went far beyond this one case. Steven is presenting our work as keynote speaker at Cryptographic Hardware and Embedded Systems (CHES) 2012 in Leuven, Belgium.


The rush to 'anonymised' data

The Guardian has published an op-ed I wrote on the risks of anonymised medical records along with a news article on CPRD, a system that will make our medical records available for researchers from next month, albeit with the names and addresses removed.

The government has been pushing for this since last year, having appointed medical datamining enthusiast Tim Kelsey as its “transparency tsar”. There have been two consultations on how records should be anonymised, and how effective anonymisation could be; you can read our responses here and here (see also the FIPR blog here). Anonymisation has long been known to be harder than it looks (and the Royal Society recently issued an authoritative report which said so). But getting civil servants to listen to X when the Prime Minister has declared for Not-X is harder still!

Despite promises that the anonymity mechanisms would be open for public scrutiny, CPRD refused a Freedom of Information request to disclose them, apparently fearing that disclosure would damage security. Yet research papers written using CPRD data will surely have to disclose how the data were manipulated. So the security mechanisms will become known anyway, and researchers will sometimes be careless with them. I fear we can expect a lot more incidents like this one.

Debunking cybercrime myths

Our paper Measuring the Cost of Cybercrime sets out to debunk the scaremongering around online crime that governments and defence contractors are using to justify everything from increased surveillance to preparations for cyberwar. It will appear at the Workshop on the Economics of Information Security later this month. There’s also some press coverage.

Last year the Cabinet Office published a report by Detica claiming that cybercrime cost the UK £27bn a year. This was greeted with derision, whereupon the Ministry of Defence’s chief scientific adviser, Mark Welland, asked us whether we could come up with some more defensible numbers.

We assembled a team of experts and collated what’s known. We came up with a number of interesting conclusions. For example, we compared the direct costs of cybercrimes (the amount stolen) with the indirect costs (costs in anticipation, such as countermeasures, and costs in consequence, such as paying compensation). With traditional crimes that are now classed as “cyber” because they’re done online, such as welfare fraud, the indirect costs are much less than the direct ones; while for “pure” cybercrimes that didn’t exist before (such as fake antivirus software), the indirect costs are much greater. As a striking example, the botnet behind a third of the spam in 2010 earned its owner about $2.7m, while the worldwide costs of fighting spam were around $1bn.
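The asymmetry in the spam example is stark when the two figures quoted above are put side by side (rough estimates only, good to an order of magnitude):

```python
direct_gain = 2.7e6     # roughly what the botnet's owner earned in 2010, USD
indirect_cost = 1.0e9   # rough worldwide spend on fighting spam, USD

ratio = indirect_cost / direct_gain
# society spends on the order of $370 defending for every $1 the spammer earns
```

It is this kind of lopsided ratio, typical of the "pure" cybercrimes, that drives our conclusion about where the money should go.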

Some of the reasons for this are already well-known; traditional crimes tend to be local, while the more modern cybercrimes tend to be global and have strong externalities. As for what should be done, our research suggests we should perhaps spend less on technical countermeasures and more on locking up the bad guys. Rather than giving most of its cybersecurity budget to GCHQ, the government should improve the police’s cybercrime and forensics capabilities, and back this up with stronger consumer protection.