Category Archives: Security engineering

Bad security, good security, case studies, lessons learned

Here we go again

The Sunday media have been trailing a speech by David Cameron tomorrow about giving us online access to our medical records and our kids’ school records, and making anonymised versions of them widely available to researchers, companies and others. Here is coverage in the BBC, the Mail and the Telegraph; there’s also a Cabinet Office paper. The measures are supported by the CEO of Glaxo and opposed by many NGOs.

If the Government is going to “ensure all NHS patients can access their personal GP records online by the end of this Parliament”, they’ll have to compel the thousands of GPs who still keep patient records on their own machines to transfer them to centrally-hosted facilities. Once that happens, the systems will be maintained by people who have to please the Secretary of State rather than GPs, and will become progressively less useful. This won’t just waste doctors’ time but will have real consequences for patient safety and the quality of care.

We’ve seen this repeatedly over the lifetime of NPfIT and its predecessor the NHS IM&T strategy. Officials who can’t develop working systems become envious of systems created by doctors; they wrest control, and the deterioration starts.

It’s astounding that a Conservative prime minister could get the idea that nationalising something is the best way to make it work better. It’s also astonishing that a Government containing Liberals who believe in human rights, the rule of law and privacy should support the centralisation of medical records a mere two years after the Joseph Rowntree Reform Trust, a Liberal charity, produced the Database State report, which explained how the centralisation of medical records (and for that matter children’s records) destroys privacy and contravenes human-rights law.

The coming debate will no doubt be vigorous and will draw on many aspects of information security: from the dreadful security usability (and safety usability) of centrally-purchased NHS systems, through the real hazards of coerced access by vulnerable patients, to the fact that anonymisation doesn’t really work (see the toy example below). There’s much more here. Of course the new centralisation effort will probably fail, just like the last two; health informatics is a hard problem, and even Google gave up. But our privacy should not depend on the government being incompetent at wrongdoing. It should refrain from wrongdoing in the first place.
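On the anonymisation point, a toy example shows why stripping names is not enough: records that keep date of birth and postcode can be linked straight back to individuals using a public register such as the electoral roll. This sketch uses made-up data, but it is exactly the linkage attack the re-identification literature describes.

    # Toy re-identification: "anonymised" records that retain date of birth
    # and postcode can be joined against a public register. All data made up.
    anonymised_records = [
        {"dob": "1961-04-03", "postcode": "CB3 0FD", "diagnosis": "asthma"},
    ]
    electoral_roll = [
        {"name": "A. Patient", "dob": "1961-04-03", "postcode": "CB3 0FD"},
    ]
    for rec in anonymised_records:
        for person in electoral_roll:
            if (rec["dob"], rec["postcode"]) == (person["dob"], person["postcode"]):
                print(f'{person["name"]} has {rec["diagnosis"]}')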

Want to create a really strong password? Don’t ask Google

Google recently launched a major advertising campaign around its “Good to Know” guides to online safety and privacy. Google’s password advice has appeared on billboards in the London Underground and in a full-page ad in The Economist. Their example of a “very strong password” is ‘2bon2btitq’, taken from the famous Hamlet quote “To be or not to be, that is the question”.
Empirically, though, this is not a strong password: it’s almost exactly average! Continue reading…
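One way to make such a claim empirical, rather than a matter of eyeballing character classes, is to rank the password by popularity in a leaked corpus. A minimal sketch follows; the RockYou list is the usual research dataset, and the file path is illustrative. As the comment notes, absence from one leak is only weak evidence of strength.

    # Sketch: rank a candidate password by its popularity in a leaked corpus.
    # "rockyou.txt" is the standard research dataset; the path is illustrative.
    from collections import Counter

    def password_rank(pw, corpus_path="rockyou.txt"):
        """Return the 1-based popularity rank of pw in the corpus, or None."""
        with open(corpus_path, encoding="latin-1") as f:
            counts = Counter(line.rstrip("\n") for line in f)
        if pw not in counts:
            return None  # absence from one leak is weak evidence of strength
        by_popularity = sorted(counts, key=counts.get, reverse=True)
        return by_popularity.index(pw) + 1

    print(password_rank("2bon2btitq"))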

Trusted Computing 2.1

We’re steadily learning more about the latest Trusted Computing proposals. People have started to grok that building signed boot into UEFI will extend Microsoft’s power over the markets for AV software and other security tools that install around boot time, while ‘Metro’-style apps (i.e. web/tablet/html5-style stuff) could be limited to distribution via the MS app store. Even if users can opt out, most of them won’t. That’s a lot of firms suddenly finding Steve Ballmer’s boot on their jugular.

We’ve also been starting to think about the issues of law-enforcement access that arose during the crypto wars and that have come to light again with certificate authorities (CAs). These issues are even more wicked with trusted boot. If the Turkish government compelled Microsoft to include the Tubitak key in Windows so their intelligence services could do man-in-the-middle attacks on Kurdish MPs’ Gmail, then I expect they’ll also tell Microsoft to issue them a UEFI key to authenticate their keylogger malware. Hey, I removed the Tubitak key from my browser, but how do I identify and block all foreign governments’ UEFI keys?

Our Greek colleagues are already a bit cheesed off with Wall Street. How happy will they be if, in future, they can’t install the security software of their choice on their PCs, but the Turkish secret police can?

PhD Studentship in Mobile Payments

We’ve been offered funding for a PhD student to work at the University of Cambridge Computer Laboratory on the security of mobile payments, starting in April 2012.

The objective is to explore how we can make mobile payment systems dependable despite the presence of malware. Research topics include the design of next-generation secure element hardware, trustworthy user interfaces, and mechanisms to detect and recover from compromise. Relevant skills include Android, payment protocols, human-computer interaction, hardware and software security, and cryptography.

As the sponsor wishes to start the project by April, we strongly encourage applications by 28 October 2011 (although candidates who do not need a visa to work in the UK might conceivably apply as late as early December). Enquiries should be directed to Ross Anderson.

Trusted Computing 2.0

There seems to be an attempt to revive the “Trusted Computing” agenda. The vehicle this time is UEFI, which sets the standards for PC firmware (the successor to the BIOS). Proposed changes to the UEFI firmware spec would enable (in fact require) next-generation PC firmware to boot only an image signed by a keychain rooted in keys built into the PC. I hear that Microsoft (and others) are pushing for this to be mandatory, so that it cannot be disabled by the user, and it would be required for OS badging. There are some technical details here and here, and comment here.
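To make the proposed mechanism concrete, here is a minimal sketch of the boot-time policy. It is illustrative Python, not real firmware code (UEFI implementations are C and check Authenticode signatures against the platform’s key database), but the logic is the point: nothing runs unless its signature verifies under a key baked into the machine.

    # Illustrative sketch of signed boot, using the pyca/cryptography library.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    def firmware_will_boot(image, signature, platform_keys):
        """Boot only if some key built into the platform signed this image."""
        for pub in platform_keys:  # keys burned in at manufacture
            try:
                pub.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
                return True
            except InvalidSignature:
                continue
        return False  # an unsigned OS is simply refused

    # Demo: a vendor-signed image boots; your own kernel build does not.
    vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    image = b"contents of BOOTX64.EFI"
    sig = vendor_key.sign(image, padding.PKCS1v15(), hashes.SHA256())
    assert firmware_will_boot(image, sig, [vendor_key.public_key()])
    assert not firmware_will_boot(b"my own kernel", sig, [vendor_key.public_key()])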

These issues last arose in 2003, when we fought back with the Trusted Computing FAQ and economic analysis. That initiative petered out after widespread opposition. This time round the effects could be even worse, as “unauthorised” operating systems like Linux and FreeBSD just won’t run at all. (On an old-fashioned Trusted Computing platform you could at least run Linux – it just couldn’t get at the keys for Windows Media Player.)

The extension of Microsoft’s OS monopoly to hardware would be a disaster, with increased lock-in, decreased consumer choice and lack of space to innovate. It is clearly unlawful and must not succeed.

Randomly-generated passwords at myBART

Last week, in retaliation against the heavy-handed response to planned protests against the BART metro system in California, the hacktivist group Anonymous hacked into several BART servers. They leaked part of a database of users from myBART, a website which provides frequent BART riders with email updates about activities near BART stations. An interesting aspect of the leak is that 1,346 of the 2,002 accounts seem to have randomly-generated passwords: a rare opportunity to study this approach to password security. Continue reading…
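A back-of-the-envelope calculation shows what randomly-assigned passwords buy. Assuming, purely for illustration, eight characters drawn uniformly from lowercase letters and digits (I have not verified the exact format myBART used):

    import math

    # Entropy of a password of `length` characters drawn uniformly at random
    # from an alphabet of `alphabet_size` symbols: length * log2(alphabet_size).
    alphabet_size = 26 + 10   # assumed: lowercase letters plus digits
    length = 8                # assumed length; the real format may differ
    bits = length * math.log2(alphabet_size)
    print(f"{bits:.1f} bits, {alphabet_size ** length:.3e} possible passwords")

Around 41 bits is far stronger than typical user-chosen passwords, though of course an assigned password the user cannot remember tends to get written down or reset.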

Pico: no more passwords (at Usenix Security)

The usability community has long complained about the problems of passwords (remember the Adams and Sasse classic). These days, even our beloved XKCD has something to say about the difficulties of coming up with a password that is easy to memorize and hard to brute-force. The sensible strategy suggested in the comic, of using a passphrase made of several common words, is also the main principle behind Jakobsson and Akavipat’s fastwords. It’s a great suggestion. However, in the long term, no solution that requires users to remember secrets is going to scale to hundreds of different accounts, if all those remembered secrets have to be different (and changed every couple of months).
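The comic’s scheme is easy to state: pick k words independently and uniformly at random from a list of n common words, for k·log2(n) bits of entropy. A toy sketch, with a stand-in wordlist (a real implementation would use a few thousand common words):

    import math
    import secrets

    # Toy xkcd-style passphrase generator. WORDS is a stand-in; entropy
    # scales with the size of the real list.
    WORDS = ["correct", "horse", "battery", "staple", "orange", "river",
             "window", "cloud", "paper", "stone", "light", "table"]

    def passphrase(words, k=4):
        """k words chosen independently and uniformly: k*log2(len(words)) bits."""
        return " ".join(secrets.choice(words) for _ in range(k))

    print(passphrase(WORDS))
    print(f"{4 * math.log2(2048):.0f} bits")  # the comic's 2048-word list: 44 bits

Even at 44 bits apiece, though, the scaling problem remains: hundreds of accounts would mean hundreds of distinct memorised phrases.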

This is why, as I previously blogged, I am exploring the space of solutions that do not require the memorization of any secrets—whether passwords, passphrases, PINs, faces, graphical squiggles or anything else. My SPW paper, Pico: No more passwords, was finalized in June (including improvements suggested in the comments to the previous blog post) and I am about to give an invited talk on Pico at Usenix Security 2011 in San Francisco.

Usenix talks are recorded and the video is posted next to the abstracts: if you are so inclined, you will be able to watch my presentation shortly after I give it.

To encourage adoption, I chose not to patent any aspect of Pico. If you wish to collaborate, or fund this effort, talk to me. If you wish to build or sell it on your own, be my guest. No royalties due—just cite the paper.

Phone hacking, technology and policy

Britain’s phone hacking scandal touches many issues of interest to security engineers. Murdoch’s gumshoes listened to celebs’ voicemail messages using default PINs. They used false-pretext phone calls – blagging – to get banking and medical records.

We’ve known for years that private eyes blag vast amounts of information (2001 book, from page 167; 2006 ICO Report). Centralisation and the ‘Cloud’ are making things worse. Twenty years ago, your bank records were available only in your branch; now any teller at any branch can look them up. The dozen people who work at your doctor’s surgery used to be able to keep a secret, but can the 840,000 staff with a logon to our national health databases?

Attempts to fix the problem using the criminal justice system have failed. When blagging was made illegal in 1995, the street price of medical records actually fell from £200 to £150! Parliament increased the penalty from fines to jail in 2006 but media pressure scared ministers off implementing this law.

Our Database State report argued that the wholesale centralisation of medical and other records was unsafe and illegal; and the NHS Personal Demographics Service (PDS) database appears to be the main one used to find celebs’ ex-directory numbers. Opt-out is no answer. First, celebs can opt out, but most of them are unaware of PDS abuse, so they don’t. Second, you can become a celeb instantly if you are a victim of crime, war or terror. Third, even if you do opt out, the gumshoes can just bribe policemen, who have access to just about everything.

In future, security engineers must pay much more attention to compartmentation (even the Pentagon is now starting to get it), and we must be much more wary about the risk that law-enforcement access to information will be abused.

TalkTalk’s new blocking system

Back in January I visited TalkTalk along with Jim Killock of the Open Rights Group (ORG) to have their new Internet blocking system explained to us. The system was announced yesterday, and I’m now publishing my technical description of how it works (note that it was called “BrightFeed” when we saw it, but is now named “HomeSafe”).

Buried in all the detail of how the system works are two key points. The first is the notion that it is possible for a centralised checking system (especially one that tells a remote site its identity) to determine whether sites are malicious or not. This is problematic, and I doubt that malware distributors will see it as much of a challenge (a sketch of trivial cloaking appears below) — although, on the other hand, perhaps by setting your browser’s User Agent string to pretend to be the checking system you might become rather safer!

The second is that although the system is described as “opt in”, that only applies to whether or not websites you visit might be blocked. What is not “opt in” is TalkTalk learning the details of the URLs that all of their customers visit, whether they have opted in or not. All of these sites will be visited by TalkTalk’s automated system — which may take some explaining if a remote site gave you a URL in confidence and is checking its logs to see who visits.
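Returning to the first point: a malicious server needs very little code to cloak, serving an innocuous page to anything announcing itself as the checker and the exploit to everyone else. A sketch follows, noting that the User-Agent string here is made up, since the real system’s is not public.

    # Sketch of scanner cloaking. The SCANNER_UA value is hypothetical.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SCANNER_UA = "HomeSafe-Checker"  # made-up identifier for the checking system

    class CloakingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Clean page for the checker, nasty page for real visitors.
            if SCANNER_UA in self.headers.get("User-Agent", ""):
                body = b"<html>Nothing to see here.</html>"
            else:
                body = b"<html><!-- exploit would be served here --></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), CloakingHandler).serve_forever()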

On their site, ORG have expressed an opinion as to whether the system can be operated lawfully, along with TalkTalk’s own legal analysis. TalkTalk argue that the system’s purpose is to protect their network, which gives them a statutory exemption from wire-tapping legislation; whereas all the public relations material seems to think it’s been developed to protect the users…

… in the end, though, the system will be judged by its effectiveness, and in a world where less than 20% of new threats are detected, that may not be all that high.