Category Archives: Security engineering

Bad security, good security, case studies, lessons learned

Government ignores Personal Medical Security

The Government has just published their response to the Health Committee’s report on The Electronic Patient Record. This response is shocking but not surprising.

For example, on pages 6-7 the Department reject the committee’s recommendation that sealed-envelope data should be kept out of the secondary uses service (SUS). Sealed-envelope data is the stuff you don’t want shared, and SUS is the database that gives civil servants, medical researchers and others access to masses of health data. The Department’s justification (para 4, page 6) is not just an evasion but is simply untruthful: they claim that the design of SUS `ensures that patient confidentiality is protected’ when in fact it doesn’t. The data there are not pseudonymised (though the government says it’s setting up a research programme to look at this – report p 23), and many organisations already have access.

The Department also refuses to publish information about security evaluations, test results and breaches (p9) and reliability failures (p19). Their faith in security-by-obscurity is touching.

The biggest existing security problem in the NHS – that many staff carelessly give out data on the phone to anyone who asks for it – will be subject to `assessment’, which `will feed into the further implementation’. Yeah, I’m sure. But as for the recommendation that the NHS provide a substantial audit resource – as there is to detect careless and abusive disclosure from the police national computer – we just get a long-winded evasion (pp 10-11).

Finally, the fundamental changes to the NPfIT business process that would be needed to make the project work are rejected (pp 14-15): Sir Humphrey will maintain central control of IT and there will be no `catalogue’ of approved systems from which trusts can choose. And the proposals that the UK participate in open standards, along the lines of the more successful Swedish or Dutch model, draw just a long evasion (p16). I fear the whole project will just continue on its slow slide towards becoming the biggest IT disaster ever.

Counters, Freshness, and Implementation

When we want to check the freshness of cryptographically secured messages, we have to use monotonic counters, timestamps or random nonces. Each of these mechanisms increases the complexity of a given system in a different way. Freshness based on counters seems to be the easiest to implement in the context of ad-hoc mesh wireless networks: one does not need to spend power on an extra challenge message (containing a new random number), nor is there any need for precise time synchronisation. It sounds easy, but people in the real world are … creative. We have been working with TinyOS, an operating system designed for constrained hardware. TinyOS is quite a modular platform; even mesh networking is not part of the system’s core but is just one of the modules, which can easily be replaced or not used at all.
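For illustration, here is a minimal sketch of how counter-based freshness checking works, written in Python rather than TinyOS’s nesC; the function names, and the absence of rollover and persistence handling, are simplifying assumptions:

```python
# A minimal sketch of counter-based freshness checking (illustrative
# Python, not TinyOS/nesC). Each sender keeps a monotonically
# increasing counter; the receiver remembers the highest counter
# accepted per sender and rejects anything at or below it, which
# defeats simple replay.

last_seen = {}  # sender id -> highest counter accepted so far

def accept_frame(sender, counter, payload):
    """Accept a frame only if its counter is strictly greater than
    the last one accepted from this sender."""
    if counter <= last_seen.get(sender, -1):
        return False  # replayed or stale frame
    # In a real design the counter must be covered by the MAC, so an
    # attacker cannot splice a higher counter onto an old frame.
    last_seen[sender] = counter
    return True
```

The interesting failure modes are exactly the ones a sketch like this glosses over: covering the counter with the MAC, persisting it across reboots, and coping with rollover.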

Frame structures for TinyOS and TinySec on top of 802.15.4
Fig.: Structures of TinyOS and TinySec frames with all the counters. TinySec increases the length of the “data” field to store the initialisation vector.

Time to forget?

In a few hours’ time, Part III of the Regulation of Investigatory Powers Act 2000 will come into effect. The commencement order means that as of October 1st a section 49 notice can be served which requires that encrypted data be “put into an intelligible form” (what you and I might call “decrypted”). Extended forms of such a notice may, under the provisions of s51, require you to hand over your decryption key, and/or under s54 include a “no tipping off” provision.

If you fail to comply with a notice then you will have committed an offence, for which the maximum penalty is two years’ imprisonment, a fine, or both. It’s five years for breaching a “tipping off” requirement by telling someone about the notice, and also five years for non-compliance (an amendment in s15 of the Terrorism Act 2006) if the case relates to “national security”.

By convention, laws in the UK very seldom have retrospective effect, so that if you do something today, Parliament is very loth to pass a law tomorrow to make your actions illegal. However, the offences in Part III relate to failing to obey a s49 notice and that notice could be served on you tomorrow (or thereafter), but the material may have been encrypted by you today (or before).

Potentially therefore, the police could start demanding the putting into an intelligible form, not only of information that they seize in a raid tomorrow morning, but also of material that they seized weeks, months or years ago. In the 1995 Smith case (part of Operation Starburst), the defendant only received a suspended sentence because the bulk of the material was encrypted. In this particular example, the police may be constrained by double jeopardy or the time that has elapsed from serving a notice on Mr Smith, but there’s nothing in RIP itself, or the accompanying Code of Practice, to prevent them serving a s49 notice on more recently seized encrypted material if they deem it to be necessary and proportionate.

In fact, they might even be nipping round to Jack Straw’s house demanding a decryption key — as this stunt from 1999 makes possible (the wording of the predecessor bill was rather more inane than the text that RIP was (eventually) amended to contain).

There are some defences in the statute to failing to comply with a notice — one of which is that you can claim to have forgotten the decryption key (in practice, the passphrase under which the key is stored). In such a case the prosecution (the burden of proof was amended during the passage of the Bill) must show beyond a reasonable doubt that you have not forgotten it. Since they can’t mind-read, the expectation must be that they would attempt to show regular usage of the passphrase, and invite the jury to conclude that the forgetting has been faked — and this might be hard to manage if a hard disk has been in a police evidence store for over a decade.

However, if you’re still using such a passphrase and still have access to the disk, and if the contents are going to incriminate you, then perhaps a sledgehammer might be a suitable investment.

Me? I set up my alibi long ago 🙂

Keep your keypads close

On a recent visit to a local supermarket I noticed something new being displayed on the keypad before the transaction starts:

Did you know that you can remove the PIN pad to enter your PIN?


Picking up the keypad allows the cardholder to align it such that bystanders, or the merchant, cannot observe the PIN as it is entered. On the one hand, this seems sensible (if we assume that the only way to get the PIN is by observation, that no cameras are present, and that even more cardholder liability is the solution for card fraud). On the other hand, it also makes some attacks easier. Consider, for example, the relay attack we demonstrated earlier this year, where the crook inserts a modified card into the terminal, hoping that the merchant does not ask to examine it. Allowing the cardholder to move the keypad separates the merchant, who could detect the attack, from the transaction. Can I now hide the terminal under my jacket while the transaction is processed? Can I turn my back to the merchant? What if I found a way to tamper with the terminal? Clearly, this would make the process easier for me. We’ve been doing some more work on payment terminals and will hopefully have more to say about it soon.


NHS Computer Project Failing

The House of Commons Health Select Committee has just published a Report on the Electronic Patient Record. This concludes that the NHS National Programme for IT (NPfIT) – the 20-billion-pound project to rip out all the computers in the NHS and replace them with systems that store data in central server farms rather than in the surgery or hospital – is failing to meet its stated core objective of providing clinically rich, interoperable detailed care records. What’s more, privacy’s at serious risk. Here is comment from e-Health Insider.

For the last few years I’ve been using the London Ambulance Service disaster as the standard teaching example of how things go wrong in big software projects. It looks like I will have to refresh my notes for the Software Engineering course next month!

I’ve been warning about the safety and privacy risks of the Department of Health’s repeated attempts to centralise healthcare IT since 1995. Here is an analysis of patient privacy I wrote earlier this year, and here are my older writings on the security of clinical information systems. It doesn’t give me any great pleasure to be proved right, though.

Embassy email accounts breached by unencrypted passwords

When it rains, it pours. Following the fuss over the Storm worm impersonating Tor, today Wired and The Register are covering the story of Dan Egerstad, who intercepted embassy email account passwords by setting up five Tor exit nodes, then published the results online. People have been sniffing passwords on Tor before, and one even published a live feed. However, the sensitivity of embassies as targets, and the initial mystery over how the passwords were snooped, helped drum up media interest.

That unencrypted traffic can be read by Tor exit nodes is an unavoidable fact – if the destination does not accept encrypted information then there is nothing Tor can do to change this. The download page has a big warning, recommending that users adopt end-to-end encryption. In some cases this might not be possible, for example when browsing sites which do not support SSL, but for downloading email, not using encryption with Tor is inexcusable.
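By way of illustration, switching a mail-collection script from cleartext IMAP to IMAP over SSL/TLS is typically a one-line change; here is a minimal Python sketch (the hostname and credentials are placeholders):

```python
import imaplib

# Collect mail over SSL/TLS so that a Tor exit node (or anyone else on
# the path) sees only ciphertext. Hostname and credentials below are
# placeholders, not any real server.
conn = imaplib.IMAP4_SSL("imap.example.org", 993)  # instead of IMAP4(host, 143)
try:
    conn.login("username", "password")
    conn.select("INBOX")
    status, data = conn.search(None, "UNSEEN")
    print(status, data)
finally:
    conn.logout()
```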

Looking at who owns the IP addresses of the compromised email accounts, I can see that they are mainly commercial ISPs, generally in the country where the embassy is located, so probably set up by the individual embassy and not subject to any server-imposed security policies. Even so, it is questionable whether such accounts should be used for official business, and it is not hard to find providers which support encrypted access.

The exceptions are Uzbekistan and Iran, whose servers are controlled by their respective Ministries of Foreign Affairs, so I’m surprised that secure access is not mandated (even my university requires this). I did note that the passwords of the Uzbek accounts are very good, so they might well be allocated centrally according to a reasonable password policy. In contrast, the Iranian passwords are simply the name of the embassy, so guessable not only for these accounts but for any others too.

In general, if you are sending confidential information over the Internet unencrypted you are at risk, and Tor does not change this fact, but it does move those risks around. Depending on the nature of the secrets, this could be for better or for worse. Without Tor, data can be intercepted near the server, near the client and also in the core of the Internet; with Tor, data is encrypted near the client but can be seen by the exit node.

Users of unknown Internet cafés or of poorly secured wireless are at risk of interception near the client. Sometimes there is motivation to snoop traffic there but not at the exit node: people may be curious what websites their flatmates browse, but it is not interesting to know that an anonymous person is browsing a controversial site. This is why, at conferences, I tunnel my web browsing via Cambridge. I know that without end-to-end encryption my data can be intercepted, but the sysadmins at Cambridge have far less incentive to misbehave than some joker sitting behind me.

Tor has similar properties, but when used with unencrypted data the risks need to be carefully evaluated. When collecting email, be it over Tor, over wireless, or via any other untrustworthy medium, end-to-end encryption is essential. It is disappointing to learn that embassies, which are supposed to be security conscious, do not appreciate this.

Although I am a member of the Tor project, the views expressed here are mine alone and not those of Tor.

Analysis of the Storm Javascript exploits

On Monday I formally joined the Tor project and it certainly has been an interesting week. Yesterday, on both the Tor internal and public mailing lists, we received several reports of spam emails advertising Tor. Of course, this wasn’t anything to do with the Tor project, and the included link was to an IP address (it varied across emails). On visiting this webpage (below), the user was invited to download tor.exe, which was not Tor but instead a trojan which, if run, would recruit the computer into the Storm (aka Peacomm and Nuwar) botnet, now believed to be the world’s largest supercomputer.

Spoofed Tor download site

Ben Laurie, amongst others, has pointed out that this attack shows that Tor must have a good reputation for it to be considered worthwhile to impersonate. So while dealing with this incident has been tedious, it could be considered a milestone in Tor’s progress. It has also generated some publicity on a few blogs. Tor has long promoted procedures for verifying the authenticity of downloads, and this attack justifies the need for such diligence.

One good piece of advice, often mentioned in relation to the Storm botnet, is that recipients of spam email should not click on the link. This is because there is malicious Javascript embedded in the webpage, intended to exploit web-browser vulnerabilities to install the trojan without the user even having to click on the download link. What I did not find much discussion of is how the exploit code actually worked.

Notably, the malware distribution site will send you different Javascript depending on the user-agent string sent by the browser. Some get Javascript tailored for vulnerabilities in that browser/OS combination while the rest just get the plain social-engineering text with a link to the trojan. I took a selection of popular user-agent strings, and investigated what I got back on sending them to one of the malware sites.
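Here is a minimal sketch of how one might run that comparison; the URL is a placeholder, and such probes should of course only be made from an isolated, disposable machine:

```python
import hashlib
import urllib.request

# Fetch the same page with different User-Agent strings and fingerprint
# each response; differing digests show the server is tailoring its
# Javascript to the claimed browser/OS combination.
AGENTS = [
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.6) "
    "Gecko/20070725 Firefox/2.0.0.6",
    "Opera/9.23 (Windows NT 5.1; U; en)",
]

for ua in AGENTS:
    req = urllib.request.Request("http://malware.example/",
                                 headers={"User-Agent": ua})
    body = urllib.request.urlopen(req, timeout=10).read()
    digest = hashlib.sha1(body).hexdigest()[:12]
    print(f"{len(body):7d} bytes  {digest}  {ua}")
```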


The dinosaurs of five years ago

A project called NSA@home has been making the rounds. It’s a gem. Stanislaw Skowronek got some old HDTV hardware off eBay and managed to build himself a brute-force pre-image attack machine against SHA-1. The claim is that it can find a pre-image for an 8-character password hash, over a 64-character set, in about 24 hours.

The key here is that this hardware board uses 15 field programmable gate arrays (FPGAs), which are generic integrated circuits that can perform any logic function within their size limit. Stanislaw reverse engineered the connections between the FPGAs, wrote his own designs, and now has a very powerful processing unit. FPGAs are better than general-purpose CPUs at specific tasks, especially functions that can be divided into many independently-running smaller chunks operating in parallel. Some cryptographic functions are a perfect match: our own Richard Clayton and Mike Bond attacked the DES implementation in the IBM 4758 hardware security module using an FPGA prototyping board; DES was also attacked on the Transmogrifier 2a, an FPGA-based custom hardware platform; more recently, the purpose-built COPACOBANA machine used 120 low-end FPGAs operating in parallel to break DES in about 7 days; a proprietary stream cipher on RFID tokens was attacked using 16 commercial FPGA boards operating in parallel; and people are now in the midst of cracking the A5 stream cipher in real time using commercial FPGA modules. The unique development we see with NSA@home is that it uses a defunct piece of hardware.
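To put the claimed speed in perspective: an 8-character password over a 64-character set gives 64^8 = 2^48 ≈ 2.8 × 10^14 candidates, so covering the whole space in 24 hours means sustaining over three billion SHA-1 computations per second. For comparison, here is the naive search in Python (the character set and digest format are assumptions), which runs orders of magnitude more slowly on a general-purpose CPU:

```python
import hashlib
from itertools import product

# Naive SHA-1 pre-image search over an assumed 64-character set.
# 64**8 = 2**48 ~= 2.8e14 candidates; searching them all in 24 hours
# requires ~3.3e9 hashes/second -- the gap the FPGA board closes.
CHARSET = (b"abcdefghijklmnopqrstuvwxyz"
           b"ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789./")

def find_preimage(target_digest, length=8):
    """Return a password whose SHA-1 digest equals target_digest,
    or None once the space is exhausted."""
    for candidate in product(CHARSET, repeat=length):
        pw = bytes(candidate)
        if hashlib.sha1(pw).digest() == target_digest:
            return pw
    return None
```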


Econometrics of wickedness

Last Thursday I gave a tech talk at Google; you can now watch it online. It’s about work a number of us have done on searching for covert communities, with a focus on reputation thieves, phishermen, fake banks and other dodgy businesses.

While in California I also gave a talk on Information Security Economics, first as a keynote talk at Crypto and later as a seminar at Berkeley (the slides are here).

Chip-and-PIN relay attack paper wins "Best Student Paper" at USENIX Security 2007

In May 2007, Saar Drimer and Steven Murdoch posted about “Distance bounding against smartcard relay attacks”. Today their paper won the “Best Student Paper” award at USENIX Security 2007 and their slides are now online. You can read more about this work on the Security Group’s banking security web page.

Steven and Saar at USENIX Security 2007