Category Archives: Security engineering

Bad security, good security, case studies, lessons learned

When Layers of Abstraction Don’t Get Along: The Difficulty of Fixing Cache Side-Channel Vulnerabilities

(co-authored with Robert Watson)

Recently, our group was treated to a presentation by Ruby Lee of Princeton University, who discussed novel cache architectures that can prevent some cache-based side-channel attacks against AES and RSA. The new architecture was fascinating, in particular because it may actually increase cache performance (though this point was spiritedly debated by several systems researchers in attendance). For the security group, though, it raised two interesting and troubling questions. What is the proper defence against side channels arising from the processor cache? And why has one not been deployed, despite these attacks having been known for years?

Continue reading When Layers of Abstraction Don’t Get Along: The Difficulty of Fixing Cache Side-Channel Vulnerabilities

Technical aspects of the censoring of archive.org

Back in December I wrote an article here on the “Technical aspects of the censoring of Wikipedia” in the wake of the Internet Watch Foundation’s decision to add two Wikipedia pages to their list of URLs where child sexual abuse images are to be found. This list is used by most UK ISPs (and blocking systems in other countries) in the filtering systems they deploy that attempt to prevent access to this material.

A further interesting censoring issue was in the news last month, and this article (a little belatedly) explains the technical issues that arose from that.

For some time, the IWF have been adding URLs from The Internet Archive (widely known as “the Wayback Machine”) to their list. I don’t have access to the list and so I am unable to say how many URLs have been involved, but for several months this blocking also caused some technical problems.
Continue reading Technical aspects of the censoring of archive.org

New Facebook Photo Hacks

Last March, Facebook caught some flak when hacks circulated showing how to access the private photos of any user. These were enabled by egregiously lazy design: viewing somebody’s private photos simply required determining their user ID (which shows up in search results) and then manually fetching a URL of the form:
www.facebook.com/photo.php?pid=1&view=all&subj=[uid]&id=[uid]
This hack was live for a few weeks in February, exposing some photos of Facebook CEO Mark Zuckerberg and (reportedly) Paris Hilton, before the media picked it up in March and Facebook upgraded the site.

Instead of using properly formatted PHP queries as capabilities to view photos, Facebook now verifies the requesting user against the ACL for each photo request. What could possibly go wrong? Well, as I discovered this week, the photos themselves are served from a separate content-delivery domain, leading to some problems which highlight the difficulty of building access control into an enormous, globally distributed website like Facebook.
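
To see why the split between the application servers and the content-delivery domain matters, here is a minimal sketch (all names, including PHOTO_ACL and the cdn.example.com host, are hypothetical and not Facebook's actual design): the application checks the ACL before handing out a photo URL, but the CDN itself serves the image bytes to anyone who presents that URL.

```python
# Hypothetical sketch of ACL-checked photo access in front of a "dumb" CDN.
# Nothing here reflects Facebook's real code; it only illustrates the gap.

PHOTO_ACL = {  # photo_id -> set of user_ids allowed to view it
    "photo123": {"alice", "bob"},
}
CDN_URLS = {  # photo_id -> where the image bytes actually live
    "photo123": "https://cdn.example.com/objects/ab/cd/photo123.jpg",
}

def get_photo_url(requesting_user: str, photo_id: str) -> str:
    """Application-layer check: only return the CDN URL to authorised users."""
    if requesting_user not in PHOTO_ACL.get(photo_id, set()):
        raise PermissionError("not allowed to view this photo")
    return CDN_URLS[photo_id]

# The weakness: the CDN itself performs no check, so the returned URL is in
# effect a capability. Anyone who learns it (from a cache, a forwarded link,
# or by guessing the object path) can fetch the image bytes directly.
```

If the object paths on the content-delivery domain are guessable or long-lived, the ACL check above buys very little.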

Continue reading New Facebook Photo Hacks

Variable Length Fields in Cryptographic Protocols

Many crypto protocols contain variable length fields: the names of the participants, different sizes of public key, and so on.

In my previous post, I mentioned how Liqun Chen has (re)discovered the attack whereby many protocols are broken if you don’t include the field lengths in MAC or signature computations (and, more to the point, that a bunch of ISO standards fail to warn the implementor about this issue).

The problem applies to confidentiality, as well as integrity.

Many protocol verification tools (ProVerif, for example) will assume that the attacker is unable to distinguish enc(m1, k, iv) from enc(m2, k, iv) if they don’t know k.

If m1 and m2 are of different lengths, this may not be true: the length of the ciphertext leaks information about the length of the plaintext. With Cipher Block Chaining, you can tell the length of the plaintext to the nearest block, and with stream ciphers you can tell the exact length. So you can have protocols that are “proved” correct but are still broken, because the idealized protocol doesn’t properly represent what the implementation is really doing.
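
As a minimal sketch of the leak (assuming the Python cryptography package; the key, nonce and messages are made up for illustration), compare the ciphertext lengths produced by AES-CBC with PKCS#7 padding and by AES-CTR used as a stream cipher:

```python
# Illustration only: ciphertext length reveals plaintext length.
import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(16), os.urandom(16)

def cbc_encrypt(m: bytes) -> bytes:
    padder = padding.PKCS7(128).padder()
    padded = padder.update(m) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(padded) + enc.finalize()

def ctr_encrypt(m: bytes) -> bytes:
    # AES in CTR mode behaves like a stream cipher: no padding is applied.
    # (Reusing the nonce is tolerable here only because these ciphertexts are
    # never used for anything except measuring their length.)
    enc = Cipher(algorithms.AES(key), modes.CTR(iv)).encryptor()
    return enc.update(m) + enc.finalize()

print(len(cbc_encrypt(b"Al")), len(cbc_encrypt(b"Bartholomew")))  # 16 16: hidden within one block
print(len(cbc_encrypt(b"A" * 20)))                                # 32: a longer name changes the length
print(len(ctr_encrypt(b"Al")), len(ctr_encrypt(b"Bartholomew")))  # 2 11: exact lengths leak
```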

If you want different plaintexts to be observationally equivalent to the attacker, you can pad the variable-length fields to a fixed length before encrypting. But if there is a great deal of variation in length, this may be a very wasteful thing to do.
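
A minimal sketch of that mitigation (the 64-byte bound and the two-byte length prefix are arbitrary choices for illustration, not taken from any standard):

```python
MAX_NAME = 64  # illustrative upper bound on the field length

def pad_name(name: bytes) -> bytes:
    """Encode a variable-length field at a fixed size before encryption."""
    if len(name) > MAX_NAME:
        raise ValueError("field too long for the fixed-size encoding")
    # Two-byte length prefix, then the field, then zero padding to MAX_NAME bytes.
    return len(name).to_bytes(2, "big") + name + b"\x00" * (MAX_NAME - len(name))

def unpad_name(padded: bytes) -> bytes:
    n = int.from_bytes(padded[:2], "big")
    return padded[2:2 + n]

assert len(pad_name(b"Al")) == len(pad_name(b"Bartholomew"))  # so the ciphertexts match too
assert unpad_name(pad_name(b"Bartholomew")) == b"Bartholomew"
```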

The alternative approach is to change your idealization of the protocol to reflect the reality of your encryption primitive. If your implementation sends m encrypted under a stream cipher, you can idealize it as sending an encrypted version of m together with L_m (the length of m) in the clear.

Hidden Assumptions in Cryptographic Protocols

At the end of last week, Microsoft Research hosted a meeting of “Cryptoforma”, a proposed new project (a so-called “network of excellence”) to bring together researchers working on applying formal methods to security. They don’t yet know whether or not this project will get funding from the EPSRC, but I wish them good luck.

There were several very interesting papers presented at the meeting, but today I want to talk about the one by Liqun Chen, “Parsing ambiguities in authentication and key establishment protocols”.

Some of the protocol specifications published by ISO specify how the protocol should be encoded on the wire, in sufficient detail to enable different implementations to interoperate. An example of a standard of this type is the one for the public key certificates that are used in SSL authentication of web sites (and many other applications).

The security standards produced by one group within ISO (SC27) aren’t like that. They specify the abstract protocols, but give the implementor considerable leeway in how they are encoded. This means that you can have different implementations that don’t interoperate. If these implementations are in different application domains, the lack of interoperability doesn’t matter. For example, Tuomas Aura and I recently wrote a paper in which we presented a protocol for privacy-preserving wireless LAN authentication, which we rather boldly claim to be based on the abstract protocol from ISO 9798-4.

You could think of these standards as separating concerns: the SC27 folks get the abstract crypto protocol correct, and then someone else standardises how to encode it in a particular application. But does the choice of concrete encoding affect the protocol’s correctness?

Liqun Chen points out one case where it clearly does. In the abstract protocols in ISO 9798-4 and others, data fields are joined by a double vertical bar operator. If you want to find out what that double vertical bar really means, you have to spend another 66 Swiss Francs and get a copy of ISO 9798-1, which tells you that Y || Z means “the result of the concatenation of the data items Y and Z in that order”.

Oops.

When we specify abstract protocols, it’s generally understood that the concrete encoding that gets signed or MAC’d contains enough information to unambiguously identify the field boundaries: it contains length fields, a closing XML tag, or whatever. A signed message {Payee, Amount} K_A should not allow a payment of $3 to Bob12 to be mutated by the attacker into a payment of $23 to Bob1. But ISO 9798 (and a bunch of others) don’t say that. There’s nothing that says a conforming implementation can’t send the length field without authentication.
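
A minimal sketch of the Bob12/Bob1 ambiguity (using the Python standard library's hmac module; the length-prefixed encoding shown is just one illustrative fix, not something any ISO standard mandates):

```python
import hashlib
import hmac
import os

key = os.urandom(32)

def naive_tag(payee: str, amount: str) -> bytes:
    # Fields are simply concatenated: the boundary between them is not authenticated.
    return hmac.new(key, (payee + amount).encode(), hashlib.sha256).digest()

def length_prefixed_tag(payee: str, amount: str) -> bytes:
    # One illustrative fix: length-prefix each field so the boundaries are
    # part of the data that gets MAC'd.
    fields = (payee.encode(), amount.encode())
    msg = b"".join(len(f).to_bytes(4, "big") + f for f in fields)
    return hmac.new(key, msg, hashlib.sha256).digest()

assert naive_tag("Bob12", "3") == naive_tag("Bob1", "23")  # same tag: the mutation goes undetected
assert length_prefixed_tag("Bob12", "3") != length_prefixed_tag("Bob1", "23")
```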

Now, of course, an implementor probably wouldn’t do that. But they might.

More generally: do these abstract protocols make a bunch of implicit, undocumented assumptions about the underlying crypto primitives and encodings that might turn out not to be true?

See also: Boyd, C., “Hidden assumptions in cryptographic protocols”, IEE Proceedings E (Computers and Digital Techniques), vol. 137, no. 6, November 1990.

Marksmen, on your marks!

The beginning of a Call of Duty 4 Search and Destroy game is essentially a race. When the game starts, experienced players all make a mad dash from the starting post, heading for their preferred defensive or offensive positions to dig in before the enemy can bring their guns to bear. From these choice spots they engage the enemy within seconds, and although the maps are moderately large (a few hundred metres across), up to a third of the kills in a 3-5 minute game take place in the first 15 seconds. Of course there is skill in figuring out what to do next (the top 1% of players distinguish themselves through adaptability and quick thinking), but the fact remains that the opening of an S&D match is critically important.

I have previously posted about “Neo-Tactics” – unintended side-effects of low-level game algorithms which create competitive advantage. When a player seems to win without visible justification, this sort of effect causes a problem: it creates the perception of cheating. At a second level, actual cheats might deliberately manipulate their network infrastructure or game client to take advantage of the effect. Well, I think I might have found a new one…

The screenshots below give a flavour of the sort of sneaky position that players might hope to be first to reach, affording a narrow but useful line of sight through multiple windows and doorways, crossing most of the map. NB: Seasoned COD4 players will laugh at my choice of so-called sneaky position, but I am a novice and I cannot hope to reach the ingenious hideouts they have discovered after years of play-testing.


Continue reading Marksmen, on your marks!

Root of Trust ?

I’ve given some talks this year about the Internet’s insecure infrastructure — stressing that fundamental protocols such as BGP and DNS cannot really be trusted at the moment. Although they work just fine most of the time, they are susceptible to attacks which can mean, for example, that you visit the wrong website, or your email is intercepted.

Steps are now being taken, rather faster since Dan Kaminsky came up with a really effective DNS poisoning attack, to secure DNS by using DNSSEC.

The basic idea of DNSSEC is that when you get an answer from the DNS it will be signed by someone you trust. At some point the “trust anchor” for the system will be “.”, the DNS root, but for the moment there’s just a handful of “trust anchors” one level down from that. One such anchor is the “.se” country-code domain for Sweden. Additionally, Brazil (.br), Puerto Rico (.pr) and Bulgaria (.bg) have signed their zones, but that’s about it for today.
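
As a minimal sketch of what a trust anchor looks like in practice (assuming the dnspython 2.x package; this just fetches and prints the zone’s public keys, it does not perform full validation):

```python
# Illustration only: list the DNSKEY records published by the .se zone.
# A validating resolver would pin one of these (or a DS hash of it) as its
# trust anchor and check the RRSIG signatures on answers below it.
import dns.dnssec
import dns.resolver

answer = dns.resolver.resolve("se.", "DNSKEY")
for rr in answer:
    kind = "KSK" if rr.flags & 0x0001 else "ZSK"  # SEP bit marks key-signing keys
    print(kind, "key tag", dns.dnssec.key_id(rr), "algorithm", rr.algorithm)
```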

So, wishing to get some experience with the brave new world of DNSSEC, I decided that Sweden was the “in” place to be, and to purchase “cloudba.se” and roll out my first DNSSEC signed domain.

The purchase wasn’t as easy as it might have been — when you buy a domain, Sweden insists that individuals provide their identity numbers (albeit they have absolutely no way of checking whether you’re telling the truth) — or, for a company, a VAT or registration number (which is checkable, albeit I suspect they didn’t bother). I also found that they don’t like spaces in the VAT number — which held things up for a while!

However, eventually they sent me a PGP signed email to tell me I was now the proud owner of “cloudba.se”. Unfortunately, this email wasn’t in RFC3156 PGP/MIME format (or any other format that my usually pretty capable email client understood).

The email was signed with key 0xF440EE9B, which was reassuring because the .se registry gives the fingerprint for this key on their website here. Rather less reassuringly, the footnote (*) next to the fingerprint says “.SE signature for outgoing e-mail. (**) June 1 through August 31.” (the (**) is for a second level of footnote, which is absent — and of course it is now September).

They also enable you to fetch the key through a link on this page to their “PGP nyckel-ID” (PGP key ID) at http://subkeys.pgp.net.

Unfortunately, fetching the key shows that the signature on the email is invalid. [Update 1 Oct: I’ve finally now managed to validate it, see comment.]

Since the email seems to have originated in the Windows world, but was signed on a Linux box (giving it a mixture of 0D 0A and 0A line endings), then pushed through a three-year-old copy of MIME-tools, I suppose the failure isn’t too surprising. But strictly speaking, the invalid signature means that I shouldn’t trust the email’s contents at all — because the contents have definitely been tampered with since the signature was applied.
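
A minimal sketch of why altered line endings break verification (pure illustration with a hash standing in for the signed digest; it deliberately ignores PGP's own canonicalisation rules):

```python
# The signer hashes the message exactly as it was at signing time; any later
# line-ending munging produces different bytes, hence a different digest,
# hence an invalid signature.
import hashlib

signed_text  = b"You now own cloudba.se\r\nLog in here to set a password\r\n"
as_delivered = signed_text.replace(b"\r\n", b"\n")  # e.g. rewritten in transit

print(hashlib.sha256(signed_text).hexdigest()[:16])
print(hashlib.sha256(as_delivered).hexdigest()[:16])  # differs: verification fails
```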

Since the point of the email was to get me to login for the first time to the registry website and set my password to control the domain, this is a little unfortunate.

Even if the signature had been correct, then should I trust the PGP key?

Well, it is pointed to from the registry website, which is a Good Thing. However, they do themselves no favours by referencing a version on the public key servers. I checked who had signed the key (which is an alternative way of trusting its provenance — since the email had arrived at a non-DNSSEC-secured domain). It turned out there was no-one I knew, and of the 4 individual signatures, 2 were from expired keys. The other signature was from the IIS root key — which sounds promising. That has 8 signatures, once again not from people I know — but only 1 from a non-expired key, so perhaps I can get to know some of the other 7?

Of course, anyone can sign a key on a public key server, so perhaps it makes sense for .se to suggest that people fetch a key with as many signatures as possible — there’s more chance of it being signed by someone they know. Anyway, I have now added my own signature, using an email address at my nice shiny new domain. However, it is possible that I may not have increased the level of trust.

Anti-theft Protocols

At last Friday’s Security Group meeting, we talked about security protocols that are intended to deter theft or reduce its consequences, and how they go wrong.

Examples include:

  • GSM mobile phones have an identifier for the phone (separate from the identifier for the user) that can be blacklisted when the phone is stolen.
  • Some car radios will stop working when the battery is disconnected, and only start working again when a numeric code is entered. This is intended to deter theft of the radio.
  • In Windows Vista, Bitlocker can be used to encrypt files. One of the intended applications for this is that if someone steals your laptop, it will be difficult for them to gain access to your encrypted files.

Ross told a story of what happened when he needed to disconnect the battery on his car: the radio stopped working, and the code he had been given to reactivate it didn’t work – it was the wrong code.
Ross argues that these reactivation codes are unnecessary, because other measures taken by the car manufacturers – such as making radios non-standard sizes, and hence not refittable in other car models – have made them redundant.

I described how the motherboard on a laptop had needed to be replaced recently. The motherboard contains the TPM chip, which contains the encryption keys needed to decrypt files protected with Bitlocker. If you replace the motherboard, the files on your hard disk will become unreadable, even if the disk is physically OK. Domain-joined Vista machines can be configured so that a sysadmin somewhere within your organization is able to recover the keys when this happens.
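
As a minimal sketch of the recovery idea (not Microsoft’s actual BitLocker/Active Directory mechanism, just the general key-escrow pattern, using the Python cryptography package with made-up names): the volume key is additionally encrypted under a recovery agent’s public key, so an admin holding the matching private key can release it when the TPM-sealed copy is lost.

```python
# Illustration of key escrow for disk-encryption recovery (hypothetical names;
# real Bitlocker recovery uses recovery passwords / AD DS, not this code).
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Recovery agent's keypair; the private half lives with the sysadmin.
agent_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
agent_pub = agent_priv.public_key()

volume_key = os.urandom(32)  # the key that actually encrypts the disk

# Stored alongside the volume, in addition to the TPM-sealed copy.
escrow_blob = agent_pub.encrypt(volume_key, oaep)

# If the motherboard (and TPM) is replaced, the admin recovers the key:
recovered = agent_priv.decrypt(escrow_blob, oaep)
assert recovered == volume_key
```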

Both of these situations suffer from classic usability problems: the recovery procedures are invoked rarely (so users may not know what they’re supposed to do), and, if your system is configured incorrectly, you only find out when it is too late: you key in the code to your radio and it remains a doorstop; the admin you hoped was escrowing your keys turns out not to have the private key corresponding to the public key you were encrypting under (or, more subtly: the person with the authority to ask for your laptop’s key to be recovered is not you, because the appropriate admin has the wrong name for the laptop’s owner in their database).

I also described what happens when an Xbox 360 is stolen. When you buy Xbox downloadable content, you buy two licenses: one that’s valid on any Xbox, as long as you’re logged in to Xbox Live; and one that’s valid on just your Xbox, regardless of who’s logged in. If a burglar steals your Xbox, and you buy a new one, you need to get another license of the second type (for all the other people in your household who make use of it). The software makes this awkward, because it knows that you already have a license of the first type, and assumes that you couldn’t possibly want to buy it again. The work-around is to get a new email address, a new Microsoft Live account, and a new gamertag, and use these to repurchase the license. You can’t just change the gamertag, because Xbox Live doesn’t let the same Microsoft Live account have two gamertags. And yes, I know, your buddies in the MMORPG you were playing know you by your gamertag, so you don’t want to change it.

An insecurity in OpenID, not many dead

Back in May it was realised that, thanks to an ill-advised change to some random number generation code, for over 18 months Debian systems had been generating crypto keys chosen from a set of 32,768 possibilities, rather than from billions and billions. Initial interest centred around the weakness of SSH keys, but in practice lots of different applications were at risk (see long list here).

In particular, SSL certificates (as used to identify https websites) might contain one of these weak keys — and so it would be possible for an attacker to successfully impersonate a secure website. Of course the attacker would need to persuade you to mistakenly visit their site — but it just so happens that one of the more devastating attacks on DNS has recently been discovered; so that’s not as unlikely as it must have seemed back in May.

Anyway, my old friend Ben Laurie (who is with Google these days) and I have been trawling the Internet to determine how many certificates there are containing these weak keys — and there’s a lot: around 1.5% of the certs we’ve examined.
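
A minimal sketch of the kind of check involved (assuming the Python cryptography package; WEAK_FINGERPRINTS is a hypothetical stand-in for a precomputed blacklist of the weak Debian moduli, and the fingerprint format here is made up rather than the one the openssl-blacklist tooling uses):

```python
# Flag an X.509 certificate whose RSA modulus is one of the known weak values.
import hashlib
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa

WEAK_FINGERPRINTS = set()  # hypothetical: load ~32,768 entries per key size

def has_weak_debian_key(pem_bytes: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    pub = cert.public_key()
    if not isinstance(pub, rsa.RSAPublicKey):
        return False  # this sketch only handles RSA certificates
    modulus_hex = format(pub.public_numbers().n, "x")
    fingerprint = hashlib.sha1(modulus_hex.encode()).hexdigest()
    return fingerprint in WEAK_FINGERPRINTS
```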

But more of that another day, because earlier this week Ben spotted that one of the weak certs was for Sun’s “OpenID” website, and that two more OpenID sites were weak as well (by weak we mean that a database lookup could reveal the private key!).

OpenID, for those who are unfamiliar with it, is a scheme for allowing you to prove your identity to site A (viz: provide your user name and password) and then use that identity on site B. There’s a queue of people offering the first bit, but rather fewer offering the second: because it means you rely on someone else’s due diligence in knowing who their users are — where “who” is a hard sort of thing to get your head around in an online environment.

The problem that Ben and I have identified (advisory here) is that an attacker can poison a DNS cache so that it serves up the wrong IP address for openid.sun.com. Then, even if the victim is really cautious and uses https and checks the cert, their credentials can be phished. Thereafter, anyone who trusts Sun as an identity provider could be very disappointed. There are other attacks as well, but you’ve probably got the general idea by now.

In principle Sun should issue a replacement certificate and that should be it (and so they have — read Robin Wilton’s comments here). Except that they also need to put the old certificate onto a Certificate Revocation List (CRL), because otherwise it will still be trusted from now until it expires (a fair while off). Sadly, many web browsers and most of the OpenID codebases haven’t bothered with CRLs (or don’t enable CRL checking by default, so for most users it’s as if it weren’t there).
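
A minimal sketch of what a CRL check amounts to (assuming the Python cryptography package; the file names are hypothetical, and a real client would also verify the CRL’s own signature and freshness):

```python
# Is the server's certificate on the issuer's published revocation list?
from cryptography import x509

with open("server_cert.pem", "rb") as f:   # hypothetical path
    cert = x509.load_pem_x509_certificate(f.read())
with open("issuer_crl.pem", "rb") as f:    # hypothetical path
    crl = x509.load_pem_x509_crl(f.read())

entry = crl.get_revoked_certificate_by_serial_number(cert.serial_number)
if entry is not None:
    print("certificate revoked on", entry.revocation_date)
else:
    print("serial not on this CRL")
```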

One has to conclude that Sun (and the other two providers) should not be trusted by anyone for quite a while to come. But does that matter? Since OpenID didn’t promise all that much anyway, does a serious flaw (which does require a certain amount of work to construct an attack) make any difference? At present this looks like the modern equivalent of a small earthquake in Chile.

Additional: Sun’s PR department tell me that the dud certificate has indeed been revoked with Verisign and placed onto the CRL. Hence any system that checks the CRL cannot now be fooled.

Finland privacy judgment

In a case that will have profound implications, the European Court of Human Rights has issued a judgment against Finland in a medical privacy case.

The complainant was a nurse at a Finnish hospital, and also HIV-positive. Word of her condition spread among colleagues, and her contract was not renewed. The hospital’s access controls were not sufficient to prevent colleagues from accessing her record, and its audit trail was not sufficient to determine who had compromised her privacy. The court’s view was that health care staff who are not involved in the care of a patient must be unable to access that patient’s electronic medical record: “What is required in this connection is practical and effective protection to exclude any possibility of unauthorised access occurring in the first place.” (Press coverage here.)

A “practical and effective” protection test in European law will bind engineering, law and policy much more tightly together. And it will have wide consequences. Privacy campaigners, for example, can now argue strongly that the NHS Care Records Service is illegal. And what will be the further consequences for the Transformational Government initiative – the “Database State”?