Category Archives: Security engineering

Bad security, good security, case studies, lessons learned

Securing Network Location Awareness with Authenticated DHCP

During April–June 2006, I was an intern at Microsoft Research, Cambridge. My project, supervised by Tuomas Aura and Michael Roe, was to improve the privacy and security of mobile computer users. A paper summarizing our work was published at SecureComm 2007, but I’ve only just released the paper online: “Securing Network Location Awareness with Authenticated DHCP”.

How a computer should behave depends on its network location. Existing security mechanisms, such as firewalls, fail to protect mobile users adequately because they apply a single static policy. As a result, laptops are configured with fairly open policies to accommodate applications appropriate to a trustworthy office LAN (e.g. file and printer sharing, collaboration applications, and custom servers). When the computer is taken home or used while roaming, that policy leaves an excessively large attack surface.

This static approach also harms user privacy. Modern applications broadcast a large number of identifiers which may leak privacy-sensitive information (name, employer, office location, job role); even randomly generated identifiers allow a user to be tracked. When roaming, a laptop should not broadcast identifiers unless necessary, and on moving to a new location it should either re-use pseudonymous identifiers or generate anonymous ones.

Both of these goals require a computer to be able to identify which network it is on, even when an attacker is attempting to spoof this information. Our solution was to extend DHCP to include a network location identifier, authenticated by a public-key signature. I built a proof-of-concept implementation for the Microsoft Windows Server 2003 DHCP server and the Vista DHCP client.
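
The core of the idea is small enough to sketch. Below is a minimal illustration in Python, assuming the third-party cryptography package; the option layout, the use of RSA with PKCS#1 v1.5, and binding the signature to a client-chosen nonce are illustrative assumptions here, not the encoding used in the Windows prototype or the paper.

    # Hedged sketch: a DHCP server signs its network location identifier and
    # the client verifies it before trusting the claimed location.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Server side: generate (or load) the network's key pair once.
    server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    def make_location_option(location_id: bytes, client_nonce: bytes):
        """Server: sign the location identifier, bound to the client's nonce
        to prevent replay, and return (location_id, signature) for the offer."""
        signature = server_key.sign(location_id + client_nonce,
                                    padding.PKCS1v15(), hashes.SHA256())
        return location_id, signature

    def verify_location_option(public_key, location_id: bytes,
                               client_nonce: bytes, signature: bytes) -> bool:
        """Client: accept the claimed network location only if the signature
        verifies under a key previously associated with that location."""
        try:
            public_key.verify(signature, location_id + client_nonce,
                              padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False

    # Example round trip
    nonce = b"client-chosen-nonce"
    loc, sig = make_location_option(b"office-lan.example.com", nonce)
    assert verify_location_option(server_key.public_key(), loc, nonce, sig)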

A scheme like this should ideally work on small PKI-less home LANs, while still permitting larger networks to aggregate multiple access points into one logical network. Achieving this requires some subtle naming and key-management tricks. These techniques, and how to implement the protocols in a privacy-preserving manner, are described in our paper.

Chip & PIN terminals vulnerable to simple attacks

Steven J. Murdoch, Ross Anderson and I looked at how well PIN entry devices (PEDs) protect cardholder data. Our paper will be published at the IEEE Symposium on Security and Privacy in May, though an extended version is available as a technical report. A segment about this work will appear on BBC Two’s Newsnight at 22:30 tonight.

We were able to demonstrate that two of the most popular PEDs in the UK — the Ingenico i3300 and Dione Xtreme — are vulnerable to a “tapping attack” using a paper clip, a needle and a small recording device. This allows us to record the data exchanged between the card and the PED’s processor without triggering tamper-proofing mechanisms, in clear violation of their supposed security properties. The attack can capture the card’s PIN because UK banks have opted to issue cheaper cards that do not use asymmetric cryptography to encrypt data between the card and the PED.

[Images: Ingenico attack; Dione attack]

In addition to the PIN, the PED reads an exact replica of the magnetic strip as part of the transaction (for backwards compatibility). Thus, an attacker who can tap the data line between the card and the PED’s processor obtains all the information needed to create a magnetic-strip card and withdraw money from an ATM that does not read the chip.

We also found that the certification process for these PEDs is flawed. APACS has effectively been approving PEDs for the UK market as Common Criteria (CC) Evaluated, which is not the same as Common Criteria Certified (no PEDs are CC Certified). What APACS means by “Evaluated” is that an approved lab has performed the evaluation, but unlike CC Certified products, the reports are kept secret and governmental Certification Bodies perform no quality control.

This process causes a race to the bottom, with PED developers able to choose labs that will approve rather than improve PEDs, at the lowest price. Clearly, the certification process needs to be more open to the cardholders, who suffer from the fraud. It also needs to be fixed such that defective devices are refused certification.

We notified APACS, Visa, and the PED manufacturers of our results in mid-November 2007, and responses arrived only in the last week or so (Visa chose to respond only a few minutes ago!). The responses make the usual claims: that our demonstrations can only be done in lab conditions, that criminals are not that sophisticated, that the threat to cardholder data is minimal, and that their “layers of security” will detect fraud. There is no evidence to support these claims. APACS state that the PEDs we examined will not be de-certified or withdrawn, and that no action will be taken against the labs that certified them; indeed, APACS would not even tell us who those labs are.

The threat is very real: tampered PEDs have already been used for fraud. See our press release and FAQ for basic points and the technical report where we discuss the work in detail.

Update 1 (2008-03-09): The segment of Newsnight featuring our contribution has been posted to Google Video.

Update 2 (2008-03-21): If the link above doesn’t work, try YouTube: part 1 and part 2.

Inane security questions

I am the trustee of a small pensions scheme, which means that every few years I have to fill in a form for The Pensions Regulator. This year the form-filling is required to be done online.

In order to register for the online system I need to supply an email address and a password (“at least 8 characters long and contain at least 1 numeric or non-alphabetic character”). So far so good.

If I forget this password, I will be required to answer two security questions, which I get to choose from a little shortlist. They’ve eschewed “mother’s maiden name”, but the system designer seems to have copied them from Bebo or Disney’s Mickey Mouse Club:

  • Name of your favourite entertainer?
  • Your main childhood phone number?
  • Your favourite place to visit as a child?
  • Name of your favourite teacher?
  • Your grandfather’s occupation?
  • Your best childhood friend?
  • Name your childhood hero?

Since most pension fund trustees, the people who have to provide good answers to these questions, will be in their 50s and 60s, these questions are quite clearly unsuitable.

I’ve gone with the last two… each of which turns out to be different from the password, but the answers, weirdly enough, are also at least 8 characters long and contain at least one numeric or non-alphabetic character!

www.e-victims.org

A new UK website, launched today, has a subtly (and I think importantly) different “spin” on online security.

The site is www.e-victims.org, where the emphasis is not so much on offering up-front security advice (for that, the UK-oriented site I’d recommend is www.getsafeonline.org), nor on reporting incidents to the police (who probably don’t have the capability to investigate anyway), but on offering practical, down-to-earth advice on your rights and your next steps in complaining or getting recompense.

In many cases, you’re in trouble — pay for a cheap camera from China using Western Union or a debit card, and you’re going to have to chalk it up to experience. However, if you order from a UK company with your credit card and the goods arrive damaged then this is the site for you [contact the seller, not the courier company, to deal with the damage; the Sale of Goods Act means that what you receive must be of satisfactory quality; and if you spent between £100 and £30,000 then the Consumer Credit Act means that the credit card company should reimburse you].

The site has launched with content for e-shopping victims (no Virginia, not that sort of victim) — and over the coming year will add more topics (phishing is specifically mentioned). If the site continues to give clear and down-to-earth advice as to whether or not you’ll be able to do anything about your problem, and if so what, then it will serve a very useful purpose indeed. Bookmark it for when you need it!

ObDisclaimer: The site is run by people I’ve known for decades, and I was so enthusiastic that I’ve been asked onto their Advisory Council. So you’d expect me to be enthusiastic here as well!

Relay attacks on card payment: vulnerabilities and defences

At this year’s Chaos Communication Congress (24C3), I presented some work I’ve been doing with Saar Drimer: implementing a smart card relay attack and demonstrating that it can be prevented by distance bounding protocols. My talk (abstract) was filmed and the video can be found below. For more information, we produced a webpage and the details can be found in our paper.

[ slides (PDF 9.6M) | video (BitTorrent — MPEG4, 106M) ]
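
The intuition behind distance bounding is simple enough to sketch in software, even though a real implementation (such as the one described in our paper) needs dedicated hardware and nanosecond-scale timing. The bound, round count and simulated relay delay below are illustrative assumptions for a toy demo, not parameters from our design.

    # Conceptual sketch of a distance-bounding check: the terminal times a
    # rapid challenge/response exchange, and a relay attack adds forwarding
    # latency that pushes the responses over the bound.
    import secrets
    import time

    MAX_ROUND_TRIP = 1e-3   # 1 ms bound for this demo (real bounds are ~ns)

    def passes_distance_bound(card_respond, rounds: int = 32) -> bool:
        for _ in range(rounds):
            challenge = secrets.randbits(1)
            start = time.perf_counter()
            card_respond(challenge)                 # genuine card answers at once
            if time.perf_counter() - start > MAX_ROUND_TRIP:
                return False                        # too slow: possible relay
        # A real protocol also checks the response bits against values the
        # card committed to before the timed phase.
        return True

    honest_card = lambda bit: bit                   # negligible response delay

    def relayed_card(bit):
        time.sleep(0.005)                           # attacker's forwarding latency
        return bit

    print(passes_distance_bound(honest_card))       # True
    print(passes_distance_bound(relayed_card))      # False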

Update 2008-01-15:
Liam Tung from ZDNet Australia has written an article on my talk: Bank card attack: Only Martians are safe.

Other highlights from the conference…

Hackers get busted

There is an article on BBC News about how yet another hacker running a botnet got busted. When I read the sentence “…he is said to be very bright and very skilled…”, I started thinking. How did they find him? He clearly must have made some serious mistakes; what sort of mistakes? How does isolation influence someone’s behaviour, and how important are external opinions for staying objective?

When we write a paper, we very much appreciate it when someone is willing to read it and give us feedback. It allows us to identify gaps in our thinking, flaws in descriptions, and so forth. The feedback does not necessarily imply large changes to the text, but it very often clarifies it and makes it much more readable.

Hackers use various tools, either publicly available or written by the hacker themselves. There may be errors in the tools, but they will probably be fixed very quickly, especially if the tools are popular. Hackers often let others use their tools, whether for testing or for fame. But hacking for profit is quite a creative job, and plenty of the work cannot be automated.

So what is the danger of these manual tasks? Do hackers write down descriptions of all their procedures, with checklists, and stick to them, or do they do things intuitively and become careless after a few months or years? Clearly, the first option is how intelligence agencies would deal with the problem, because they know that the human is the weakest link. But what about hackers? “…very bright and very skilled…”, but isolated from the rest of the world?

So I keep thinking, is it worth trying to reconstruct “operational procedures” for running a botnet, analyse them, identify the mistakes most likely to happen, and use such knowledge against the “cyber-crime groups”?

A cryptographic hash function reading guide

After a few years of spectacular advances in breaking cryptographic hash functions, NIST has announced a competition to determine the next Secure Hash Algorithm, SHA-3. SHA-0 is considered broken, SHA-1 is still secure but no one knows for how long, and the SHA-2 family is desperately slow. (Do not even think about using MD5, or MD4, for which Prof. Wang can find collisions by hand; RIPEMD-160 still stands.) Cryptographers are ecstatic about this development: it is as if they had been a bit bored since the last NIST AES competition and depressed by the prospect of not getting to design another significant block cipher for the next few years.

The rest of us should expect the next four years to be filled with news, first about advances in design, then about advances in attacks on hash functions, as teams with candidate hash algorithms bitterly try to find flaws in each other’s proposals to ensure that their own function becomes SHA-3. To fully appreciate the details of this competition, some of us may want a quick refresher on how to build a secure hash function.

Here is a list of on-line resources for catching up with the state of the art:

  1. A very quick overview of hash functions and their applications is provided by Ilya Mironov. This is very introductory material, and does not go into the deeper details of what makes these functions secure, or how to break them.
  2. Chapter 9, on Hash Functions and Data Integrity, of the Handbook of Applied Cryptography (Alfred J. Menezes, Paul C. van Oorschot and Scott A. Vanstone) provides a very good first overview of the properties expected of collision-resistant hash functions. It also presents the basic constructions of such functions from block ciphers (too slow for SHA-3), as well as from dedicated compression functions. Chapter 3 also briefly presents Floyd’s cycle-finding algorithm for finding collisions with negligible storage requirements (a small Python sketch of this appears after the list).
  3. If your curiosity has not been satisfied, the second stop is Prof. Bart Preneel’s thesis, entitled “Analysis and Design of Cryptographic Hash Functions”. This work provides a very good overview of the state of the art in hash function design up to the mid-nineties (before SHA-1 was commissioned). The back-to-basics approach is very instructive, and frankly the thesis could be entitled “everything you wanted to know about hash functions and never dared ask”. Bart is one of the authors of RIPEMD-160, which is still considered secure and is an algorithm worth studying.
  4. Hash functions do look like block ciphers under the hood, and an obvious idea might be to adapt aspects of AES and turn it into such a function. Whirlpool does exactly this, and is worth reading about. One of its authors, Paulo Barreto, also maintains a very thorough bibliography of hash function proposals along with all known cryptanalytic results against them (and a cute health status indicating their security.)
  5. Prof. Wang’s attacks that forced NIST to look for better functions are a must-read, even though they get very technical very soon. A gentler introduction to these attacks is provided in Martin Schlaffer’s Master’s thesis describing how the attacks are applied to MD4.
  6. Finally it is no fun observing a game without knowing the rules: the NIST SHA-3 requirements provide detailed descriptions of what the algorithm should look like, as well as the families of attacks it should resist. After reading it you might even be tempted to submit your own candidate!
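
As a small taster for the cycle-finding technique mentioned in item 2, here is a minimal Python sketch of Floyd's algorithm applied to a deliberately truncated hash (the first 32 bits of MD5), so that a collision turns up after roughly 2^16 evaluations. The truncation length and seed are arbitrary choices for illustration.

    import hashlib

    def toy_hash(x: bytes) -> bytes:
        """A deliberately weakened hash: the first 4 bytes (32 bits) of MD5."""
        return hashlib.md5(x).digest()[:4]

    def floyd_collision(f, seed: bytes):
        """Find distinct a, b with f(a) == f(b) using Floyd's cycle finding,
        i.e. with negligible storage.  Assumes the seed itself is not on the
        cycle of seed, f(seed), f(f(seed)), ... (overwhelmingly likely)."""
        # Phase 1: detect the cycle with a slow and a fast pointer.
        tortoise, hare = f(seed), f(f(seed))
        while tortoise != hare:
            tortoise = f(tortoise)
            hare = f(f(hare))
        # Phase 2: restart the tortoise; both pointers now advance one step
        # at a time and meet at the entry of the cycle.  Their predecessors
        # are two distinct inputs hashing to the same value.
        tortoise, prev_t, prev_h = seed, None, None
        while tortoise != hare:
            prev_t, prev_h = tortoise, hare
            tortoise = f(tortoise)
            hare = f(hare)
        return prev_t, prev_h

    a, b = floyd_collision(toy_hash, b"light blue touchpaper")
    assert a != b and toy_hash(a) == toy_hash(b)
    print(a.hex(), b.hex(), toy_hash(a).hex())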

Action Replay Justice

It is a sad fact that cheating and rule-breaking in sport give rise to a lot of bile amongst both competitors and supporters. Think of the furore when a top athlete fails a drugs test, or when the result of a championship final comes down to a judgement call about offside. Multiplayer computer games are no different, and while there may be some rough team sports out there, in no other setting are team players so overtly trying to beat the crap out of each other as in an online first-person shooter. Throw in a bit of teenage angst in a third of the player base and you have a massive “bile bomb” primed to explode at any moment.

For this reason, cheating, and the perception of cheating, is a really big deal in the design of online shooters. In Boom! Headshot! I voiced some theories of mine on how much of the perception of cheating in computer games may be explained by skilled players inadvertently exploiting the game mechanics, but I have recently seen a shining example, in the form of the game Call of Duty 4: Modern Warfare (COD4), of how to address and mitigate the perception of cheating.

First, let’s review two sorts of cheating that have really captured the imagination of the popular player base: wall hacks and aimbots. With a wall hack, the cheat can see his target even when the target is concealed behind an object, because the cheat has modified the graphics drivers to display walls as translucent rather than opaque (a slight simplification). Aimbots identify enemy players and assist the cheat in bringing his rifle to bear on the body of the enemy, usually the head. Many players who meet their death in situations where they cannot see how their killer managed to hit them (because they have been hiding, have been moving evasively, or are at great distance) get frustrated and let rip with accusations of cheating. Ironically, this sort of cheating is pretty rare, because widespread adoption can be effectively countered by cheat-detection software such as PunkBuster. There will always be one or two cheats with their own custom software, but the masses simply cannot cheat.

But the trick the Call of Duty 4 developers have used is the action replay. Replays have been used before in games for dramatic effect, but crucially COD4 shows the replay from the first-person view of the enemy who made the kill, winding back a full 5 or 6 seconds before it. If you would rather not watch the replay, you can of course skip it. The embedded YouTube video shows multiplayer gameplay, with an action replay occurring about 40 seconds in. Now, read on to consider the effect of this…


http://www.youtube.com/watch?v=jOMik2TXLec


WordPress cookie authentication vulnerability

In my previous post, I discussed how I analyzed the recent attack on Light Blue Touchpaper. What I did not disclose was how the attacker gained access in the first place. It turned out to incorporate a zero-day exploit, which is why I haven’t mentioned it until now.

As a first step, the attacker exploited an SQL injection vulnerability. When I noticed the intrusion, I upgraded WordPress then restored the database and files from off-server backups. WordPress 2.3.1 was released less than a day before my upgrade, and was supposed to fix this vulnerability, so I presumed I would be safe.

I was therefore surprised when the attacker broke in again the following day (and created himself an administrator account). After further investigation, I discovered that he had logged into the “admin” account — nobody knows the password for this, because I set it to a long random string. Neither I nor the other administrators ever used that account, so it couldn’t have been XSS or another cookie-stealing attack. How was this possible?

From examining the WordPress authentication code I discovered that the password hashing was backwards! While the attacker couldn’t have obtained the password from the hash stored in the database, by simply hashing the entry a second time, he generated a valid admin cookie. On Monday I posted a vulnerability disclosure (assigned CVE-2007-6013) to the BugTraq and Full-Disclosure mailing lists, describing the problem in more detail.
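
To illustrate the problem, here is a minimal sketch of the flawed construction as I have described it, with made-up function names rather than WordPress’s actual code: the database stores MD5(password), and the authentication cookie is in effect just the MD5 of that stored hash, so anyone who can read the database can mint a valid cookie without ever learning the password.

    import hashlib

    def md5_hex(s: str) -> str:
        return hashlib.md5(s.encode()).hexdigest()

    # What the database stores for each user: an unsalted MD5 of the password.
    stored_hash = md5_hex("correct horse battery staple")

    # Flawed scheme (sketch): the login cookie is derived from the stored
    # hash alone, so it can be computed from the database contents.
    def cookie_is_valid(cookie: str, stored_hash: str) -> bool:
        return cookie == md5_hex(stored_hash)

    # An attacker with read access to the database (e.g. via SQL injection)
    # forges a valid cookie without knowing or cracking the password.
    forged_cookie = md5_hex(stored_hash)
    assert cookie_is_valid(forged_cookie, stored_hash)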

It is disappointing to see that people are still getting this type of thing wrong. In their 1978 summary, Morris and Thompson describe the importance of one-way hashing and password salting (neither of which WordPress does properly). The issue is currently being discussed on LWN.net and the wp-hackers mailing list. Hopefully some progress will be made at getting it right this time around.
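
For contrast, a minimal sketch of what getting it right looks like: a random per-user salt and a slow key-derivation function, here PBKDF2 from Python’s standard library (the iteration count is an arbitrary illustrative choice).

    import hashlib, hmac, os

    ITERATIONS = 200_000   # illustrative work factor

    def hash_password(password: str):
        """Return (salt, derived_key); store both, never the password itself."""
        salt = os.urandom(16)
        dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, dk

    def verify_password(password: str, salt: bytes, dk: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, dk)

    salt, dk = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, dk)
    assert not verify_password("Tr0ub4dor&3", salt, dk)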

Google as a password cracker

One of the steps used by the attacker who compromised Light Blue Touchpaper a few weeks ago was to create an account (which he promoted to administrator; more on that in a future post). I quickly disabled the account, but while doing forensics, I thought it would be interesting to find out the account password. WordPress stores raw MD5 hashes in the user database (despite my recommendation to use salting). As with any respectable hash function, it is believed to be computationally infeasible to discover the input of MD5 from an output. Instead, someone would have to try out all possible inputs until the correct output is discovered.

So, I wrote a trivial Python script which hashed all dictionary words, but that didn’t find the target (I also tried adding numbers to the end). Then I switched to a Russian dictionary (because the comments in the installed shell code were in Russian), but that didn’t work either. I could have found or written a better password cracker, which varies the case of letters and does common substitutions (e.g. o → 0, a → 4), but that would have taken more time than I wanted to spend. I could also have improved efficiency with a rainbow table, but that needs a large database, which I didn’t have.
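
Roughly what such a trivial script looks like (a reconstruction for illustration, not the original; the target hash below is a stand-in computed within the demo, and in practice the wordlist would come from a dictionary file such as /usr/share/dict/words):

    import hashlib

    def crack(wordlist, target_hex):
        """Hash every word, plus simple numeric suffixes, until one matches."""
        target_hex = target_hex.lower()
        for word in wordlist:
            for candidate in [word] + [word + str(n) for n in range(100)]:
                if hashlib.md5(candidate.encode()).hexdigest() == target_hex:
                    return candidate
        return None

    # Demo with a stand-in target and a tiny in-line wordlist.
    target = hashlib.md5(b"hunter2").hexdigest()
    words = ["letmein", "hunter", "qwerty"]
    print(crack(words, target))    # -> "hunter2"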

Instead, I asked Google. I found, for example, a genealogy page listing people with the surname “Anthony”, and an advert for a house, signing off “Please Call for showing. Thank you, Anthony”. And indeed, the MD5 hash of “Anthony” was the database entry for the attacker. I had discovered his password.

In both webpages, the target hash was in a URL. This makes a lot of sense — I’ve even written code which does the same. When I needed to store a file, indexed by a key, a simple option was to make the filename the key’s MD5 hash. This avoids the need to escape any potentially dangerous user input and is very resistant to accidental collisions. If there are too many entries to store in a single directory, creating subdirectories for each prefix of the hash gives an even distribution of files. MD5 is quite fast, and while it’s unlikely to be the best option in all cases, it is an easy solution which works pretty well.
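
A sketch of that file-storage pattern (the store location and two-character prefix are arbitrary choices):

    import hashlib
    from pathlib import Path

    STORE = Path("store")

    def path_for(key: str) -> Path:
        """Map an arbitrary key to a safe, evenly distributed path:
        store/<first two hex chars>/<full MD5 hex digest>."""
        digest = hashlib.md5(key.encode()).hexdigest()
        return STORE / digest[:2] / digest

    def put(key: str, data: bytes) -> None:
        p = path_for(key)
        p.parent.mkdir(parents=True, exist_ok=True)
        p.write_bytes(data)

    def get(key: str) -> bytes:
        return path_for(key).read_bytes()

    put("some user-supplied key with ../ nasty characters", b"hello")
    assert get("some user-supplied key with ../ nasty characters") == b"hello"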

Because of this technique, Google is acting as a hash pre-image finder and, more importantly, finding hashes of things that people have hashed before. Google is doing what it does best — storing large databases and searching them. I doubt, however, that they envisaged this use. 🙂