Category Archives: Security engineering

Bad security, good security, case studies, lessons learned

Interview with Steven Murdoch on Finextra

Today, Finextra (a financial technology news website) has published a video interview with me, discussing my research on banks using card readers for online banking, which was recently featured on TV.

In this interview, I discuss some of the more technical aspects of the attacks on card readers, including the one demonstrated on TV (which requires compromising a Chip & PIN terminal), as well as others which instead require that the victim’s PC be compromised, but which can be carried out on a larger scale.

I also compare the approach taken by the banking community to protocol design with that of the Internet community. Financial organizations typically develop protocols internally, so they are subjected to public scrutiny late in deployment, if at all. This is in contrast with Internet protocols, which are commonly first discussed within industry and academia, then published as specifications, and only then implemented. As a consequence, vulnerabilities in banking security systems are often more expensive to fix.

Also, I discuss some of the non-technical design decisions involved in deploying security technology. Specifically, the design needs to take into account risk analysis, psychology and usability, not just cryptography. Organizational structures also need to incentivize security: the groups who design security mechanisms should be held responsible for their failures. Organizational structures should also prevent knowledge of security failings from being hidden from management; if necessary, a separate penetration testing team should report directly to board level.

Finally I mention one good design principle for security protocols: “make everything as simple as possible, but not simpler”.

The video (7 minutes) can be found below, and is also on the Finextra website.

TV coverage of online banking card-reader vulnerabilities

This evening (Monday 26th October 2009, at 19:30 UTC), BBC Inside Out will show Saar Drimer and me demonstrating how the smart card readers being issued in the UK to authenticate online banking transactions can be circumvented. The programme will be broadcast on BBC One, but only in the East of England and Cambridgeshire; however, it should also be available on iPlayer.

In this programme, we demonstrate how a tampered Chip & PIN terminal could collect an authentication code for Barclays online banking, while a customer thinks they are buying a sandwich. The criminal could then, at their leisure, use this code and the customer’s membership number to fraudulently transfer up to £10,000.

Similar attacks are possible against all other banks which use the card readers (known as CAP devices) for online banking. We think that this type of scenario is particularly practical in targeted attacks, and circumvents any anti-malware protection, but criminals have already been seen using banking trojans to attack CAP on a wide scale.

Further information can be found on the BBC online feature, and our research summary. We have also published an academic paper on the topic, which was presented at Financial Cryptography 2009.

Update (2009-10-27): The full programme is now on BBC iPlayer for the next 6 days, and the segment can also be found on YouTube.

BBC Inside Out, Monday 26th October 2009, 19:30, BBC One (East)

Security psychology

I have put together a web page on psychology and security. There is a fascinating interplay between these two subjects, and their intersection is now emerging as a new research discipline, encompassing deception, risk perception, security usability and a number of other important topics. I hope that the new web page will be as useful in spreading the word as my security economics page has been in that field.

Tuning in to random numbers

Tomorrow at Cryptographic Hardware and Embedded Systems 2009 I’m going to be presenting a frequency injection attack on random number generators formed from ring oscillators.

Random numbers are a vital part of cryptography — if predictable numbers are being used, an attacker may be able to read secret messages, impersonate either party, or replay transactions. In addition, many countermeasures to attacks such as Differential Power Analysis involve adding randomness to operations; without this randomness, algorithms such as RSA become susceptible.

Creating unpredictable random numbers in a predictable computer involves measuring some kind of physical process. Examples include circuit noise, radioactive decay and timing variations. One method commonly used in low-cost circuits such as smartcards is measuring the jitter from free-running ring oscillators. The ring oscillators’ frequencies depend on environmental factors such as voltage and temperature, and by having many independent ring oscillators we can harvest the small timing differences between them (jitter).
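To make the idea concrete, here is a toy Python simulation of jitter harvesting (all parameters are invented for illustration, and a real design would post-process the raw bits):

```python
import random

def ring_oscillator_bits(n_oscillators=4, n_bits=64,
                         base_period=1.0, jitter_sd=0.01):
    """Toy model of jitter harvesting. Each ring oscillator free-runs
    at a slightly different period (standing in for process variation),
    with Gaussian jitter accumulating cycle by cycle. We sample every
    oscillator at a fixed interval and XOR the samples into one bit."""
    periods = [base_period * (1 + random.uniform(-0.05, 0.05))
               for _ in range(n_oscillators)]
    phases = [0.0] * n_oscillators            # measured in cycles
    sample_interval = 37.3 * base_period      # unrelated to any period
    bits = []
    for _ in range(n_bits):
        bit = 0
        for i, period in enumerate(periods):
            cycles = sample_interval / period
            # Jitter grows with the square root of the elapsed cycles.
            phases[i] += cycles + random.gauss(0.0, jitter_sd * cycles ** 0.5)
            bit ^= int(phases[i] % 1.0 < 0.5)  # output high for half a cycle
        bits.append(bit)
    return bits

print(''.join(str(b) for b in ring_oscillator_bits()))
```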

But what happens if they aren’t independent? In particular, what happens if the circuit is faced with an attacker who can manipulate the outside of the system?

The attack turns out to be fairly straightforward. It exploits an effect called injection locking, known since 1665, which occurs when two oscillators are very lightly coupled. For example, two pendulum clocks mounted on the same wall tend to synchronise the swing of their pendula through the small vibrations transmitted through the wall.

In an electronic circuit, the attacker can inject a signal to force the ring oscillators to injection-lock. The simplest way is to impose a frequency on the power supply from which the ring oscillators are driven. If there are any imbalances in the circuit, we suggest that these make the ring more susceptible to injection locking at that point. We examined the effects of power supply injection, and can envisage a similar attack by irradiation with electromagnetic fields.
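Injection locking is easy to see in a toy phase model. The sketch below integrates Adler’s classical equation for an oscillator subject to an injected signal; the locking range and frequency offsets are made-up numbers, chosen only to show the two regimes:

```python
import math

def phase_trajectory(delta_omega, lock_range, steps=200_000, dt=1e-7):
    """Integrate Adler's equation  dphi/dt = delta_omega - lock_range*sin(phi),
    where phi is the phase difference between the ring oscillator and the
    injected signal (rates in rad/s). If |delta_omega| < lock_range there
    is a stable fixed point: the oscillator locks to the injected
    frequency, and the jitter the RNG relies on collapses."""
    phi = 1.0  # arbitrary starting phase difference
    for _ in range(steps):
        phi += (delta_omega - lock_range * math.sin(phi)) * dt
    return phi % (2 * math.pi)

LOCK = 2 * math.pi * 50e3  # made-up locking range: 50 kHz
print(phase_trajectory(2 * math.pi * 10e3, LOCK))   # inside range: converges
print(phase_trajectory(2 * math.pi * 200e3, LOCK))  # outside: keeps slipping
```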

And it works surprisingly well. We tried an old version of a secure microcontroller that has been used in banking ATMs (and is still recommended for new ones). For the 32 random bits that are used in an ATM transaction, we managed to reduce the number of possibilities from 4 billion to about 225.

So if an attacker has access to your card and PIN, in a modified shop terminal for example, he can record some ATM transactions. Then he needs to take a fake card to an ATM containing this microcontroller. On average he’ll need to record about 15 transactions (the square root of 225) on the card and try about 15 transactions at the ATM before the recorded and attempted nonces collide and he can steal the money. This number may be small enough not to set off alarms at the bank. The customer’s card and PIN were used for the transaction, but at a time when he was nowhere near an ATM.
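The square-root figure is the usual birthday-style bound. As a quick sanity check (illustrative Python; it idealises the roughly 225 possible values as uniform and independent):

```python
def collision_probability(n_values=225, recorded=15, attempts=15):
    """Chance that at least one nonce tried at the ATM matches one of
    the nonces recorded earlier, assuming the ~225 possible values are
    drawn uniformly and independently (an idealisation)."""
    p_one_attempt_misses = 1 - recorded / n_values
    return 1 - p_one_attempt_misses ** attempts

print(collision_probability())  # ~0.64: better-than-even odds
```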

While we looked at power supply injection, the ATM could also be attacked electromagnetically: park a car next to the ATM and emit a 10 GHz signal, amplitude-modulated at the ATM’s vulnerable frequency (1.8 MHz in our example). The 10 GHz carrier will penetrate the ventilation slots but then be filtered away, leaving the 1.8 MHz signal in the power supply. When the car drives away there’s no evidence that the random numbers were bad – and bad random numbers are very difficult to detect anyway.

We also tried the same attack on an EMV (‘Chip and PIN’) bank card. Before injection, the card failed only one of the 188 tests in the standard NIST suite for random number testing. With injection it failed 160 of the 188. While we can’t completely predict the random number generator, some patterns in its output become discernible.
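To give a flavour of what these tests look like, here is the simplest of them, the frequency (monobit) test from NIST SP 800-22, sketched in Python, together with a reminder of why passing any one test means little:

```python
import math

def monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test: checks whether the
    counts of zeros and ones are close enough to equal. Returns the
    p-value; below 0.01 the sequence is conventionally rejected."""
    n = len(bits)
    s = abs(sum(1 if b else -1 for b in bits))
    return math.erfc(s / math.sqrt(2 * n))

alternating = [i % 2 for i in range(1000)]  # 0,1,0,1,... perfectly balanced
print(monobit_test(alternating))  # 1.0: passes, yet obviously not random,
# which is why a whole battery of tests is run, and why bad random
# numbers are still hard to detect
```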

So, as ever, designing good random number generators turns out to be a hard problem, not least because the attacker can tamper with your system in more ways than you might expect.

You can find the paper and slides on my website.

Defending against wedge attacks in Chip & PIN

The EMV standard, which is behind Chip & PIN, is not so much a protocol as a toolkit from which protocols can be built. One component it offers is card authentication, which allows the terminal to discover whether a card is legitimate without having to go online and contact the bank which issued it. Since the deployment of Chip & PIN, cards issued in the UK have only offered SDA (static data authentication) for card authentication. Here, the card contains a digital signature over a selection of data records (e.g. card number, expiry date, etc.). This digital signature is generated by the issuing bank, and the bank’s public key is, in turn, signed by the payment scheme (e.g. Visa or MasterCard).

The transaction process for an SDA card goes roughly as follows:

Card authentication:
    Card → Terminal: records, sig_BANK{records}
Cardholder verification:
    Terminal → Card: PIN entered
    Card → Terminal: PIN OK
Transaction authorization:
    Terminal → Card: transaction, nonce
    Card → Terminal: MAC{transaction, nonce, PIN OK}

Some things to note here:

  • The card contains a static digital signature, which anyone can read and replay
  • The response to PIN verification is not authenticated

This means that anyone who has brief access to the card can read the digital signature and create a clone which will pass card authentication. Moreover, the fraudster doesn’t need to know the customer’s PIN in order to use the clone, because it can simply return “yes” to any PIN and the terminal will be satisfied. Such clones (so-called “yes cards”) have been produced by security consultants as a demo, and have also been found in the wild.
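To make the attack concrete, here is a hypothetical Python sketch of a “yes card” (the class and method names are mine, and a real card speaks ISO 7816 APDUs, but the logic is the same):

```python
class YesCard:
    """Toy model of an SDA clone ("yes card"). It replays the static
    signature copied from a genuine card, and accepts any PIN, because
    nothing in the SDA transaction flow authenticates the card's
    "PIN OK" response."""

    def __init__(self, stolen_records, stolen_signature):
        # Both values can be read by anyone with brief access to the
        # genuine card; no secrets are needed to obtain them.
        self.records = stolen_records
        self.signature = stolen_signature

    def card_authentication(self):
        # The terminal verifies the bank's signature over the records;
        # the clone simply replays both, so the check passes.
        return self.records, self.signature

    def verify_pin(self, pin):
        # The "PIN OK" response is not authenticated, so the clone
        # returns it for whatever PIN the fraudster types.
        return "PIN OK"

    def authorise(self, transaction, nonce):
        # The clone never had the genuine card's secret MAC key, so it
        # returns garbage; an offline terminal cannot check the MAC
        # anyway, since only the issuing bank holds the key.
        return b"\x00" * 8

clone = YesCard(stolen_records=b"card number, expiry, ...",
                stolen_signature=b"sig_BANK over the records")
print(clone.verify_pin("0000"))  # "PIN OK", regardless of the PIN
```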

Continue reading Defending against wedge attacks in Chip & PIN

Security and Human Behaviour 2009

I’m at SHB 2009, which brings security engineers together with psychologists, behavioral economists and others interested in deception, fraud, fearmongering, risk perception and how we make security systems more usable. Here is the agenda.

This workshop was first held last year, and most of us who attended reckoned it was the most exciting event we’d been to in quite a while. (I blogged SHB 2008 here.) In follow-ups that will appear as comments to this post, I’ll be liveblogging SHB 2009.

How Privacy Fails: The Facebook Applications Debacle

I’ve been writing a lot about privacy in social networks, and sometimes the immediacy gets lost during the more theoretical debates. Recently, though, I’ve been investigating a massive privacy breach on Facebook’s application platform, which serves as a sobering case study. Even to me, the extent of unauthorised data flow I found, and the cold economic motivations keeping it going, were surprising. Facebook’s application platform remains a disaster from a privacy standpoint, undermining one of the more compelling features of the network.

Continue reading How Privacy Fails: The Facebook Applications Debacle

Attack of the Zombie Photos

One of the defining features of Web 2.0 is user-uploaded content, specifically photos. I believe that photo-sharing has quietly been the killer application which has driven the mass adoption of social networks. Facebook alone hosts over 40 billion photos, over 200 per user, and receives over 25 million new photos each day. Hosting such a huge number of photos is an interesting engineering challenge. The dominant paradigm which has emerged is to host the main website from one server which handles user log-in and navigation, and host the images on separate special-purpose photo servers, usually on an external content-delivery network. The advantage is that the photo server is freed from maintaining any state. It simply serves its photos to any requester who knows the photo’s URL.

This setup combines the two classic forms of enforcing file permissions: access control lists and capabilities. The main website checks each request for a photo against an ACL; it then grants a capability to view the photo, in the form of an obfuscated URL which can be sent to the photo server. We wrote earlier about how it was possible to forge Facebook’s capability-URLs and gain unauthorised access to photos. Fortunately, this has been fixed, and it appears that most sites use capability-URLs with enough randomness to be unforgeable.

There’s another traditional problem with capability systems, though: revocation. My colleagues Jonathan Anderson, Andrew Lewis, Frank Stajano and I ran a small experiment on 16 social-networking, blogging, and photo-sharing web sites and found that most failed to remove image files from their photo servers after they were deleted from the main web site. It’s often feared that once data is uploaded into “the cloud,” it’s impossible to tell how many backup copies may exist and where, and this provides clear proof that content delivery networks are a major problem for data remanence.
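The pattern itself is easy to sketch. In the illustrative Python below (the HMAC construction, names and URL format are mine, not any particular site’s), note that the photo server’s check involves no state at all, which is exactly why deleting the photo on the main site revokes nothing:

```python
import hmac, hashlib

# Illustrative shared secret between the main site and its photo servers.
SECRET = b"not any real site's key"

def mint_capability_url(photo_id: str) -> str:
    """Main site: after checking the requester against the photo's ACL,
    mint an unguessable URL that acts as a capability to view it."""
    tag = hmac.new(SECRET, photo_id.encode(), hashlib.sha256).hexdigest()
    return f"https://photos.example.com/{photo_id}/{tag}.jpg"

def photo_server_accepts(photo_id: str, tag: str) -> bool:
    """Photo server: a stateless check that the requester knew a valid
    URL. Nothing here can tell whether the photo was since 'deleted' on
    the main site; revocation would require state, which this design
    avoids on purpose."""
    expected = hmac.new(SECRET, photo_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

url = mint_capability_url("40000000001")
print(url)  # remains valid until the image file itself is removed
```

Continue reading Attack of the Zombie Photos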

Optimised to fail: Card readers for online banking

A number of UK banks are distributing hand-held card readers for authenticating customers, in the hope of stemming the soaring levels of online banking fraud. As the underlying protocol — CAP — is secret, we reverse-engineered the system and discovered a number of security vulnerabilities. Our results have been published as “Optimised to fail: Card readers for online banking”, by Saar Drimer, Steven J. Murdoch, and Ross Anderson.

In the paper, presented today at Financial Cryptography 2009, we discuss the consequences of CAP having been optimised to reduce both the costs to the bank and the amount of typing done by customers. While the principle of CAP — two factor transaction authentication — is sound, the flawed implementation in the UK puts customers at risk of fraud, or worse.
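In outline, a CAP device has the card compute a MAC over its transaction counter plus any challenge or amount, which the reader then truncates to a short decimal code to save the customer typing. Here is a hedged sketch of such a generator (the key, fields and truncation are illustrative; the real scheme uses the EMV application cryptogram and a bank-specific bit filter, as described in the paper):

```python
import hmac, hashlib

def cap_style_code(card_key: bytes, atc: int,
                   challenge: str = "", amount: str = "") -> int:
    """Toy CAP-style generator: MAC over the card's transaction counter
    (ATC) plus whatever the bank chose to include, truncated to eight
    decimal digits so the customer has little to type. The truncation
    that saves typing also shrinks the space an attacker must search."""
    data = f"{atc}|{challenge}|{amount}".encode()
    mac = hmac.new(card_key, data, hashlib.sha256).digest()
    # Dynamic truncation borrowed from HOTP (RFC 4226) for illustration.
    offset = mac[-1] & 0x0F
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return code % 10 ** 8

key = b"\x00" * 16  # per-card key, shared only with the issuing bank
print(cap_style_code(key, atc=42))                        # 'identify' mode
print(cap_style_code(key, atc=43, challenge="12345678"))  # 'respond' mode
```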

When Chip & PIN was introduced for point-of-sale transactions, the effective liability for fraud was shifted to customers. While the Banking Code says that customers are not liable unless they were negligent, it is up to the bank to define negligence. In practice, the mere fact that Chip & PIN was used is considered enough. Now that Chip & PIN is used for online banking, we may see a similar erosion of consumer protection.

Further information can be found in the paper and the talk slides.