Category Archives: Banking security

The security of the banking system, as well as hardware and software commonly used in such installations

Temporal Correlations between Spam and Phishing Websites

Richard Clayton and I have been studying phishing website take-down for some time. We monitored the availability of phishing websites, finding that while most phishing websites are removed within a day or two, a substantial minority remain for much longer. We later found that one of the main reasons why so many websites slip through the cracks is that the take-down companies responsible for removal refuse to share their URL lists with each other.

One nagging question remained, however. Do long-lived phishing websites cause any harm? Would removing them actually help? To get that answer, we had to bring together data on the timing of phishing spam transmission (generously shared by Cisco IronPort) with our existing data on phishing website lifetimes. In our paper co-authored with Henry Stern and presented this week at the USENIX LEET Workshop in Boston, we describe how a substantial portion of long-lived phishing websites continue to receive new spam until the website is removed. For instance, fresh spam continues to be sent out for 75% of phishing websites alive after one week, attracting new victims. Furthermore, around 60% of phishing websites still alive after a month keep receiving spam advertisements.
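To make the measurement concrete: given per-site records of first appearance, take-down time and spam sightings, the statistic is simply the share of still-alive sites that receive spam after the cutoff. Here is a small Python sketch with made-up records (the real analysis joined the IronPort and phishing-lifetime datasets, not this toy format):

```python
from datetime import date, timedelta

# Made-up per-site records: (first_seen, taken_down, dates spam was sighted).
sites = [
    (date(2009, 3, 1), date(2009, 3, 2), [date(2009, 3, 1)]),
    (date(2009, 3, 1), date(2009, 3, 20), [date(2009, 3, 1), date(2009, 3, 10)]),
    (date(2009, 3, 1), date(2009, 4, 5), [date(2009, 3, 1), date(2009, 3, 9), date(2009, 4, 1)]),
    (date(2009, 3, 1), date(2009, 4, 10), [date(2009, 3, 1)]),
]

def spam_share_after(sites, days):
    """Of the sites still alive `days` after first appearing, return the
    fraction that received fresh spam on or after that cutoff."""
    alive = [s for s in sites if s[1] - s[0] >= timedelta(days=days)]
    if not alive:
        return 0.0
    fresh = [s for s in alive
             if any(d >= s[0] + timedelta(days=days) for d in s[2])]
    return len(fresh) / len(alive)

print(spam_share_after(sites, 7))   # 2 of the 3 week-old toy sites: ~0.67
```

With the toy data, two of the three sites still alive after a week keep receiving spam; the paper reports 75% for the real data.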

Consequently, removal of websites by the banks (and the specialist take-down companies they hire) is important. Even when the sites stay up for some time, there is value in continued efforts to get them removed, because this will limit the damage.

However, as we have pointed out before, the take-down companies cause considerable damage by their continuing refusal to share data on phishing attacks with each other, despite our proposals addressing their competitive concerns. Our (rough) estimate of the financial harm due to longer-lived phishing websites was $330 million per year. Given this new evidence of persistent spam campaigns, we are now more confident of this measure of harm.

There are other interesting insights discussed in our new paper. For instance, phishing attacks can be broken down into two main categories: ordinary phishing hosted on compromised web servers and fast-flux phishing hosted on a botnet infrastructure. It turns out that fast-flux phishing spam is more tightly correlated with the uptime of the associated phishing host. Most spam is sent out around the time the fast-flux website first appears and stops once the website is removed. For phishing websites hosted on compromised web servers, there is much greater variation between the time a website appears and when the spam is sent. Furthermore, fast-flux phishing spam was 68% of the total email spam detected by IronPort, despite this being only 3% of all the websites.

So there seems to be a cottage industry of fairly disorganized phishing attacks, with perhaps a few hundred people involved. Each compromises a small number of websites, while sending a small amount of spam. Conversely there are a small number of organized gangs who use botnets for hosting, send most of the spam, and are extremely efficient on every measure we consider. We understand that the police are concentrating their efforts on the second set of criminals. This appears to be a sound decision.

Chip and PIN on Trial

The trial of Job v Halifax plc has been set down for April 30th at 1030 in the Nottingham County Court, 60 Canal Street, Nottingham NG1 7EJ. Alain Job is an immigrant from the Cameroon who has had the courage to sue his bank over phantom withdrawals from his account. The bank refused to refund the money, making the usual claim that its systems were secure. There’s a blog post on the cavalier way in which the Ombudsman dealt with his case. Alain’s case was covered briefly in the Guardian in the run-up to a previous hearing; see also reports in Finextra here, here and (especially) here.

The trial should be interesting and I hope it’s widely reported. Whatever the outcome, it may have a significant effect on consumer protection in the UK. For years, financial regulators have been just as credulous about the banks’ claims to be in control of their information-security risk management as they were about the similar claims made in respect of their credit risk management (see our blog post on the ombudsman for more). It’s not clear how regulatory capture will (or can) be fixed in respect of credit risk, but it is just possible that a court could fix the consumer side of things. (This happened in the USA with the Judd case, as described in our submission to the review of the ombudsman service — see p 13.)

For further background reading, see blog posts on the technical failures of chip and PIN, the Jane Badger case, the McGaughey case and the failures of fraud reporting. Go back into the 1990s and we find the Halifax again as the complainant in R v Munden; John Munden was prosecuted for attempted fraud after complaining about phantom withdrawals. The Halifax couldn’t produce any evidence and he was acquitted.

A truly marvellous proof of a transaction

When you transact with an EMV payment card (a “Chip and PIN” card), typical UK operation is for the card and the bank to exchange three authentication “cryptograms”. First comes the request from the card (the ARQC), then comes the response from the bank (the ARPC) and finally the transaction certificate (TC). The idea with the transaction certificate is that the card signs off on the correct completion of the protocol, having received the response from the bank and accepted it. The resulting TC is supposed to be a sort of “proof of transaction”, and because it certifies the completion of the entire transaction at a technical level, one can presume that both the money was deducted and the goods were handed over. In an offline transaction, one can ask the card to produce a TC straight away (no ARQC or ARPC), and depending on various risk management settings, it will normally do so. So, what can you do with a TC?
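For readers who like to see protocols as code, here is a toy sketch of the three-step exchange. The real cryptograms are 8-byte 3DES-based MACs computed with card-unique keys over CDOL-specified data; the truncated HMAC-SHA256, key and field names below are stand-ins, not the real scheme:

```python
import hashlib
import hmac

def mac8(key: bytes, data: bytes) -> bytes:
    # Stand-in for the EMV application cryptogram MAC (really 3DES-based).
    return hmac.new(key, data, hashlib.sha256).digest()[:8]

card_key = b"card-unique-session-key"            # shared by card and issuer
txn = b"amount=9.99|currency=GBP|atc=0142|un=6F2A"

arqc = mac8(card_key, txn)                       # 1. card -> bank: request
arpc = mac8(card_key, arqc + b"|approve=00")     # 2. bank -> card: response
tc   = mac8(card_key, txn + arpc)                # 3. card: transaction certificate

# The bank, holding the same key, can later re-derive the TC from its logs
# and confirm that the card really completed the protocol:
assert tc == mac8(card_key, txn + mac8(card_key, arqc + b"|approve=00"))
```

The point of the final check is exactly the “proof of transaction” property: only a party holding the card’s key could have produced a TC over this data.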

Well, the TC is sent off to the settlement and clearing system, along with millions of others, and there it sits in the transaction logs. These logs get validated as part of clearing and the MAC is checked (or it should be). In practice the checks may get deferred until there is a dispute over a transaction (i.e. a human complains). The downside is that if you don’t check all TCs, you can’t spot offline fraud in the Chip and PIN system. Should a concerned party dispute a transaction, the investigation team will pull the record, and use a special software tool to validate the TC – objective proof that the card was involved.

Another place the TC gets put is on your receipt. An unscientific poll of my wallet reveals 13 EMV receipts with TCs present (if you see a hex string like D3803D679B33F16E8 you’ve found it) and 7 without – 65% of my receipts have one. The idea is that the receipt can stand as a cryptographic record of the transaction, should the bank’s logs fail or somehow be munged.

But there is a bit of a problem: sometimes there isn’t enough space for all the data. It’s a common problem: Mr. Fermat had a truly marvellous proof of a proposition but it wouldn’t fit in the margin, and unfortunately while the cryptogram fits on the receipt, you can’t check a MAC without knowing the input data, and EMV terminal software developers have a pretty haphazard approach to including this on the receipts… after all, paper costs money!

Look at the following receipts. Cab-Inn Aarhus has the AID, the ATC and various other input goodies to the MAC (sorry if I’m losing you in EMV technicalities); Ted Baker has the TC, the CVMR, the TSI, TVR and IACs (but no ATC), and the Credit Agricole cash withdrawal has the TC and little else. One doesn’t want to be stuck like Andrew Wiles, spending seven years guessing the input data! Only at one shop have I seen the entire required data on the receipt (including CID et al), and it was (bizarrely) a receipt for a Big Mac at McDonald’s!

Various POS receipts

Now, in case of dispute, one can brute-force the missing data, guessing up to 40 or so missing bits without undue difficulty. And there is much wrangling (indeed the subject of previous and current legal actions) about exactly what data should be provided, whether the card UDKs should be disclosed to both sides in a dispute, and generally what constitutes adequate proof.
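As a rough illustration of that brute-forcing, suppose the receipt omitted only the 16-bit ATC. With the key material and the rest of the input data to hand, one can simply try all 65,536 values (the MAC, key and field layout here are stand-ins, not the real EMV scheme):

```python
import hashlib
import hmac

def mac8(key: bytes, data: bytes) -> bytes:
    # Stand-in for the EMV transaction certificate MAC (really 3DES-based).
    return hmac.new(key, data, hashlib.sha256).digest()[:8]

key = b"disclosed-key-material"                 # assume the examiner has the keys
known = b"amount=9.99|tvr=0000048000|atc="      # fields the receipt did print
tc_on_receipt = mac8(key, known + b"0142")      # the real ATC never made the receipt

def recover_atc(key, known, tc):
    # Try all 65,536 possible ATC values until the MAC matches the receipt's TC.
    for atc in range(1 << 16):
        if mac8(key, known + b"%04X" % atc) == tc:
            return atc
    return None

print("%04X" % recover_atc(key, known, tc_on_receipt))  # 0142
```

Sixteen bits is trivial; 40 bits is a few hours of CPU time; much beyond that and, like Wiles, you are in for a long wait.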

A “fake” transaction certificate

But most worrying is something I observed last week, making a credit card purchase for internet access at an airport: a fake transaction certificate. I can tell it’s a fake because, firstly, it’s the wrong length, and secondly, the transaction was online — I only typed in my card number, and never plugged my card into a smartcard reader. I can only assume that some aesthetically minded developer decided that the online confirmation needed a transaction certificate, so took any old hex field and just dumped it. It could be an innocent mistake. But whether or not the “margin was too small” to contain this TC, it’s a sad reflection on the state of proof in EMV when cryptographic data fields appear only for decorative purposes. Maybe some knowledgeable reader can shed some light on this?

National Fraud Strategy

Today the Government “launches” its National Fraud Strategy. I qualify the verb because none of the quality papers seems to be running the story, and the press releases have not yet appeared on the websites of the Attorney General or the Ministry of Justice.

And well might Baroness Scotland be ashamed. The Strategy is a mishmash of things that are being done already with one new initiative – a National Fraud Reporting Centre, to be run by the City of London Police. This is presumably intended to defuse the Lords’ criticisms of the current system whereby fraud must be reported to the banks, not to the police. As our blog has frequently reported, banks dump liability for fraud on customers by making false claims about system security and imposing unreasonable terms and conditions. This is a regulatory failure: the FSA has been just as gullible in accepting the banking industry’s security models as they were about accepting its credit-risk models. (The ombudsman has also been eager to please.)

So what’s wrong with the new arrangements? Quite simply, the National Fraud Reporting Centre will nestle comfortably alongside the City force’s Dedicated Cheque and Plastic Crime Unit, which investigates card fraud but is funded by the banks. Given this disgraceful arrangement, which is more worthy of Uzbekistan than of Britain, you have to ask how eager the City force will be to investigate offences that bankers don’t want investigated, such as the growing number of insider frauds and chip card cloning? And how vigorously will City cops investigate their paymasters for the fraud of claiming that their systems are secure, when they’re not, in order to avoid paying compensation to defrauded accountholders? The purpose of the old system was to keep the fraud figures artificially low while enabling the banks to control such investigations as did take place. And what precisely has changed?

The lessons of the credit crunch just don’t seem to have sunk in yet. The Government just can’t kick the habit of kowtowing to bankers.

Optimised to fail: Card readers for online banking

A number of UK banks are distributing hand-held card readers for authenticating customers, in the hope of stemming the soaring levels of online banking fraud. As the underlying protocol — CAP — is secret, we reverse-engineered the system and discovered a number of security vulnerabilities. Our results have been published as “Optimised to fail: Card readers for online banking”, by Saar Drimer, Steven J. Murdoch, and Ross Anderson.

In the paper, presented today at Financial Cryptography 2009, we discuss the consequences of CAP having been optimised to reduce both the costs to the bank and the amount of typing done by customers. While the principle of CAP — two factor transaction authentication — is sound, the flawed implementation in the UK puts customers at risk of fraud, or worse.

When Chip & PIN was introduced for point-of-sale, the effective liability for fraud was shifted to customers. While the banking code says that customers are not liable unless they were negligent, it is up to the bank to define negligence. In practice, the mere fact that Chip & PIN was used is considered enough. Now that Chip & PIN is used for online banking, we may see a similar reduction of consumer protection.

Further information can be found in the paper and the talk slides.

Evil Searching

Tyler Moore and I have been looking into how phishing attackers locate insecure websites on which to host their fake webpages, and our paper is being presented this week at the Financial Cryptography conference in Barbados. We found that compromised machines accounted for 75.8% of all the attacks, “free” web hosting accounts for 17.4%, and the rest was down to various specialist gangs; those gangs should not be ignored, though, since they send most of the phishing spam and (probably) scoop most of the money!

Sometimes the same machine gets compromised more than once. Now this could be the same person setting up multiple phishing sites on a machine that they can attack at will… However, we often observe that the new site is in a completely different directory — strongly suggesting that a different attacker has broken into the same machine, but in a different way. We looked at all the recompromises where there was a delay of at least a week before the second attack and found that in 83% of cases a different directory was used… and using this definition of a “recompromise” we found that around 10% of machines were recompromised within 4 weeks, rising to 20% after six months. Since there are plenty of vulnerable machines out there, there must be something slightly different about the machines that get attacked again and again.
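The heuristic is easy to state in code. This sketch (with hypothetical URLs) labels a second attack a recompromise when it comes at least a week later and lands in a different directory:

```python
import posixpath
from datetime import date, timedelta
from urllib.parse import urlparse

def same_directory(url_a, url_b):
    """Two phishing URLs on one host look like the same attack if the
    fake pages live in the same directory."""
    return (posixpath.dirname(urlparse(url_a).path)
            == posixpath.dirname(urlparse(url_b).path))

def is_recompromise(first, second, min_gap_days=7):
    """Count the second attack as a fresh compromise if it comes at least a
    week later AND uses a different directory (hinting at a different attacker)."""
    (url1, d1), (url2, d2) = first, second
    return (d2 - d1 >= timedelta(days=min_gap_days)
            and not same_directory(url1, url2))

first  = ("http://victim.example/shop/images/bank/", date(2009, 1, 5))
second = ("http://victim.example/cgi-bin/.tmp/paypal/", date(2009, 2, 1))
print(is_recompromise(first, second))  # True
```

The week-long gap filters out one attacker rearranging their own pages; the directory test is, of course, only a proxy for “different attacker”.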

For 2486 sites we also had summary website logging data from The Webalizer, where sites had left their daily visitor statistics world-readable. One of the things The Webalizer documents is which search terms were used to locate the website (these are available in the “Referrer” header, which records what was typed into search engines such as Google).

We found that some of these searches were “evil” in that they were looking for specific versions of software that contained security vulnerabilities (“If you’re running version 1.024 then I can break in”); or they were looking for existing phishing websites (“if you can break in, then so can I”); or they were seeking the PHP “shells” that phishing attackers often install to help them upload files onto the website (“if you haven’t password protected your shell, then I can upload files as well”).
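As an illustration only (the paper’s classification used hand-compiled term lists, not these regexes), a classifier for the three flavours of evil search might look like:

```python
import re

# Illustrative patterns for the three kinds of "evil" search described above.
EVIL_PATTERNS = [
    re.compile(r"powered by \S+ (?:v|version )?\d[\d.]*", re.I),   # vulnerable version
    re.compile(r"(?:paypal|bank\b).*(?:login|verify)", re.I),      # existing phish pages
    re.compile(r"(?:\b(?:c99|r57)\b.*shell|php shell)", re.I),     # PHP shells
]

def evil_searches(search_terms):
    """Return the subset of logged search terms that match an evil pattern."""
    return [t for t in search_terms
            if any(p.search(t) for p in EVIL_PATTERNS)]

terms = [
    "cheap hotels barbados",
    "powered by somecms v1.024",
    "c99 shell inurl:c99.php",
    "paypal account verify login",
]
print(evil_searches(terms))  # the last three terms
```

Run over the world-readable Webalizer summaries, a filter like this is what lets one tie a suspicious search to the day of a break-in.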

In all, we found “evil searches” on 204 machines that hosted phishing websites AND, in the vast majority of cases, these searches corresponded in time to when the website was broken into. Furthermore, in 25 cases the website was compromised twice and we were monitoring the daily log summaries after the first break-in: here 4 of the evil searches occurred before the second break-in, 20 on the day of the second break-in, and just one after it. Of course, where people didn’t “click through” from Google search results, perhaps because they had an automated tool, we won’t have a record of their searches — but nevertheless, even at the 18% incidence we can be sure of, searches are an important mechanism.

The recompromise rates for sites where we found evil searches were a lot higher: 20% recompromised after 4 weeks, nearly 50% after six months. There are lots of complicating factors here, not least that sites with world-readable Webalizer data might simply be inherently less secure. However, overall we believe that it clearly indicates that the phishing attackers are using search to find machines to attack; and that if one attacker can find the site, then it is likely that others will do so independently.

There’s a lot more in the paper itself (which is well-worth reading before commenting on this article, since it goes into much more detail than is possible here)… In particular, we show that publishing URLs in PhishTank slightly decreases the recompromise rate (getting the sites fixed is a bigger effect than the bad guys locating sites that someone else has compromised); and we also have a detailed discussion of various mitigation strategies that might be employed, now that we have firmly established that “evil searching” is an important way of locating machines to compromise.

Card fraud — what can one do?

People often ask me what can they do to prevent themselves from being victims of card fraud when they pay with their cards at shops or use them in ATMs (for on-line card fraud tips see e-victims.org, for example). My short answer is usually “not much, except checking your statements and reporting anomalies to the bank”. This post is the longer answer — little practical things, some a bit over the top, I admit — that cardholders can do to decrease the risk of falling victim to card fraud. (Some of these will only apply to UK issued cards, some to all smartcards, and the rest applies to all types of cards.)

Practical:

1. If you have a UK EMV card, ask the bank to send you a new card if it was issued before the first quarter of 2008. APACS has said that cards issued from January 2008 have an iCVV (‘integrated circuit card verification value’) in the chip that isn’t the same as the one on the magnetic stripe (CVV1). This means that if the magstripe data was read off the chip (it’s there for fallback) and written onto a blank magstripe card, it shouldn’t — if iCVVs are indeed checked — work at ATMs anywhere. The bad news is that in February 2008 only two out of four newly minted cards that we tested had iCVV, though today your chances may be better.
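The reason iCVV helps can be shown with a toy derivation. The real algorithm uses DES with a pair of issuer keys, and the chip’s copy of the card data is computed with service code 999; the HMAC below merely stands in for it, and all values are made up:

```python
import hashlib
import hmac

def derive_cvv(pan: str, expiry: str, service_code: str, key: bytes) -> str:
    # Stand-in derivation: the real CVV algorithm uses DES with two issuer keys.
    msg = f"{pan}|{expiry}|{service_code}".encode()
    return "%03d" % (int(hmac.new(key, msg, hashlib.sha256).hexdigest(), 16) % 1000)

key = b"issuer-cvv-key"
pan, expiry = "5432109876543210", "1012"

cvv1 = derive_cvv(pan, expiry, "101", key)  # value encoded on the magnetic stripe
icvv = derive_cvv(pan, expiry, "999", key)  # chip copy computed with service code 999

# A magstripe clone written from data read off the chip carries the iCVV where
# the issuer expects CVV1 -- so an issuer that actually checks CVV1 declines it.
print(cvv1, icvv)
```

Because the two values are derived over different inputs, a clone built from chip data presents the wrong check value, and the whole scheme stands or falls on whether issuers actually check it.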

A PIN entry device taped together

2. Where you can pick up the PIN entry device (PED), do so (Sainsbury’s actually encourages this). Firstly, it may allow you to hide your PIN from the people behind you in the queue. Secondly, it allows you to give the device a cursory inspection: if there is more than one wire coming out of the back, or the thing falls apart, you shouldn’t use it. (In the picture on the right you see a mounted PED at a high-street shop that is crudely taped together.) In addition, be suspicious of PEDs that are mounted in an irregular way such that you can’t move or comfortably use them; this may mean the merchant has a very good camera angle on the keypad, and if you move the PED it may go out of focus. Of course, some stores mount their PEDs so that they can’t be moved, so you’ll have to use your judgment.


How can we co-operate to tackle phishing?

Richard Clayton and I recently presented evidence of the adverse impact of take-down companies not sharing phishing feeds. Many phishing websites are missed by the take-down company which has the contract for removal; unsurprisingly, these websites are not removed very fast. Consequently, more consumers’ identities are stolen.

In the paper, we propose a simple solution: take-down companies should share their raw, unverified feeds of phishing URLs with their competitors. Each company can examine the raw feed, pick out the websites impersonating their clients, and focus on removing these sites.
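The mechanics of the proposal are trivially simple. In code, “pick out the websites impersonating their clients” is just a filter over the pooled feed (the URLs and brand-keyword matching below are assumptions for illustration; real feeds would also carry timestamps and target labels):

```python
# A raw, unverified feed as pooled from several take-down companies.
raw_feed = [
    "http://evil.example/~x/barclays-secure/login.php",
    "http://203.0.113.7/hsbc/verify/",
    "http://evil.example/paypal.com.account/update/",
]

def urls_for_clients(feed, clients):
    """Keep only URLs that appear to impersonate one of our own clients."""
    return [u for u in feed if any(c in u.lower() for c in clients)]

our_clients = {"barclays", "hsbc"}       # hypothetical client list
print(urls_for_clients(raw_feed, our_clients))  # the first two URLs
```

The cost of consuming a shared feed is thus negligible; the economic argument is entirely about who pays for detection.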

Since we presented our findings to the Anti-Phishing Working Group eCrime Researchers Summit, we have received considerable feedback from take-down companies. Take-down companies attending the APWG meeting understood that sharing would help speed up response times, but expressed reservations at sharing their feeds unless they were duly compensated. Eric Olsen of Cyveillance (another company offering take-down services) has written a comprehensive rebuttal of our recommendations. He argues that competition between take-down companies drives investment in efforts to detect more websites. Mandated sharing of phishing URL feeds, in his view, would undermine these detection efforts and cause take-down companies such as Cyveillance to exit the business.

I do have some sympathy for the objections raised by the take-down companies. As we state in the paper, free-riding (where one company relies on another to invest in detection so they don’t have to) is a concern for any sharing regime. Academic research studying other areas of information security (e.g., here and here), however, has shown that free-riding is unlikely to be so rampant as to drive all the best take-down companies out of offering service, as Mr. Olsen suggests.

While we can quibble over the extent of the threat from free-riding, it should not detract from the conclusions we draw about the need for greater sharing. In our view, it would be unwise and irresponsible to accept the status quo of keeping phishing URL feeds completely private. After all, competition without sharing has approximately doubled the lifetimes of phishing websites! The solution, then, is to devise a sharing mechanism that gives take-down companies the incentive to keep detecting more phishing URLs.

Liberal Democrat leader visits our lab

This week, Nick Clegg, leader of the UK Liberal Democrat Party, and David Howarth, MP for Cambridgeshire, visited our hardware security lab for a demonstration of Chip & PIN fraud techniques.

They used this visit to announce their new party policy on protection against identity fraud. At present, credit rating companies are exempt from aspects of the Data Protection Act and can forward personal information about an individual’s financial history to companies without the subject’s consent. Clegg proposes to give individuals the right to “freeze” their credit records, making it more difficult for fraudsters to impersonate others.

See also the Cambridge Evening News article and video interview.