Tyler Moore and I have a paper (An Empirical Analysis of the Current State of Phishing Attack and Defence) accepted at this year’s Workshop on the Economics of Information Security (WEIS 2007) in which we examine how long phishing websites remain available before the impersonated bank gets them “taken-down”.
We monitored the availability of several thousand phishing websites over a two-month period. Our results show that a typical phishing website can be visited for an average of 58 hours, but this average is skewed by a small number of very long-lived sites — we find that the distribution of lifetimes is lognormal — and the median lifetime is just 20 hours.
We also identified a significant subset of websites (over half of all URLs reported to the PhishTank database we used) which were clearly being operated by a single “rock-phish” gang. These sites attacked multiple banks and used pools of IP addresses and domain names. We found that they remained available for an average of 94 hours (again with a lognormal distribution, but with a median of 55 hours). A newer architectural innovation, dubbed “fast-flux”, which uses hundreds of different compromised machines per week, extended website availability to a median of 202 hours.
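As a quick illustration of why the mean and median diverge so much, take the reported figures at face value (the paper fits the full distribution, so the parameters below are only indicative):

    X \sim \mathrm{Lognormal}(\mu,\sigma^2):\quad \mathbb{E}[X] = e^{\mu+\sigma^2/2},\qquad \mathrm{median}(X) = e^{\mu} \quad\Rightarrow\quad \sigma = \sqrt{2\ln\!\big(\mathbb{E}[X]/\mathrm{median}(X)\big)}

So ordinary sites give \sigma \approx \sqrt{2\ln(58/20)} \approx 1.46 and rock-phish sites give \sigma \approx \sqrt{2\ln(94/55)} \approx 1.04: heavy enough tails that a handful of sites surviving for weeks drags the mean well above the median.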
The relative success of the rock-phish gang was a rather unexpected result — you’d think that with more banks wanting the sites removed, they’d disappear faster. It’s hard to say whether the rock-phish techies are evil geniuses, or whether they just move around so fast that, pretty much by chance, they end up persisting in the locations where take-down is slow.
We believe that one important advance would be to reduce the information asymmetry faced by the defenders. Phishers obfuscate their behaviour and make their sites appear independent, so that phishing appears to many to be an intractable problem. Security vendors are happy to accept inflated (and ever-increasing) statistics to make the problem seem more important, and even PhishTank trumpets the growth in the number of reports rather than the number of genuinely distinct attacks. Law enforcement will not prioritise investigations if there appear to be hundreds of small-scale phishing attacks, whereas their response would be different if there were just a handful of people involved. Hence, improving the measurement systems, and better identifying patterns of similar behaviour, will give defenders the opportunity to focus their response upon a smaller number of unique phishing gangs.
We were also able to examine web log summaries at a number of sites, along with some detailed records of visitors that a handful of phishers inadvertently disclosed. This allowed us to create a ball-park estimate of the number of visitors who divulged their data at a typical site: around 25 if it remained up for one day, growing by roughly 10 more for each further day it stayed up.
Our figures do demonstrate that the reactive strategy pursued by the banks reduces the damage done by phishing websites. However, take-down is clearly not happening fast enough to prevent losses altogether, and so it cannot be the only response. In particular, we used the lifetime and visitor numbers to show that, on fairly conservative extrapolations, the banks’ losses that can be directly attributed to phishing websites are some $175m per annum, with a further $175m or so being raked in by the rock-phish gang. This total of $350m falls well short of the $2000m estimated last November by Gartner. The disparity will be partly down to the very rough estimates we used (and the rough estimates behind Gartner’s figures), and partly down to other mechanisms, such as theft of merchant databases and malware that scans your hard disk for passwords and installs keyloggers — we certainly cannot say that all the losses attributed to phishing come from phishing websites, but a sizeable chunk certainly does.
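To give a flavour of the kind of extrapolation involved, here is a back-of-envelope sketch in Python. The victim-growth numbers come from the paragraph above; the per-victim loss is the Gartner-derived $572 figure discussed in the comments below, used here purely as a placeholder rather than as the paper’s actual input:

    # Back-of-envelope sketch only; the real inputs and methodology are in the paper.

    def victims(days_up: float) -> float:
        """Rough victims-per-site model: ~25 on the first day, ~10 more per extra day."""
        return 25 + 10 * max(days_up - 1, 0)

    mean_lifetime_days = 58 / 24   # mean lifetime of an ordinary phishing site, in days
    loss_per_victim = 572          # placeholder per-victim loss (see comments below)

    loss_per_site = victims(mean_lifetime_days) * loss_per_victim
    print(f"loss per ordinary site: ~${loss_per_site:,.0f}")              # roughly $22,000
    print(f"sites/year implied by $175m: ~{175e6 / loss_per_site:,.0f}")  # on the order of 8,000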
There’s some interesting commentary over on Financial Cryptography discussing Paul Ohm’s article on “The Myth of the Superuser”. The suggestion is that we’re promoting the rock-phish gang as Superusers… I don’t think we are; we’re merely saying that we can measure an enhanced ability to evade “take-down”, but it will be their spam-sending prowess that attracts people to their sites, and their ability to cash out accounts that determines their profits.
One thing that kind of annoys me is that simple phishing would not be too difficult to stop, but the banks do not appear to be interested in making the effort.
A simple example would be for the bank to issue account holders with a dongle or other personal Chip-n-Pin type reader which works in both directions, as opposed to the single direction of the devices they are starting to send out.
If each user is shown a “check number” on screen that must match the one on the device, then the user will know the site is fake if the two numbers do not match.
If, however, the number does match, then the user knows that the bank site must have been in the communication chain somewhere. This does not prevent a man in the middle attack, but it does make it more difficult to perform and, more importantly, more easily detected by the bank (if they can be bothered).
IF (and it’s a big if) the same reader device could be used to authenticate the transaction in some way, then this would make phishing even more difficult.
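A minimal sketch of the “check number” idea, assuming the reader and the bank share a per-customer key and a per-session nonce (all names and details here are hypothetical, not any real device’s protocol):

    import hashlib
    import hmac

    def check_number(shared_key: bytes, session_nonce: bytes) -> str:
        """Both the bank's server and the customer's reader derive the same short code."""
        digest = hmac.new(shared_key, session_nonce, hashlib.sha256).hexdigest()
        return digest[:6]  # short enough for the customer to compare by eye

    # The genuine bank displays check_number(key, nonce) on its web page; the reader,
    # given the same nonce, displays its own value. If the two codes differ, the real
    # bank was never in the loop for this session.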
I appreciate that these solutions are not perfect (few things ever are). However, the current situation, where the bank makes only a very small improvement to security, means that the phishing operators have a reasonable chance of overcoming it before they run out of resources, in which case they are unfortunately back in the game.
I have made the point before that the banks need to make a major, not minor, improvement in security, so that the bar is raised to the point where the majority of phishing and other attacks cannot be made to work without a very considerable investment of time and resources by the fraudsters.
However, I do not think the banks will do this as long as they can externalise the risk and pass off any losses onto the customer (which is, I guess, why a great many security people do not use online banking…).
Phishing is in fact very difficult indeed to stop. The main reason is that it’s almost impossible to re-engineer the users, and they’re the weakest link by far. Propping them up with technology only gets you so far.
Some of the banks (including Barclays in the UK) are issuing end-user devices for authentication. These are expected to work to the extent that the phishers go elsewhere, to where pickings are easier; however, they are vulnerable to man-in-the-middle attacks, so you need to deploy them along with suitable traffic-analysis techniques at the bank webservers.
The main reason that the banks haven’t issued these devices up to now is that they’re far too expensive compared with the actual losses sustained: you can only really justify them to protect your good name. This was one of the key points made by Matthew Pemble in this week’s security seminar — along with a clear indication that what was actually proving effective was back-office controls on money transfers. The bad guys are putting a lot of funds at risk, transferring some of this money — and actually getting away with only about half of the amount that they managed to move — and only being really successful against a small number of banks whose losses dominated the overall statistics.
For some years now, I’ve been arguing that one of the biggest obstacles to banking security is asymmetric authentication. Every new technology rolled out by the banks is dedicated to authenticating the customer to the bank, but almost no attention is paid to authenticating the bank to the customer.
Indeed, many banks’ marketing departments (and even some security departments) go even further, calling customers out of the blue, claiming to be from their bank and asking the customer to authenticate themselves. Spending a trivial amount on anti-phishing PR seems worthless when a bank’s day-to-day practice is to train customers to fall victim to pretext calls.
Richard,
If your last statement is true (and I tend to concur from what I have seen), then an independent league table might just encourage the bad banks to pull their socks up.
The problem I see with the tokens currently being issued is that they are only one-way (customer -> bank) and do not authenticate transactions, which makes a man in the middle attack all too easy to perform.
Two-way authentication with transaction authentication would make the MITM task much harder, especially if the numbers sent back to the customer were small pictures sufficiently obfuscated to prevent machine reading.
This shifts the workload back onto the phishers, who would have to be effectively present in the transaction window to make the attack possible at all.
As I indicated, it would not be a perfect solution, but it would be a major change from the current methods.
I suspect that phishing will stop when the costs/risks involved become too great, so the banks should do more on cracking down on transfers etc. (which is a very open subject in its own right).
The cost of a C-n-P reader with keypad and display, bought in large numbers, is actually not going to be that great compared to other costs a bank incurs, such as the (supposed) 40-60 GBP price of checking out a customer when opening an account.
Also, if you are going to issue a token, the extra software etc. to do two-way and transaction authentication is not likely to add greatly to the cost of the device.
Further, if expense were not a limitation, an out-of-channel authentication process would make the phishing task very, very difficult. It has been suggested that SMS might be used for this, but that has its own issues, which I looked into back in late 2000 and which are still only partially resolved.
Maybe people need to raise the pain threshold for the banks to make them raise the bar on fraud etc.
Richard and Tyler,
Congratulations on a superb piece of research. Yours is one of the most in-depth and lucid explorations in this area.
My own research and statistical modelling of the number of users who fall for a phishing site, the number of sites, uptime and hence annual losses concurs closely with yours. I believe, however, that the average loss figure is much higher than the $500 range. Javelin Strategies has done user surveys, and also bank surveys, that indicate that the per-incident loss is much higher.
On the topic of mutual authentication, and authentication in general… In the USA we have about 50M online banking users. Let’s say we roll out some kind of authentication technology that costs $20 per user per year. Most banks feel that this estimate is far too low for any end-user technology, given the costs of support, enrollment, etc. But let’s use it as a figure. That works out to about $1B per year in authentication costs.
If the losses are less than $1B, the industry will never pursue this. In fact, the losses would have to be in the $5B to $10B range for the industry to bother pursuing it.
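Spelling out that arithmetic (a trivial sketch using the toy figures from the paragraph above, not a cost model):

    online_banking_users = 50_000_000
    cost_per_user_per_year = 20  # dollars; acknowledged above to be a low estimate
    annual_cost = online_banking_users * cost_per_user_per_year
    print(f"${annual_cost / 1e9:.1f}B per year")  # ~$1.0B per year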
In fact, many of the losses come from purloined credit card numbers, which are used by phishers for Internet transactions. Thus the bank is never impacted by the losses at all. The Internet merchants eat the losses under the Card Not Present rules.
Therefore we can hardly expect strong mutual authentication, which requires training and effort on the end user’s part, to be deployed by banks in any large numbers, particularly in the USA. In the UK the scenario is a bit better, because APACS (largely due to the efforts of Colin Wittaker) has developed a “standard” for using EMV cards and offline readers to do user and transaction authentication. It will be interesting to see how Barclays’ rollout of said technology progresses.
– Dave Jevans
Chairman, Anti-Phishing Working Group
My personal blog.
Dear Dave,
Thanks for your comments. The $572 figure from Gartner is certainly imprecise, since it was based on surveys covering all types of identity theft. Your point on the impractical cost of mutual authentication is well taken, as is the fact that many costs are borne by merchants and not just the banks. Yet, as I’m sure you are aware, the indirect costs of phishing — namely, undermining consumer trust in online banking — are quite high, even if they cannot be so straightforwardly measured as direct losses. This may justify mutual authentication mechanisms even if it is hard for the banks to make a business case for them.
As an anecdote to underline Tyler’s comments about the impact to trust and brand, HarborOne Credit Union is billing TJX over their data breach for $590,000 USD. The figure is calculated as $90,000 for replacing credit cards and $500K as the impact to the credit union brand.
From a Computerworld article:
“The bill was for both direct operational costs that we incurred reissuing new debit cards to our customers, as well as the costs to us from a reputational standpoint,” he said. According to Blake, the TJX breach resulted in HarborOne having to block and reissue about 9,000 cards at a cost of around $90,000. The remaining $500,000 is what Blake believes the breach cost the credit union in terms of brand damage.
“We had to notify customers of the fact that their account was breached. There were some questions on their part whether or not we were responsible [for the breach] when in fact it was TJX’s responsibility,” Blake said.
This is a nice piece of research, I can’t believe I’ve only just stumbled upon it.
I work for a bank, and we do go to great lengths to prevent, detect and take down phishing sites.
Research we have done shows that changes to authentication mechanisms (mutual or otherwise) only serve to postpone the inevitable: within 3 months, phishing levels have surpassed what they were prior to the change. Phishers are employing ever more complex techniques to hoodwink the unsuspecting, and they are dynamic in their approach, quickly adapting to changes and producing kits which are widely available, at a price.
Phishing isn’t necessarily the fault of the systems put in place by the banks, but more to do with educating the customer; the customer will always be the weak link in this chain. Unfortunately it is also about the responsiveness of the hosting companies.
We receive and act upon over 500 phishing sites/URLs per day, some hosting companies will take down sites within hours, whilst a small number don’t act at all.
I honestly think that some form of legislation needs to be passed which holds the ISP/Host accountable for losses suffered as a result of phishing, something which will encourage these companies to be more proactive in identifying and removing such content.
As it stands in the UK, as far as I am aware, the act of phishing is a crime. However the hosting of the phishing site can, at most, be pursued under copyright infringement. This helps nobody other than the perpetrators of the crime.
Admittedly, new legislation in the UK would most likely only push the phishers to shores where the ISPs/hosts are not forced down the proactive route, but in all honesty it is the UK companies with which we have the most difficulty.
@Toby
Phishing isn’t necessarily the fault of the systems put in place by the banks, but more to do with educating the customer; the customer will always be the weak link in this chain. Unfortunately it is also about the responsiveness of the hosting companies.
If you read more of our work, you’ll see that we disagree and feel that the main way forward is for the banks to be incontrovertibly liable for losses (they choose the security mechanisms and must be incentivised to choose the correct ones). Educating customers may assist the banks in that a wider range of mechanisms may become available to them — but ultimately you can fool pretty much anyone with a “man in the browser” attack, a BGP route injection, or a DNS poisoning attack at the ISP’s cache.
The banks don’t necessarily have to prevent the loss of credentials; it’s equally viable to soup up back-end controls to detect suspicious transfers. It’s also possible to discourage phishing by catching and locking up some of the criminals (currently not a very common occurrence).
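Purely as an illustration of the sort of back-end control meant here (a toy rule, not any bank’s actual system), one could flag transfers that go to a never-before-seen payee for an amount well above the account’s usual ceiling:

    from dataclasses import dataclass

    @dataclass
    class Transfer:
        account: str
        payee: str
        amount: float

    def is_suspicious(t: Transfer, known_payees: set, typical_max: float) -> bool:
        """Toy heuristic: new payee combined with an unusually large amount."""
        return t.payee not in known_payees and t.amount > 2 * typical_max

    # A first-ever payment of 2,500 to an unknown payee, from an account that normally
    # never sends more than 400 in one go, would be held back for manual review.
    print(is_suspicious(Transfer("acct-1", "new-payee", 2500.0), {"landlord"}, 400.0))  # True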
As it stands in the UK, as far as I am aware, the act of phishing is a crime. However the hosting of the phishing site can, at most, be pursued under copyright infringement.
The Fraud Act 2006 made the setting up of a phishing site illegal, even if you are caught before sending any email. The hosting company has immunity under the Regulations transposing the eCommerce Directive — at least until they have “actual knowledge” of the site. (IANAL!)
Your study was done in 2007. Do you have any update or follow-up study related to the take-down of phishing sites or suspected malicious sites? Things may be far different by now. I am interested in information about the experiences of banks: how their behaviour might have changed in terms of reacting to the detection of a phishing site, and what the time threshold for a take-down now is.
here, let me Google that for you:
https://www.google.com/search?q=phishing+website+takedown+clayton&rlz=1C1CHBF_en-GBGB756GB756&biw=1728&bih=895&source=lnt&tbs=cdr%3A1%2Ccd_min%3A1%2F1%2F2015%2Ccd_max%3A7%2F7%2F2020&tbm=