Category Archives: Security economics

Social-science angles of security

Evil Searching

Tyler Moore and I have been looking into how phishing attackers locate insecure websites on which to host their fake webpages, and our paper is being presented this week at the Financial Cryptography conference in Barbados. We found that compromised machines accounted for 75.8% of all the attacks, “free” web hosting accounted for 17.4%, and the rest were run by various specialist gangs — though those gangs should not be ignored; they’re sending most of the phishing spam and (probably) scooping most of the money!

Sometimes the same machine gets compromised more than once. Now this could be the same person setting up multiple phishing sites on a machine that they can attack at will… However, we often observe that the new site is in a completely different directory — strongly suggesting that a different attacker has broken into the same machine, but in a different way. We looked at all the recompromises where there was a delay of at least a week before the second attack and found that in 83% of cases a different directory was used… and using this definition of a “recompromise” we found that around 10% of machines were recompromised within 4 weeks, rising to 20% after six months. Since there are plenty of vulnerable machines out there, this suggests there is something slightly different about the machines that get attacked again and again.
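The recompromise heuristic described above (a second phishing page on the same host, at least a week later, in a different directory) can be sketched in a few lines. This is my illustration of the definition, not the paper’s actual analysis code; the function name and the way reports are represented are assumptions:

```python
from datetime import timedelta
from urllib.parse import urlparse

def is_recompromise(first, second, min_gap_days=7):
    """Classify a second phishing report on the same host as a likely
    recompromise by a *different* attacker: it must appear at least
    `min_gap_days` after the first report AND sit in a different
    directory (suggesting a different route into the machine).
    Each report is a (datetime, url) pair."""
    t1, url1 = first
    t2, url2 = second
    if t2 - t1 < timedelta(days=min_gap_days):
        return False
    # Compare the directory part of each URL path
    dir1 = urlparse(url1).path.rsplit("/", 1)[0]
    dir2 = urlparse(url2).path.rsplit("/", 1)[0]
    return dir1 != dir2
```

Applying this to pairs of reports for the same hostname yields the 4-week and six-month recompromise rates quoted above.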

For 2486 sites we also had summary website logging data from The Webalizer, where sites had left their daily visitor statistics world-readable. One of the bits of data The Webalizer records is which search terms were used to locate the website (these are available in the HTTP “Referer” header, which documents what was typed into search engines such as Google).
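As a rough illustration of where those search terms come from, here is a minimal sketch of extracting a search phrase from a Referer URL, in the same spirit as log analysers like The Webalizer. The engine list and query-parameter names are a small assumed subset, not an authoritative catalogue:

```python
from urllib.parse import urlparse, parse_qs

# Illustrative subset: hostname fragment -> name of the query parameter
# that carries the search phrase for that engine
ENGINES = {"google.": "q", "search.yahoo.": "p", "bing.": "q"}

def search_terms(referer):
    """Return the search phrase from a Referer URL, or None if the
    referer is not from a recognised search engine."""
    parsed = urlparse(referer)
    for host_fragment, param in ENGINES.items():
        if host_fragment in parsed.netloc:
            values = parse_qs(parsed.query).get(param)
            if values:
                return values[0]
    return None
```

Scanning a site’s access log with something like this is all it takes to recover the searches that led visitors (including attackers) to the machine.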

We found that some of these searches were “evil” in that they were looking for specific versions of software that contained security vulnerabilities (“If you’re running version 1.024 then I can break in”); or they were looking for existing phishing websites (“if you can break in, then so can I”); or they were seeking the PHP “shells” that phishing attackers often install to help them upload files onto the website (“if you haven’t password protected your shell, then I can upload files as well”).

In all, we found “evil searches” on 204 machines that hosted phishing websites, and in the vast majority of cases these searches corresponded in time to when the website was broken into. Furthermore, in 25 cases the website was compromised twice and we were monitoring the daily log summaries after the first break-in: here 4 of the evil searches occurred before the second break-in, 20 on the day of the second break-in, and just one after it. Of course, where people didn’t “click through” from Google search results, perhaps because they used an automated tool, we won’t have a record of their searches — but nevertheless, even at the 18% incidence we can be sure of, searches are an important mechanism.

The recompromise rates for sites where we found evil searches were much higher: 20% recompromised after 4 weeks, nearly 50% after six months. There are lots of complicating factors here, not least that sites with world-readable Webalizer data might simply be inherently less secure. Nevertheless, we believe this clearly indicates that phishing attackers are using search to find machines to attack; and that if one attacker can find a site, others are likely to do so independently.

There’s a lot more in the paper itself (which is well worth reading before commenting on this article, since it goes into much more detail than is possible here)… In particular, we show that publishing URLs in PhishTank slightly decreases the recompromise rate (getting the sites fixed is a bigger effect than the bad guys locating sites that someone else has compromised); and we also have a detailed discussion of various mitigation strategies that might be employed, now that we have firmly established that “evil searching” is an important way of locating machines to compromise.

Missing the Wood for the Trees

I’ve just submitted a (rather critical) public response to an ICANN working group report on fast-flux hosting (read the whole thing here).

Many phishing websites (and other types of wickedness) are hosted on botnets, with the hostname resolving to different machines every few minutes or hours (hence the “fast” in fast-flux). This means that in order to remove the phishing website you either have to shut down the botnet — which could take months — or you must get the domain name suspended.
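For illustration, a crude behavioural test of the kind used to flag fast-flux hostnames might look like this. The thresholds are invented for the sketch and are not taken from the ICANN report; a real detector would also look at TTLs, AS diversity and so on:

```python
def looks_fast_flux(answers, distinct_ip_threshold=5):
    """Heuristic fast-flux detector. `answers` is a time-ordered list
    of IP-address sets, one per DNS lookup of the same hostname.
    A benign site resolves to a stable handful of addresses, whereas
    a fast-flux hostname keeps yielding previously unseen botnet
    machines. Thresholds here are purely illustrative."""
    seen = set()
    churn = 0  # lookups that introduced at least one new IP
    for ips in answers:
        if ips - seen:
            churn += 1
        seen |= ips
    # Flag if many distinct IPs were seen AND most lookups brought new ones
    return len(seen) >= distinct_ip_threshold and churn > len(answers) // 2
```

The point of the heuristic is exactly the report’s difficulty: it describes current behaviour, and the criminals can simply slow their churn below whatever threshold is chosen.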

ICANN’s report goes into lots of detail about how fast-flux hosting has been operated up to now, and sets out all sorts of characteristics that it currently displays (but of course the criminals could do something different tomorrow). It then makes some rather ill-considered suggestions about how to tackle some of these symptoms — without really understanding how that behaviour might be being used by legitimate companies and individuals.

In all this concentration on the mechanics they’ve lost sight of the key issue, which is that the domain name must be removed — and this is an area where ICANN (who look after domain names) might have something to contribute. However, their report doesn’t even tackle the different roles that registries (eg Nominet who look after the .UK infrastructure) and registrars (eg Gradwell who sell .UK domain names) might have.

From my conclusion:

The bottom line on fast-flux today is that it is almost entirely associated with a handful of particular botnets, and a small number of criminal gangs. Law enforcement action to tackle these would avoid a further need for ICANN consideration, and it would be perfectly rational to treat the whole topic as of minor importance compared with other threats to the Internet.

If ICANN are determined to deal with this issue, then they should leave the technical issues almost entirely alone. There is little evidence that the working group has the competence for considering these. Attention should be paid instead to the process issues involved, and the minimal standards of behaviour to be expected of registries, registrars, and those investigators who are seeking to have domain names suspended.

I strongly recommend adopting my overall approach of an abstract definition of the problem: The specific distinguisher of a fast-flux attack is that the dynamic nature of the DNS is exploited so that if a website is to be suppressed then it is essential to prevent the hostname resolving, rather than attempting to stop the website being hosted. The working group should consider the policy and practice issues that flow from considering how to prevent domain name resolution; rather than worrying about the detail of current attacks.

Another link spammer

Yet another link spammer is cluttering up my in-box. You’d think that after exposing this one, and this one, and this one, they’d know better.

The latest miscreants operate under the brand “goodeyeforlinks.com” and claim to “use white hat SEO techniques in order to get high quality, do-follow links to your website”. They also claim to be “professional”, which in this case must mean that you pay for their services, since sending out bulk unsolicited email is anything but professional.

Nevertheless, although their long term aim may indeed be to make money from legitimate, albeit foolish, businesses seeking a higher profile, the sites they have been promoting so far are anything but legitimate. In fact they’ve been fake sites covered with Google adverts (so-called “Made for AdSense” (MFA) sites).

They started by asking me to link to “entovation.net” which they claim is “page rank 3”. In fact it is page rank 3 (!) and a blatant copy of http://www.acentesolutions.com which appears entirely genuine (albeit only page rank 1). They have also been promoting “poland-translation-services.com”, which claims to be a site offering “A large team of 2,500 translators specializing in each sector, located in over 30 countries” …

However, this site is clearly fake as well. I haven’t tracked down where it all comes from, but much of this page comes from this Argentinian page, the text of which has been pushed through Google’s Spanish to English translation tools… which sadly (for example) renders

Comentarios: Se considera foja al equivalente a 500 palabras. Si el documento a traducir es menor a una foja, se lo considerará como una foja.

into

Comments: foja is considered the equivalent of 500 words. If the document is translated to a lesser foja, we will consider as a foja.

which makes the 2500 translators look more than a little bit foolish!

The fake websites are hosted by EuroAccess Enterprises Ltd. in The Netherlands (which is also where the email spam has been sent from). I’m not alone in receiving this type of email, further examples can be found here, and here, and here, and here, and here, and here, and even here (in Spanish).

EuroAccess have a fine ticketing system for abuse complaints… so I’m able to keep track of what they’re doing about my emails drawing their attention to the fraudsters they are hosting. I am therefore fully aware that they’ve so far marked my missives as “Priority: Low”, and nothing else is recorded to have been done… However, the tickets are still “Status: Open”, so perhaps a little publicity will encourage them to reassess their prioritisation.

How can we co-operate to tackle phishing?

Richard Clayton and I recently presented evidence of the adverse impact of take-down companies not sharing phishing feeds. Many phishing websites are missed by the take-down company which has the contract for removal; unsurprisingly, these websites are not removed very fast. Consequently, more consumers’ identities are stolen.

In the paper, we propose a simple solution: take-down companies should share their raw, unverified feeds of phishing URLs with their competitors. Each company can examine the raw feed, pick out the websites impersonating their clients, and focus on removing these sites.
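That proposal is easy to sketch: given a competitor’s raw feed, a take-down company needs only a matcher for its own clients’ brands. The brand names and URLs below are hypothetical, and the naive substring match stands in for whatever verification a real system would do before acting:

```python
import re

def pick_out_clients(raw_feed, client_brands):
    """From another company's raw, unverified feed of suspect URLs,
    select the entries that appear to impersonate one of our own
    client brands. Real systems would verify each candidate site
    before starting take-down; this is only the triage step."""
    pattern = re.compile("|".join(map(re.escape, client_brands)),
                         re.IGNORECASE)
    return [url for url in raw_feed if pattern.search(url)]
```

Each recipient runs this filter locally, so no company has to reveal which banks are its clients in order to benefit from sharing.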

Since we presented our findings to the Anti-Phishing Working Group eCrime Researchers Summit, we have received considerable feedback from take-down companies. Take-down companies attending the APWG meeting understood that sharing would help speed up response times, but expressed reservations at sharing their feeds unless they were duly compensated. Eric Olsen of Cyveillance (another company offering take-down services) has written a comprehensive rebuttal of our recommendations. He argues that competition between take-down companies drives investment in efforts to detect more websites. Mandated sharing of phishing URL feeds, in his view, would undermine these detection efforts and cause take-down companies such as Cyveillance to exit the business.

I do have some sympathy for the objections raised by the take-down companies. As we state in the paper, free-riding (where one company relies on another to invest in detection so they don’t have to) is a concern for any sharing regime. Academic research studying other areas of information security (e.g., here and here), however, has shown that free-riding is unlikely to be so rampant as to drive all the best take-down companies out of offering service, as Mr. Olsen suggests.

While we can quibble over the extent of the threat from free-riding, it should not detract from the conclusions we draw over the need for greater sharing. In our view, it would be unwise and irresponsible to accept the status quo of keeping phishing URL feeds completely private. After all, competition without sharing has approximately doubled the lifetimes of phishing websites! The solution, then, is to devise a sharing mechanism that gives take-down companies the incentive to keep detecting more phishing URLs.
Continue reading How can we co-operate to tackle phishing?

Non-cooperation in the fight against phishing

Tyler Moore and I are presenting another one of our academic phishing papers today at the Anti-Phishing Working Group’s Third eCrime Researchers Summit here in Atlanta, Georgia. The paper “The consequence of non-cooperation in the fight against phishing” (pre-proceedings version here) goes some way to explaining anomalies we found in our previous analysis of phishing website lifetimes. The “take-down” companies reckon to get phishing websites removed within a few hours, whereas our measurements show that the average lifetimes are a few days.

These “take-down” companies are generally specialist offshoots of more general “brand protection” companies, and are hired by banks to handle removal of fake phishing websites.

When we examined our data more carefully we found that we were receiving “feeds” of phishing website URLs from several different sources — and the “take-down” companies that were passing the data to us were not passing the data to each other.

So it often occurs that take-down company A knows about a phishing website targeting a particular bank, but take-down company B is ignorant of its existence. If it is company B that has the contract for removing sites for that bank then, since they don’t know the website exists, they take no action and the site stays up.

Since we were receiving data feeds from both company A and company B, we knew the site existed and we measured its lifetime — which is much extended. In fact, it’s something of a mystery why such sites are removed at all! Our best guess is that reports made directly to ISPs trigger removal.

The paper contains all the details, and gives all the figures to show that website lifetimes are extended by about 5 days when the take-down company is completely unaware of the site. On other occasions the company learns about the site some time after it is first detected by someone else; and this extends the lifetimes by an average of 2 days.

Since extended lifetimes equate to more unsuspecting visitors handing over their credentials and having their bank accounts cleaned out, these delays can also be expressed in monetary terms. Using the rough and ready model we developed last year, we estimate that an extra $326 million per annum is currently being put at risk by the lack of data sharing. This figure is from our analysis of just two companies’ feeds, and there are several more such companies in this business.
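The “rough and ready” calculation is essentially a product of a few factors. This sketch uses deliberately made-up placeholder inputs, not the model’s actual parameters, which are set out in our earlier paper:

```python
def extra_dollars_at_risk(sites_per_year, extra_days,
                          victims_per_day, loss_per_victim):
    """Back-of-envelope estimate of the money put at risk by extended
    phishing-site lifetimes: number of affected sites, multiplied by
    the extra days each stays up, the victims recruited per day, and
    the average loss per victim. All inputs are placeholders."""
    return sites_per_year * extra_days * victims_per_day * loss_per_victim
```

Plugging in the model’s real parameters, rather than placeholders, is what yields the $326 million per annum figure above.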

Not surprisingly, our paper suggests that the take-down companies should be sharing their data, so that when they learn about websites attacking banks they don’t have contracts with, they pass the details on to another company who can start to get the site removed.

We analyse the incentives to make this change (and the incentives the companies have not to do so) and contrast the current arrangements with the anti-virus/malware industry — where sample suspect code has been shared since the early 1990s.

In particular, we note that it is the banks who would benefit most from data sharing — and since they are paying the bills, we think that they may well be in a position to force through changes in policy. To best protect the public, we must hope that this happens soon.

Making bank reimbursement statutory

Many of the recommendations of the House of Lords Science and Technology Committee report on Personal Internet Security have been recycled into Conservative Party policy [*] — as announced back in March. So, if you believe the polls, we might see some changes after the next election or, if you’re cynical, even before then as the Government implements opposition policy!

However, one of the Committee recommendations that the Conservatives did not take up was that the law should be changed so that banks become liable for all eBanking and ATM losses — just as they have been liable since 1882 if they honour a forged cheque. Of course, if the banks can prove fraud (for cheques or for the e-equivalents) then the end-user is liable (and should be locked up).

At present the banks will cover end-users under the voluntary Banking Code… so they say that there would be no difference with a statutory regime. This is a little weak as an objection, since if you believe their position it would make no difference either way to them. But, in practice it will make a difference because the voluntary code doesn’t work too well for a minority of people.

Anyway, at present the banks don’t have a lot of political capital and so their views are carrying far less weight. This was particularly clear in last week’s House of Lords debate on “Personal Internet Security”, where Viscount Bridgeman speaking for the Conservatives said:

“I entirely agree with the noble Lord, Lord Broers, that statutory control of the banks in this respect is required and that we cannot rely on the voluntary code.”

which either means that he forgot his brief (!) or that this really is a new party policy. If so then, in my view, it’s very welcome.

[*] the policy document has inexplicably disappeared from the Conservative website, but a Word version is available from Microsoft here.

Lords debate "Personal Internet Security"

Last Friday the House of Lords debated their Science and Technology Committee’s report on Personal Internet Security (from Summer 2007) and — because the Government’s response was so weak — the additional follow-up report that was published in Spring 2008. Since I had acted as the specialist adviser to the Committee, I went down to Westminster to sit “below the bar”, in one of the best seats in the House, and observe.

Lord Broers, the Committee Chairman during the first inquiry, kicked things off, followed by various Lords who had sat on the Committee (and two others who hadn’t) then the opposition lead, Viscount Bridgeman, who put his party’s point of view (of which more in another article). Lord Brett (recently elevated to a Lord in Waiting — ie a whip), then replied to the debate and finally Lord Broers summarised and formally moved the “take note” motion which, as is custom and practice, the Lords then consented to nem con.

The Government speech in such a debate is partially pre-written, and should then consist of a series of responses to the various issues raised and answers to the questions put in the previous speeches. The Minister himself doesn’t write any of this; that’s done by civil servants from his department, sitting in a special “box” at the end of the chamber behind him.

However, since the previous speeches were so strongly critical of the Government’s position, and so many questions were put as to what was to be done next, I was able to see from my excellent vantage point (as TV viewers would never be able to) the almost constant flow of hastily scribbled notes from the box to the Minister — including one note that went to Lord Broers, due to an addressing error by the scribblers!

The result of this barrage of material was that Lord Brett ended up with so many bits of paper that he completely gave up trying to juggle them, read out just one, and promised to write to everyone concerned with the rest of the ripostes.

Of course it didn’t help that he’d only been in the job for five days and this was his first day at the dispatch box. But the number of issues he had to address would almost certainly have flummoxed a five-year veteran as well.

Amusing though this might be to watch, this does not bode well for the Government getting to grips with the issues raised in the reports. In technical areas such as “Personal Internet Security”, policy is almost entirely driven by the civil servants and not by the politicians.

So it is particularly disappointing that the pre-written parts of the Minister’s speech — the issues that the civil servants expected to come up and which they felt positive about addressing — were only a small proportion of the issues that were actually addressed in the debate.

It still seems as if the penny hasn’t dropped in Whitehall 🙁

ePolicing – Tomorrow the world?

This week has finally seen an announcement that the Police Central e-crime Unit (PCeU) is to be funded by the Home Office. However, the largesse amounts to just £3.5 million of new money spread over three years, with the Met putting up a further £3.9 million — but whether the Met’s contribution is “new” or reflects a move of resources from their existing Computer Crime Unit I could not say.

The announcement is of course Good News — because once the PCeU is up and running next Spring, it should plug (to the limited extent that £2 million a year can plug) the “level 2” eCrime gap that I’ve written about before, viz. that SOCA tackles “serious and organised crime” (level 3), your local police force tackles local villains (level 1), but if criminals operate outside their force’s area — and on the Internet this is more likely than not — yet they don’t meet SOCA’s threshold, then who is there to deal with them?

In particular, the PCeU is envisaged to be the unit that deals with the intelligence packages coming from the City of London Fraud Squad’s new online Fraud Reporting website (once intended to launch in November 2008, now scheduled for Summer 2009).

Of course everyone expects the website to generate more reports of eCrime than could ever be dealt with (even with much more money), so the effectiveness of the PCeU in dealing with eCriminality will depend upon their prioritisation criteria, and how carefully they select the cases they tackle.

Nevertheless, although the news this week shows that the Home Office have finally understood the need to fund more ePolicing, I don’t think that they are thinking about the problem in a sufficiently global context.

A little history lesson might be in order to explain why.
Continue reading ePolicing – Tomorrow the world?

Personal Internet Security: follow-up report

The House of Lords Science and Technology Committee have just completed a follow-up inquiry into “Personal Internet Security”, and their report is published here. Once again I have acted as their specialist adviser, and once again I’m under no obligation to endorse the Committee’s conclusions — but they have once again produced a useful report with sound conclusions, so I’m very happy to promote it!

Their initial report last summer, which I blogged about at the time, was — almost entirely — rejected by the Government last autumn (blog article here).

The Committee decided that in the light of the Government’s antipathy they would hold a rapid follow-up inquiry to establish whether their conclusions were sound or whether the Government was right to turn them down, and indeed, given the speed of change on the Internet, whether their recommendations were still timely.

The written responses broadly endorsed the Committee’s recommendations, with the main areas of controversy being liability for software vendors, making the banks statutorily responsible for phishing/skimming fraud, and how such fraud should be reported.

There was one oral session where, to everyone’s surprise, two Government ministers turned up and were extremely conciliatory. Baroness Vadera (BERR) said that the report “was somewhat more interesting than our response” and Vernon Coaker (Home Office) apologised to the Committee “if they felt that our response was overdefensive”, adding “the report that was produced by this Committee a few months ago now has actually helped drive the agenda forward and certainly the resubmission of evidence and the re-thinking that that has caused has also helped with respect to that. So may I apologise to all of you; it is no disrespect to the Committee or to any of the members.”

I got the impression that the ministers were more impressed with the Committee’s report than were the civil servants who had drafted the Government’s previous formal response. Just maybe, some of my comments made a difference?

Given this volte face, the Committee’s follow-up report is also conciliatory, whilst recognising that the new approach is very much in the “jam tomorrow” category — we will all have to wait to see if they deliver.

The report is still in favour of software vendor liability as a long term strategy to improving software security, and on a security breach notification law the report says “we hold to our view that data security breach notification legislation would have the twin impacts of increasing incentives on businesses to avoid data loss, and should a breach occur, giving individuals timely information so that they can reduce the risk to themselves”. The headlines have been about the data lost by the Government, but recent figures from the ICO show that private industry is doing pretty badly as well.

The report also revisits the recommendations relating to banking, reiterating the committee’s view that “the liability of banks for losses incurred by electronic fraud should be underpinned by legislation rather than by the Banking Code”. The reasoning is simple: the banks choose the security mechanisms and how much effort they put into detecting patterns of fraud, so they should stand the losses if these systems fail. Holding individuals liable for succumbing to ever more sophisticated attacks is neither fair, nor economically efficient. The Committee also remained concerned that where fraud does take place, reports are made to the banks, who then choose whether or not to forward them to the police. They describe this approach as “wholly unsatisfactory and that it risks undermining public trust in the police and the Internet”.

This is quite a short report, a mere 36 paragraphs, but it comes bundled with the responses received, all of which, from Ross Anderson and Nicholas Bohm through to the Metropolitan Police and Symantec, are well worth reading to understand more about a complex problem, yet one where we’re beginning to see the first glimmers of consensus as to how best to move forward.

Security psychology

I’m currently at the first Workshop on Security and Human Behaviour at MIT, which brings together security engineers, psychologists and others interested in topics ranging from deception through usability to fearmongering. Here’s the agenda and here are the workshop papers.

The first session, on deception, was fascinating. It emphasised the huge range of problems, from detecting deception in interpersonal contexts such as interrogation through the effects of context and misdirection to how we might provide better trust signals to computer users.

Over the past seven years, security economics has gone from nothing to a thriving research field with over 100 active researchers. Over the next seven I believe that security psychology should do at least as well. I hope I’ll find enough odd minutes to live blog this first workshop as it happens!

[Edited to add:] See comments for live blog posts on the sessions; Bruce Schneier is also blogging this event.