Category Archives: Security economics

Social-science angles of security

Fatal wine waiters

I’ve written before about “made for AdSense” (MFA) websites — those parts of the web that are created solely to host lots of (mainly Google) ads, and thereby make their creators loads of money.

Well, this one, “hallwebhosting.com”, is a little different. I first came across it a few months back, when it was clearly still under development, but it seems to have settled down now — so it’s worth looking at exactly what they’re doing.

The problem that such sites have is that they need to create lots of content really quickly, get indexed by Google so that people can find them, and then wait for the clicks (and the money) to roll in. The people behind hallwebhosting have had a cute idea for this — they take existing content from other sites and do word substitutions on sentences to produce what they clearly intend to be identical in meaning (so the site will figure in web search results), but different enough that the indexing spider won’t treat it as identical text.

So, for example, this section from Wikipedia’s page on Windows Server 2003:

Released on April 24, 2003, Windows Server 2003 (which carries the version number 5.2) is the follow-up to Windows 2000 Server, incorporating compatibility and other features from Windows XP. Unlike Windows 2000 Server, Windows Server 2003’s default installation has none of the server components enabled, to reduce the attack surface of new machines. Windows Server 2003 includes compatibility modes to allow older applications to run with greater stability.

becomes:

Released on April 24, 2003, Windows Server 2003 (which carries the form quantity 5.2) is the follow-up to Windows 2000 Server, incorporating compatibility and other skin from Windows XP. Unlike Windows 2000 Server, Windows Server 2003’s evasion installation has none of the attendant workings enabled, to cut the molest outward of new machines. Windows Server 2003 includes compatibility modes to allow big applications to gush with larger stability.
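The substitution mechanism is easy to imagine. Here is a minimal sketch in Python of naive word-for-word replacement; the pairings in the table are inferred from the example above, not taken from the real site:

```python
import re

# Word pairings inferred from the example above; the site's real wordlist is unknown.
THESAURUS = {
    "version": "form", "number": "quantity", "features": "skin",
    "default": "evasion", "server": "attendant", "components": "workings",
    "reduce": "cut", "attack": "molest", "surface": "outward",
    "older": "big", "run": "gush", "greater": "larger",
}

def spin(text):
    """Replace each word found in the thesaurus, leaving everything else alone."""
    def substitute(match):
        word = match.group(0)
        return THESAURUS.get(word.lower(), word)
    return re.sub(r"[A-Za-z]+", substitute, text)

print(spin("to reduce the attack surface of new machines"))
# -> to cut the molest outward of new machines
```

Crucially, the substitution pays no attention to context or word sense, which is exactly what produces the comedy that follows.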

I first noticed this site because they rendered a Wikipedia article about my NTP DDoS work, entitled “NTP server misuse and abuse”, into “NTP wine waiter knock about and abuse” … the contents of which almost make sense:

“In October 2002, one of the first known hand baggage of phase wine waiter knock about resulted in troubles for a mess wine waiter at Trinity College, Dublin”

for doubtless a fine old university has wine waiters to spare, and a mess for them to work in.

Opinions around here differ as to whether this is machine translation (as in all those old stories about “Out of sight, out of mind” being translated to Russian and then back as “Invisible idiot”) or imaginative use of a thesaurus where “wine waiter” is a hyponym of “server”.

So far as I can see, this is all potentially lawful — Wikipedia is licensed under the GNU Free Documentation License, so if there were an acknowledgement of the original article’s authors then all would be fine. But there isn’t — so in fact, all is not fine!

However, even if this oversight (if oversight it is) were corrected, some articles are clearly copyright infringements.

For example, this article from shellaccounts.biz entitled Professional Web Site Hosting Checklist appears to be entirely covered by copyright, yet it has been rendered into this amusement:

In harmony to create sure you get what you’ve been looking for from a qualified confusion put hosting server, here are a few stuff you should take into tally before deciding on a confusion hosting provider.

where you’ll see that “site” has become “put”, “web” has become “confusion” (!) and later on “requirements” becomes “food” which leads to further hilarity.

However, beyond the laughter, this is pretty clearly yet another ham-fisted attempt to clutter up the web with dross in the hope of making money. This time it’s not Google AdSense but banner ads and other affiliate links, yet it’s still essentially “MFA”. These types of site will continue until advertisers get more savvy about the websites they don’t wish to be associated with — at which point the flow of money will cease and the sites will disappear.

To finish by being lighthearted again, the funniest page (so far) is the reworking of the Wikipedia article on “Terminal Servers” … since servers once again becomes “wine waiters”, but “terminal” naturally enough, becomes “fatal”. The image is clear.

Hackers get busted

There is an article on BBC News about how yet another hacker running a botnet got busted. When I read the sentence “…he is said to be very bright and very skilled …”, I started thinking. How did they find him? He clearly must have made some serious mistakes — but what sort of mistakes? How can isolation influence someone’s behaviour, and how important are external opinions for objectivity?

When we write a paper, we very much appreciate it when someone is willing to read it and give back some feedback. It allows us to identify gaps in our thinking, flaws in our descriptions, and so forth. The feedback does not necessarily imply large changes to the text, but it very often clarifies it and makes it much more readable.

Hackers use various tools – either publicly available ones, or tools the hacker has written themselves. There may be errors in these tools, but they will probably be fixed very quickly, especially if the tools are popular. Hackers often let others use their tools, whether for testing or for fame. But hacking for profit is quite a creative job, and plenty remains that cannot be automated.

So what is the danger of these manual tasks? Is it the case that hackers write down descriptions of all their procedures, with checklists, and stick to them? Or do they do the work intuitively and become careless after a few months or years? Clearly, the first option is how intelligence agencies would deal with the problem, because they know that the human is the weakest link. But what about hackers? “…very bright and very skilled…”, but isolated from the rest of the world?

So I keep thinking: is it worth trying to reconstruct “operational procedures” for running a botnet, analyse them, identify the mistakes most likely to happen, and use such knowledge against the “cyber-crime groups”?

Government ignores Personal Internet Security

At the end of last week the Government published their response to the House of Lords Science and Technology Committee Report on Personal Internet Security. The original report was published in mid-August and I blogged about it (and my role in assisting the Committee) at that time.

The Government has turned down pretty much every recommendation. The most positive verbs used were “consider” or “working towards setting up”. That’s more than a little surprising, because the report made a great deal of sense, and their lordships aren’t fools. So is the Government ignorant, stupid, or in thrall to some special interest group?

On balance I think it starts from ignorance.

Some of the most compelling evidence that the Committee heard was at private meetings in the USA from companies such as Microsoft, Cisco, Verisign, and in particular from Team Cymru, who monitor the “underground economy”. I don’t think that the Whitehall mandarins have heard these briefings, or have bothered to read the handful of published articles such as this one in ;login, or this more recent analysis that will appear at CCS next week. If the Government were up to speed on what researchers are documenting, they wouldn’t be arguing that there is more crime solely because there are more users — and they could not possibly say that they “refute the suggestion […] that lawlessness is rife”.

However, we cannot rule out stupidity.

Some of the Select Committee recommendations were intended to address the lack of authoritative data — and these were rejected as well. The Government doesn’t think it’s urgently necessary to capture more information about the prevalence of eCrime; they don’t think that having the banks collate crime reports gets all the incentives wrong; and they “do not accept that the incidence of loss of personal data by companies is on an upward path” (despite there being no figures in the UK to support or refute that notion, and considerable evidence of regular data loss in the United States).

The bottom line is that the Select Committee did some “out-of-the-box thinking” and came up with a number of proposals for measurement, for incentive alignment, and for bolstering law enforcement’s response to eCrime. The Government have settled for complacency, quibbling about the wording of the recommendations, and picking out a handful of the more minor recommendations to “note”, to “consider” and to “keep under review”.

A whole series of missed opportunities.

Phishing take-down paper wins ‘Best Paper Award’ at APWG eCrime Researchers Summit

Richard Clayton and I have been tracking phishing sites for some time. Back in May, we reported on how quickly phishing websites are removed. Subsequently, we have also compared the performance of banks in removing websites and found evidence that ISPs and registrars are initially slow to remove malicious websites.

We have published our updated results at eCrime 2007, sponsored by the Anti-Phishing Working Group. The paper, ‘Examining the Impact of Website Take-down on Phishing’ (slides here), was selected for the ‘Best Paper Award’.

A high-level abridged description of this work also appeared in the September issue of Infosecurity Magazine.

Web content labelling

As we all know, the web contains a certain amount of content that some people don’t want to look at, and/or do not wish their children to look at. Removing the material is seldom an option (it may well be entirely lawfully hosted, and indeed many other people may be perfectly happy for it to be there). Since centralised blocking of such material just isn’t going to happen, the best way forward is the installation of blocking software on the end-user’s machine. This software will have blacklists and whitelists provided from a central server, and it will provide some useful reassurance to parents that their youngest children have some protection. Older children can of course just turn the systems off, as has recently been widely reported for the Australian NetAlert system.
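The client-side architecture is straightforward: the blocking software simply checks each requested hostname against its centrally supplied lists. A minimal sketch, assuming the common convention that a whitelist entry overrides a blacklist entry and that list entries also cover subdomains:

```python
def is_blocked(host, blacklist, whitelist):
    """True if host (or any parent domain) is blacklisted and not whitelisted."""
    def listed(h, entries):
        parts = h.split(".")
        # check "www.bad.example", then "bad.example", then "example"
        return any(".".join(parts[i:]) in entries for i in range(len(parts)))
    return listed(host, blacklist) and not listed(host, whitelist)

blacklist = {"bad.example"}
whitelist = {"research.bad.example"}
print(is_blocked("www.bad.example", blacklist, whitelist))       # True
print(is_blocked("research.bad.example", blacklist, whitelist))  # False
```

Of course, as noted above, anyone with administrative access to the machine – an older child, say – can simply switch such checking off.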

A related idea is that websites should rate themselves according to widely agreed criteria, and this would allow visitors to know what to expect on the site. Such ratings would of course be freely available, unlike the blocking software which tends to cost money (to pay for the people making the whitelists and blacklists).

I’ve never been a fan of these self-rating systems, whose criteria always seem to be based on a white, middle-class, Presbyterian view of wickedness, and which — at least initially — were hurriedly patched together from videogame rating schemes. More than a decade ago I lampooned the then widely hyped RSACi system by creating a site that scored “4 4 4 4”, the highest (most unacceptable) score in every category: http://www.happyday.demon.co.uk/awful.htm and just recently I was reminded of this in the context of an interview for an EU review of self-regulation.


Mapping the Privila network

Last week, Richard Clayton described his investigation of the Privila internship programme. Unlike link farms, Privila doesn’t link to its own websites. Instead, they apparently depend solely on the links made to each site before they took over its domain name, and on new ones solicited through spamming. This means that normal mapping techniques, which just follow links, will not uncover Privila sites. This might be one reason they took this approach, or perhaps it was just to avoid being penalized by search engines.

The mapping approach I implemented, as suggested by Richard, was to exploit the fact that Privila authors typically write for several websites. So, starting with one seed site, you can find more by searching for the names of its authors. I used the Yahoo search API to automate this process, since the Google API has been discontinued. From the new set of websites discovered, the list of authors is extracted, allowing yet more sites to be found. These steps are repeated until no new sites are discovered (effectively a breadth-first search).
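In outline, the crawler alternates between two lookups – the authors named on a site, and the sites a search engine returns for an author – in a standard breadth-first search. A sketch, with the two lookup functions passed in as parameters (in the real implementation the author search was backed by the Yahoo search API; the toy data below is invented):

```python
from collections import deque

def map_network(seed_site, authors_on_site, sites_for_author):
    """Breadth-first search over the bipartite graph of sites and authors.

    authors_on_site(site)  -> set of author names whose articles appear on site
    sites_for_author(name) -> set of sites a search for that name turns up
    """
    found_sites = {seed_site}
    seen_authors = set()
    queue = deque([seed_site])
    while queue:
        site = queue.popleft()
        for author in authors_on_site(site) - seen_authors:
            seen_authors.add(author)
            for new_site in sites_for_author(author) - found_sites:
                found_sites.add(new_site)
                queue.append(new_site)
    return found_sites

# Toy data: two authors whose shared bylines connect three sites.
authors = {"a.com": {"alice"}, "b.com": {"alice", "bob"}, "c.com": {"bob"}}
sites = {"alice": {"a.com", "b.com"}, "bob": {"b.com", "c.com"}}
print(map_network("a.com", authors.get, sites.get))
```

The search terminates once every reachable author and site has been seen, which is why a disjoint set of authors would remain invisible to it.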

The end result was that, starting from bustem.com, I found 294 further sites, with a total of 3,441 articles written by 124 authors (these numbers are lower than the ones in the previous post since duplicates have now been properly removed). There might be even more undiscovered sites with a disjoint set of authors, but the current network is impressive in itself.

I have implemented an interactive Java applet visualization (using the Prefuse toolkit) so you can explore the network yourself. Both the source code, and the data used to construct the graph can also be downloaded.

Screenshot of PrivilaView applet

The interns of Privila

Long-time readers will recall that I was spammed with an invitation to swap links with the European Human Rights Centre, a plagiarised site that exists to make money out of job listings and Google ads. Well, some more email spam has drawn my attention to something rather different:

From: “Elanor Radaker” <links@bustem.com>
Subject: Wanna Swap Links
Date: Thu, 19 Apr 2007 01:42:37 -0500

Hi,

I’ve been working extremely hard on my friend’s website bustem.com and if you like what we’ve done, a link from <elided> would be greatly appreciated. If you are interested in a link exchange please …

<snip>

Thank you we greatly appreciate the help! If you have any questions please let me know!

Respectfully,

Elanor Radaker

This site, bustem.com, is not quite as the email claims. However, it is not plagiarised. Far from it: the content has been written to order for Privila Inc by members of a small army of unpaid interns… and when one starts looking, there are literally hundreds of similar sites.


Econometrics of wickedness

Last Thursday I gave a tech talk at Google; you can now watch it online. It’s about work a number of us have done on searching for covert communities, with a focus on reputation thieves, phishermen, fake banks and other dodgy businesses.

While in California I also gave a talk on Information Security Economics, first as a keynote talk at Crypto and later as a seminar at Berkeley (the slides are here).

Phishing website removal — comparing banks

Following on from our comparison of phishing website removal times for different freehosting webspace providers, Tyler Moore and I have now crunched the numbers so as to be able to compare take-down times by different banks.

The comparison graph is below (click on it to get a more readable version). The sites compared are phishing websites that were first reported in an 8-week period from mid-February to mid-April 2007 (you can’t so easily compare more recent periods because of the “horizon effect”: sites that appear later in the period have had less time in which to be observed, so they count for less). To qualify for inclusion, a bank must have had at least 5 different websites observed during the period. It’s also important to note that we didn’t count sites that were removed too quickly for us to inspect them, and (this matters considerably) we ignored “rock-phish” websites, which attack multiple banks in parallel.
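The per-bank figures behind such a graph reduce to simple bookkeeping. A sketch, assuming site lifetimes (in hours) have already been measured and the too-quick-to-inspect sites already dropped; the bank names and numbers below are invented:

```python
from statistics import mean

def takedown_means(observations, min_sites=5):
    """Mean phishing-site lifetime per bank, for banks with >= min_sites sites.

    observations: iterable of (bank, lifetime_hours) pairs.
    """
    per_bank = {}
    for bank, lifetime in observations:
        per_bank.setdefault(bank, []).append(lifetime)
    return {bank: mean(hours)
            for bank, hours in per_bank.items()
            if len(hours) >= min_sites}

# Invented data: BankB has too few observed sites to qualify for the graph.
obs = [("BankA", 10), ("BankA", 50), ("BankA", 30), ("BankA", 90),
       ("BankA", 20), ("BankB", 5), ("BankB", 15)]
print(takedown_means(obs))  # BankA averages 40 hours; BankB is dropped
```

The `min_sites` threshold mirrors the qualification rule above; the exclusion of almost-instantly-removed sites is a caveat the sketch cannot capture, which is exactly why the graph should be read with care.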

Phishing website take-down times (5 or more sites, Feb-Apr 2007)

Although the graph clearly tells us something about relative performance, it is important not to immediately ascribe this to relative competence or incompetence. For example, Bank of America and Citibank sites stay up rather longer than most. But they have been attacked for years, so maybe their attackers have learnt where to place their sites so as to be harder to remove? This might also apply to eBay — although around a third of their sites are on freehosting, and those come down rather quicker than average, so many of their sites stay up even longer than the graph seems to show.

A lot of the banks outsource take-down to specialist companies (usually more general “brand protection” companies who have developed a side-line in phishing website removal). Industry insiders tell me that many of the banks at the right hand side of the graph, with lower take-down times, are in this category… certainly some of the specialists are looking forward to this graph appearing in public, so that they can use it to promote their services 🙂

However, once all the caveats (especially about not counting almost instantaneous removal) have been taken on board, one cannot be completely sure that this particular graph conclusively demonstrates that any particular bank or firm is better than another.

Latest on security economics

Tyler and I have a paper appearing tomorrow as a keynote talk at Crypto: Information Security Economics – and Beyond. This is a much extended version of our survey that appeared in Science in October 2006 and then at Softint in January 2007.

The new paper adds recent research in security economics and sets out a number of ideas about security psychology, into which the field is steadily expanding as economics and psychology become more intertwined. For example, many existing security mechanisms were designed by geeks for geeks; but if women find them harder to use, and as a result are more exposed to fraud, then could system vendors or operators be sued for unlawful sex discrimination?

There is also the small matter of the extent to which human intelligence evolved because people who were good at deceit, and at detecting deception in others, were likely to have more surviving offspring. Security and psychology might be more closely entwined than anyone ever thought.