Category Archives: Academic papers

How hard are PINs to guess?

Note: this research was also blogged today at the NY Times’ Bits technology blog.

I’ve personally been researching password statistics for a few years now (as well as personal knowledge questions) and our research group has a long history of research on banking security. In an upcoming paper at next week’s Financial Cryptography conference written with Sören Preibusch and Ross Anderson, we’ve brought the two research threads together with the first-ever quantitative analysis of the difficulty of guessing 4-digit banking PINs. Somewhat amazingly, given the importance of PINs and their entrenchment in infrastructure around the world, there has never been an academic study of how people actually choose them. After modeling banking PIN selection using a combination of leaked data from non-banking sources and a massive online survey, we found that people are significantly more careful choosing PINs than online passwords, with a majority using an effectively random sequence of digits. Still, the persistence of a few weak choices, and birthdates in particular, suggests that guessing attacks may be worthwhile for an opportunistic thief. Continue reading How hard are PINs to guess?
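To see why a few popular choices matter, consider a toy calculation of an opportunistic thief’s odds. The distribution below is invented purely for illustration (it is not the paper’s measured data); the point is that the attack’s success rate is simply the probability mass covered by the attacker’s first few guesses before the card is retained.

```python
# Toy sketch: success rate of a guessing attack on 4-digit PINs.
# The popularity figures below are ILLUSTRATIVE ONLY -- they are not
# the measured frequencies from the paper.
toy_pin_distribution = {
    "1234": 0.045,   # assumed popularity of common sequences
    "1111": 0.015,
    "0000": 0.010,
    "1212": 0.008,
    "7777": 0.006,
}
# Assume every other PIN is chosen uniformly from the remainder.
UNIFORM_REST = (1 - sum(toy_pin_distribution.values())) / (10_000 - len(toy_pin_distribution))

def guessing_success(num_guesses):
    """Probability that the PIN falls within the attacker's guesses,
    made in descending order of popularity."""
    ranked = sorted(toy_pin_distribution.values(), reverse=True)
    top = ranked[:num_guesses]
    # Any guesses left over fall back on uniformly likely PINs.
    remaining = max(0, num_guesses - len(top))
    return sum(top) + remaining * UNIFORM_REST

# ATMs typically allow 3 attempts before retaining the card.
print(f"3-guess success rate: {guessing_success(3):.1%}")  # → 7.0%
```

With these made-up numbers, a thief compromises about one card in fourteen; the paper’s contribution is measuring the real distribution behind that calculation.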

FreeBSD 9.0 ships with experimental Capsicum support

Jon Anderson, Ben Laurie, Kris Kennaway, and I were pleased to see prominent mention of Capsicum in the recent FreeBSD 9.0 press release:

Continuing its heritage of innovating in the area of security research, FreeBSD 9.0 introduces Capsicum. Capsicum is a lightweight framework which extends a POSIX UNIX kernel to support new security capabilities and adds a userland sandbox API. Originally developed as a collaboration between the University of Cambridge Computer Laboratory and Google and sponsored by a grant from Google, FreeBSD was the prototype platform and Chromium was the prototype application. FreeBSD 9.0 provides kernel support as an experimental feature for researchers and early adopters. Application support will follow in a later FreeBSD release and there are plans to provide some initial Capsicum-protected applications in FreeBSD 9.1.

“Google is excited to see the award-winning Capsicum work incorporated in FreeBSD 9.0, bringing native capability security to mainstream UNIX for the first time,” said Ulfar Erlingsson, Manager, Security Research at Google.

We first wrote about Capsicum, a hybridisation of the capability system security model with POSIX operating system semantics developed with support from Google, in Capsicum: practical capabilities for UNIX (USENIX Security 2010 and ;login magazine). Capsicum targets the problem of operating system support for application compartmentalisation — the restructuring of applications into a set of sandboxed components in order to enforce policies and mitigate security vulnerabilities. While Capsicum’s hybrid capability model is not yet used by the FreeBSD userspace, experimental kernel support will make Capsicum more accessible to researchers and software developers interested in deploying application sandboxing. For example, the Policy Weaving project at the University of Wisconsin has been investigating automated application compartmentalisation in support of security policy enforcement using Capsicum.
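The compartmentalisation idea can be illustrated at a language level. The sketch below is a Python analogy of the capability discipline Capsicum enforces in the kernel (the real interface is a C API, in which cap_enter() puts a process into capability mode); all class and variable names here are invented for illustration. The principle shown is that a sandboxed component cannot acquire resources by name, and may only use handles explicitly delegated to it.

```python
# Illustrative sketch of the capability discipline behind Capsicum:
# once a component enters "capability mode" it can no longer acquire
# new resources by name (no ambient authority); it may only use
# handles explicitly delegated to it beforehand. This is a Python
# analogy, NOT the real kernel API.
import io

class Sandbox:
    def __init__(self, delegated_handles):
        # The only resources the component will ever see.
        self._handles = dict(delegated_handles)

    def open_by_name(self, path):
        # In capability mode, global namespaces are off limits.
        raise PermissionError("no ambient authority in capability mode")

    def use(self, name):
        # Access is possible only through a delegated handle.
        if name not in self._handles:
            raise PermissionError(f"no capability for {name!r}")
        return self._handles[name]

# A renderer component is handed exactly one readable stream.
sandbox = Sandbox({"page": io.StringIO("<html>...</html>")})
print(sandbox.use("page").read()[:6])    # allowed: delegated handle
try:
    sandbox.open_by_name("/etc/passwd")  # denied: lookup by name
except PermissionError as e:
    print("blocked:", e)
```

This is the structure Chromium’s sandboxed renderers follow: the privileged broker opens resources and passes handles down, so a compromised renderer has nothing to reach for.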

Metrics for dynamic networks

There’s a huge literature on the properties of static or slowly-changing social networks, such as the pattern of friends on Facebook, but almost nothing on networks that change rapidly. Yet many networks of real interest are highly dynamic. Think of the patterns of human contact that can spread infectious disease; you might be breathed on by a hundred people a day in meetings, on public transport and even in the street. If we were facing a flu pandemic, how could we measure whether the greatest spreading risk came from static nodes of high degree, or from highly mobile dynamic ones? Should we close the schools, or the Tube?

Today we unveiled a paper which proposes new metrics for centrality in dynamic networks. We wondered how we might measure networks where mobility is of the essence, such as the spread of plague in a medieval society where most people stay in their villages and infection is carried between them by a small number of merchants. We found we can model the effects of mobility on interaction by embedding a dynamic network in a larger time-ordered graph to which we can apply standard graph theory tools. This leads to dynamic definitions of centrality that extend the static definitions in a natural way and yet give us a much better handle on things than aggregate statistics can. I spoke about this work today at a local workshop on social networking, and the paper’s been accepted for Physical Review E. It’s joint work with Hyoungshick Kim.
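The embedding trick can be sketched in a few lines. The code below is a toy reconstruction of the idea, not the paper’s exact construction: because infection can only travel along time-respecting paths, an earliest-arrival sweep over the time-ordered contacts plays the role that breadth-first search plays in a static graph.

```python
def temporal_reachability(contacts, nodes, T):
    """Earliest-arrival sweep over a time-ordered contact sequence.

    contacts: dict mapping time step t -> list of undirected edges (u, v)
    Returns {source: {node: earliest arrival time}}.
    A toy reconstruction of the time-ordered-graph embedding, not the
    paper's exact construction.
    """
    result = {}
    for src in nodes:
        arrival = {src: 0}
        for t in range(T):
            # "Waiting" edges (v, t) -> (v, t+1) are implicit: anyone
            # already reached stays reached. Contact edges at step t
            # spread reachability within that step.
            for u, v in contacts.get(t, []):
                if u in arrival and v not in arrival:
                    arrival[v] = t + 1
                elif v in arrival and u not in arrival:
                    arrival[u] = t + 1
        result[src] = arrival
    return result

# A merchant M links two otherwise separate villages {A,B} and {C,D}.
contacts = {0: [("A", "B"), ("C", "D")], 1: [("B", "M")], 2: [("M", "C")]}
reach = temporal_reachability(contacts, ["A", "B", "C", "D", "M"], T=3)
print(reach["A"])  # A's infection reaches C only via the merchant
```

Note that A never reaches D at all: the C–D contact happened before the infection could arrive, which is exactly the ordering effect that aggregate statistics on the static graph miss.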

Bankers’ Christmas present

Every Christmas we give our friends in the banking industry a wee present. Sometimes it’s the responsible disclosure of a vulnerability, which we publish the following February: 2007’s was PED certification, 2008’s was CAP while in 2009 we told the banking industry of the No-PIN attack. This year too we have some goodies in the hamper: watch our papers at Financial Crypto 2012.

In other years, we’ve had arguments with the bankers’ PR wallahs. In 2010, for example, their trade association tried to censor the thesis of one of our students. That saga also continues; Britain’s bankers tried once more to threaten us, so we told them once more to go away. We have other conversations in progress with bankers, most of them thankfully a bit more constructive.

This year’s Christmas present is different: it’s a tale with a happy ending. Eve Russell was a fraud victim whom Barclays initially blamed for her misfortune, as so often happens, and the Financial Ombudsman Service initially found for the bank as it routinely does. Yet this was clearly not right; after many lawyers’ letters, two hearings at the ombudsman, two articles in The Times and a TV appearance on Rip-off Britain, Eve won. This is the first complete case file since the ombudsman came under the Freedom of Information Act; by showing how the system works, it may be useful to fraud victims in the future.

(At Eve’s request, I removed the correspondence and case papers from my website on 5 Oct 2015. Eve was getting lots of calls and letters from other fraud victims and was finally getting weary. I have left just the article in the Times.)

Oral evidence to the malware inquiry

The House of Commons Science and Technology Select Committee is currently holding an inquiry into malware.

I submitted written evidence in September and today I was one of three experts giving oral evidence to the MPs. The session was televised and so conceivably it may turn up on the TV in some strange timeslot — but if you’re interested then there’s a web version for viewing at your convenience. Shortly there will be a written transcript as well.

The Committee’s original set of questions included one about whether malware infection might usefully be treated as a public health issue — of particular interest to me because I have a published paper which considers the role that Governments might play in countering malware for the public good!

In the event, this wasn’t asked about at all. The questions were much more basic, covering the security of hardware and software, and the role of the police (and at one point, bizarrely, the merits of the Amstrad PCW, a product I was jointly involved in designing and building some 25 years ago).

In fact it was all rather more about dealing with crime than dealing with malware — which is fine (and obviously closely connected) but it wasn’t the topic on which everyone submitted evidence. This may mean that the Committee has a shortage of material if their report aims to address the questions that they raised today.

Will LBT be blocked?

Back in July I wrote a blog article “Will Newzbin be blocked?” which discussed the granting of an injunction to a group of movie companies to force BT to block access to “Newzbin2”.

The parties were back in court this last week to hammer out the exact details of the injunction.

The final wording of the injunction requires BT to block customer access to Newzbin2 by #1(1) rerouting traffic to relevant IPs and #1(2) applying “DPI based” URL blocking. The movie companies have to tell BT which IPs and which URLs are relevant.

#2 of the injunction says that BT can use its existing “Cleanfeed” system (which I wrote about here and at greater length in my PhD thesis here) to meet the requirements of #1, even though Cleanfeed isn’t believed to use DPI at all!

#3 and #4 of the injunction allows the parties to agree to suspend blocking and to come back to court in the future, and #5 relates to the costs of the court action.

One of the (few) upsides of this injunction will be to permit lawful experimentation as to the effectiveness of the Cleanfeed system, assuming that it is used — if the studios ask for all URLs on a website to be blocked, I expect that null routing the website entirely will be simpler for BT than redirecting traffic to the Cleanfeed proxy.

Up until now, discovering a flaw in the technical implementation of Cleanfeed would result in successful access to a child sexual abuse image website. Anyone monitoring the remote end of the connection might then draw the conclusion that images had been viewed and a criminal offence committed. Although careful experimental design could avoid law-breaking, it might be some time into the investigation process before this was properly understood by the criminal justice system, and the intervening period would be somewhat stressful for the investigator.

There is no law that prevents viewing of the contents of Newzbin2, and so the block circumvention techniques proposed over the past few years (starting of course with just using “https”) can now start to be evaluated as to their actual effectiveness.

However, there is more to #1 of the injunction, in that it applies to:

[…] www.newzbin.com, its domains and sub-domains and including payments.newzbin.com and any other IP address or URL whose sole or predominant purpose is to enable or facilitate access to the Newzbin2 website.

I don’t expect that publishing circumvention experience here on LBT could be seen as the predominant purpose of this blog… so I don’t really expect these pages to suddenly become invisible to BT customers. But, since the whole process has an Alice in Wonderland feel to it (someone who believes that blocking websites is possible clearly had little else to do before breakfast), it cannot be entirely ruled out.

Fashion crimes: trending-term exploitation on the web

News travels fast. Blogs and other websites pick up a news story only about 2.5 hours on average after it has been reported by traditional media. This leads to an almost continuous supply of new “trending” topics, which are then amplified across the Internet, before fading away relatively quickly. Many web companies track these terms, on search engines and in social media.
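The “trending” phase these terms pass through can be captured with a generic burst detector. The sketch below is my own illustration, not any search company’s method: it flags a term when its latest count jumps well above its recent baseline.

```python
# Generic burst-detection sketch for spotting a "trending" term:
# flag the latest count when it exceeds the trailing window's mean
# by k standard deviations. Illustrative only -- not the method of
# any particular search engine or social network.
from statistics import mean, stdev

def is_trending(counts, k=3.0, window=24):
    history, current = counts[-window - 1:-1], counts[-1]
    if len(history) < 2:
        return False
    mu, sigma = mean(history), stdev(history)
    # Floor sigma so a perfectly flat history doesn't trigger on noise.
    return current > mu + k * max(sigma, 1.0)

baseline = [10, 12, 9, 11, 10, 13, 10, 12]   # hourly query counts
print(is_trending(baseline + [11]))   # steady traffic -> False
print(is_trending(baseline + [90]))   # sudden burst  -> True
```

The short window is the point: by the time a term trips a detector like this, the race to rank for it has already begun.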

However narrow, these first moments after a story breaks present a window of opportunity for miscreants to infiltrate web and social-network search results. The motivation for doing so is primarily financial. Websites that rank high in response to a search for a trending term are likely to receive considerable amounts of traffic, regardless of their quality.

In particular, the sole goal of many sites designed in response to trending terms is to produce revenue through the advertisements that they display in their pages, without providing any original content or services. Such sites are often referred to as “Made for AdSense” (MFA) after the name of the Google advertising platform they are often targeting. Whether such activity is deemed to be criminal or merely a nuisance remains an open question, and largely depends on the tactics used to prop the sites up in the search-engine rankings. Some other sites devised to respond to trending terms have more overtly sinister motives. For instance, a number of malicious sites serve malware in hopes of infecting visitors’ machines, or peddle fake anti-virus software.

Together with Nektarios Leontiadis and Nicolas Christin, I have carried out a large-scale measurement and analysis of trending-term exploitation on the web, and the results are being presented at the ACM Conference on Computer and Communications Security (CCS) in Chicago this week. Based on a collection of over 60 million search results and tweets gathered over nine months, we characterize how trending terms are used to perform web search-engine manipulation and social-network spam. The full details can be found in the paper and presentation. Continue reading Fashion crimes: trending-term exploitation on the web

Pico: no more passwords (at Usenix Security)

The usability community has long complained about the problems of passwords (remember the Adams and Sasse classic). These days, even our beloved XKCD has something to say about the difficulties of coming up with a password that is easy to memorize and hard to brute-force. The sensible strategy suggested in the comic, of using a passphrase made of several common words, is also the main principle behind Jakobsson and Akavipat’s fastwords. It’s a great suggestion. However, in the long term, no solution that requires users to remember secrets is going to scale to hundreds of different accounts, if all those remembered secrets have to be different (and changed every couple of months).
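The arithmetic behind that comic is easy to check. The sketch below uses the comic’s assumption of a 2048-word list; the passphrase gives up a few bits against a truly random 8-character password, but is vastly easier to memorize (and real users don’t pick characters uniformly at random anyway).

```python
# Back-of-envelope entropy comparison behind the XKCD passphrase advice.
# The 2048-word list size is the comic's assumption; the 94-character
# alphabet is printable ASCII.
from math import log2

def passphrase_bits(words, wordlist_size=2048):
    return words * log2(wordlist_size)        # 11 bits per word

def password_bits(chars, alphabet_size=94):
    return chars * log2(alphabet_size)        # ~6.6 bits per character

print(f"4 common words : {passphrase_bits(4):.0f} bits")   # 44 bits
print(f"8 random chars : {password_bits(8):.0f} bits")     # 52 bits
```

44 bits is plenty against online guessing; the scaling problem the post goes on to describe is not entropy per secret, but the number of distinct secrets a human can hold.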

This is why, as I previously blogged, I am exploring the space of solutions that do not require the memorization of any secrets—whether passwords, passphrases, PINs, faces, graphical squiggles or anything else. My SPW paper, Pico: No more passwords, was finalized in June (including improvements suggested in the comments to the previous blog post) and I am about to give an invited talk on Pico at Usenix Security 2011 in San Francisco.

Usenix talks are recorded and the video is posted next to the abstracts: if you are so inclined, you will be able to watch my presentation shortly after I give it.

To encourage adoption, I chose not to patent any aspect of Pico. If you wish to collaborate, or fund this effort, talk to me. If you wish to build or sell it on your own, be my guest. No royalties due—just cite the paper.

Measuring Search-Redirection Attacks in the Illicit Online Prescription Drug Trade

Unauthorized online pharmacies that sell prescription drugs without requiring a prescription have been a fixture of the web for many years. Given the questionable legality of the shops’ business models, it is not surprising that most pharmacies resort to illegal methods for promoting their wares. Most prominently, email spam has relentlessly advertised illicit pharmacies. Researchers have measured the conversion rate of such spam, finding it to be surprisingly low. Upon reflection, this makes sense, given the spam’s unsolicited and untargeted nature. A more successful approach for the pharmacies would be to target users who have expressed an interest in purchasing drugs, such as those searching the web for online pharmacies. The trouble is that dodgy pharmacy websites don’t always garner the highest PageRanks on their own merits, and so some form of black-hat search-engine optimization may be required in order to appear near the top of web search results.

Indeed, by gathering the top web search results daily for 218 drug-related queries over nine months in 2010–2011, Nektarios Leontiadis, Nicolas Christin and I have found evidence of substantial manipulation of web search results to promote unauthorized pharmacies. In particular, we find that around one-third of the collected search results pointed to one of 7,000 infected hosts triggered to redirect to a few hundred pharmacy websites. In these pervasive search-redirection attacks, miscreants compromise high-ranking websites and dynamically redirect traffic to different pharmacies based on the particular search terms issued by the consumer. The full details of the study can be found in a paper appearing this week at the 20th USENIX Security Symposium in San Francisco.
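The cloaking at the heart of these attacks suggests a simple detection heuristic, sketched below. This is an invented classifier for illustration (not our measurement infrastructure): fetch a result twice, once with a search-engine Referer and once without, and flag it when only the “search” visit gets bounced to a different destination.

```python
def classify_search_redirection(plain_chain, referred_chain):
    """Compare the redirect chains seen with and without a
    search-engine Referer header. Each chain is the list of domains
    visited, in order. A hypothetical classifier for illustration,
    not the paper's measurement code."""
    plain_final, referred_final = plain_chain[-1], referred_chain[-1]
    if plain_final == referred_final:
        return "benign"              # same destination either way
    if referred_final != referred_chain[0]:
        return "search-redirection"  # only search traffic is bounced away
    return "inconclusive"

# A compromised blog sends search visitors to a pharmacy storefront,
# while direct visitors see the ordinary page (domains are made up).
print(classify_search_redirection(
    plain_chain=["cooking-blog.example"],
    referred_chain=["cooking-blog.example", "redirector.example", "pills.example"],
))  # → search-redirection
```

The asymmetry is deliberate on the attackers’ part: the site’s owner and the search engine’s crawler see the clean page, so the compromise goes unnoticed and the PageRank survives.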
Continue reading Measuring Search-Redirection Attacks in the Illicit Online Prescription Drug Trade

Will Newzbin be blocked?

This morning the UK High Court granted an injunction to a group of movie companies which is intended to force BT to block access to “newzbin 2” by their Internet customers. The “newzbin 2” site provides an easy way to search for and download metadata files that can be used to automate the downloading of feature films (TV shows, albums etc) from Usenet servers. I.e. it’s all about trying to prevent people from obtaining content without paying for a legitimate copy (so-called “piracy”).

The judgment is long and spends a lot of time (naturally) on legal matters, but there is some technical discussion — which is correct so far as it goes (though describing redirection of traffic based on port number inspection as “DPI” seems to me to stretch the jargon).

But what does the injunction require of BT? According to the judgment BT must apply “IP address blocking in respect of each and every IP address [of newzbin.com]” and “DPI based blocking utilising at least summary analysis in respect of each and every URL available at the said website and its domains and sub domains“. BT is then told that the injunction is “complied with if the Respondent uses the system known as Cleanfeed“.

There is almost nothing about the design of Cleanfeed in the judgment, but I wrote a detailed account of how it works in a 2005 paper (a slightly extended version of which appears as Chapter 7 of my 2005 PhD thesis). Essentially it is a two-stage system: the routing system redirects port 80 (HTTP) traffic for relevant IP addresses to a proxy machine, and that proxy prevents access to particular URLs.
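That two-stage design can be modelled in a few lines. The sketch below is a simplified model of the scheme as just described, not BT’s implementation; in particular it shows why traffic that never matches stage 1 (an unflagged IP, or port 443 rather than 80) is never inspected at all.

```python
# Simplified model of the two-stage hybrid blocking scheme described
# above -- NOT BT's actual implementation. The IP and URL are the ones
# discussed in this post.
FLAGGED_IPS = {"85.112.165.75"}                 # stage 1: routing table
BLOCKED_URLS = {"http://www.newzbin.com/"}      # stage 2: proxy's URL list

def cleanfeed(dest_ip, port, url=None):
    # Stage 1: routers divert only port-80 traffic for flagged IPs.
    if dest_ip not in FLAGGED_IPS or port != 80:
        return "pass"                           # never reaches the proxy
    # Stage 2: the proxy filters individual URLs.
    return "blocked" if url in BLOCKED_URLS else "pass"

print(cleanfeed("85.112.165.75", 80, "http://www.newzbin.com/"))  # blocked
print(cleanfeed("85.112.165.75", 443))  # HTTPS bypasses stage 1 entirely
```

The second call is the interesting one: a TLS connection to the same server goes nowhere near the proxy, which is why the https test described below is so revealing.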

So if BT just use Cleanfeed (as the injunction indicates) they will resolve newzbin.com (and www.newzbin.com) which are currently both on 85.112.165.75, and they will then filter access to http://www.newzbin.com/, http://newzbin.com and http://85.112.165.75. It will be interesting to experiment to determine how good their pattern matching is on the proxy (currently Cleanfeed is only used for child sexual abuse image websites, so experiments currently pose a significant risk of lawbreaking).

It will also be interesting to see whether BT actually use Cleanfeed or whether they just ‘blackhole’ all access to 85.112.165.75. The quickest way to determine this (once the block is rolled out) will be to see whether https://newzbin.com works. If it does work then BT will have obeyed the injunction but the block will be trivial to evade (add an “s” to the URL). If it does not work then BT will not be using Cleanfeed to do the blocking!

BT users will still of course be able to access Newzbin (though perhaps not by using https), but depending on the exact mechanisms which BT roll out it may be a little less convenient. The simplest method (but not the cheapest) will be to purchase a VPN service — which will tunnel traffic via a remote site (and access from there won’t be blocked). Doubtless some enterprising vendors will be looking to bundle a VPN with a Newzbin subscription and an account on a Usenet server.

The use of VPNs seems to have been discussed in court, along with other evasion techniques (such as using web and SOCKS proxies), but the judgment says “It is common ground that, if the order were to be implemented by BT, it would be possible for BT subscribers to circumvent the blocking required by the order. Indeed, the evidence shows the operators of Newzbin2 have already made plans to assist users to circumvent such blocking. There are at least two, and possibly more, technical measures which users could adopt to achieve this. It is common ground that it is neither necessary nor appropriate for me to describe those measures in this judgment, and accordingly I shall not do so.”

There’s also a whole heap of things that Newzbin could do to disrupt the filtering or just to make their site too mobile to be effectively blocked. I describe some of the possibilities in my 2005 academic work, and there are doubtless many more. Too many people consider the Internet to be a static system which looks the same from everywhere to everyone — that’s just not the case, so blocking systems that take this as a given (“web sites have a single IP address that everyone uses”) will be ineffective.

But this is all moot so far as the High Court is concerned. The bottom line within the judgment is that they don’t actually care if the blocking works or not! At paragraph #198 the judge writes “I agree with counsel for the Studios that the order would be justified even if it only prevented access to Newzbin2 by a minority of users“. Since this case was about preventing economic damage to the movie studios, I doubt that they will be so sanguine if it is widely understood how to evade the block — but the exact details of that will have to wait until BT have complied with their new obligations.