Monthly Archives: August 2011
Randomly-generated passwords at myBART
Last week, in retaliation against the heavy-handed response to planned protests against the BART metro system in California, the hacktivist group Anonymous hacked into several BART servers. They leaked part of a database of users from myBART, a website which provides frequent BART riders with email updates about activities near BART stations. An interesting aspect of the leak is that 1,346 of the 2,002 accounts seem to have randomly-generated passwords, a rare opportunity to study this approach to password security.
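How might one tell, from the leaked list alone, which passwords were machine-generated? One plausible heuristic, sketched below in Python, is to flag passwords that contain no dictionary words but mix character classes at a fixed length. This is an illustration of the idea only, not necessarily the method used on the myBART data; the word list, thresholds and function name are all invented:

```python
import re

# Tiny illustrative sample; a real check would use a full dictionary.
WORDLIST = {"password", "raiders", "dragon", "monkey", "letmein"}

def looks_machine_generated(password):
    """Heuristic sketch: flag passwords that resemble the output of a
    random generator rather than a human choice. Illustrative only --
    not necessarily the method used in the actual myBART analysis."""
    lowered = password.lower()
    # Human-chosen passwords tend to embed dictionary words, names or dates.
    if any(word in lowered for word in WORDLIST):
        return False
    # A uniform mix of character classes at a reasonable fixed length is a
    # common signature of site-issued random passwords.
    classes = sum(bool(re.search(pattern, password))
                  for pattern in (r"[a-z]", r"[A-Z]", r"[0-9]"))
    return len(password) >= 8 and classes >= 2

print(looks_machine_generated("x7Kq2mVp"))     # True: random-looking
print(looks_machine_generated("raiders1982"))  # False: word plus year
```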
Pico: no more passwords (at Usenix Security)
The usability community has long complained about the problems of passwords (remember the Adams and Sasse classic). These days, even our beloved XKCD has something to say about the difficulties of coming up with a password that is easy to memorize and hard to brute-force. The sensible strategy suggested in the comic, of using a passphrase made of several common words, is also the main principle behind Jakobsson and Akavipat’s fastwords. It’s a great suggestion. However, in the long term, no solution that requires users to remember secrets is going to scale to hundreds of different accounts, if all those remembered secrets have to be different (and changed every couple of months).
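To make the comic's arithmetic concrete: the entropy of a secret drawn uniformly at random is the base-2 logarithm of the number of equally likely possibilities. A quick sketch in Python, assuming (as XKCD does) words drawn from a list of about 2048 common words:

```python
import math

# Entropy of a uniformly random secret: length * log2(alphabet size).
def entropy_bits(alphabet_size, length):
    return length * math.log2(alphabet_size)

# Four words drawn at random from a 2048-word list (the XKCD scheme):
print(entropy_bits(2048, 4))   # 44.0 bits
# Eight characters drawn from the 94 printable ASCII symbols:
print(entropy_bits(94, 8))     # ~52.4 bits
# ...but a human-CHOSEN 8-character password has far less entropy in
# practice, since human choices are nowhere near uniform.
```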
This is why, as I previously blogged, I am exploring the space of solutions that do not require the memorization of any secrets—whether passwords, passphrases, PINs, faces, graphical squiggles or anything else. My SPW paper, Pico: No more passwords, was finalized in June (including improvements suggested in the comments to the previous blog post) and I am about to give an invited talk on Pico at Usenix Security 2011 in San Francisco.
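For readers who want a feel for the underlying principle, here is a toy sketch: a device holds a distinct key pair per account and proves possession by signing a fresh challenge, so the user memorizes nothing. To be clear, this is not the Pico protocol itself (which does considerably more); the Token class and account name below are invented for illustration:

```python
# Toy sketch of the general principle behind token-based schemes:
# one key pair per account, authentication by challenge-response.
# NOT the actual Pico protocol -- just the no-memorized-secrets idea.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class Token:
    def __init__(self):
        self.keys = {}  # one key pair per account

    def enrol(self, account):
        self.keys[account] = Ed25519PrivateKey.generate()
        # The public key is what gets registered with the verifier.
        return self.keys[account].public_key()

    def respond(self, account, challenge):
        # Prove possession of the private key by signing the challenge.
        return self.keys[account].sign(challenge)

# Verifier side: send a fresh random challenge, check the signature.
token = Token()
pub = token.enrol("example.com")
challenge = os.urandom(32)
pub.verify(token.respond("example.com", challenge), challenge)  # raises if invalid
```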
Usenix talks are recorded and the video is posted next to the abstracts: if you are so inclined, you will be able to watch my presentation shortly after I give it.
To encourage adoption, I chose not to patent any aspect of Pico. If you wish to collaborate, or fund this effort, talk to me. If you wish to build or sell it on your own, be my guest. No royalties due—just cite the paper.
Measuring Search-Redirection Attacks in the Illicit Online Prescription Drug Trade
Unauthorized online pharmacies that sell prescription drugs without requiring a prescription have been a fixture of the web for many years. Given the questionable legality of the shops’ business models, it is not surprising that most pharmacies resort to illegal methods for promoting their wares. Most prominently, email spam has relentlessly advertised illicit pharmacies. Researchers have measured the conversion rate of such spam, finding it to be surprisingly low. Upon reflection, this makes sense, given the spam’s unsolicited and untargeted nature. A more successful approach for the pharmacies would be to target users who have expressed an interest in purchasing drugs, such as those searching the web for online pharmacies. The trouble is that dodgy pharmacy websites don’t always garner the highest PageRanks on their own merits, and so some form of black-hat search-engine optimization may be required in order to appear near the top of web search results.
Indeed, by gathering daily the top search web results for 218 drug-related queries over nine months in 2010-2011, Nektarios Leontiadis, Nicolas Christin and I have found evidence of substantial manipulation of web search results to promote unauthorized pharmacies. In particular, we find that around one-third of the collected search results pointed to one of some 7,000 infected hosts triggered to redirect to a few hundred pharmacy websites. In these pervasive search-redirection attacks, miscreants compromise high-ranking websites and dynamically redirect traffic to different pharmacies based on the particular search terms issued by the consumer. The full details of the study can be found in a paper appearing this week at the 20th USENIX Security Symposium in San Francisco.
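To give a flavour of how such redirections can be detected (a simplified sketch, not the paper's actual methodology): compromised hosts typically redirect only when the visit appears to come from a search engine, so one can replay a search-result URL with a search-style Referer header and check whether the final landing domain differs. The URL and query below are placeholders:

```python
# Simplified sketch of detecting a search-redirection attack: replay a
# search-result URL with a search-engine Referer and see whether the
# redirect chain lands on a different domain.
import requests
from urllib.parse import urlparse

def redirect_destination(result_url, query):
    headers = {"Referer": "https://www.google.com/search?q=" + query}
    resp = requests.get(result_url, headers=headers,
                        allow_redirects=True, timeout=10)
    # resp.history holds the intermediate redirect responses, if any.
    return urlparse(resp.url).hostname, [r.url for r in resp.history]

host, chain = redirect_destination("http://example.com/page", "buy+drugs+online")
if chain and host != urlparse("http://example.com/page").hostname:
    print("search result redirects off-site to", host)
```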
DCMS illustrates the key issue about blocking
This morning the Department for Culture, Media and Sport (DCMS) have published a series of documents relating to the implementation of the Digital Economy Act 2010.
One of those documents, from OFCOM, describes how “Site Blocking” might be used to prevent access to websites that are involved in copyright infringement (i.e. torrent sites, Newzbin, “cyberlockers” etc.).
The report appears, at a quick glance, to cover the ground pretty well, describing the various options available to ISPs to block access to websites (and sometimes to block access altogether — since much infringement is not “web” based).
The report also explains how each of the systems can be circumvented (and how easily) and makes it clear (in big bold type) that “All techniques can be circumvented to some degree by users and site owners who are willing to make the additional effort.”
I entirely agree — and seem to recall a story from my childhood about the Emperor’s New Blocking System — and note that continuing to pursue this chimera will just mean that time and money will be pointlessly wasted.
However, OFCOM duly trot out the standard line one hears so often from the rights holders: “Site blocking is likely to deter casual and unintentional infringers and by requiring some degree of active circumvention raise the threshold even for determined infringers.”
The problem for the believers in blocking is that this just isn’t true — pretty much all access to copyright infringing material involves the use of tools (to access the torrents, to process NZB files, or just to browse [one tends not to look at web pages in Notepad any more]). Although these tools need to be created by competent people, they are intended for mass use (point and click) and so copyright infringement by the masses will always be easy. They will not even know that the hurdles were there, because the tools will jump over them.
Fortuitously, the DCMS have provided an illustration of this in the way they published the OFCOM report…
The start of the report says “The Department for Culture, Media and Sport has redacted some parts of this document where it refers to techniques that could be used to circumvent website blocks. There is a low risk of this information being useful to people wanting to bypass or undermine the Internet Watch Foundation’s blocks on child sexual abuse images. The text in these sections has been blocked out.”
What the DCMS have done (following in the footsteps of many other incompetents) is to black out the text they consider to be sensitive. Removing this blacking out is simple but tedious … you can get out a copy of Acrobat and change the text colour to white — or you can just cut and paste the black bits into Notepad and see the text.
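For the curious, here is why cut-and-paste works: drawing a black rectangle leaves the underlying text objects intact in the PDF content stream, so any text extractor recovers them regardless of what is painted on top. A sketch using the Python pypdf library (the filename is illustrative):

```python
# Text "redacted" with an overlaid black rectangle is still present in
# the PDF content stream, so ordinary text extraction recovers it.
from pypdf import PdfReader

reader = PdfReader("ofcom-site-blocking-report.pdf")  # illustrative filename
for number, page in enumerate(reader.pages, start=1):
    text = page.extract_text()  # ignores the drawn rectangles entirely
    print(f"--- page {number} ---")
    print(text)
```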
So I confidently expect that within a few hours, non-redacted (non-blocked!) versions of the PDF will be circulating (they may even become more popular than the original — everyone loves to see things that someone thought they should not). The people who look at these non-blocked versions need not be technically competent; they won’t know how to use Acrobat, but they will still see the material.
So the DCMS have kindly made the point in the simplest of ways… the argument that small hurdles make any difference is just wishful thinking; sadly for Internet consumers in many countries (who will end up paying for complex blocking systems that make no practical difference) these wishes will cost them money.
PS: the DCMS do actually understand that blocking doesn’t work, or at least not at the moment. Their main document says “Following advice from Ofcom – which we are publishing today – we will not bring forward site blocking regulations under the DEA at this time.” Sadly, however, this recognition of reality is too late for the High Court.