Category Archives: News coverage

Media reports that may interest you

Trusted Computing 2.1

We’re steadily learning more about the latest Trusted Computing proposals. People have started to grok that building signed boot into UEFI will extend Microsoft’s power over the markets for AV software and other security tools that install around boot time; while ‘Metro’ style apps (i.e. web/tablet/html5 style stuff) could be limited to distribution via the MS app store. Even if users can opt out, most of them won’t. That’s a lot of firms suddenly finding Steve Ballmer’s boot on their jugular.

We’ve also been starting to think about the issues of law enforcement access that arose during the crypto wars and that came to light again with CAs. These issues are even more wicked with trusted boot. If the Turkish government compelled Microsoft to include the Tubitak key in Windows so their intelligence services could do man-in-the-middle attacks on Kurdish MPs’ gmail, then I expect they’ll also tell Microsoft to issue them a UEFI key to authenticate their keylogger malware. Hey, I removed the Tubitak key from my browser, but how do I identify and block all foreign governments’ UEFI keys?
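The core of the problem is that firmware trusts every key in its database equally. The toy model below makes the point; it uses HMACs to stand in for the asymmetric signatures real secure boot uses, and the key names and secrets are entirely invented:

```python
import hashlib
import hmac

# Toy model of signed boot: HMAC keys stand in for the asymmetric vendor
# keys real secure boot uses; key names and secrets are invented.
TRUSTED_KEYS = {
    "os-vendor": b"vendor-secret",
    "gov-key-1": b"gov-secret",  # a government key enrolled in the firmware db
}

def sign(key: bytes, image: bytes) -> bytes:
    return hmac.new(key, image, hashlib.sha256).digest()

def boot_allows(image: bytes, signature: bytes) -> bool:
    # The firmware boots the image if ANY enrolled key validates it,
    # so every key in the database is equally powerful.
    return any(hmac.compare_digest(sign(k, image), signature)
               for k in TRUSTED_KEYS.values())

keylogger = b"boot-time keylogger"
sig = sign(TRUSTED_KEYS["gov-key-1"], keylogger)
print(boot_allows(keylogger, sig))  # True: signed by an enrolled key
```

A government key enrolled for lawful purposes can authenticate a keylogger just as easily as a legitimate bootloader, and the firmware has no way to tell the difference.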

Our Greek colleagues are already a bit cheesed off with Wall Street. How happy will they be if in future they won’t be able to install the security software of their choice on their PCs, but the Turkish secret police will?

DCMS illustrates the key issue about blocking

This morning the Department for Culture Media and Sport (DCMS) have published a series of documents relating to the implementation of the Digital Economy Act 2010.

One of those documents, from OFCOM, describes how “Site Blocking” might be used to prevent access to websites that are involved in copyright infringement (i.e. torrent sites, Newzbin, “cyberlockers” etc.).


The report appears, at a quick glance, to cover the ground pretty well, describing the various options available to ISPs to block access to websites (and sometimes to block access altogether — since much infringement is not “web” based).

The report also explains how each of the systems can be circumvented (and how easily) and makes it clear (in big bold type) “All techniques can be circumvented to some degree by users and site owners who are willing to make the additional effort.”

I entirely agree — and seem to recall a story from my childhood about the Emperor’s New Blocking System — and note that continuing to pursue this chimera will just mean that time and money will be pointlessly wasted.

However OFCOM duly trot out the standard line one hears so often from the rights holders: “Site blocking is likely to deter casual and unintentional infringers and by requiring some degree of active circumvention raise the threshold even for determined infringers.”

The problem for the believers in blocking is that this just isn’t true — pretty much all access to copyright infringing material involves the use of tools (to access the torrents, to process NZB files, or just to browse [one tends not to look at web pages in Notepad any more]). Although these tools need to be created by competent people, they are intended for mass use (point and click) and so copyright infringement by the masses will always be easy. They will not even know that the hurdles were there, because the tools will jump over them.

Fortuitously, the DCMS have provided an illustration of this in their publishing of the OFCOM report…

The start of the report says “The Department for Culture, Media and Sport has redacted some parts of this document where it refers to techniques that could be used to circumvent website blocks. There is a low risk of this information being useful to people wanting to bypass or undermine the Internet Watch Foundation’s blocks on child sexual abuse images. The text in these sections has been blocked out.”

What the DCMS have done (following in the footsteps of many other incompetents) is to black out the text they consider to be sensitive. Removing this blacking out is simple but tedious … you can get out a copy of Acrobat and change the text colour to white — or you can just cut and paste the black bits into Notepad and see the text.
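To see why this style of “redaction” fails, consider the sketch below. It mimics a PDF page content stream (the secret string and the coordinates are invented for illustration): the text is painted first, then a black rectangle is filled over the same spot, so the rendering hides the words but the bytes survive for any extraction tool, or indeed for cut-and-paste:

```python
# The string below mimics a PDF page content stream: 'Tj' paints the text,
# then 're' / 'f' fill a black rectangle over the same area. The secret
# string and the coordinates are invented for illustration.
secret = "details of the circumvention technique"

page_stream = (
    f"BT /F1 12 Tf 72 700 Td ({secret}) Tj ET\n"  # the text is drawn...
    "0 0 0 rg 70 690 320 16 re f\n"               # ...then covered in black
)

# The "redaction" changed only what is rendered, not what is stored:
print(secret in page_stream)  # True
```

Genuine redaction requires removing the text from the file before the black box goes on, not merely drawing over it.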

So I confidently expect that within a few hours, non-redacted (non-blocked!) versions of the PDF will be circulating (they may even become more popular than the original — everyone loves to see things that someone thought they should not). The people who look at these non-blocked versions will not be technically competent and will not know how to use Acrobat, but they will see the material.

So the DCMS have kindly made the point in the simplest of ways… the argument that small hurdles make any difference is just wishful thinking; sadly for Internet consumers in many countries (who will end up paying for complex blocking systems that make no practical difference) these wishes will cost them money.

PS: the DCMS do actually understand that blocking doesn’t work, or at least not at the moment. Their main document says “Following advice from Ofcom – which we are publishing today – we will not bring forward site blocking regulations under the DEA at this time.” Sadly however, this recognition of reality is too late for the High Court.

Will Newzbin be blocked?

This morning the UK High Court granted an injunction to a group of movie companies which is intended to force BT to block access to “newzbin 2” by their Internet customers. The “newzbin 2” site provides an easy way to search for and download metadata files that can be used to automate the downloading of feature films (TV shows, albums etc) from Usenet servers. i.e. it’s all about trying to prevent people from obtaining content without paying for a legitimate copy (so-called “piracy”).

The judgment is long and spends a lot of time (naturally) on legal matters, but there is some technical discussion — which is correct so far as it goes (though describing redirection of traffic based on port number inspection as “DPI” seems to me to stretch the jargon).

But what does the injunction require of BT? According to the judgment BT must apply “IP address blocking in respect of each and every IP address [of newzbin.com]” and “DPI based blocking utilising at least summary analysis in respect of each and every URL available at the said website and its domains and sub domains”. BT is then told that the injunction is “complied with if the Respondent uses the system known as Cleanfeed”.

There is almost nothing about the design of Cleanfeed in the judgment, but I wrote a detailed account of how it works in a 2005 paper (a slightly extended version of which appears as Chapter 7 of my 2005 PhD thesis). Essentially it is a two-stage system: the routing system redirects port 80 (HTTP) traffic for relevant IP addresses to a proxy machine, and that proxy prevents access to particular URLs.
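A minimal sketch of that two-stage design might look like the following (the IP and URL lists here are illustrative, not BT’s actual configuration):

```python
# Illustrative lists only; not BT's actual configuration.
SUSPECT_IPS = {"85.112.165.75"}
BLOCKED_URLS = {
    "http://www.newzbin.com/",
    "http://newzbin.com/",
    "http://85.112.165.75/",
}

def stage1_redirects(dst_ip: str, dst_port: int) -> bool:
    # Stage 1 (routing): only port-80 traffic to listed IPs goes via the proxy.
    return dst_port == 80 and dst_ip in SUSPECT_IPS

def is_blocked(dst_ip: str, dst_port: int, url: str) -> bool:
    if not stage1_redirects(dst_ip, dst_port):
        return False  # everything else, including port-443 HTTPS, flows freely
    # Stage 2 (proxy): match the requested URL against the block list.
    return url in BLOCKED_URLS

print(is_blocked("85.112.165.75", 80, "http://www.newzbin.com/"))  # True
print(is_blocked("85.112.165.75", 443, "https://newzbin.com/"))    # False
```

Note that traffic to port 443 never reaches the proxy at all in this design, which is why testing https access is such a good probe for whether Cleanfeed is actually being used.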

So if BT just use Cleanfeed (as the injunction indicates) they will resolve newzbin.com (and www.newzbin.com) which are currently both on 85.112.165.75, and they will then filter access to http://www.newzbin.com/, http://newzbin.com and http://85.112.165.75. It will be interesting to experiment to determine how good their pattern matching is on the proxy (currently Cleanfeed is only used for child sexual abuse image websites, so experiments currently pose a significant risk of lawbreaking).

It will also be interesting to see whether BT actually use Cleanfeed or if they just ‘blackhole’ all access to 85.112.165.75. The quickest way to determine this (once the block is rolled out) will be to see whether or not https://newzbin.com works. If it does work then BT will have obeyed the injunction but the block will be trivial to evade (add an “s” to the URL). If it does not work then BT will not be using Cleanfeed to do the blocking!

BT users will still of course be able to access Newzbin (though perhaps not by using https), but depending on the exact mechanisms which BT roll out it may be a little less convenient. The simplest method (but not the cheapest) will be to purchase a VPN service — which will tunnel traffic via a remote site (and access from there won’t be blocked). Doubtless some enterprising vendors will be looking to bundle a VPN with a Newzbin subscription and an account on a Usenet server.

The use of VPNs seems to have been discussed in court, along with other evasion techniques (such as using web and SOCKS proxies), but the judgment says “It is common ground that, if the order were to be implemented by BT, it would be possible for BT subscribers to circumvent the blocking required by the order. Indeed, the evidence shows the operators of Newzbin2 have already made plans to assist users to circumvent such blocking. There are at least two, and possibly more, technical measures which users could adopt to achieve this. It is common ground that it is neither necessary nor appropriate for me to describe those measures in this judgment, and accordingly I shall not do so.”

There’s also a whole heap of things that Newzbin could do to disrupt the filtering or just to make their site too mobile to be effectively blocked. I describe some of the possibilities in my 2005 academic work, and there are doubtless many more. Too many people consider the Internet to be a static system which looks the same from everywhere to everyone — that’s just not the case, so blocking systems that take this as a given (“web sites have a single IP address that everyone uses”) will be ineffective.

But this is all moot so far as the High Court is concerned. The bottom line within the judgment is that they don’t actually care if the blocking works or not! At paragraph #198 the judge writes “I agree with counsel for the Studios that the order would be justified even if it only prevented access to Newzbin2 by a minority of users”. Since this case was about preventing economic damage to the movie studios, I doubt that they will be so sanguine if it is widely understood how to evade the block — but the exact details of that will have to wait until BT have complied with their new obligations.

Phone hacking, technology and policy

Britain’s phone hacking scandal touches many issues of interest to security engineers. Murdoch’s gumshoes listened to celebs’ voicemail messages using default PINs. They used false-pretext phone calls – blagging – to get banking and medical records.

We’ve known for years that private eyes blag vast amounts of information (2001 book, from page 167; 2006 ICO Report). Centralisation and the ‘Cloud’ are making things worse. Twenty years ago, your bank records were available only in your branch; now any teller at any branch can look them up. The dozen people who work at your doctor’s surgery used to be able to keep a secret, but what about the 840,000 staff with a logon to our national health databases?

Attempts to fix the problem using the criminal justice system have failed. When blagging was made illegal in 1995, the street price of medical records actually fell from £200 to £150! Parliament increased the penalty from fines to jail in 2006 but media pressure scared ministers off implementing this law.

Our Database State report argued that the wholesale centralisation of medical and other records was unsafe and illegal; and the NHS Population Demographics Service database appears to be the main one used to find celebs’ ex-directory numbers. First, celebs can opt out, but most of them are unaware of PDS abuse, so they don’t. Second, you can become a celeb instantly if you are a victim of crime, war or terror. Third, even if you do opt out, the gumshoes can just bribe policemen, who have access to just about everything.

In future, security engineers must pay much more attention to compartmentation (even the Pentagon is now starting to get it), and we must be much more wary about the risk that law-enforcement access to information will be abused.

TalkTalk’s new blocking system

Back in January I visited TalkTalk along with Jim Killock of the Open Rights Group (ORG) to have their new Internet blocking system explained to us. The system was announced yesterday, and I’m now publishing my technical description of how it works (note that it was called “BrightFeed” when we saw it, but is now named “HomeSafe”).

Buried in all the detail of how the system works are two key points — the first is the notion that it is possible for a centralised checking system (especially one that tells a remote site its identity) to determine whether sites are malicious or not. This is problematic, and I doubt that malware distributors will see this as much of a challenge — although on the other hand, perhaps by setting your browser’s User Agent string to pretend to be the checking system you might become rather safer!
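To illustrate why such a checker is easy to game, here is a hypothetical sketch of server-side “cloaking” (the checker’s User Agent string below is invented, not TalkTalk’s real one): the site serves a clean page to the scanner and malware to everyone else.

```python
# The scanner's User Agent string here is invented, purely for illustration.
SCANNER_UA = "HomeSafe-Checker/1.0"

def serve(user_agent: str) -> str:
    # A malicious site that can recognise the checker's identity can "cloak":
    if SCANNER_UA in user_agent:
        return "<html>a perfectly innocent page</html>"  # what the checker sees
    return "<html><script>exploit()</script></html>"     # what real users get

print("exploit" in serve("Mozilla/5.0"))  # True: real visitors get the malware
print("exploit" in serve(SCANNER_UA))     # False: the checker sees a clean site
```

The same logic, run in reverse, is why impersonating the checker might make an ordinary user safer: they would be served the innocuous version too.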

The second is that although the system is described as “opt in”, that only applies to whether or not websites you visit might be blocked. What is not “opt in” is whether or not TalkTalk learns the details of the URLs that all of their customers visit, whether they have opted in or not. All of these sites will be visited by TalkTalk’s automated system — which may take some explaining if a remote site told you a URL in confidence and is checking its logs to see who visits.

On their site, ORG have expressed an opinion as to whether the system can be operated lawfully, along with TalkTalk’s own legal analysis. TalkTalk argue that the system’s purpose is to protect their network, which gives them a statutory exemption from wire-tapping legislation; whereas all the public relations material seems to think it’s been developed to protect the users….

… in the end though, the system will be judged by its effectiveness, and in a world where less than 20% of new threats are detected — that may not be all that high.

Everyone’s spam is unique

How much spam you get depends on three main things: how many spammers know (or guess) your email address, how good your spam filtering is, and, of course, how active the spammers are.

A couple of years back I investigated how spam volumes varied depending on the first letter of your email address (comparing aardvark@example.com with zebra@example.com), with the variations almost certainly coming down to “guessability” (an email address of john@ is easier to guess than yvette@).

As to the impact of filtering, I investigated spam levels in the aftermath of the disabling of McColo — asking whether it was the easy-to-block spam that disappeared. The impact of that closure will have been different for different people, depending on the type (and relative effectiveness) of their spam filtering solution.

Just at the moment, as reported upon in some detail by Brian Krebs, we’re seeing a major reduction in activity. In particular, the closure of an affiliate system for pharmacy spam in September reduced global spam levels considerably, and since Christmas a number of major systems have practically disappeared.

I’ve had a look at spam data going back to January 2010 from my own email server, which handles email for a handful of domains, and that shows a different story!

It shows that spam was up in October … so the reduction didn’t affect how many of the spam emails came to me, just how many “me’s” there were worldwide. Levels have been below the yearly average for much of December, but I am seeing most (but not all) of the dropoff since Christmas Day.

Click on the graph for a bigger version… and yes, the vertical axis is correct, I really do get up to 60,000 spam emails a day, and of course none at all on the days when the server breaks altogether.

A Merry Christmas to all Bankers

The bankers’ trade association has written to Cambridge University asking for the MPhil thesis of one of our research students, Omar Choudary, to be taken offline. They complain it contains too much detail of our No-PIN attack on Chip-and-PIN and thus “breaches the boundary of responsible disclosure”; they also complain about Omar’s post on the subject to this blog.

Needless to say, we’re not very impressed by this, and I made this clear in my response to the bankers. (I am embarrassed to see I accidentally left Mike Bond off the list of authors of the No-PIN vulnerability. Sorry, Mike!) There is one piece of Christmas cheer, though: the No-PIN attack no longer works against Barclays’ cards at a Barclays merchant. So at least they’ve started to fix the bug – even if it’s taken them a year. We’ll check and report on other banks later.

The bankers also fret that “future research, which may potentially be more damaging, may also be published in this level of detail”. Indeed. Omar is one of my coauthors on a new Chip-and-PIN paper that’s been accepted for Financial Cryptography 2011. So here is our Christmas present to the bankers: it means you all have to come to this conference to hear what we have to say!

The Gawker hack: how a million passwords were lost

Almost a year to the day after the landmark RockYou password hack, we have seen another large password breach, this time of Gawker Media. While an order of magnitude smaller, it’s still probably the second largest public compromise of a website’s password file, and in many ways it’s a more interesting case than RockYou. The story quickly made it to the mainstream press, but the reported details are vague and often wrong. I’ve obtained a copy of the data (which remains generally available, though Gawker is attempting to block listing of the torrent files) so I’ll try to clarify the details of the leak and Gawker’s password implementation (gleaned mostly from the readme file provided with the leaked data and from reverse engineering MySQL dumps). I’ll discuss the actual password dataset in a future post.

Wikileaks, security research and policy

A number of media organisations have been asking us about Wikileaks. Fifteen years ago we kicked off the study of censorship resistant systems, which inspired the peer-to-peer movement; we help maintain Tor, which provides the anonymous communications infrastructure for Wikileaks; and we’ve a longstanding interest in information policy.

I have written before about governments’ love of building large databases of sensitive data to which hundreds of thousands of people need access to do their jobs – such as the NHS spine, which will give over 800,000 people access to our health records. The media are now making the link. Whether sensitive data are about health or about diplomacy, the only way forward is compartmentation. Medical records should be kept in the surgery or hospital where the care is given; and while an intelligence analyst dealing with Iraq might have access to cables on Iraq, Iran and Saudi Arabia, he should have no routine access to stuff on Korea or Brazil.
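Compartmentation is easy to state in code, even if it is hard to retrofit onto systems designed around wholesale access. A minimal sketch (the compartment labels and the analyst’s clearances are invented for illustration):

```python
# Compartment labels and the analyst's clearances are invented for illustration.
def may_read(user_compartments: frozenset, record_compartment: str) -> bool:
    # No wholesale access: a read succeeds only if the user holds the
    # record's specific compartment.
    return record_compartment in user_compartments

iraq_analyst = frozenset({"IRAQ", "IRAN", "SAUDI"})
print(may_read(iraq_analyst, "IRAQ"))   # True: routine, job-related access
print(may_read(iraq_analyst, "KOREA"))  # False: outside the analyst's remit
```

The hard part is not the check itself but deciding who holds which compartments, and resisting the organisational pressure to hand everyone everything “just in case”.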

So much for the security engineering; now to policy. No-one questions the US government’s right to try one of its soldiers for leaking the cables, or the right of the press to publish them now that they’re leaked. But why is Wikileaks treated as the leaker, rather than as a publisher?

This leads me to two related questions. First, does a next-generation censorship-resistant system need a more resilient technical platform, or more respectable institutions? And second, if technological change causes respectable old-media organisations such as the Guardian and the New York Times to go bust and be replaced by blogs, what happens to freedom of the press, and indeed to freedom of speech?

Resumption of the crypto wars?

The Telegraph and Guardian reported yesterday that the government plans to install deep packet inspection kit at ISPs, a move considered and then apparently rejected by the previous government (our Database State report last year found their Interception Modernisation Programme to be almost certainly illegal). An article in the New York Times on comparable FBI/NSA proposals makes you wonder whether policy is being coordinated between Britain and America.

In each case, the police and spooks argue that they used to have easy access to traffic data — records of who called whom and when — so now that people communicate using facebook, gmail and second life rather than with phones, they should be allowed to harvest data about who wrote on your wall, what emails appeared on your gmail inbox page, and who stood next to you in second life. This data will be collected on everybody and will be available to investigators who want to map suspects’ social networks. A lot of people opposed this, including the Lib Dems, who promised to “end the storage of internet and email records without good reason” and wrote this into the Coalition Agreement. The Coalition seems set to reinterpret this now that the media are distracted by the spending review.

We were round this track before with the debate over key escrow in the 1990s. Back then, colleagues and I wrote of the risks and costs of insisting that communications services be wiretap-ready. One lesson from the period was that the agencies clung to their old business model rather than embracing all the new opportunities; they tried to remain Bletchley Park in the age of Google. Yet GCHQ people I’ve heard recently are still stuck in the pre-computer age, having learned nothing and forgotten nothing. As for the police, they can’t really cope with the forensics for the PCs, phones and other devices that fall into their hands anyway. This doesn’t bode well, either for civil liberties or for national security.