All posts by Richard Clayton

Happy Birthday ORG!

The Open Rights Group (ORG) has today published a report on its first two years of operation.

ORG’s origins lie in an online pledge, which got a thousand people to agree to pay a fiver a month to fund a campaigning organisation for digital rights. This mass membership gives it credibility, and it has used that credibility in campaigns on Digital Rights Management, Copyright Term Extension (“release the music”), Software Patents, Crown Copyright and E-Voting (for one small part of which Steven Murdoch and I stayed up into the small hours to chronicle the debacle in Bedford).

ORG is now lobbying in the highest of circles (though, like everyone else who gives the Government good advice, they aren’t always listened to), and they are getting extensively quoted in the press, as journalists discover their expertise and their unique constituency.

Naturally ORG needs even more members, to become even more effective, and to be able to afford to campaign on even more issues in the future. So whilst you look at their annual report, do think about whether you can really afford not to support them!

ObDisclaimer: I’m one of ORG’s advisory council members. I’m happy to advise them to keep it up!

Government ignores Personal Internet Security

At the end of last week the Government published their response to the House of Lords Science and Technology Committee Report on Personal Internet Security. The original report was published in mid-August and I blogged about it (and my role in assisting the Committee) at that time.

The Government has turned down pretty much every recommendation. The most positive phrases used were “consider” or “working towards setting up”. That’s more than a little surprising, because the report made a great deal of sense, and their lordships aren’t fools. So is the Government ignorant, stupid, or in thrall to some special interest group?

On balance I think it starts from ignorance.

Some of the most compelling evidence that the Committee heard was at private meetings in the USA from companies such as Microsoft, Cisco, Verisign, and in particular from Team Cymru, who monitor the “underground economy”. I don’t think that the Whitehall mandarins have heard these briefings, or have bothered to read the handful of published articles such as this one in ;login, or this more recent analysis that will appear at CCS next week. If the Government were up to speed on what researchers are documenting, they wouldn’t be arguing that there is more crime solely because there are more users — and they could not possibly say that they “refute the suggestion […] that lawlessness is rife”.

However, we cannot rule out stupidity.

Some of the Select Committee recommendations were intended to address the lack of authoritative data — and these were rejected as well. The Government doesn’t think it is urgently necessary to capture more information about the prevalence of eCrime; they don’t think that having the banks collate crime reports gets all the incentives wrong; and they “do not accept that the incidence of loss of personal data by companies is on an upward path” (despite there being no figures in the UK to support or refute that notion, and considerable evidence of regular data loss in the United States).

The bottom line is that the Select Committee did some “out-of-the-box thinking” and came up with a number of proposals for measurement, for incentive alignment, and for bolstering law enforcement’s response to eCrime. The Government have settled for complacency, quibbling about the wording of the recommendations, and picking out a handful of the more minor recommendations to “note”, to “consider” and to “keep under review”.

A whole series of missed opportunities.

Time to forget?

In a few hours’ time, Part III of the Regulation of Investigatory Powers Act 2000 will come into effect. The commencement order means that as of October 1st a section 49 notice can be served which requires that encrypted data be “put into an intelligible form” (what you and I might call “decrypted”). Extended forms of such a notice may, under the provisions of s51, require you to hand over your decryption key, and/or, under s54, include a “no tipping off” provision.

If you fail to comply with a notice (or breach a tipping-off requirement by telling someone about it) then you will have committed an offence, for which the maximum penalty is two years in prison or a fine, or both. It’s five years for “tipping off”, and also five years (an amendment in s15 of the Terrorism Act 2006) if the case relates to “national security”.

By convention, laws in the UK very seldom have retrospective effect, so that if you do something today, Parliament is very loth to pass a law tomorrow making your actions illegal. However, the offences in Part III relate to failing to obey a s49 notice, and although that notice could be served on you tomorrow (or thereafter), the material may have been encrypted by you today (or before).

Potentially, therefore, the police could start demanding the putting into an intelligible form not only of information that they seize in a raid tomorrow morning, but also of material that they seized weeks, months or years ago. In the 1995 Smith case (part of Operation Starburst), the defendant only received a suspended sentence because the bulk of the material was encrypted. In this particular example, double jeopardy, or the time that has elapsed, may constrain the police from serving a notice on Mr Smith, but there’s nothing in RIP itself, or the accompanying Code of Practice, to prevent them serving a s49 notice on more recently seized encrypted material if they deem it to be necessary and proportionate.

In fact, they might even be nipping round to Jack Straw’s house demanding a decryption key — as this stunt from 1999 made possible (back when the wording of a predecessor bill was rather more inane than the text RIP was (eventually) amended to contain).

There are some defences in the statute to failing to comply with a notice — one of which is that you can claim to have forgotten the decryption key (in practice, the passphrase under which the key is stored). In such a case the prosecution (the burden of proof was amended during the passage of the Bill) must show beyond a reasonable doubt that you have not forgotten it. Since they can’t mind-read, the expectation must be that they would attempt to show regular usage of the passphrase, and invite the jury to conclude that the forgetting has been faked — and this might be hard to manage if a hard disk has been in a police evidence store for over a decade.

However, if you’re still using such a passphrase and still have access to the disk, and if the contents are going to incriminate you, then perhaps a sledgehammer might be a suitable investment.

Me? I set up my alibi long ago 🙂

Web content labelling

As we all know, the web contains a certain amount of content that some people don’t want to look at, and/or do not wish their children to look at. Removing the material is seldom an option (it may well be entirely lawfully hosted, and indeed many other people may be perfectly happy for it to be there). Since centralised blocking of such material just isn’t going to happen, the best way forward is the installation of blocking software on the end-user’s machine. This software will have blacklists and whitelists supplied from a central server, and it gives parents some useful reassurance that their youngest children have some protection. Older children can of course just turn the systems off, as has recently been widely reported for the Australian NetAlert system.

A related idea is that websites should rate themselves according to widely agreed criteria, and this would allow visitors to know what to expect on the site. Such ratings would of course be freely available, unlike the blocking software which tends to cost money (to pay for the people making the whitelists and blacklists).

I’ve never been a fan of these self-rating systems, whose criteria always seem to be based on a white, middle-class, Presbyterian view of wickedness, and which — at least initially — were hurriedly patched together from videogame rating schemes. More than a decade ago I lampooned the then widely hyped RSACi system by creating a site that scored “4 4 4 4”, the highest (most unacceptable) score in every category: http://www.happyday.demon.co.uk/awful.htm. Just recently, I was reminded of this in the context of an interview for an EU review of self-regulation.
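For those who never met RSACi: a site self-rated by embedding a PICS label in an HTML meta tag, scoring itself from 0 to 4 in each of four categories (nudity, sex, violence and language). The sketch below reconstructs, from memory, roughly what a maximal “4 4 4 4” self-label looked like; treat the exact PICS syntax and the rating-service URL as assumptions rather than a verified specification.

```python
# Roughly what a maximal RSACi self-rating looked like, reconstructed from
# memory -- the exact PICS label syntax and service URL are assumptions.
RSACI_SERVICE = "http://www.rsac.org/ratingsv01.html"  # assumed rating-service URL

def rsaci_meta_tag(nudity=4, sex=4, violence=4, language=4):
    """Build an HTML meta tag carrying a PICS label with RSACi scores
    (0 = mildest, 4 = strongest) in the four RSACi categories."""
    label = (
        f'(PICS-1.1 "{RSACI_SERVICE}" l '
        f'r (n {nudity} s {sex} v {violence} l {language}))'
    )
    return f"<meta http-equiv=\"PICS-Label\" content='{label}'>"

# The "4 4 4 4" rating that awful.htm awarded itself:
print(rsaci_meta_tag())
```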

Continue reading Web content labelling

The interns of Privila

Long-time readers will recall that I was spammed with an invitation to swap links with the European Human Rights Centre, a plagiarised site that exists to make money out of job listings and Google ads. Well, some more email spam has drawn my attention to something rather different:

From: “Elanor Radaker” <links@bustem.com>
Subject: Wanna Swap Links
Date: Thu, 19 Apr 2007 01:42:37 -0500

Hi,

I’ve been working extremely hard on my friend’s website bustem.com and if you like what we’ve done, a link from <elided> would be greatly appreciated. If you are interested in a link exchange please …

<snip>

Thank you we greatly appreciate the help! If you have any questions please let me know!

Respectfully,

Elanor Radaker

This site, bustem.com, is not quite as the email claims. However, it is not plagiarised. Far from it: the content has been written to order for Privila Inc by some members of a small army of unpaid interns… and when one starts looking, there are literally hundreds of similar sites.

Continue reading The interns of Privila

Phishing website removal — comparing banks

Following on from our comparison of phishing website removal times for different freehosting webspace providers, Tyler Moore and I have now crunched the numbers so as to be able to compare take-down times by different banks.

The comparison graph is below (click on it to get a more readable version). The sites compared are phishing websites that were first reported in an 8-week period from mid-February to mid-April 2007 (you can’t so easily compare relatively recent periods because of the “horizon effect”, which makes sites that appear later in the period count for less). A bank qualifies for inclusion if at least 5 different websites attacking it were observed during the period. It’s also important to note that we didn’t count sites that were removed too quickly for us to inspect them, and (this matters considerably) we ignored “rock-phish” websites, which attack multiple banks in parallel.

Phishing website take-down times (5 or more sites, Feb-Apr 2007)
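For readers curious how the numbers behind a graph like this might be assembled, here is a minimal sketch in Python. It is not our measurement code: the record layout, field names and example data are purely illustrative, but the filtering mirrors the qualification rules described above (in-period reports only, rock-phish and never-inspected sites excluded, banks with at least 5 qualifying sites).

```python
from collections import defaultdict
from datetime import datetime

# Illustrative records only -- not the real dataset.
# Each tuple: (bank, first_reported, removed, is_rock_phish, inspected_before_removal)
observations = [
    ("ExampleBank", datetime(2007, 2, 20, 9, 0), datetime(2007, 2, 21, 15, 0), False, True),
    # ... one record per observed phishing website ...
]

PERIOD_START = datetime(2007, 2, 15)   # mid-February 2007
PERIOD_END = datetime(2007, 4, 15)     # mid-April 2007
MIN_SITES = 5                          # only compare banks with >= 5 qualifying sites

lifetimes = defaultdict(list)
for bank, reported, removed, rock_phish, inspected in observations:
    if rock_phish:        # rock-phish gangs attack many banks in parallel: excluded
        continue
    if not inspected:     # removed before we could inspect it: excluded
        continue
    if not (PERIOD_START <= reported < PERIOD_END):
        continue          # outside the 8-week comparison window
    lifetimes[bank].append((removed - reported).total_seconds() / 3600.0)

for bank, hours in sorted(lifetimes.items()):
    if len(hours) >= MIN_SITES:
        print(f"{bank}: {len(hours)} sites, mean take-down time {sum(hours) / len(hours):.1f} hours")
```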

Although the graph clearly tells us something about relative performance, it is important not to immediately ascribe this to relative competence or incompetence. For example, Bank of America and CitiBank sites stay up rather longer than most. But they have been attacked for years, so maybe their attackers have learnt where to place their sites so as to be harder to remove? This might also apply to eBay — although around a third of their sites are on freehosting, and those come down rather quicker than average, so the remainder of their sites stay up even longer than the graph seems to show.

A lot of the banks outsource take-down to specialist companies (usually more general “brand protection” companies who have developed a side-line in phishing website removal). Industry insiders tell me that many of the banks at the right hand side of the graph, with lower take-down times, are in this category… certainly some of the specialists are looking forward to this graph appearing in public, so that they can use it to promote their services 🙂

However, once all the caveats (especially about not counting almost instantaneous removal) have been taken on board, one cannot be completely sure that this particular graph conclusively demonstrates that any particular bank or firm is better than another.

Phishing and the gaining of “clue”

Tyler Moore and I are in the final throes of creating a heavily revised version of our WEIS paper on phishing site take-down for the APWG eCrime Researchers Summit in early October in Pittsburgh.

One of the new results that we’ve generated comes from looking at take-down times for phishing sites hosted at alice.it, a provider of free webspace. Anyone who signs up (some Italian required) gets a 150MB web presence for free, and some of the phishing attackers are using the site to host fraudulent websites (mainly eBay, in various languages, but with a smattering of PayPal and Posteitaliane). When we generate a scatter plot of the take-down times we see the following effect:

Take-down times for phishing sites hosted at alice.it
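As a rough illustration only (this is not our analysis code, and the data points below are placeholders), a scatter plot of this kind can be produced by plotting each site’s lifetime against the date on which it was first reported:

```python
import matplotlib.pyplot as plt
from datetime import datetime

# Placeholder data: (date site first reported, hours until it was taken down).
sites = [
    (datetime(2007, 6, 1), 72.0),
    (datetime(2007, 6, 14), 30.5),
    (datetime(2007, 7, 2), 4.0),
    # ... one point per observed phishing site on alice.it ...
]

plt.scatter([d for d, _ in sites], [h for _, h in sites])
plt.xlabel("Date site first reported")
plt.ylabel("Take-down time (hours)")
plt.title("Take-down times for phishing sites hosted at alice.it")
plt.show()
```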

Continue reading Phishing and the gaining of “clue”

Poor advice from SiteAdvisor

As an offshoot of our work on phishing, we’ve been getting more interested generally in reputation systems. One of these systems is McAfee’s SiteAdvisor, a free download of a browser add-on which will apparently “keep you safe from adware, spam and online scams”. Every time you search for or visit a website, McAfee gets told what you’re doing (why worry? they have a privacy policy!), and gives you their opinion of the site. As they put it “Safety ratings from McAfee SiteAdvisor are based on automated safety tests of Web sites (including of our own site) and are enhanced by feedback from our volunteer reviewers and insights from our own analysts”.

Doubtless, it works really well in many cases… but my experience is that you can’t necessarily rely on it 🙁

In particular, I visited http://www.hotshopgood.com (view this image if the site has been removed!). The prices are quite striking — significantly less than what you might expect to pay elsewhere. For example, the Canon EOS-1DS Mark II is available for $1880.00, which frankly is a bargain: the best price I can find elsewhere today is a whopping $5447.63.

So why is the camera so cheap? The clue is on the payments page — they don’t take credit cards, only Western Union transfers. Now Western Union are pretty clear about this: “Never send money to a stranger using a money transfer service” and “Beware of deals or opportunities that seem too good to be true”. So it isn’t about the credit card companies failing to take a cut; it is all about the inability to reverse Western Union transfers when the goods fail to turn up.

Here’s someone who fell for this scam, paying $270 for a TomTom Go 910 SatNav. The current going prices — 5 months later — for a non-refurbished unit start at $330, assuming you ignore the sellers who only seem to have email addresses at web portals… so the device was cheap, but not outrageously so like the camera.

I know about that particular experience because someone has kindly entered the URL of the consumer forum into McAfee’s database as a “bad shopping experience”. Nevertheless, SiteAdvisor displays “green” for the website in the status bar, and if I choose to visit the detailed page the main message (with a large tickmark on a green background) is that “We tested this site and didn’t find any significant problems”, and I need to scroll down to locate the (not especially eye-catching) user-supplied warning.

This is somewhat disappointing — not just because of the nature of the site and the nature of the user complaint, but because, since 15th March 2007, www.hotshopgood.com has been listed as wicked by “Artists Against 419”, a community list of bad websites, and it is on the current list of fraudulent websites at fraudwatchers.org. Viz: there’s something of a consensus that this isn’t a legitimate site, yet McAfee have failed to tap into the community’s opinion.

Now of course reputation is a complex thing, and there are many millions of websites out there, so McAfee have set themselves a complex task. I’ve no doubt they manage to justifiably flag many sites as wicked, but when they’re not really sure, and users are telling them that there’s an issue, they ought to be considering at least an amber traffic light, rather than the current green.

BTW: you may wish to note that SiteAdvisor currently considers www.lightbluetouchpaper.org to be deserving of a green tick. One of the reasons for this is that it mainly links to other sites that get green ticks. So presumably when they finally fix the reputation of hotshopgood.com, that will slightly reduce this site’s standing. A small price to pay! (though hopefully not a price that is too good to be true!)

House of Lords Inquiry: Personal Internet Security

For the last year I’ve been involved with the House of Lords Science and Technology Committee’s Inquiry into “Personal Internet Security”. My role has been that of “Specialist Adviser”, which means that I have been briefing the committee about the issues, suggesting experts who they might wish to question, and assisting with the questions and their understanding of the answers they received. The Committee’s report is published today (Friday 10th August) and can be found on the Parliamentary website here.

For readers who are unfamiliar with the UK system — the House of Lords is the second chamber of the UK Parliament and is currently composed mainly of “the great and the good”, although 92 hereditary peers still remain, including the Earl of Erroll, who was one of the more computer-literate people on the committee.

The Select Committee reports are the result of in-depth study of particular topics by people who have reached the top of their professions (and who are therefore quick learners, even if they start by knowing little of the topic), and their careful reasoning and endorsement of convincing expert views carries considerable weight. The Government is obliged to respond formally, and there will, at some point, be a few hours of debate on the report in the House of Lords.

My appointment letter made it clear that I wasn’t required to publicly support the conclusions that their lordships came to, but I am generally happy to do so. There are quite a lot of conclusions and recommendations, but I believe that three areas particularly stand out.

The first area where the committee has assessed the evidence, not as experts but as intelligent outsiders, is where the responsibility for Personal Internet Security lies. Almost every witness was asked about this, but very few gave an especially wide-ranging answer. A lot of people, notably the ISPs and the Government, dumped a lot of the responsibility onto individuals, which neatly avoided them having to shoulder very much themselves. But individuals are just not well-informed enough to understand the security implications of their actions, and although it’s desirable that they aren’t encouraged to do dumb things, most of the time they’re not in a position to know whether an action is dumb or not. The committee have a series of recommendations to address this — there should be BSI kitemarks to allow consumers to select services that are likely to be secure, ISPs should lose “mere conduit” exemptions if they don’t act to deal with compromised end-user machines, and the banks should be statutorily obliged to bear losses from phishing. None of these measures will fix things directly, but they will change the incentives, and that has to be the way forward.

Secondly, the committee are recommending that the UK bring in a data breach notification law, along the general lines of the California law and those of 34 other US states. This would require companies that leaked personal data (because of a hacked website, or a stolen laptop, or just by failing to secure it) to notify the people concerned that this had happened. At first that might sound rather weak — they just have to tell people; but in practice the US experience shows that it makes a difference. Companies don’t like the publicity, and of course the people involved are able to take precautions against identity theft (and tell all their friends quite how trustworthy the company is…). It’s a simple, low-key law, but it produces all the right incentives for taking security seriously, and for deploying systems such as whole-disk encryption that mean that losing a laptop stops being synonymous with losing data.

The third area, and this is where the committee has been most far-sighted, and therefore in the short term this may well be their most controversial recommendation, is that they wish to see a software liability regime, viz: that software companies should become responsible for their security failures. The benefits of such a regime were cogently argued by Bruce Schneier, who appeared before the committee in February, and I recommend reading his evidence to understand why he swayed the committee. Unlike the data breach notification law, the committee recommendation isn’t to get a statute onto the books sooner rather than later. There are all sorts of competition issues and international ramifications — and in practice it may be a decade or two before there’s sufficient case law for vendors to know quite where they stand if they ship a product with a buffer overflow, or a race condition, or just a default password. Almost everyone who gave evidence, apart from Bruce Schneier, argued against such a law, but their lordships have seen through the special pleading and the self-interest and looked to find a way to make the Internet a safer place. Though I can foresee a lot of complications and a rocky road towards liability, looking to the long term, I think their lordships have got this one right.

"No confidence" in eVoting pilots

Back on May 3rd, Steven Murdoch, Chris Wilson and I acted as election observers for the Open Rights Group (ORG) and looked at the conduct of the parish, council and mayoral elections in Bedford. Steven and I went back again on the 4th to observe their “eCounting” of the votes. In fact, we were still there on the 5th at half-one in the morning, when the final result was declared after over fifteen hours.

Far from producing faster, more accurate, results, the eCounting was slower and left everyone concerned with serious misgivings — and no confidence whatsoever that the results were correct.

Today ORG launches its collated report into all of the various eVoting and eCounting experiments that took place in May — documenting the fiascos that occurred not only in Bedford but also in every other place that ORG observed. Their headline conclusion is “The Open Rights Group cannot express confidence in the results for areas observed” — which is pretty damning.

In Bedford, we noted that prior to the shambles on the 4th of May the politicians and voters we talked to were fairly positive about “e” elections — seeing them as inevitable progress. When things started to go wrong, they changed their minds…

However, there isn’t any “progress” here, and almost everyone technical who has looked at voting systems is concerned about them. The systems don’t work very well, they are inflexible, they are poorly tested and they are badly designed — and then when legitimate doubts are raised as to their integrity there is no way to examine the systems to determine that they’re working as one would hope.

We rather suspect that people are scared of being seen as Luddites if they don’t embrace “new technology” — whereas more technical people, who are more confident of their knowledge, are prepared to assess these systems on their merits, find them sadly lacking, and then speak up without being scared that they’ll be seen as ignorant.

The ORG report should go some way to helping everyone understand a little more about the current, lamentable, state of the art — and, if only just a little common sense is brought to bear, should help kill off e-Elections in the UK for a generation.

Here’s hoping!