Category Archives: Politics

A one-line software patent – and a fix

I have been waiting for this day for 17 years! Today, United States Patent 5,404,140, titled “Coding system” and owned by Mitsubishi, expires, 22 years after it was filed in Japan.

Why the excitement? Well, 17 years ago, I wrote JBIG-KIT, a free and open-source implementation of JBIG1, the image compression algorithm used in all modern fax machines. My software is about 4000 lines of C, and only a single “if” statement in it is covered by the above patent:

      if (s->a < lsz) { s->c += s->a; s->a = lsz; }

And sadly, there was no way to implement a JBIG1 encoder or decoder without using this patented line of code (in some form) while remaining compatible with all other JBIG1 implementations out there.
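For context, here is a hedged sketch of where such a line sits in a QM-style arithmetic coder. This is illustrative C, not the actual JBIG-KIT source: the struct, the function name `encode_mps` and the surrounding control flow are my assumptions, with only the field names `a`, `c` and the LPS size `lsz` taken from the line above.

```c
/* Illustrative sketch (not the real JBIG-KIT code): s->a is the current
 * coding-interval size, s->c the code register, and lsz the size of the
 * less-probable-symbol (LPS) sub-interval. */
struct coder { unsigned long a, c; };

static void encode_mps(struct coder *s, unsigned long lsz)
{
    s->a -= lsz;                /* keep the more-probable sub-interval */
    if (s->a < 0x8000) {        /* interval too small: renormalise     */
        /* The patented step ("conditional exchange"): if the MPS
         * sub-interval has become smaller than the LPS one, swap
         * their roles so the larger interval is always used.          */
        if (s->a < lsz) { s->c += s->a; s->a = lsz; }
        /* ... renormalisation of a and c would follow here ... */
    }
}
```

The point is how little there is to it: the exchange is one comparison, one addition and one assignment, yet every interoperable implementation has to perform it.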

Risk and privacy in payment systems

I’ve just given a talk on Risk and privacy implications of consumer payment innovation (slides) at the Federal Reserve Bank’s payments conference. There are many more attendees this year; who’d have believed that payment systems would ever become sexy? Yet there’s a lot of innovation, and regulators are starting to wonder. Payment systems now contain many non-bank players, from insiders like First Data, FICO and Experian to service firms like PayPal and Google. I describe a number of competitive developments and argue that although fraud may increase, so will welfare, so there’s no reason to panic. For now, bank supervisors should work on collecting better fraud statistics, so that if there ever is a crisis the response can be well-informed.

Bankers’ Christmas present

Every Christmas we give our friends in the banking industry a wee present. Sometimes it’s the responsible disclosure of a vulnerability, which we publish the following February: 2007’s was PED certification, 2008’s was CAP while in 2009 we told the banking industry of the No-PIN attack. This year too we have some goodies in the hamper: watch our papers at Financial Crypto 2012.

In other years, we’ve had arguments with the bankers’ PR wallahs. In 2010, for example, their trade association tried to censor the thesis of one of our students. That saga also continues; Britain’s bankers tried once more to threaten us, so we told them once more to go away. We have other conversations in progress with bankers, most of them thankfully a bit more constructive.

This year’s Christmas present is different: it’s a tale with a happy ending. Eve Russell was a fraud victim whom Barclays initially blamed for her misfortune, as so often happens, and the Financial Ombudsman Service initially found for the bank as it routinely does. Yet this was clearly not right; after many lawyers’ letters, two hearings at the ombudsman, two articles in The Times and a TV appearance on Rip-off Britain, Eve won. This is the first complete case file since the ombudsman came under the Freedom of Information Act; by showing how the system works, it may be useful to fraud victims in the future.

(At Eve’s request, I removed the correspondence and case papers from my website on 5 Oct 2015. Eve was getting lots of calls and letters from other fraud victims and was finally getting weary. I have left just the article in the Times.)

Blood donation and privacy

The UK’s National Blood Service screens all donors for a variety of health and lifestyle risks prior to donation. Many are highly sensitive, particularly sexual history and drug use. So I found it disappointing that, after consulting with a nurse who took detailed notes about specific behaviours and when they occurred, I was expected to consent to this information being stored indefinitely. When I pressed as to why this data is retained, I was told it was necessary so that I can be contacted as soon as I’m eligible again to donate blood, and to prevent me from donating before then.

The first reason seems weak, as contacting donors on an annual or semi-annual basis wouldn’t greatly decrease the level of donation (most risk-factor restrictions last at least 12 months or are indefinite). The second reason is a security fantasy, as it would only detect donors who lie at a second visit after being honest initially. I doubt donor dishonesty is a major problem and all blood is tested anyway. The purpose of lifestyle restrictions is to reduce the base rate of unsafe blood because all tests have false negatives. Storing detailed donor history doesn’t even have much time-saving benefit: history needs to be re-taken before each donation, since lifestyle risks can change.

I certainly don’t think the NBS is trying to stockpile data for nefarious reasons. I expect instead that the increasingly low technical costs of storing data speciously justify its very minor secondary uses, so long as one ignores the risk of a massive compromise (the NBS takes on about 2 M donors per year). I wonder whether the inherent hazard of data collection was considered in the NBS’ cost/benefit analysis when this privacy policy was adopted. Security engineers and privacy advocates would do well to advocate non-collection of sensitive data before fancier privacy-enhancing technology. The NBS provides a vital service, but it can’t do so without its donors, who are always in short supply. It would be a shame to discourage anybody from donating, and from being honest about their health history, by demanding to store their data forever.

Privacy event on Wednesday

I will be talking in London on Wednesday at a workshop on Anonymity, Privacy, and Open Data about the difficulty of anonymising medical records properly. I’ll be on a panel with Kieron O’Hara who wrote a report on open data for the Cabinet Office earlier this year, and a spokesman from the ICO.

This will be the first public event on the technology and policy issues surrounding anonymisation since yesterday’s announcement that the government will give wide access to anonymous versions of our medical records. I’ve written extensively on the subject: for an overview, see my book chapter which explores the security of medical systems in general from p 282 and the particular problems of using “anonymous” records in research from p 298. For the full Monty, start here.

Anonymity is hard enough if the data controller is capable and motivated to try hard. In the case of the NHS, anonymity has always been perfunctory; the default is to remove patient names and addresses but leave their postcodes and dates of birth. This makes it easy to re-identify about 99% of patients (the exceptions are mostly twins, soldiers, students and prisoners). And since I wrote that book chapter, the predicted problems have come to pass; for example, the NHS lost a laptop containing over eight million patients’ records.

DNSChanger might change the BGPSEC landscape

In early November, a sophisticated fraud was shut down and a number of people arrested. Malware from a family called “DNSChanger” had been placed on around four million machines (Macs as well as Windows machines) over several years.

The compromised users had their DNS traffic redirected to criminally operated servers. The main aim of the criminals seems to have been to redirect search queries and thereby to make money from displaying adverts.

Part of the mitigation of DNSChanger involves ISC running DNS servers for a while (so that 4 million people whose DNS servers suddenly disappear don’t simultaneously ring their ISP helpdesks complaining that the Internet is broken).

To prevent bad people from running the DNS servers instead, the address blocks containing the IPs of the rogue DNS servers (which used to belong to the criminals but now point at ISC) have been “locked”.

This is easy for ARIN (the organisation that looks after North American address space) to acquiesce to, because they have US legal paperwork compelling their assistance. However, the Dutch police have generated some rather less compelling paperwork and served that on RIPE; so RIPE is now asking the Dutch court to clarify the position.

Further details of the issues with the legal paperwork can be found on (or linked from) the Internet Governance Project blog. The IGP is a group of mainly but not entirely US academics working on global Internet policy issues.

As the IGP rightly point out, this is going to be an important case because it is going to draw attention to the role of the RIRs — just at the time when that role is set to become even more important.

As we move to crypto-secured BGP routing, the RIRs (ARIN, RIPE etc) will be providing cryptographic assurance of the validity of address block ownership. Which means, in effect, that we are building a system where the courts in one country (five countries in all, for five RIRs) could remove ISPs and hosting providers from the Internet… and some ISPs [and their governments] (who are beginning to think ahead) are not entirely keen on this prospect.

If, as one might expect, the Dutch courts eventually uphold the DNSChanger compulsion on RIPE (even if the Dutch police have to have a second go at making the paperwork valid) then maybe this will prove the impetus to abandon a pyramid structure for BGP security and move to a “sea of certificates” model (where one independently chooses from several overlapping roots of authority) — which more closely approximates the reality of a global system which touches a myriad set of local jurisdictions.
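To make the “sea of certificates” idea concrete, here is a hedged sketch in C. The k-of-n threshold, the constants and the function name are all hypothetical illustrations of the general approach, not part of any actual BGPSEC or RPKI design:

```c
#include <stdbool.h>

#define N_ROOTS    5   /* independent trust roots a relying party chose */
#define K_REQUIRED 3   /* how many must agree before we accept          */

/* attests[i] is true if root i presents a valid certificate chain for a
 * given (prefix, origin AS) pair; real code would perform full
 * RPKI-style path validation.  With a k-of-n rule, no single root
 * (and so no single national court) can unilaterally invalidate an
 * address block: a compelled revocation at one root changes nothing
 * until a quorum of independently run roots follows suit. */
static bool accept_origin(const bool attests[N_ROOTS])
{
    int agreeing = 0;
    for (int i = 0; i < N_ROOTS; i++)
        if (attests[i])
            agreeing++;
    return agreeing >= K_REQUIRED;
}
```

Contrast this with the pyramid model, where the equivalent check is a single chain back to one RIR, and a court order served on that RIR is decisive.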

Oral evidence to the malware inquiry

The House of Commons Science and Technology Select Committee is currently holding an inquiry into malware.

I submitted written evidence in September and today I was one of three experts giving oral evidence to the MPs. The session was televised and so conceivably it may turn up on the TV in some strange timeslot — but if you’re interested then there’s a web version for viewing at your convenience. Shortly there will be a written transcript as well.

The Committee’s original set of questions included one about whether malware infection might usefully be treated as a public health issue — of particular interest to me because I have a published paper which considers the role that Governments might play in countering malware for the public good!

In the event, this wasn’t asked about at all. The questions were much more basic, covering the security of hardware and software, the role of the police (and at one point, bizarrely, considering the merits of the Amstrad PCW; a product I was jointly involved in designing and building, some 25 years ago).

In fact it was all rather more about dealing with crime than dealing with malware — which is fine (and obviously closely connected) but it wasn’t the topic on which everyone submitted evidence. This may mean that the Committee has a shortage of material if their report aims to address the questions that they raised today.

Trusted Computing 2.1

We’re steadily learning more about the latest Trusted Computing proposals. People have started to grok that building signed boot into UEFI will extend Microsoft’s power over the markets for AV software and other security tools that install around boot time; while ‘Metro’ style apps (i.e. web/tablet/html5 style stuff) could be limited to distribution via the MS app store. Even if users can opt out, most of them won’t. That’s a lot of firms suddenly finding Steve Ballmer’s boot on their jugular.

We’ve also been starting to think about the issues of law enforcement access that arose during the crypto wars and that came to light again with CAs. These issues are even more wicked with trusted boot. If the Turkish government compelled Microsoft to include the Tubitak key in Windows so their intelligence services could do man-in-the-middle attacks on Kurdish MPs’ gmail, then I expect they’ll also tell Microsoft to issue them a UEFI key to authenticate their keylogger malware. Hey, I removed the Tubitak key from my browser, but how do I identify and block all foreign governments’ UEFI keys?

Our Greek colleagues are already a bit cheesed off with Wall Street. How happy will they be if in future they won’t be able to install the security software of their choice on their PCs, but the Turkish secret police will?

DCMS illustrates the key issue about blocking

This morning the Department for Culture Media and Sport (DCMS) have published a series of documents relating to the implementation of the Digital Economy Act 2010.

One of those documents, from OFCOM, describes how “Site Blocking” might be used to prevent access to websites that are involved in copyright infringement (i.e. torrent sites, Newzbin, “cyberlockers” etc.).

The report appears, at a quick glance, to cover the ground pretty well, describing the various options available to ISPs to block access to websites (and sometimes to block access altogether — since much infringement is not “web” based).

The report also explains how each of the systems can be circumvented (and how easily) and makes it clear (in big bold type): “All techniques can be circumvented to some degree by users and site owners who are willing to make the additional effort.”

I entirely agree — and seem to recall a story from my childhood about the Emperor’s New Blocking System — and note that continuing to pursue this chimera will just mean that time and money will be pointlessly wasted.

However, OFCOM duly trot out the standard line one hears so often from the rights holders: “Site blocking is likely to deter casual and unintentional infringers and by requiring some degree of active circumvention raise the threshold even for determined infringers.”

The problem for the believers in blocking is that this just isn’t true — pretty much all access to copyright infringing material involves the use of tools (to access the torrents, to process NZB files, or just to browse [one tends not to look at web pages in Notepad any more]). Although these tools need to be created by competent people, they are intended for mass use (point and click) and so copyright infringement by the masses will always be easy. They will not even know that the hurdles were there, because the tools will jump over them.

Fortuitously, the DCMS have provided an illustration of this in their publishing of the OFCOM report…

The start of the report says: “The Department for Culture, Media and Sport has redacted some parts of this document where it refers to techniques that could be used to circumvent website blocks. There is a low risk of this information being useful to people wanting to bypass or undermine the Internet Watch Foundation’s blocks on child sexual abuse images. The text in these sections has been blocked out.”

What the DCMS have done (following in the footsteps of many other incompetents) is to black out the text they consider to be sensitive. Removing this blacking out is simple but tedious … you can get out a copy of Acrobat and change the text colour to white — or you can just cut and paste the black bits into Notepad and see the text.

So I confidently expect that within a few hours, non-redacted (non-blocked!) versions of the PDF will be circulating (they may even become more popular than the original — everyone loves to see things that someone thought they should not). The people who look at these non-blocked versions will not be technically competent, they won’t know how to use Acrobat, but they will see the material.

So the DCMS have kindly made the point in the simplest of ways… the argument that small hurdles make any difference is just wishful thinking; sadly for Internet consumers in many countries (who will end up paying for complex blocking systems that make no practical difference) these wishes will cost them money.

PS: the DCMS do actually understand that blocking doesn’t work, or at least not at the moment. Their main document says “Following advice from Ofcom – which we are publishing today – we will not bring forward site blocking regulations under the DEA at this time.” Sadly however, this recognition of reality is too late for the High Court.

Phone hacking, technology and policy

Britain’s phone hacking scandal touches many issues of interest to security engineers. Murdoch’s gumshoes listened to celebs’ voicemail messages using default PINs. They used false-pretext phone calls – blagging – to get banking and medical records.

We’ve known for years that private eyes blag vast amounts of information (2001 book, from page 167; 2006 ICO Report). Centralisation and the ‘Cloud’ are making things worse. Twenty years ago, your bank records were available only in your branch; now any teller at any branch can look them up. The dozen people who work at your doctor’s surgery used to be able to keep a secret, but what about the 840,000 staff with a logon to our national health databases?

Attempts to fix the problem using the criminal justice system have failed. When blagging was made illegal in 1995, the street price of medical records actually fell from £200 to £150! Parliament increased the penalty from fines to jail in 2006 but media pressure scared ministers off implementing this law.

Our Database State report argued that the wholesale centralisation of medical and other records was unsafe and illegal; and the NHS Personal Demographics Service (PDS) appears to be the main database used to find celebs’ ex-directory numbers. First, celebs can opt out, but most of them are unaware of PDS abuse, so they don’t. Second, you can become a celeb instantly if you are a victim of crime, war or terror. Third, even if you do opt out, the gumshoes can just bribe policemen, who have access to just about everything.

In future, security engineers must pay much more attention to compartmentation (even the Pentagon is now starting to get it), and we must be much more wary about the risk that law-enforcement access to information will be abused.