For the last year I’ve been involved with the House of Lords Science and Technology Committee’s Inquiry into “Personal Internet Security”. My role has been that of “Specialist Adviser”, which means that I have been briefing the committee about the issues, suggesting experts whom they might wish to question, and assisting with the questions and their understanding of the answers they received. The Committee’s report is published today (Friday 10th August) and can be found on the Parliamentary website here.
For readers who are unfamiliar with the UK system — the House of Lords is the second chamber of the UK Parliament and is currently composed mainly of “the great and the good”, although 92 hereditary peers still remain, including the Earl of Erroll, who was one of the more computer-literate people on the committee.
The Select Committee reports are the result of in-depth study of particular topics by people who have reached the top of their professions (and who are therefore quick learners, even if they start by knowing little of the topic), and their careful reasoning and endorsement of convincing expert views carries considerable weight. The Government is obliged to respond formally, and there will, at some point, be a few hours of debate on the report in the House of Lords.
My appointment letter made it clear that I wasn’t required to publicly support the conclusions that their lordships came to, but I am generally happy to do so. There are quite a lot of conclusions and recommendations, but I believe that three areas particularly stand out.
The first area where the committee has assessed the evidence, not as experts but as intelligent outsiders, is where the responsibility for Personal Internet Security lies. Almost every witness was asked about this, but very few gave an especially wide-ranging answer. A lot of people, notably the ISPs and the Government, dumped much of the responsibility onto individuals, which neatly avoided them having to shoulder very much themselves. But individuals are just not well-informed enough to understand the security implications of their actions, and although it’s desirable that they aren’t encouraged to do dumb things, most of the time they’re not in a position to know whether an action is dumb or not. The committee have a series of recommendations to address this: there should be BSI kite marks to allow consumers to select services that are likely to be secure; ISPs should lose their “mere conduit” exemptions if they don’t act to deal with compromised end-user machines; and the banks should be statutorily obliged to bear losses from phishing. None of these measures will fix things directly, but they will change the incentives, and that has to be the way forward.
Secondly, the committee are recommending that the UK bring in a data breach notification law, along the general lines of the laws in California and 34 other US states. This would require companies that leaked personal data (because of a hacked website, or a stolen laptop, or just by failing to secure it) to notify the people concerned that this had happened. At first that might sound rather weak, since they just have to tell people; but in practice the US experience shows that it makes a difference. Companies don’t like the publicity, and of course the people involved are able to take precautions against identity theft (and tell all their friends quite how trustworthy the company is…). It’s a simple, low-key law, but it produces all the right incentives for taking security seriously, and for deploying systems such as whole-disk encryption, which mean that losing a laptop stops being synonymous with losing data.
The third area, and this is where the committee has been most far-sighted, and therefore in the short term this may well be their most controversial recommendation, is that they wish to see a software liability regime, viz. that software companies should become responsible for their security failures. The benefits of such a regime were cogently argued by Bruce Schneier, who appeared before the committee in February, and I recommend reading his evidence to understand why he swayed the committee. Unlike the data breach notification law, the committee’s recommendation isn’t to get a statute onto the books sooner rather than later. There are all sorts of competition issues and international ramifications, and in practice it may be a decade or two before there’s sufficient case law for vendors to know quite where they stand if they ship a product with a buffer overflow, or a race condition, or just a default password. Almost everyone who gave evidence, apart from Bruce Schneier, argued against such a law, but their lordships have seen through the special pleading and the self-interest and have looked for a way to make the Internet a safer place. Though I can foresee a lot of complications and a rocky road towards liability, looking to the long term I think their lordships have got this one right.