Category Archives: Legal issues

Security-related legislation, government initiatives, court cases

Democracy Theatre on Facebook

You may remember a big PR flap last month about Facebook’s terms of service, followed by Facebook backing down and promising to involve users in a self-governing process of drafting their future terms. This is an interesting step with little precedent amongst commercial web sites. Facebook now has enough users to be the fifth largest nation on earth (recently passing Brazil), and operators of such immense online societies need to define a cyber-government which satisfies their users while operating lawfully within a multitude of jurisdictional boundaries, as well as meeting their legal obligations to the shareholders who own the company.

Democracy is an intriguing approach, and it is encouraging that Facebook is considering this path. Unfortunately, after some review my colleagues and I are left thoroughly disappointed by both the new documents and the specious democratic process surrounding them. We’ve outlined our arguments in a detailed report; the official deadline for commentary is midnight tonight.

The non-legally binding Statement of Principles outlines an admirable set of goals in plain language, which is refreshing. However, these goals are then undermined for a variety of legal and business reasons by the “Statement of Rights and Responsibilities”, which would effectively be the new Terms of Service. For example, Facebook demands that application developers comply with users’ privacy settings, to which it doesn’t provide access; states that users should have “programmatic access” and then bans them from interacting with the site via “automated means”; and states that the service will transcend national boundaries while banning users from signing up if they live in a country embargoed by the United States.

The stated goal of fairness and equality is also lost. The Statement of Rights and Responsibilities primarily assigns rights to Facebook and responsibilities to users, developers, and advertisers. Facebook still demands a broad license to all user content, shifts all responsibility for enforcing privacy onto developers, and sneakily disclaims all liability of its own. Yet it imposes an unrealistic set of obligations: a literal reading of the document requires users to get explicit permission from other users before viewing their content. Furthermore, Facebook has applied the banking industry’s well-known trick of shifting liability to customers, binding users not to do anything to “jeopardize the security of their account”, a clause which can be used to dissolve the contract.

The biggest missed opportunity, however, is the utter failure to provide a real democratic process as promised. Users are free to comment on terms, but Facebook is under no obligation to listen. Facebook’s official group for comments contains a disorganised jumble of thousands of comments, some insightful and many inane; it is difficult to extract intelligent analysis from it. Under certain conditions a vote can be called, but this is hopelessly weakened: it only applies to certain types of changes, the conditions of the vote are poorly specified and subject to manipulation by Facebook, and Facebook in fact reserves the right to ignore the vote for “administrative reasons.”

With a nod to Bruce Schneier, we call such steps “democracy theatre.” It seems the goal is not to actually turn governance over to users, but to use the appearance of democracy and user involvement to ward off future criticism. Our term may be new, but this trick is not: it has been used by autocratic regimes around the world for decades.

Facebook’s new terms represent a genuine step forward with improved clarity in certain areas, but an even larger step backward in using democracy theatre to obscure the fact that Facebook is a business whose ultimate accountability is to its shareholders. The outrage over the previous terms was real and justified: social networks mean a great deal to their users, and they want a real say. Since Facebook appears unwilling to give them one, we would be remiss to allow it to deflect users’ anger with flowery language and a sham democratic process. For this reason we cannot support the new terms.

[UPDATE: Our report has been officially backed by the Open Rights Group]

National Fraud Strategy

Today the Government “launches” its National Fraud Strategy. The quotation marks are there because none of the quality papers seems to be running the story, and the press releases have not yet appeared on the websites of the Attorney General or the Ministry of Justice.

And well might Baroness Scotland be ashamed. The Strategy is a mishmash of things that are being done already, plus one new initiative: a National Fraud Reporting Centre, to be run by the City of London Police. This is presumably intended to defuse the Lords’ criticisms of the current system whereby fraud must be reported to the banks, not to the police. As our blog has frequently reported, banks dump liability for fraud on customers by making false claims about system security and imposing unreasonable terms and conditions. This is a regulatory failure: the FSA has been just as gullible in accepting the banking industry’s security models as it was in accepting its credit-risk models. (The ombudsman has also been eager to please.)

So what’s wrong with the new arrangements? Quite simply, the National Fraud Reporting Centre will nestle comfortably alongside the City force’s Dedicated Cheque and Plastic Crime Unit, which investigates card fraud but is funded by the banks. Given this disgraceful arrangement, which is more worthy of Uzbekistan than of Britain, you have to ask how eager the City force will be to investigate offences that bankers don’t want investigated, such as the growing number of insider frauds and cloned chip cards. And how vigorously will City cops investigate their paymasters for the fraud of claiming that their systems are secure, when they’re not, in order to avoid paying compensation to defrauded accountholders? The purpose of the old system was to keep the fraud figures artificially low while enabling the banks to control such investigations as did take place. What, precisely, has changed?

The lessons of the credit crunch just don’t seem to have sunk in yet. The Government just can’t kick the habit of kowtowing to bankers.

Optimised to fail: Card readers for online banking

A number of UK banks are distributing hand-held card readers for authenticating customers, in the hope of stemming the soaring levels of online banking fraud. As the underlying protocol — CAP — is secret, we reverse-engineered the system and discovered a number of security vulnerabilities. Our results have been published as “Optimised to fail: Card readers for online banking”, by Saar Drimer, Steven J. Murdoch, and Ross Anderson.

In the paper, presented today at Financial Cryptography 2009, we discuss the consequences of CAP having been optimised to reduce both the costs to the bank and the amount of typing done by customers. While the principle of CAP (two-factor transaction authentication) is sound, the flawed implementation in the UK puts customers at risk of fraud, or worse.
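To give a flavour of what the reader does, here is a minimal sketch of a CAP-style response computation; it is emphatically not the real algorithm. In CAP the card produces an EMV application cryptogram under a card-resident key, which the reader compresses into a short decimal code; the HMAC-SHA256 and the field layout below are illustrative stand-ins.

    import hmac, hashlib

    def cap_response(card_key: bytes, challenge: str, amount: str, counter: int) -> str:
        # Hypothetical field layout: transaction counter, challenge, amount.
        data = counter.to_bytes(2, "big") + challenge.encode() + amount.encode()
        # HMAC-SHA256 stands in here for the card's EMV application cryptogram.
        mac = hmac.new(card_key, data, hashlib.sha256).digest()
        # Compress the MAC into an 8-digit code that fits the reader's display.
        return str(int.from_bytes(mac[:4], "big") % 10**8).zfill(8)

    # The bank recomputes the code with its copy of the card key and compares.
    print(cap_response(b"\x00" * 16, challenge="39482716", amount="1500", counter=7))

Even in this toy version the trade-off the paper criticises is visible: squeezing the cryptogram into eight displayed digits makes the code easy to type, but discards most of its entropy.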

When Chip & PIN was introduced for point-of-sale, the effective liability for fraud was shifted to customers. While the banking code says that customers are not liable unless they were negligent, it is up to the bank to define negligence. In practice, the mere fact that Chip & PIN was used is considered enough. Now that Chip & PIN is used for online banking, we may see a similar reduction of consumer protection.

Further information can be found in the paper and the talk slides.

Forensic genomics

I recently presented a paper on Forensic genomics: kin privacy, driftnets and other open questions (co-authored with Lucia Bianchi, Pietro Liò and Douwe Korff) at WPES 2008, the ACM Workshop on Privacy in the Electronic Society, held in association with ACM CCS, the ACM Conference on Computer and Communications Security. Pietro and I also gave a related talk here at the Computer Laboratory in Cambridge.

While genetics is concerned with the observation of specific sections of DNA, genomics is about studying the entire genome of an organism, something that has only become practically possible in recent years. Forensic genetics is the technology behind the large national DNA databases being built in several countries, notably including the UK and USA (Wallace’s outstanding article lucidly exposes many significant issues). There, investigators compare scene-of-crime samples with database samples by checking whether they match, but only at a very small number of specific locations in the genome (e.g. 13 loci under the CODIS rules). In our paper we explore what might change when forensic analysis moves from genetics to genomics over the next few decades. This is a problem that can only be meaningfully approached from a multi-disciplinary viewpoint, and indeed our combined backgrounds cover computer security, bioinformatics and law.

CODIS markers (image from Wikimedia Commons, in turn from NIST).

Sequencing the first human genome (completed in 2003) cost 2.7 billion dollars and took 13 years. The US National Human Genome Research Institute has offered over $20 million in grants towards the goal of driving the cost of whole-genome sequencing down to a thousand dollars. This will enable personalized genomic medicine (e.g. predicting the genetic risk of contracting specific diseases) but will also open up a number of ethical and privacy-related problems: eugenic abortions, genomic pre-screening as a precondition for healthcare (or even just dating…), (mis)use of genomic data for purposes other than those for which it was collected, and so forth. In various jurisdictions there is legislation (such as the recent GINA in the US) that attempts to protect citizens from some of the possible abuses; but how strongly is it enforced? And is it enough? In the forensic context, is the DNA analysis procedure as infallible as we are led to believe? There are many subtleties in the interpretation of statistical results; when even professional statisticians disagree, how are the poor jurors expected to reach a fair verdict? Another subtle issue is kin privacy: if the scene-of-crime sample, compared with everyone in the database, partially matches Alice, this may be used as a hint to investigate all her relatives, who aren’t even in the database; indeed, some 1980s murders were recently solved in this way. “This raises compelling policy questions about the balance between collective security and individual privacy” [Bieber, Brenner, Lazer, 2006]. Should a democracy allow such a “driftnet” approach of suspecting and investigating all the innocents in order to catch the guilty?
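To make the two kinds of comparison concrete, here is a minimal sketch of exact profile matching and of the allele-sharing counts that can hint at relatives; the locus names and allele numbers are invented for illustration.

    # Profiles map each CODIS-style locus to the (unordered) pair of allele
    # repeat counts observed there.
    def exact_match(scene, candidate):
        # Database-style hit: both alleles agree at every locus tested.
        return all(scene[locus] == candidate[locus] for locus in scene)

    def shared_loci(scene, candidate):
        # Kin hint: count loci sharing at least one allele; close relatives
        # tend to share alleles at many loci even when full profiles differ.
        return sum(1 for locus in scene if scene[locus] & candidate[locus])

    scene   = {"D8S1179": {12, 14}, "TH01": {6, 9}, "FGA": {21, 24}}
    sibling = {"D8S1179": {12, 15}, "TH01": {6, 9}, "FGA": {20, 24}}

    print(exact_match(scene, sibling))  # False: not the same person
    print(shared_loci(scene, sibling))  # 3: partial match may flag a relative

Note how the partial match reveals information about people who never gave a sample, which is exactly the kin-privacy question the paper raises.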

This is a paper of questions rather than one of solutions. We believe an informed public debate is needed before the expected transition from genetics to genomics takes place. We want to stimulate discussion and therefore we invite you to read the paper, make up your mind and support what you believe are the right answers.

Making bank reimbursement statutory

Many of the recommendations of the House of Lords Science and Technology Committee report on Personal Internet Security have been recycled into Conservative Party policy [*] — as announced back in March. So, if you believe the polls, we might see some changes after the next election or, if you’re cynical, even before then as the Government implements opposition policy!

However, one of the Committee recommendations that the Conservatives did not take up was that the law should be changed so that banks become liable for all eBanking and ATM losses — just as they have been liable since 1882 if they honour a forged cheque. Of course, if the banks can prove fraud (for cheques or for the e-equivalents) then the end-user is liable (and should be locked up).

At present the banks will cover end-users under the voluntary Banking Code… so they say that a statutory regime would make no difference. This is a weak objection: if their position is right, it would make no difference to them either way. But in practice it will make a difference, because the voluntary code doesn’t work too well for a minority of people.

Anyway, at present the banks don’t have a lot of political capital, so their views carry far less weight. This was particularly clear in last week’s House of Lords debate on “Personal Internet Security”, where Viscount Bridgeman, speaking for the Conservatives, said:

“I entirely agree with the noble Lord, Lord Broers, that statutory control of the banks in this respect is required and that we cannot rely on the voluntary code.”

which means either that he forgot his brief, or that this really is a new party policy. If so then, in my view, it’s very welcome.

[*] The policy document has inexplicably disappeared from the Conservative website, but a Word version is available from Microsoft here.

ePolicing – Tomorrow the world?

This week has finally seen an announcement that the Police Central e-crime Unit (PCeU) is to be funded by the Home Office. However, the largesse amounts to just £3.5 million of new money spread over three years, with the Met putting up a further £3.9 million — but whether the Met’s contribution is “new” or reflects a move of resources from their existing Computer Crime Unit I could not say.

The announcement is of course Good News, because once the PCeU is up and running next spring it should plug (to the limited extent that £2 million a year can plug) the “level 2” eCrime gap that I’ve written about before, viz. that SOCA tackles “serious and organised crime” (level 3) and your local police force tackles local villains (level 1), but if criminals operate outside their force’s area (and on the Internet this is more likely than not) yet don’t meet SOCA’s threshold, then who is there to deal with them?

In particular, the PCeU is envisaged to be the unit that deals with the intelligence packages coming from the City of London Fraud Squad’s new online Fraud Reporting website (once intended to launch in November 2008, now scheduled for Summer 2009).

Of course everyone expects the website to generate more reports of eCrime than could ever be dealt with (even with much more money), so the effectiveness of the PCeU in dealing with eCriminality will depend upon their prioritisation criteria, and how carefully they select the cases they tackle.

Nevertheless, although the news this week shows that the Home Office have finally understood the need to fund more ePolicing, I don’t think that they are thinking about the problem in a sufficiently global context.

A little history lesson might be in order to explain why.

Finland privacy judgment

In a case that will have profound implications, the European Court of Human Rights has issued a judgment against Finland in a medical privacy case.

The complainant was a nurse at a Finnish hospital, and also HIV-positive. Word of her condition spread among colleagues, and her contract was not renewed. The hospital’s access controls were not sufficient to prevent colleagues accessing her record, and its audit trail was not sufficient to determine who had compromised her privacy. The court’s view was that health care staff who are not involved in the care of a patient must be unable to access that patient’s electronic medical record: “What is required in this connection is practical and effective protection to exclude any possibility of unauthorised access occurring in the first place.” (Press coverage here.)
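In engineering terms, “practical and effective” protection means the access check must happen before disclosure, rather than relying on an after-the-fact audit trail. Here is a minimal sketch of the idea, with an invented data model and names, assuming access is granted only where a registered care relationship exists.

    # Each patient record lists the staff currently registered as treating
    # that patient (the "care team"); all identifiers here are invented.
    care_teams = {"patient-117": {"dr-jones", "nurse-smith"}}
    audit_log = []

    def read_record(staff_id, patient_id):
        # The check happens before disclosure, not merely in a log afterwards.
        if staff_id not in care_teams.get(patient_id, set()):
            audit_log.append((staff_id, patient_id, "DENIED"))
            raise PermissionError(f"{staff_id} is not treating {patient_id}")
        audit_log.append((staff_id, patient_id, "READ"))
        return f"<record of {patient_id}>"

    read_record("dr-jones", "patient-117")        # permitted: on the care team
    try:
        read_record("nurse-brown", "patient-117")
    except PermissionError as denied:
        print(denied)                             # refused up front, and logged

In the Finnish case the hospital effectively had only the audit log, and not even a usable one; the court’s test demands the up-front refusal.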

A “practical and effective” protection test in European law will bind engineering, law and policy much more tightly together. And it will have wide consequences. Privacy campaigners, for example, can now argue strongly that the NHS Care Records service is illegal. And what will be the further consequences for the Transformational Government initiative – the “Database State”?

Operational security failure

A shocking article appeared yesterday on the BMJ website. It recounts how auditors called 45 GP surgeries asking for personal information about 51 patients. In only one case were they asked to verify their identity; the attack succeeded against the other 50 patients.

This is an old problem. In 1996, when I was advising the BMA on clinical system safety and privacy, we trained the staff at one health authority to detect false-pretext phone calls, and they found 30 a week. We reported this to the Department of Health, hoping they’d introduce some operational security measures nationwide; instead the Department got furious at us for treading on their turf and ordered the HA to stop cooperating (the story’s told in my book). More recently I confronted the NHS chief executive, David Nicholson, and patient tsar Harry Cayton, with the issue at a conference early last year; they claimed there is no longer a problem now that people have all these computers.

What will it take to get the Department of Health to care about patient privacy? Lack of confidentiality already costs lives, albeit indirectly. Will it require a really high-profile fatality?

Slow removal of child sexual abuse image websites

On Friday last week The Guardian ran a story on an upcoming research paper by Tyler Moore and me, which will be presented at the WEIS conference later this month. We found that child sexual abuse image websites are removed from the Internet far more slowly than any other category of content we looked at, except illegal pharmacies hosted on fast-flux networks, and we’re unsure whether anyone is seriously trying to remove them at all!
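To illustrate the kind of measurement behind that comparison, here is a minimal sketch that computes mean website lifetimes per content category from first-seen and removal timestamps; the categories, sites and dates below are invented.

    from datetime import date
    from statistics import mean

    # Invented observations: (first seen, removed) per monitored site.
    sightings = {
        "phishing":     [(date(2008, 3, 1), date(2008, 3, 2)),
                         (date(2008, 3, 5), date(2008, 3, 6))],
        "abuse images": [(date(2008, 3, 1), date(2008, 4, 10)),
                         (date(2008, 3, 3), date(2008, 5, 1))],
    }

    for category, spans in sightings.items():
        lifetimes = [(removed - seen).days for seen, removed in spans]
        print(f"{category}: mean lifetime {mean(lifetimes):.1f} days")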