Category Archives: Privacy technology

Anonymous communication, data protection

Phone hacking, technology and policy

Britain’s phone hacking scandal touches many issues of interest to security engineers. Murdoch’s gumshoes listened to celebs’ voicemail messages using default PINs. They used false-pretext phone calls – blagging – to get banking and medical records.

We’ve known for years that private eyes blag vast amounts of information (2001 book, from page 167; 2006 ICO Report). Centralisation and the ‘Cloud’ are making things worse. Twenty years ago, your bank records were available only in your branch; now any teller at any branch can look them up. The dozen people who work at your doctor’s surgery used to be able to keep a secret, but the 840,000 staff with a logon to our national health databases?
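
To see how sharply the risk scales with the number of people who can look at a record, here is a back-of-the-envelope sketch; the per-person leak probability is a made-up illustrative figure, not an empirical estimate.

    # Illustrative only: if each person with access leaks a given record with
    # (hypothetical) probability p in a year, independently of the others, the
    # chance of at least one leak is 1 - (1 - p)^n and rises rapidly with n.
    def prob_at_least_one_leak(n_staff, p_per_person=0.0001):
        return 1 - (1 - p_per_person) ** n_staff

    print(prob_at_least_one_leak(12))        # a GP surgery: about 0.1%
    print(prob_at_least_one_leak(840_000))   # a national database: effectively 1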

Attempts to fix the problem using the criminal justice system have failed. When blagging was made illegal in 1995, the street price of medical records actually fell from £200 to £150! Parliament increased the penalty from fines to jail in 2006 but media pressure scared ministers off implementing this law.

Our Database State report argued that the wholesale centralisation of medical and other records was unsafe and illegal; and the NHS Personal Demographics Service (PDS) database appears to be the main one used to find celebs’ ex-directory numbers. First, celebs can opt out, but most of them are unaware of PDS abuse, so they don’t. Second, you can become a celeb instantly if you are a victim of crime, war or terror. Third, even if you do opt out, the gumshoes can just bribe policemen, who have access to just about everything.

In future, security engineers must pay much more attention to compartmentation (even the Pentagon is now starting to get it), and we must be much more wary about the risk that law-enforcement access to information will be abused.

The PET Award: Nominations wanted for prestigious privacy award

The PET Award is presented annually to researchers who have made an outstanding contribution to the theory, design, implementation, or deployment of privacy enhancing technology. It is awarded at the annual Privacy Enhancing Technologies Symposium (PETS).

The PET Award carries a prize of 3000 USD thanks to the generous support of Microsoft. The crystal prize itself is offered by the Office of the Information and Privacy Commissioner of Ontario, Canada.

Any paper by any author written in the area of privacy enhancing technologies is eligible for nomination. However, the paper must have appeared in a refereed journal, conference, or workshop with proceedings published in the period from August 8, 2009 until April 15, 2011.

The complete award rules including eligibility requirements can be found under the award rules section of the PET Symposium website.

Anyone can nominate a paper by sending an email message containing the following to award-chair11@petsymposium.org.

  • Paper title
  • Author(s)
  • Author(s) contact information
  • Publication venue and full reference
  • Link to an available online version of the paper
  • A nomination statement of no more than 500 words.

All nominations must be submitted by April 15th, 2011. The Award Committee will select one or two winners among the nominations received. Winners must be present at the PET Symposium in order to receive the Award. This requirement can be waived only at the discretion of the PET Advisory board.

More information about the PET award (including past winners) is available at http://petsymposium.org/award/

More information about the 2011 PET Symposium is available at http://petsymposium.org/2011.

Wikileaks, security research and policy

A number of media organisations have been asking us about Wikileaks. Fifteen years ago we kicked off the study of censorship-resistant systems, which inspired the peer-to-peer movement; we help maintain Tor, which provides the anonymous communications infrastructure for Wikileaks; and we’ve a longstanding interest in information policy.

I have written before about governments’ love of building large databases of sensitive data to which hundreds of thousands of people need access to do their jobs – such as the NHS spine, which will give over 800,000 people access to our health records. The media are now making the link. Whether sensitive data are about health or about diplomacy, the only way forward is compartmentation. Medical records should be kept in the surgery or hospital where the care is given; and while an intelligence analyst dealing with Iraq might have access to cables on Iraq, Iran and Saudi Arabia, he should have no routine access to stuff on Korea or Brazil.
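
To make the idea concrete, here is a minimal sketch of a compartmented access check; the compartment labels, names and structure are hypothetical and not drawn from any real system.

    # Hypothetical compartmented access control: a user may read a document only
    # if it is marked with a compartment they hold. Labels are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Analyst:
        name: str
        compartments: frozenset

    @dataclass
    class Cable:
        title: str
        compartment: str

    def may_read(analyst, cable):
        return cable.compartment in analyst.compartments

    iraq_desk = Analyst("iraq_desk", frozenset({"IRAQ", "IRAN", "SAUDI"}))
    print(may_read(iraq_desk, Cable("Baghdad assessment", "IRAQ")))     # True
    print(may_read(iraq_desk, Cable("Pyongyang assessment", "KOREA")))  # False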

So much for the security engineering; now to policy. No-one questions the US government’s right to try one of its soldiers for leaking the cables, or the right of the press to publish them now that they’re leaked. But why is Wikileaks treated as the leaker, rather than as a publisher?

This leads me to two related questions. First, does a next-generation censorship-resistant system need a more resilient technical platform, or more respectable institutions? And second, if technological change causes respectable old-media organisations such as the Guardian and the New York Times to go bust and be replaced by blogs, what happens to freedom of the press, and indeed to freedom of speech?

Research, public opinion and patient consent

Paul Thornton has brought to my attention some research that the Department of Health published quietly at the end of 2009 (and which undermines Departmental policy).

It is the Summary of Responses to the Consultation on the Additional Uses of Patient Data undertaken following campaigning by doctors, NGOs and others about the Secondary Uses Service (SUS). SUS keeps summaries of patient care episodes, some of them anonymised, and makes them available for secondary uses; the system’s advocates talk about research, although it is heavily used for health service management, clinical audit, answering parliamentary questions and so on. Most patients are quite unaware that tens of thousands of officials have access to their records, and the Database State report we wrote last year concluded that SUS is almost certainly illegal. (Human-rights and data-protection law require that sensitive data, including health data, be shared only with the consent of the data subject or using tightly restricted statutory powers whose effects are predictable to data subjects.)

The Department of Health’s consultation shows that most people oppose the secondary use of their health records without consent. The executive summary tries to spin this a bit, but the data from the report’s body show that public opinion remains settled on the issue, as it has been since the first opinion survey in 1997. We do see some signs of increasing sophistication: now a quarter of patients don’t believe that data can be anonymised completely, versus 15% who say that sharing is “OK if anonymised” (p 23). And the views of medical researchers and NHS administrators are completely different; see for example p 41. The size of this gap suggests the issue won’t get resolved any time soon – perhaps not until there’s an Alder-Hey-type incident that causes a public outcry and forces a reform of SUS.

Digital Activism Decoded: The New Mechanics of Change

The book “Digital Activism Decoded: The New Mechanics of Change” is one of the first on the topic of digital activism. It discusses how digital technologies as diverse as the Internet, USB thumb-drives, and mobile phones are changing the nature of contemporary activism.

Each of the chapters offers a different perspective on the field. For example, Brannon Cullum investigates the use of mobile phones (e.g. SMS, voice and photo messaging) in activism, a technology often overlooked but increasingly important in countries with low rates of personal computer ownership and poor Internet connectivity. Dave Karpf considers how to measure the success of digital activism campaigns, given the huge variety of (potentially misleading) metrics available, such as page impressions and number of followers on Twitter. The editor, Mary Joyce, then ties each of these threads together, identifying the common factors between the disparate techniques for digital activism and discussing future directions.

My chapter “Destructive Activism: The Double-Edged Sword of Digital Tactics” shows how the positive activism techniques promoted throughout the rest of the book can also be used for harm. Just as digital tools can facilitate communication and create information, they can also be used to block and destroy. I give some examples where this has occurred, and describe how the technology to carry out these actions came to be created and deployed. Of course, activism is by its very nature controversial, and so is the question of where to draw the line between positive and negative actions. So my chapter concludes with a discussion of the ethical frameworks used when considering the merits of activism tactics.

Digital Activism Decoded, published by iDebate Press, is now available for download, and can be pre-ordered from Amazon UK or Amazon US (available June 30th).

Update (2010-06-17): Amazon now have the book in stock at both their UK and US stores.


What's the Buzz about? Studying user reactions

Google Buzz has been rolled out to 150M Gmail users around the world. In their own words, it’s a service to start conversations and share things with friends. Cynics have said it’s a megalomaniacal attempt to leverage the existing user base to compete with Facebook/Twitter as a social hub. Privacy advocates have rallied sharply around a particular flaw: the path of least resistance to signing up for Buzz includes automatically following people based on Buzz’s recommendations from email and chat frequency, and this “follower” list is completely public unless you find the well-hidden privacy setting. As a business decision this makes sense: the only chance for Buzz to make it is if users can get started very quickly. But this is a privacy misstep that a mandatory internal review would certainly have objected to. Email is still a private, personal medium. People email their mistresses, workers email about job opportunities, reporters email anonymous sources – all with the same accounts they use for everything else. Besides the few embarrassing incidents this will surely cause, it’s fundamentally playing with people’s perceptions of public and private online spaces and actively changing social norms, as my colleague Arvind Narayanan spelled out nicely.
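
Here is a minimal sketch of the kind of contact-frequency heuristic described above; the threshold, data and function names are invented, since Google never published the actual recommendation logic.

    # Hypothetical reconstruction of a frequency-based auto-follow heuristic.
    # Real Buzz internals were never published; everything here is illustrative.
    from collections import Counter

    def suggest_follows(contact_log, threshold=10):
        """contact_log: one entry per email or chat exchanged with a contact."""
        counts = Counter(contact_log)
        return [contact for contact, n in counts.most_common() if n >= threshold]

    log = ["friend@example.com"] * 40 + ["source@example.com"] * 15
    # Under the launch defaults, this list was then shown publicly on the
    # user's profile unless the well-hidden privacy setting was changed.
    print(suggest_follows(log))   # ['friend@example.com', 'source@example.com']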

Perhaps more interesting than the pundits’ responses, though, is the ability to view thousands of users’ reactions to Buzz as they happen. Google’s design philosophy of “give minimal instructions and just let users type things into text boxes and see what happens” has preserved a virtual Pompeii of confused users trying to figure out what the new thing was and accidentally broadcasting their thoughts to the entire Internet. If you search Buzz for words like “stupid,” “sucks,” and “hate”, the majority of the conversation so far is about Buzz itself. Thoughts are all over the board: confusion, stress, excitement, malaise, anger, pleading. Thousands of users are badly confused by Google’s “follow” and “profile” metaphors. Others are wondering how this service compares to the competition. Many just want the whole thing to go away (leading to a few how-to guides), or are blasting Google or blasting others for complaining.

It’s a major data mining and natural language processing challenge to analyze the entire body of reactions to the new service, but the general reaction is widespread disorientation and confusion. In the emerging field of security psychology, the first 48 hours of Buzz posts could provide a wealth of data about how people react when their privacy expectations are suddenly shifted by the machinations of Silicon Valley.
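
A trivial first pass at that analysis might look like the sketch below; the keyword list and posts are invented, and this is nowhere near a real natural language treatment.

    # Toy keyword tally over a corpus of posts -- purely illustrative.
    NEGATIVE = {"stupid", "sucks", "hate", "confused"}

    def negative_posts_about_buzz(posts):
        hits = [p for p in posts if NEGATIVE & set(p.lower().split())]
        about_buzz = [p for p in hits if "buzz" in p.lower()]
        return len(about_buzz), len(hits)

    posts = [
        "I hate that Buzz showed my contacts to everyone",
        "so confused about how to turn Buzz off",
        "this traffic is stupid",
    ]
    print(negative_posts_about_buzz(posts))   # (2, 3)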

The need for privacy ombudsmen

Facebook is rolling out two new features with privacy implications: an app dashboard and a gaming dashboard. Take a 30-second look at the beta versions, which are already live (with real user data), and see if you spot any likely problems. For the non-Facebook users, the new interfaces essentially provide a list of applications that your friends are using, including “Recent Activity”, which lists when applications were used. What could possibly go wrong?

Well, some users may use applications they don’t want their friends to know about, like dating or job-search apps. And they certainly may not want others to know the time they used an application, if this makes it clear that they were playing a game on company time. This isn’t a catastrophic privacy breach, but it will definitely lead to a few embarrassing situations. As I’ve argued before, users should have a basic privacy expectation that if they continue to use a service in a consistent way, data won’t be shared in a new, unexpected manner of which they have no warning or control, and this new feature violates that expectation. The interesting thing is how Facebook is continually caught by surprise when their spiffy new features upset users. They seem equally clueless with their response: allowing developers to opt an application out of appearing on the dashboard. Developers have no incentive to do this, as they want maximum exposure for their apps. A minimally acceptable solution must allow users to opt themselves out.
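
A minimal sketch of what such a user-level opt-out could look like is below; the field and function names are hypothetical, not Facebook’s actual code. The dashboard builder simply drops activity for anyone who has chosen not to appear, instead of relying on developers to hide their own applications.

    # Hypothetical user-controlled visibility filter for an activity dashboard.
    def dashboard_entries(activity_log, user_prefs):
        """activity_log: list of (user_id, app_name, timestamp) tuples;
        user_prefs: dict mapping user_id to {'show_app_activity': bool}."""
        return [
            (user, app, ts)
            for user, app, ts in activity_log
            if user_prefs.get(user, {}).get("show_app_activity", True)
        ]

    log = [("alice", "FarmVille", "2009-12-03 14:05"),
           ("bob", "JobSearchApp", "2009-12-03 09:30")]
    prefs = {"bob": {"show_app_activity": False}}   # Bob opted himself out
    print(dashboard_entries(log, prefs))            # only Alice's activity remains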

It’s inexcusable that Facebook doesn’t appear to have a formal privacy testing process to review new features and recommend fixes before they go live. The site is quite complicated, but a small team should be able to identify the issues with something like the new dashboard in a day’s work. It could be effective with 1% of the manpower of the company’s nudity cops. Notably, Facebook is trying to resolve a class-action lawsuit over their Beacon fiasco by creating an independent privacy foundation, which privacy advocates and users have both objected to. As a better way forward, I’d call for creating an in-house “privacy ombudsman” team, which has the authority to review new features and publish analysis of them, as a much more direct step towards preventing future privacy failures.

Facebook tosses graph privacy into the bin

Facebook has been rolling out new privacy settings in the past 24 hours, along with a “privacy transition” tool that is supposed to help users update their settings. Ostensibly, Facebook’s changes are the result of pressure from the Canadian privacy commissioner, and in Facebook’s own words the changes are meant to be “new tools to control your experience.” The changes have been harshly criticized in a number of high-profile places: the New York Times, Wired, Cnet, TechCrunch, Valleywag, ReadWriteWeb, and by the EFF and the ACLU. The ACLU has the most detailed technical summary of the changes: essentially, there are more granular controls, but many more things will default to “open to everyone.” It’s most telling to check the blogs used by Facebook developers and marketers with a business interest in the matter. Their take is simple: a lot more information is about to be shared, and developers need to find out how to use it.

The most discussed issue is the automatic change to more open settings, which will lead to privacy breaches of the socially awkward variety, as users will accidentally post something that the wrong person can read. This will assuredly happen more frequently as a direct result of these changes: even though Facebook is trying to force users to read about the new settings, it’s a safe bet that users won’t read any of it. Many people learn how Facebook works by experience; they expect it to keep working that way, and it’s a bad precedent to change that when it’s not necessary. The fact that Facebook’s “transition wizard” includes one column of radio buttons for “keep my old settings” and a pre-selected column for “switch to the new settings Facebook wants me to have” shows that either they don’t get it or they really don’t respect their users. Most of this isn’t surprising, though: I wrote in June that Facebook would be automatically changing user settings to be more open, and TechCrunch also saw this coming in July.

There’s a much more surprising bit which has been mostly overlooked: it’s now impossible for any user to hide their friend list from being globally viewable to the Internet at large. Facebook has a few shameful cop-out statements about this, stating that you can remove it from your default profile view if you wish, but since (in their opinion) it’s “publicly available information” you can’t hide it from people who really want to see it. It has never worked this way previously, as hiding one’s friend list was always an option, and there have been many research papers, including a few by me and colleagues in Cambridge, concluding that the social graph is actually the most important information to keep private. The threats here are more fundamental and dangerous: unexpected inference of sensitive information, cross-network de-anonymisation, and socially targeted phishing and scams.

It’s incredibly disappointing to see Facebook ignoring a growing body of scientific evidence and putting its social graph up for grabs. It will likely be completely crawled fairly soon by professional data aggregators, and probably by enterprising researchers soon after. The social graph is a powerful view into who we are – Mark Zuckerberg said so himself – and it’s a sad day to see Facebook cynically telling us we can’t decide for ourselves whether or not to share it.

UPDATE 2009-12-11: Less than 12 hours after publishing this post, Facebook backed down, citing the criticism, and made it possible to hide one’s friend list. They’ve done this in a laughably ham-handed way, as friend-list visibility is now all-or-nothing, while you can set complex ACLs on most other profile items. It’s still bizarre that they’ve messed with this at all; for years the default was in fact to show your friend list only to other friends. One can only conclude that they really want all users sharing their friend list while trying to appear privacy-concerned: this is precisely the “privacy communication game” which Sören Preibusch and I wrote of in June. This remains an ignoble moment for Facebook – the social graph will still become mostly public, as they’ll be changing overnight the visibility of friend lists for the hundreds of millions of users who don’t find this well-hidden opt-out.
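
To make the all-or-nothing versus per-item-ACL contrast concrete, here is a hypothetical sketch, not Facebook’s implementation: most profile items can carry a per-item audience, while the restored friend-list control is a single everyone-or-nobody switch.

    # Hypothetical contrast between per-item audience ACLs and an all-or-nothing
    # friend-list toggle. Names, tiers and data are illustrative only.
    CLOSENESS = {"everyone": 0, "friends_of_friends": 1, "friends": 2}

    PROFILE = {
        "hometown": {"value": "Cambridge", "audience": "friends"},
        "photos":   {"value": "[album]",   "audience": "everyone"},
    }
    FRIEND_LIST_VISIBLE = False   # the only choice: show to all, or hide from all

    def item_visible(viewer_tier, item):
        # per-item ACL: the viewer must be at least as close as the audience requires
        return CLOSENESS[viewer_tier] >= CLOSENESS[item["audience"]]

    def friend_list_visible(viewer_tier):
        # all-or-nothing: the viewer's relationship is irrelevant
        return FRIEND_LIST_VISIBLE

    print(item_visible("everyone", PROFILE["hometown"]))   # False: strangers blocked
    print(friend_list_visible("friends"))                  # False even for friends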

What does Detica detect?

There has been considerable interest in a recent announcement by Detica of “CView” which their press release claims is “a powerful tool to measure copyright infringement on the internet”. The press release continues by saying that it will provide “a measure of the total volume of unauthorised file sharing”.

Commentators are divided as to whether these claims are nonsense, or whether the system must be deeply intrusive. The main reason for this is that when peer-to-peer file-sharing flows are encrypted, it is impossible for a passive observer to know what is being transferred.
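
To illustrate the point, here is a schematic of what a purely passive observer records for an encrypted flow; it is not a description of CView, and the field names and values are invented.

    # Schematic flow record available to a passive observer of encrypted traffic:
    # addressing, timing and volume, but no payload. Not a description of CView.
    from dataclasses import dataclass

    @dataclass
    class FlowRecord:
        src_ip: str
        dst_ip: str
        dst_port: int
        bytes_transferred: int
        duration_s: float
        # note: no content field -- with encryption the payload is opaque

    flow = FlowRecord("203.0.113.5", "198.51.100.7", 51413, 700_000_000, 1800.0)
    # The observer may guess "looks like BitTorrent, ~700 MB in half an hour",
    # but cannot tell from the flow alone whether the transfer was infringing.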

I met with Detica last Friday, at their suggestion, to discuss what their system actually did (they’ve read some of my work on Phorm’s system, so meeting me was probably not entirely random). With their permission, I can now explain the basics of what they are actually doing. A more detailed account should appear at some later date.