All posts by Joseph Bonneau

Static Consent and the Dynamic Web

Last week Facebook announced the end of regional networks for access control. The move makes sense: regional networks had no authentication, so information available to them was easy to get with a fake account. Still, silently making millions of weakly-restricted profiles globally viewable raises some disturbing questions. If Terms of Service promise to only share data consistent with users’ privacy settings, but the available privacy settings change as features are added, what use are the terms as a legal contract? This is just one instance of a major problem for rapidly evolving web sites which rely on a static model of informed consent for data collection. Even “privacy fundamentalists” who are careful to read privacy policies and configure their privacy settings can’t be confident of their data’s future, for three main reasons:

  • Functionality Changes: Web 2.0 sites add features constantly, usually with little warning or announcement. Users are almost always opted-in for fear that features won’t get noticed otherwise. Personal data is shared before users have any chance to opt out. Facebook has done this repeatedly, opting users in to NewsFeed, Beacon, Social Ads, and Public Search Listings. This has generated a few sizeable backlashes, but Facebook maintains that users must try new features in action before they can reasonably opt out.
  • Contractual Changes: Terms of Service documents can often be changed without notice, and users automatically agree to the new terms by continuing to use the service. In a study of 45 social networking sites that we’ll be publishing at WEIS next month, we found that almost half don’t guarantee to announce changes to their privacy policies. Fewer than 10% of the sites commit to a mandatory notice period before implementing changes (typically a week or less). Realistically, at least 30 days are needed for fundamentalists to read the changes and cancel their accounts if they wish.
  • Ownership Changes: As reported in the excellent survey of web privacy practices by the KnowPrivacy project at UC Berkeley, the vast majority (over 90%) of sites explicitly reserve the right to share data with ‘affiliates’, subject only to the affiliate’s privacy policy. Affiliate is an ambiguous term, but it includes at least parent companies and their subsidiaries. If your favourite web site gets bought out by an international conglomerate, your data is transferred to the new owners, who can instantly start using it under their own privacy policy. This isn’t an edge case; it’s a major loophole: websites are bought and sold all the time, and for many startups acquisition is the business model.

For any of these reasons, the terms under which consent was given can be changed without warning. Safely disclosing personal data on the web thus requires continuously monitoring sites for new functionality, updated terms of service, or mergers, and instantaneously opting out if you are no longer comfortable. This is impossible even for privacy fundamentalists with an infinite amount of patience and legal knowledge, rendering the old paradigm of informed consent for data collection unworkable for Web 2.0.

How Privacy Fails: The Facebook Applications Debacle

I’ve been writing a lot about privacy in social networks, and sometimes the immediacy gets lost during the more theoretical debates. Recently, though, I’ve been investigating a massive privacy breach on Facebook’s application platform which serves as a sobering case study. Even to me, the extent of unauthorised data flow I found and the cold economic motivations keeping it going were surprising. Facebook’s application platform remains a disaster from a privacy standpoint, undermining one of the network’s more compelling features.

Continue reading How Privacy Fails: The Facebook Applications Debacle

Attack of the Zombie Photos

One of the defining features of Web 2.0 is user-uploaded content, specifically photos. I believe that photo-sharing has quietly been the killer application which has driven the mass adoption of social networks. Facebook alone hosts over 40 billion photos, over 200 per user, and receives over 25 million new photos each day. Hosting such a huge number of photos is an interesting engineering challenge. The dominant paradigm which has emerged is to host the main website from one server which handles user log-in and navigation, and host the images on separate special-purpose photo servers, usually on an external content-delivery network. The advantage is that the photo server is freed from maintaining any state. It simply serves its photos to any requester who knows the photo’s URL.

This setup combines the two classic forms of enforcing file permissions: access control lists and capabilities. The main website checks each request for a photo against an ACL; it then grants a capability to view the photo in the form of an obfuscated URL which can be sent to the photo server. We wrote earlier about how it was possible to forge Facebook’s capability-URLs and gain unauthorised access to photos. Fortunately, this has been fixed, and it appears that most sites now use capability-URLs with enough randomness to be unforgeable. There’s another traditional problem with capability systems, though: revocation. My colleagues Jonathan Anderson, Andrew Lewis, Frank Stajano and I ran a small experiment on 16 social-networking, blogging, and photo-sharing web sites and found that most failed to remove image files from their photo servers after they were deleted from the main web site. It’s often feared that once data is uploaded into “the cloud,” it’s impossible to tell how many backup copies may exist and where; our results provide clear evidence that content delivery networks are a major problem for data remanence.

Continue reading Attack of the Zombie Photos
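The revocation gap can be sketched as a toy model (all class and method names here are hypothetical, purely for illustration): the front end enforces the ACL and mints an unguessable token, but deleting a photo only updates the front end, so anyone still holding the token can fetch the file from the stateless photo server.

```python
import secrets

class PhotoServer:
    """Stateless CDN model: serves any request that knows the URL."""
    def __init__(self):
        self.files = {}
    def store(self, token, data):
        self.files[token] = data
    def fetch(self, token):
        return self.files.get(token)   # note: no ACL check here

class MainSite:
    """Front end: checks the ACL, then hands out a capability token."""
    def __init__(self, cdn):
        self.cdn = cdn
        self.acl = {}      # photo_id -> set of permitted users
        self.tokens = {}   # photo_id -> capability token

    def upload(self, photo_id, owner, data):
        self.acl[photo_id] = {owner}
        token = secrets.token_urlsafe(16)   # unguessable in practice
        self.tokens[photo_id] = token
        self.cdn.store(token, data)

    def view(self, photo_id, user):
        """Return the capability token if the ACL allows this user."""
        if user in self.acl.get(photo_id, set()):
            return self.tokens[photo_id]
        return None

    def delete(self, photo_id):
        # The gap we measured: the ACL entry disappears, but nothing
        # tells the CDN to remove the file, so the capability lives on.
        self.acl.pop(photo_id, None)

cdn = PhotoServer()
site = MainSite(cdn)
site.upload("p1", "alice", b"jpeg-bytes")
token = site.view("p1", "alice")
site.delete("p1")
assert site.view("p1", "alice") is None     # main site revokes access...
assert cdn.fetch(token) == b"jpeg-bytes"    # ...but the CDN still serves it
```

Fixing this properly requires the main site to propagate deletions to every content-delivery node, which is exactly the state the stateless-photo-server design was meant to avoid.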

The Curtain Opens on Facebook's Democracy Theatre

Last month we penned a highly critical report on Facebook’s proposed terms of service and much-hyped “public review” process. We categorised them as “democracy theatre”: a publicity stunt intended to provide the appearance of community input without committing to real change. We submitted our report to Facebook’s official forum, and it was backed by the Open Rights Group as their official expert response, as requested by Facebook. Last night, Facebook published their revised terms of service and unveiled their voting process, and our scepticism about the process has been confirmed. We’ve issued a press release summarising our opposition to the new terms.

Taking a look at the diff output from the revised terms, it’s clear that, as we anticipated, no meaningful changes were made. All of the changes are superficial; in fact, Section 2 is now slightly less clear, and a few more shady definitions have been pushed to the back of the document. Facebook received hundreds of comments in addition to our report during the public review process, but their main response was a patronising FAQ document which dismissed users’ concerns as mere misunderstandings of Facebook’s goodwill. Yet Facebook still described the new terms as “reflecting comments from users and experts received during the 30-day comment period.” We would challenge Facebook to point to a single revision which reflects a specific comment received.

The voting process is also problematic, as we predicted it would be. The new terms were announced and instantly put to a 7-day vote, hardly enough time for a serious debate on the revised terms. Depending on your profile settings, it can be quite hard even to find the voting interface: for some profiles it is prominently shown on the home page; for others it is hidden and can’t even be found through search. The voting interface was outsourced to a third-party developer called Wildfire Promotion Builder and crashed frequently during the first 12 hours of voting, despite a relatively low turnout (50,000 votes so far). This is particularly damning since the required quorum is 60 million votes over 7 days, meaning Facebook was technically unprepared to handle even 1% of the required voting traffic.

The poorly built voting interface summarises the situation well. This process was never about democracy or openness, but about damage control after a major PR disaster. Truly opening the site up to user control is an interesting option and might be in Facebook’s long-term interest. They are also certainly within their rights to run their site as a dictatorship using the older, corporate-drafted terms of service. But it’s tough to swallow Facebook’s arrogant insistence that it’s engaging users when it’s really doing no such thing.

Update, 24/04/2009: The vote ended yesterday. About 600,000 users voted, 0.3% of all users on the site and less than 1% of the required 30%. Over 25% of voters opposed the new terms of service, many of whom can be interpreted as voting in protest. For Facebook it was still a win: they received mostly good press and have now had their new terms ratified.

Facebook Giving a Bit Too Much Away

Facebook has been serving up public listings for over a year now. Unlike most of the site, anybody can view public listings, even non-members. They offer a window into the Facebook world for those who haven’t joined yet, since Facebook doesn’t allow full profiles to be publicly viewable by non-members (unlike MySpace and others). Of course, this window into Facebook comes with a prominent “Sign Up” button, growth still being the main mark of success in the social networking world. The goal is for non-members to stumble across a public listing, see how many friends are already using Facebook, and then join. Economists call this a network effect, and Facebook is shrewdly harnessing it.

Of course, to do this, Facebook is making public every user’s name, photo, and 8 friendship links. Affiliations with organisations, causes, or products are also listed; I just don’t have any on my profile (though my sister does). This is quite a bit of information given away by a feature many active Facebook users are unaware of, since registered users don’t encounter it. Indeed, it’s more information than Facebook’s own privacy policy indicates is given away. When the feature was launched in 2007, every over-18 user was automatically opted in, as has every new user since. You can opt out, but few people do: out of more than 500 friends of mine, only 3 had taken the time to opt out.

Making matters worse, public listings aren’t protected from crawling; in fact, they are designed to be indexed by search engines. In our own experiments, we were able to download over 250,000 public listings per day using a desktop PC and a fairly crude Python script. For a serious data aggregator, getting every user’s listing is no sweat. So what can one do with 200 million public listings?
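A crawler of that sort needs nothing exotic. A minimal sketch (the URL pattern below is a placeholder, not Facebook’s actual listing URL) shows that a few standard-library requests per second already exceeds 250,000 listings per day:

```python
import time
import urllib.request

BASE = "https://www.example.com/public-listing/{uid}"  # placeholder pattern

def listing_url(uid):
    """Build the public-listing URL for a numeric user ID."""
    return BASE.format(uid=uid)

def crawl(uids, delay=0.3):
    """Fetch each listing in turn. Even pausing 0.3 s between requests,
    a single machine covers roughly 288,000 listings per day."""
    for uid in uids:
        try:
            with urllib.request.urlopen(listing_url(uid), timeout=10) as resp:
                yield uid, resp.read()
        except OSError:
            yield uid, None   # dead or hidden listing: skip and move on
        time.sleep(delay)
```

Since user IDs are sequential, enumerating the whole site is just `crawl(range(first_id, last_id))` spread across a handful of machines.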

I explored this question along with Jonathan Anderson, Frank Stajano, and Ross Anderson in a new paper which we presented today at the ACM Social Network Systems Workshop in Nuremberg. Facebook’s public listings give us a random sample of the social graph, leading to some interesting exercises in graph theory. As we describe in the paper, it turns out that this sampled graph allows us to approximate many properties of the complete network surprisingly well: degree and centrality of nodes, small dominating sets, short paths, and community structure. These are all things marketers and sociologists alike would love to know for the complete Facebook graph.
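To see why even 8 links per user leak so much, consider a toy estimator (function names are hypothetical; this is not the paper’s code): a high-degree user shows up in many other users’ sampled friend lists, so simply counting appearances across listings roughly recovers the degree ranking, at least when neighbours’ degrees are comparable.

```python
import random

def sample_listings(graph, k=8):
    """Model a public listing: each user shows k randomly chosen friends
    (graph is an adjacency dict, user -> list of friends)."""
    return {u: random.sample(friends, min(k, len(friends)))
            for u, friends in graph.items()}

def estimate_degree_order(listings):
    """Rank users by how often they appear in *other* users' listings;
    the appearance count grows with true degree, so the ranking of
    well-connected users is approximately recovered."""
    counts = {}
    for shown in listings.values():
        for v in shown:
            counts[v] = counts.get(v, 0) + 1
    return sorted(counts, key=counts.get, reverse=True)
```

On a star-shaped test graph, the hub appears in every leaf’s listing and immediately tops the ranking, despite each listing revealing at most 8 links.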

This result leads to two interesting conclusions. First, protecting a social graph is hard. Consistent with previous results, we found that giving away a seemingly small amount can allow much information to be inferred. It’s also been shown that anonymising a social graph is almost impossible.

Second, Facebook is developing a track record of releasing features and then being surprised by the privacy implications, from Beacon to NewsFeed and now Public Search. As with security-critical software, where new code is extensively tested and evaluated before deployment, social networks should conduct a formal privacy review of all new features before they are rolled out (as, indeed, should other web services which collect personal information). Features like public search listings shouldn’t make it off the drawing board.

Democracy Theatre on Facebook

You may remember a big PR flap last month about Facebook’s terms of service, followed by Facebook backing down and promising to involve users in a self-governing process of drafting their future terms. This is an interesting step with little precedent amongst commercial web sites. Facebook now has enough users to be the fifth largest nation on earth (recently passing Brazil), and operators of such immense online societies need to define a cyber-government which satisfies their users while operating lawfully within a multitude of jurisdictional boundaries, as well as meeting their legal obligations to the shareholders who own the company.

Democracy is an intriguing approach, and it is encouraging that Facebook is considering this path. Unfortunately, after some review, my colleagues and I are left thoroughly disappointed by both the new documents and the specious democratic process surrounding them. We’ve outlined our arguments in a detailed report; the official deadline for commentary is midnight tonight.

The non-legally-binding Statement of Principles outlines an admirable set of goals in plain language, which was refreshing. However, these goals are then undermined, for a variety of legal and business reasons, by the “Statement of Rights and Responsibilities”, which would effectively be the new Terms of Service. For example, Facebook demands that application developers comply with users’ privacy settings without providing access to those settings, states that users should have “programmatic access” and then bans users from interacting with the site via “automated means,” and states that the service will transcend national boundaries while banning users from signing up if they live in a country embargoed by the United States.

The stated goal of fairness and equality is also lost. The Statement of Rights and Responsibilities primarily assigns rights to Facebook and imposes responsibilities on users, developers, and advertisers. Facebook still demands a broad license to all user content, shifts all responsibility for enforcing privacy onto developers, and sneakily disclaims all liability for itself. Yet it demands an unrealistic set of obligations: a literal reading of the document requires users to get explicit permission from other users before viewing their content. Furthermore, Facebook has applied the banking industry’s well-known trick of shifting liability to customers, binding users not to do anything to “jeopardize the security of their account,” which can be used to dissolve the contract.

The biggest missed opportunity, however, is the utter failure to provide a real democratic process as promised. Users are free to comment on terms, but Facebook is under no obligation to listen. Facebook’s official group for comments contains a disorganised jumble of thousands of comments, some insightful and many inane. It is difficult to extract intelligent analysis here. Under certain conditions a vote can be called, but this is hopelessly weakened: it only applies to certain types of changes, the conditions of the vote are poorly specified and subject to manipulation by Facebook, and in fact they reserve the right to ignore the vote for “administrative reasons.”

With a nod to Bruce Schneier, we call such steps “democracy theatre.” It seems the goal is not actually to turn governance over to users, but to use the appearance of democracy and user involvement to ward off future criticism. Our term may be new, but the trick is not: it has been used by autocratic regimes around the world for decades.

Facebook’s new terms represent a genuine step forward, with improved clarity in certain areas, but an even larger step backward in using democracy theatre to obscure the fact that Facebook is a business whose ultimate accountability is to its shareholders. The outrage over the previous terms was real and justified: social networks mean a great deal to their users, and users want a real say. Since Facebook appears unwilling to grant one, we would be remiss to allow them to deflect users’ anger with flowery language and a sham democratic process. For this reason we cannot support the new terms.

[UPDATE: Our report has been officially backed by the Open Rights Group]

When Layers of Abstraction Don’t Get Along: The Difficulty of Fixing Cache Side-Channel Vulnerabilities

(co-authored with Robert Watson)

Recently, our group was treated to a presentation by Ruby Lee of Princeton University, who discussed novel cache architectures which can prevent some cache-based side-channel attacks against AES and RSA. The new architecture was fascinating, in particular because it may actually increase cache performance (though this point was spiritedly debated by several systems researchers in attendance). For the security group, though, it raised two interesting and troubling questions. What is the proper defence against side channels due to the processor cache? And why hasn’t one been deployed, despite these attacks having been known for years?

Continue reading When Layers of Abstraction Don’t Get Along: The Difficulty of Fixing Cache Side-Channel Vulnerabilities

New Facebook Photo Hacks

Last March, Facebook caught some flak when hacks circulated showing how to access the private photos of any user. These were enabled by egregiously lazy design: viewing somebody’s private photos simply required determining their user ID (which shows up in search results) and then manually fetching a URL of the form:
www.facebook.com/photo.php?pid=1&view=all&subj=[uid]&id=[uid]
This hack was live for a few weeks in February, exposing some photos of Facebook CEO Mark Zuckerberg and (reportedly) Paris Hilton, before the media picked it up in March and Facebook upgraded the site.
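The weakness is easy to state in code. A short sketch contrasting the quoted URL pattern with a randomised capability (the CDN hostname below is made up) makes the difference concrete:

```python
import secrets

def private_photo_url(uid):
    """The pre-fix pattern quoted above: every field is derived from the
    user ID alone, so the 'capability' can be forged by anyone."""
    return ("http://www.facebook.com/photo.php"
            f"?pid=1&view=all&subj={uid}&id={uid}")

def unforgeable_photo_url(uid):
    """What a capability URL should contain (hypothetical host): enough
    secret randomness that guessing a valid URL is infeasible."""
    return f"http://photos.example-cdn.com/{uid}/{secrets.token_urlsafe(24)}.jpg"
```

With 24 bytes of randomness per photo, an attacker who knows a user ID still faces an astronomically large search space, rather than a URL they can type by hand.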

Instead of using properly formatted PHP queries as capabilities to view photos, Facebook now verifies the requesting user against the ACL for each photo request. What could possibly go wrong? Well, as I discovered this week, the photos themselves are served from a separate content-delivery domain, leading to some problems which highlight the difficulty of building access control into an enormous, globally distributed website like Facebook.

Continue reading New Facebook Photo Hacks

Think of the children

Last week, the Times ran an article about a new website promising to be “Facebook for Kids”: School Together Now. According to the article, an ordinary mother of 3 got the idea for the site to allow parents to be more involved with their kids, and to give children aged 7-12 the benefits of social networking (Facebook, for example, limits membership to those older than 13). School Together Now is set to officially launch on the first of the year, but is already open for public registration and has been written up several times by the press.

We’ll leave the question of whether young children need a social network to sociologists and psychologists; there are difficult enough questions about how to design security for this vulnerable age group. Jonathan Anderson and I reviewed School Together Now and were disturbed by its lack of answers. The first thing we noticed was that logging in without entering any username or password provided full access via the account of the user “Amber Munt” (this works from the log-in box displayed after clicking “Children->Register/Login”). The next thing we noticed was the site’s About Us page, which states the goal of allowing advertisers to “Get themselves in front of their favourite customers (i.e. parents with deep pockets!)” Further investigation revealed a pattern of poor security choices driven by the desire for rapid commercialisation, which is inexcusable for a site specifically marketed at young children.

Continue reading Think of the children