Category Archives: Security economics

Social-science angles of security

Security and Human Behaviour 2009

I’m at SHB 2009, which brings security engineers together with psychologists, behavioural economists and others interested in deception, fraud, fearmongering, risk perception and how we make security systems more usable. Here is the agenda.

This workshop was first held last year, and most of us who attended reckoned it was the most exciting event we’d been to in some while. (I blogged SHB 2008 here.) In followups that will appear as comments to this post, I’ll be liveblogging SHB 2009.

Location privacy

I was recently asked for a brief (4-page) invited paper for a forthcoming special issue of the ACM SIGSPATIAL on privacy and security of location-based systems, so I wrote Foot-driven computing: our first glimpse of location privacy issues.

In 1989 at ORL we developed the Active Badge, the first indoor location system: an infrared transmitter worn by personnel that allowed you to tell which room the wearer was in. Every press and TV reporter who visited our lab worried about the intrusiveness of this technology; yet, today, all those people happily carry mobile phones through which they can be tracked anywhere they go. The significance of the Active Badge project was to give us a head start of a few years during which to think about location privacy before it affected hundreds of millions of people. (There is more on our early ubiquitous computing work at ORL in this free excerpt from my book.)
[Image: The ORL Active Badge]

Location privacy is a hard problem to solve, first because ordinary people don’t actually seem to care, and second because there is a misalignment of incentives: those who could do the most to address the problem are the least affected by it and the least concerned about it. But we have a responsibility to address it, in the same way that the designers of new vehicles have a responsibility to address pollution and energy consumption.

Security economics video

Here is a video of a talk I gave at DMU on security economics (and the slides). I’ve given variants of this survey talk at various conferences over the past two or three years; at last one of them recorded the talk and put the video online. There’s also a survey paper that covers much of the same material. If you find this interesting, you might enjoy coming along to WEIS (the Workshop on the Economics of Information Security) on June 24-25.

Temporal Correlations between Spam and Phishing Websites

Richard Clayton and I have been studying phishing website take-down for some time. We monitored the availability of phishing websites, finding that while most phishing websites are removed within a day or two, a substantial minority remain up for much longer. We later found that one of the main reasons why so many websites slip through the cracks is that the take-down companies responsible for removal refuse to share their URL lists with each other.

One nagging question remained, however. Do long-lived phishing websites cause any harm? Would removing them actually help? To get that answer, we had to bring together data on the timing of phishing spam transmission (generously shared by Cisco IronPort) with our existing data on phishing website lifetimes. In our paper co-authored with Henry Stern and presented this week at the USENIX LEET Workshop in Boston, we describe how a substantial portion of long-lived phishing websites continue to receive new spam until the website is removed. For instance, fresh spam continues to be sent out for 75% of phishing websites alive after one week, attracting new victims. Furthermore, around 60% of phishing websites still alive after a month keep receiving spam advertisements.
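To illustrate the kind of calculation involved, here is a minimal sketch in Python. It joins site-lifetime records with spam-sighting timestamps and asks what fraction of the phishing sites still alive after a given age continue to receive spam past that point. The records and field names below are invented for illustration; they are not the IronPort feed or our take-down data.

```python
# Minimal sketch: of the phishing sites that survive past a given age,
# what fraction are still being advertised in spam at that point?
# All records below are invented for illustration.
from datetime import datetime, timedelta

# site_id -> (first_seen, removed) for each phishing website
site_lifetimes = {
    "site-a": (datetime(2009, 1, 1), datetime(2009, 1, 2)),   # short-lived
    "site-b": (datetime(2009, 1, 1), datetime(2009, 2, 10)),  # long-lived
}

# (site_id, timestamp) for each spam message advertising a site
spam_sightings = [
    ("site-a", datetime(2009, 1, 1, 12, 0)),
    ("site-b", datetime(2009, 1, 20)),
    ("site-b", datetime(2009, 2, 5)),
]

def still_spammed_fraction(age: timedelta) -> float:
    """Fraction of sites alive beyond `age` that received spam after that age."""
    survivors = {s for s, (first, removed) in site_lifetimes.items()
                 if removed - first > age}
    if not survivors:
        return 0.0
    spammed = {s for s, t in spam_sightings
               if s in survivors and t - site_lifetimes[s][0] > age}
    return len(spammed) / len(survivors)

print(still_spammed_fraction(timedelta(days=7)))   # the paper reports ~75% at one week
print(still_spammed_fraction(timedelta(days=30)))  # and ~60% at one month
```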

Consequently, removal of websites by the banks (and the specialist take-down companies they hire) is important. Even when the sites stay up for some time, there is value in continued efforts to get them removed, because this will limit the damage.

However, as we have pointed out before, the take-down companies cause considerable damage by their continuing refusal to share data on phishing attacks with each other, despite our proposals addressing their competitive concerns. Our (rough) estimate of the financial harm due to longer-lived phishing websites was $330 million per year. Given this new evidence of persistent spam campaigns, we are now more confident of this measure of harm.

There are other interesting insights discussed in our new paper. For instance, phishing attacks can be broken down into two main categories: ordinary phishing hosted on compromised web servers and fast-flux phishing hosted on a botnet infrastructure. It turns out that fast-flux phishing spam is more tightly correlated with the uptime of the associated phishing host: most spam is sent out around the time the fast-flux website first appears and stops once the website is removed. For phishing websites hosted on compromised web servers, there is much greater variation between the time a website appears and when the spam is sent. Furthermore, fast-flux phishing accounted for 68% of the total email spam detected by IronPort, despite fast-flux sites making up only 3% of all the phishing websites.
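The share calculation behind figures like these is straightforward; here is a toy version in Python, with invented site records standing in for the real IronPort and take-down data:

```python
# Toy version of the share calculation: what fraction of sites, and of spam,
# does each hosting category account for? The records are invented.
from collections import Counter

# (site_id, category, spam_message_count) per phishing website
sites = [
    ("ff-1", "fast-flux", 900),
    ("ff-2", "fast-flux", 450),
    ("cw-1", "compromised", 30),
    ("cw-2", "compromised", 25),
    # ... the real data has far more compromised-webserver entries
]

site_share = Counter()
spam_share = Counter()
for _, category, spam_count in sites:
    site_share[category] += 1
    spam_share[category] += spam_count

total_sites = sum(site_share.values())
total_spam = sum(spam_share.values())
for category in site_share:
    print(f"{category}: {site_share[category] / total_sites:.0%} of sites, "
          f"{spam_share[category] / total_spam:.0%} of spam")
# In the real data this comes out at roughly 3% of sites but 68% of spam
# for the fast-flux category.
```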

So there seems to be a cottage industry of fairly disorganised phishing attacks, with perhaps a few hundred people involved; each compromises a small number of websites and sends a small amount of spam. Conversely, there are a small number of organised gangs who use botnets for hosting, send most of the spam, and are extremely efficient on every measure we consider. We understand that the police are concentrating their efforts on the second set of criminals. This appears to be a sound decision.

Chip and PIN on Trial

The trial of Job v Halifax plc has been set down for April 30th at 1030 in the Nottingham County Court, 60 Canal Street, Nottingham NG1 7EJ. Alain Job is an immigrant from the Cameroon who has had the courage to sue his bank over phantom withdrawals from his account. The bank refused to refund the money, making the usual claim that its systems were secure. There’s a blog post on the cavalier way in which the Ombudsman dealt with his case. Alain’s case was covered briefly in the Guardian in the run-up to a previous hearing; see also reports in Finextra here, here and (especially) here.

The trial should be interesting and I hope it’s widely reported. Whatever the outcome, it may have a significant effect on consumer protection in the UK. For years, financial regulators have been just as credulous about the banks’ claims to be in control of their information-security risk management as they were about the similar claims made in respect of their credit risk management (see our blog post on the ombudsman for more). It’s not clear how regulatory capture will (or can) be fixed in respect of credit risk, but it is just possible that a court could fix the consumer side of things. (This happened in the USA with the Judd case, as described in our submission to the review of the ombudsman service — see p 13.)

For further background reading, see blog posts on the technical failures of chip and PIN, the Jane Badger case, the McGaughey case and the failures of fraud reporting. Go back into the 1990s and we find the Halifax again as the complainant in R v Munden; John Munden was prosecuted for attempted fraud after complaining about phantom withdrawals. The Halifax couldn’t produce any evidence and he was acquitted.

The Snooping Dragon

There’s been much interest today in a report that Shishir Nagaraja and I wrote on Chinese surveillance of the Tibetan movement. In September last year, Shishir spent some time cleaning out Chinese malware from the computers of the Dalai Lama’s private office in Dharamsala, and what we learned was somewhat disturbing.

Later, colleagues from the University of Toronto followed through by hacking into one of the control servers Shishir identified (something we couldn’t do here because of the Computer Misuse Act); their report relates how the attackers had controlled malware on hundreds of other PCs, many in government agencies of countries such as India, Vietnam and the Philippines, but also in US firms such as AP and Deloittes.

The story broke today in the New York Times; see also coverage in the Telegraph, the BBC, CNN, the Times of India, AP, InfoWorld, Wired and the Wall Street Journal.

Democracy Theatre on Facebook

You may remember a big PR flap last month about Facebook’s terms of service, followed by Facebook backing down and promising to involve users in a self-governing process of drafting their future terms. This is an interesting step with little precedent amongst commercial web sites. Facebook now has enough users to be the fifth largest nation on earth (recently passing Brazil), and operators of such immense online societies need to define a cyber-government which satisfies their users while operating lawfully within a multitude of jurisdictional boundaries, as well as meeting their legal obligations to the shareholders who own the company.

Democracy is an intriguing approach, and it is encouraging that Facebook is considering this path. Unfortunately, after some review my colleagues and I are left thoroughly disappointed by both the new documents and the specious democratic process surrounding them. We’ve outlined our arguments in a detailed report; the official deadline for commentary is midnight tonight.

The non-legally-binding Statement of Principles outlines an admirable set of goals in plain language, which is refreshing. However, these goals are then undermined, for a variety of legal and business reasons, by the “Statement of Rights and Responsibilities”, which would effectively be the new Terms of Service. For example, Facebook demands that application developers comply with users’ privacy settings to which it doesn’t provide access; states that users should have "programmatic access" and then bans users from interacting with the site via "automated means"; and states that the service will transcend national boundaries while banning users from signing up if they live in a country embargoed by the United States.

The stated goal of fairness and equality is also lost. The Statement of Rights and Responsibilities primarily assigns rights to Facebook and imposes responsibilities on users, developers, and advertisers. Facebook still demands a broad license to all user content, shifts all responsibility for enforcing privacy onto developers, and sneakily disclaims all liability of its own. Yet it imposes an unrealistic set of obligations: a literal reading of the document requires users to get explicit permission from other users before viewing their content. Furthermore, Facebook has applied the banking industry’s well-known trick of shifting liability onto customers, binding users not to do anything to "jeopardize the security of their account," which can be used to dissolve the contract.

The biggest missed opportunity, however, is the utter failure to provide a real democratic process as promised. Users are free to comment on the terms, but Facebook is under no obligation to listen. Facebook’s official group for comments contains a disorganised jumble of thousands of comments, some insightful and many inane, from which it is difficult to extract intelligent analysis. Under certain conditions a vote can be called, but this mechanism is hopelessly weakened: it applies only to certain types of changes, the conditions of the vote are poorly specified and subject to manipulation by Facebook, and Facebook in fact reserves the right to ignore the vote for "administrative reasons."

With a nod to Bruce Schneier, we call such steps “democracy theatre.” It seems the goal is not to actually turn governance over to users, but to use the appearance of democracy and user involvement to ward off future criticism. Our term may be new, but the trick is not: it has been used by autocratic regimes around the world for decades.

Facebook’s new terms represent a genuine step forward with improved clarity in certain areas, but an even larger step backward in using democracy theatre to obscure the fact that Facebook is a business whose ultimate accountability is to its shareholders. The outrage over the previous terms was real and justified: social networks mean a great deal to their users, and those users want a real say in how they are governed. Since Facebook appears unwilling to give them one, we would be remiss to let it deflect users’ anger with flowery language and a sham democratic process. For this reason we cannot support the new terms.

[UPDATE: Our report has been officially backed by the Open Rights Group]

National Fraud Strategy

Today the Government “launches” its National Fraud Strategy. I qualify the verb because none of the quality papers seems to be running the story, and the press releases have not yet appeared on the websites of the Attorney General or the Ministry of Justice.

And well might Baroness Scotland be ashamed. The Strategy is a mishmash of things that are being done already, with one new initiative – a National Fraud Reporting Centre, to be run by the City of London Police. This is presumably intended to defuse the Lords’ criticisms of the current system whereby fraud must be reported to the banks, not to the police. As our blog has frequently reported, banks dump liability for fraud on customers by making false claims about system security and imposing unreasonable terms and conditions. This is a regulatory failure: the FSA has been just as gullible in accepting the banking industry’s security models as it was in accepting its credit-risk models. (The ombudsman has also been eager to please.)

So what’s wrong with the new arrangements? Quite simply, the National Fraud Reporting Centre will nestle comfortably alongside the City force’s Dedicated Cheque and Plastic Crime Unit, which investigates card fraud but is funded by the banks. Given this disgraceful arrangement, which is more worthy of Uzbekistan than of Britain, you have to ask how eager the City force will be to investigate offences that bankers don’t want investigated, such as the growing number of insider frauds and chip card cloning. And how vigorously will City cops investigate their paymasters for the fraud of claiming that their systems are secure, when they’re not, in order to avoid paying compensation to defrauded accountholders? The purpose of the old system was to keep the fraud figures artificially low while enabling the banks to control such investigations as did take place. And what precisely has changed?

The lessons of the credit crunch just don’t seem to have sunk in yet. The Government just can’t kick the habit of kowtowing to bankers.