Category Archives: Security economics

Social-science angles of security

Measuring Typosquatting Perpetrators and Funders

For more than a decade, aggressive website registrants have been engaged in ‘typosquatting’ — the intentional registration of misspellings of popular website addresses. Uses for the diverted traffic have evolved over time, ranging from hosting sexually-explicit content to phishing. Several countermeasures have been implemented, including outlawing the practice and developing policies for resolving disputes. Despite these efforts, typosquatting remains rife.

But just how prevalent is typosquatting today, and why is it so pervasive? Ben Edelman and I set out to answer these very questions. In Measuring the Perpetrators and Funders of Typosquatting (appearing at the Financial Cryptography conference), we estimate that at least 938,000 typosquatting domains target the top 3,264 .com sites, and we crawl more than 285,000 of these domains to analyze their revenue sources.
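The sheer scale follows from simple combinatorics: even one popular domain has hundreds of edit-distance-one variants. As a rough illustration of how candidate typo domains can be enumerated (this is a generic sketch, not the paper's actual methodology, which also considers factors such as keyboard adjacency):

```python
import string

def typo_variants(domain):
    """Generate edit-distance-one variants of a domain's second-level label.

    Covers the classic typo categories: character omission, duplication,
    substitution, and transposition. Illustrative only; real typosquatting
    studies also model keyboard layout and missing-dot typos.
    """
    name, _, tld = domain.rpartition(".")
    variants = set()
    for i in range(len(name)):
        # omission, e.g. "exmple"
        variants.add(name[:i] + name[i + 1:])
        # duplication, e.g. "exxample"
        variants.add(name[:i] + name[i] + name[i:])
        # substitution, e.g. "sxample"
        for c in string.ascii_lowercase:
            variants.add(name[:i] + c + name[i + 1:])
    for i in range(len(name) - 1):
        # transposition, e.g. "xeample"
        variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
    variants.discard(name)
    variants.discard("")
    return {v + "." + tld for v in variants}
```

Running this on a single seven-letter domain already yields a few hundred candidates, which is why a few thousand popular sites can attract close to a million squatted domains.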

Call for papers: WEIS 2010 — Submissions due next week

The Workshop on the Economics of Information Security (WEIS) is the leading forum for interdisciplinary scholarship on information security, combining expertise from the fields of economics, social science, business, law, policy and computer science. Prior workshops have explored the role of incentives between attackers and defenders, identified market failures dogging Internet security, and assessed investments in cyber-defense.

The ninth installment of WEIS will take place June 7–8 at Harvard. Submissions are due in one week, February 22, 2010. For more information, see the complete call for papers.

WEIS 2010 will build on past efforts using empirical and analytic tools to not only understand threats, but also strengthen security through novel evaluations of available solutions. How should information risk be modeled given the constraints of rare incidence and high interdependence? How do individuals’ and organizations’ perceptions of privacy and security color their decision making? How can we move towards a more secure information infrastructure and code base while accounting for the incentives of stakeholders?

If you have been working to answer questions such as these, then I encourage you to submit a paper.

Why is 3-D Secure a single sign-on system?

Since we posted about our paper Verified by Visa and MasterCard SecureCode: or, How Not to Design Authentication, there has been quite a bit of feedback, including media coverage. Generally, commenters have agreed with our conclusions, and there have been some informative contributions giving industry perspectives, including at Finextra.

One question which has appeared a few times is why we called 3-D Secure (3DS) a single sign-on (SSO) system. 3DS certainly wasn’t designed as a SSO system, but I think it meets the key requirement: it allows one party to authenticate another without credentials (passwords, keys, etc.) being set up in advance. Just like other SSO systems such as OpenID and Windows CardSpace, there is a trusted intermediary with which both communicating parties have a relationship, and which facilitates the authentication process.
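The defining pattern is that the relying party trusts a signed assertion from the intermediary rather than holding the user’s credentials itself. The sketch below illustrates that trusted-intermediary idea in the abstract; it is emphatically not the 3DS message flow, and the key and message format are invented for the example:

```python
import hashlib
import hmac

# The intermediary (in generic SSO terms, an identity provider) shares a key
# with the relying party in advance. Crucially, the *user* never needs a
# credential pre-registered with the relying party. Key invented for this sketch.
INTERMEDIARY_KEY = b"shared-with-relying-party"

def issue_assertion(pseudonym, transaction_id):
    """Intermediary authenticates the user out of band, then signs a claim
    binding the user's pseudonym to this transaction."""
    msg = f"{pseudonym}:{transaction_id}".encode()
    return hmac.new(INTERMEDIARY_KEY, msg, hashlib.sha256).hexdigest()

def verify_assertion(pseudonym, transaction_id, tag):
    """Relying party checks the intermediary's signature; it never sees
    the user's password or key."""
    expected = issue_assertion(pseudonym, transaction_id)
    return hmac.compare_digest(expected, tag)
```

In 3DS the pseudonym would be the card number and the intermediary role is played by the issuer’s infrastructure; the point of the sketch is only that authentication flows through a mutually trusted third party.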

For this reason, I think it is fair to classify 3DS as a special-purpose SSO system. Your card number acts as a pseudonym, and the protocol gives the merchant some assurance that the customer is the legitimate controller of that pseudonym. This is a very similar situation to OpenID, which provides the relying party assurance that the visitor is the legitimate controller of a particular URL. On top of this basic functionality, 3DS also gives the merchant assurance that the customer is able to pay for the goods, and provides a mechanism to transfer funds.

People are permitted to have multiple cards, but this does not prevent 3DS from being a SSO system. In fact, it is generally desirable, for privacy purposes, to allow users to have multiple pseudonyms, and existing SSO systems support this in various ways: OpenID lets you have multiple domain names, and Windows CardSpace uses clever cryptography.

Another question which came up was whether 3DS is actually transaction authentication, since the issuer does get a description of the transaction. I would argue not, because the transaction description does not go to the customer, so the protocol remains vulnerable to a man-in-the-middle attack if the customer’s PC is compromised.

A separate point is whether it is useful to categorize 3DS as SSO. I would argue yes, because we can then compare 3DS to other SSO systems. For example, OpenID uses the domain name system to produce a hierarchical name space. In contrast, 3DS has a flat numerical namespace and additional intermediaries in the authentication process. Such architectural comparisons between deployed systems are very useful to future designers. In fact, the most significant result the paper presents is a security-economics one: 3DS is inferior in almost every way to the competition, yet succeeded because incentives were aligned. Specifically, the reward for implementing 3DS is the ability to push fraud costs onto someone else: from the merchant to the issuer, and from the issuer to the customer.

How online card security fails

Online transactions with credit cards or debit cards are increasingly verified using the 3D Secure system, which is branded as “Verified by Visa” and “MasterCard SecureCode”. This is now the most widely-used single sign-on scheme ever, with over 200 million cardholders registered. It’s getting hard to shop online without being forced to use it.

In a paper I’m presenting today at Financial Cryptography, Steven Murdoch and I analyse 3D Secure. From the engineering point of view, it does just about everything wrong, and it’s becoming a fat target for phishing. So why did it succeed in the marketplace?

Quite simply, it has strong incentives for adoption. Merchants who use it push liability for fraud back to banks, who in turn push it on to cardholders. Properly designed single sign-on systems, like OpenID and InfoCard, can’t offer anything like this. So this is yet another case where security economics trumps security engineering, but in a predatory way that leaves cardholders less secure. We conclude with a suggestion on what bank regulators might do to fix the problem.

Update (2010-01-27): There has been some follow-up media coverage.

Update (2010-01-28): The New Scientist also has the story, as has Ars Technica.

How hard can it be to measure phishing?

Last Friday I went to a workshop organised by the Oxford Internet Institute on “Mapping and Measuring Cybercrime”. The attendees represented many disciplines, from lawyers, through ePolicy, to serving police officers and an ex-Government minister. Much of the discussion related to the difficulty of saying precisely what is or is not “cybercrime”, and what might be meant by mapping or measuring it.

The position paper I submitted (one more of the extensive Moore/Clayton canon on phishing) took a step back (though of course we intend it to be a step forward), in that it looked at the very rich datasets we have for phishing and asked whether they mean we can usefully map or measure that particular criminal speciality.

In practice, we believe, bias in the data and the bias of those who interpret it mean that considerable care is needed to understand what all the data actually means. We give an example from our own work of how failing to understand the bias meant that we initially misunderstood the data, and how various intentional distortions arise because of the self-interest of those who collect the data.

Extrapolating, this all means that getting better data on other types of cybercrime may not prove to be quite as useful as might initially be thought.

As ever, reading the whole paper (it’s only 4 sides!) is highly recommended, but to give a flavour of the problem we’re drawing attention to:

If a phishing gang host their webpages on a thousand fraudulent domains, using fifty stolen credit cards to purchase them from a dozen registrars, and then transfer money out of a hundred customer accounts leading to a monetary loss in six cases: is that 1000 crimes, or 50, or 12, or 100, or 6?

The phishing website removal companies would say that there were 1000 incidents because they need to get 1000 domains suspended. The credit card companies would say there were 50 incidents because 50 cardholders ought to have money reimbursed. Equally they would have 12 registrars to “charge back” because they had accepted fraudulent registrations (there might have been any number of actual credit card money transfer events between 12 and 1000 depending whether the domains were purchased in bulk). The banks will doubtless see the criminality as 100 unauthorised transfers of money out of their customer accounts; but if they claw back almost all of the cash (because it remains within the mainstream banking system) then the six-monthly Financial Fraud Action UK (formerly APACS) report will merely include the monetary losses from the 6 successful thefts.
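The divergence can be made explicit: one and the same operation decomposes into different counts depending on which relationship is being tallied. A toy restatement of the hypothetical figures above:

```python
# One phishing operation, counted from different vantage points.
# The figures are the hypothetical ones from the example above.
operation = {
    "fraudulent_domains": 1000,   # takedown companies count domain suspensions
    "stolen_cards": 50,           # card companies count cardholders to reimburse
    "registrars": 12,             # registrars facing charge-backs
    "compromised_accounts": 100,  # banks count unauthorised transfers
    "unrecovered_losses": 6,      # industry loss reports count completed thefts
}

def incident_count(perspective):
    """What 'one crime' means depends entirely on who is measuring."""
    return operation[perspective]
```

Five honest measurements of the same event differ by more than two orders of magnitude, which is precisely the measurement problem the paper draws attention to.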

Clearly, what you count depends on who you are — but crucially, in a world where resources are deployed to meet measurement targets (and your job is at risk if you miss them), deciding what to measure will bias your decisions on what you actually do and hence how effective you are at defeating the criminals.

TV coverage of online banking card-reader vulnerabilities

This evening (Monday 26th October 2009, at 19:30 UTC), BBC Inside Out will show Saar Drimer and I demonstrating how the use of smart card readers, being issued in the UK to authenticate online banking transactions, can be circumvented. The programme will be broadcast on BBC One, but only in the East of England and Cambridgeshire; however, it should also be available on iPlayer.

In this programme, we demonstrate how a tampered Chip & PIN terminal could collect an authentication code for Barclays online banking, while a customer thinks they are buying a sandwich. The criminal could then, at their leisure, use this code and the customer’s membership number to fraudulently transfer up to £10,000.

Similar attacks are possible against all other banks which use the card readers (known as CAP devices) for online banking. We think that this type of scenario is particularly practical in targeted attacks, and circumvents any anti-malware protection, but criminals have already been seen using banking trojans to attack CAP on a wide scale.

Further information can be found on the BBC online feature, and our research summary. We have also published an academic paper on the topic, which was presented at Financial Cryptography 2009.

Update (2009-10-27): The full programme is now on BBC iPlayer for the next 6 days, and the segment can also be found on YouTube.


apComms backs ISP cleanup activity

The All Party Parliamentary Communications Group (apComms) recently published their report into an inquiry entitled “Can we keep our hands off the net?”

They looked at a number of issues, from “network neutrality” to how best to deal with child sexual abuse images. Read the report for the all the details; in this post I’m just going to draw attention to one of the most interesting, and timely, recommendations:

51. We recommend that UK ISPs, through Ofcom, ISPA or another appropriate organisation, immediately start the process of agreeing a voluntary code for detection of, and effective dealing with, malware infected machines in the UK.

52. If this voluntary approach fails to yield results in a timely manner, then we further recommend that Ofcom unilaterally create such a code, and impose it upon the UK ISP industry on a statutory basis.

The problem is that although ISPs are pretty good these days at dealing with incoming badness (spam, DDoS attacks etc) they can be rather reluctant to deal with customers who are malware infected, and sending spam, DDoS attacks etc to other parts of the world.

From a “security economics” point of view this isn’t too surprising (as I and colleagues pointed out in a report to ENISA). Customers demand effective anti-spam, or they leave for another ISP. But talking to customers and holding their hand through a malware infection is expensive for the ISP, and customers may just leave if hassled, so the ISPs have limited incentives to take any action.

When markets fail to solve problems, then you regulate… and what apComms is recommending is that a self-regulatory solution be given a chance to work. We shall have to see whether the ISPs seize this chance, or if compulsion will be required.

This UK-focussed recommendation is not taking place in isolation; there has been activity all over the world in the past few weeks. In Australia the ISPs are consulting on a Voluntary Code of Practice for Industry Self-regulation in the Area of e-Security, in the Netherlands the main ISPs have signed an “Anti-Botnet Treaty”, and in the US the main cable provider, Comcast, has announced that its “Constant Guard” programme will in future detect if their customer machines become members of a botnet.

ObDeclaration: I assisted apComms as a specialist adviser, but the decision on what they wished to recommend was theirs alone.

Economics of peer-to-peer systems

A new paper, Olson’s Paradox Revisited: An Empirical Analysis of File-Sharing Behaviour in P2P Communities, finds a positive correlation between the size of a BitTorrent file-sharing community and the amount of content shared, despite a reduced individual propensity to share in larger groups, and deduces from this that file-sharing communities provide a pure (non-rival) public good. Forcing users to upload results in a smaller catalogue; but private networks provide both more and better content, as do networks aimed at specialised communities.

George Danezis and I produced a theoretical model of this five years ago in The Economics of Censorship Resistance. It’s nice to see that the data, now collected, bear us out.

The Economics of Privacy in Social Networks

We often equate social networking with Facebook, MySpace, and the also-rans, but in reality there are tons of social networks out there, dozens of which have membership in the millions. Around the world it’s quite a competitive market. Sören Preibusch and I decided to study the whole ecosystem to analyse how free-market competition has shaped the privacy practices which I’ve been complaining about. We carefully examined 45 sites, collecting over 250 data points about each site’s privacy policies, privacy controls, data collection practices, and more. The results were fascinating, as we presented this week at the WEIS conference in London. Our full paper and complete dataset are now available online as well.

We collected a lot of data, and there was a little bit of something for everybody. There was encouraging news for fans of globalisation, as we found the social networking concept popular across many cultures and languages, with the most popular sites available in over 40 languages. There was an interesting finding from a business perspective that photo-sharing may be the killer application for social networks, as this feature was promoted far more often than sharing videos, blogging, or playing games. Unfortunately, the news was mostly negative from a privacy standpoint. We found some predictable but still surprising problems. Too much unnecessary data is collected by most sites, with 90% requiring a full name and date of birth. Security practices are dreadful: no sites employed phishing countermeasures, and 80% of sites failed to protect password entry using TLS. Privacy policies were obfuscated and confusing, and almost half failed basic accessibility tests. Privacy controls were confusing and overwhelming, and profiles were almost universally left open by default.
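One of these checks, whether password entry is protected by TLS, can be approximated automatically by inspecting where a page's password form submits. The sketch below shows the idea for the simple case; it is an illustration, not our actual survey methodology, which involved manual inspection:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class FormFinder(HTMLParser):
    """Record each form's action URL and whether it contains a password field."""
    def __init__(self):
        super().__init__()
        self.forms = []  # list of [action, has_password_field]

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.forms.append([attrs.get("action", ""), False])
        elif tag == "input" and attrs.get("type") == "password" and self.forms:
            # attribute the password field to the most recently opened form
            self.forms[-1][1] = True

def password_sent_over_tls(page_url, html):
    """True if every password form on the page submits to an https URL,
    False if any submits over plain http, None if no password form exists."""
    finder = FormFinder()
    finder.feed(html)
    pw_forms = [f for f in finder.forms if f[1]]
    if not pw_forms:
        return None
    return all(urlparse(urljoin(page_url, action)).scheme == "https"
               for action, _ in pw_forms)
```

A signup page served over http whose form posts back to a relative URL, a pattern we saw frequently, fails this check, because the credentials travel in the clear.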

The most interesting story we found, though, was how sites consistently hid any mention of privacy until we visited the privacy policies, where they provided paid privacy seals and strong reassurances about how important privacy is. We developed a novel economic explanation for this: sites appear to craft two different messages for two different populations. Most users care about privacy but don’t think about it in day-to-day life. Sites take care to avoid mentioning privacy to them, because even mentioning privacy positively will make them more cautious about sharing data. This phenomenon is known as “privacy salience”, and it makes sites tread very carefully around privacy, because users must be comfortable sharing data for the site to be fun. Instead of mentioning privacy, new users are shown a huge sample of other users posting fun pictures, which encourages them to share as well. For the privacy fundamentalists who go looking for privacy by reading the privacy policy, though, it is important to drum up privacy reassurance.

The privacy fundamentalists of the world may be positively influencing privacy on major sites through their pressure. Indeed, the bigger, older, and more popular sites we studied had better privacy practices overall. But the desire to limit privacy salience is also a major problem, because it prevents sites from providing clear information about their privacy practices. Most users therefore can’t tell what they’re getting into, resulting in the predominance of poor practices in this “privacy jungle”.