On Friday last week The Guardian ran a story on an upcoming research paper by Tyler Moore and me, which will be presented at the WEIS conference later this month. We had determined that child sexual abuse image websites are removed from the Internet far more slowly than any other category of content we looked at, excepting illegal pharmacies hosted on fast-flux networks; indeed, we are unsure whether anyone is seriously trying to remove them at all!
Category Archives: Security economics
Second edition
The second edition of my book “Security Engineering” came out three weeks ago. Wiley have now got round to sending me the final electronic version of the book, plus permission to put half a dozen of the chapters online. They’re now available for download here.
The chapters I’ve put online cover security psychology, banking systems, physical protection, APIs, search, social networking, elections and terrorism. That’s just a sample of how our field has grown outwards in the seven years since the first edition.
Enjoy!
New Banking Code shifts more liability to customers
The latest edition of the Banking Code, the voluntary consumer-protection standard for UK banks, was released last week. The new code claims to “give customers the most up to date information on how to protect their accounts from fraud.” This sounds like a worthy cause, but closer inspection shows customers could be worse off than they were before.
Clause 12.11 of the code deals with liability for losses:
If you act fraudulently, you will be responsible for all losses on your account. If you act without reasonable care, and this causes losses, you may be responsible for them. (This may apply, for example, if you do not follow section 12.5 or 12.9 or you do not keep to your account’s terms and conditions.)
Clauses 12.5 and 12.9 include some debatable advice about anti-virus software and clicking on links in email (more on this in a later post). While malware and phishing emails are a serious fraud threat, it is unrealistic to suggest that home users’ computers can be adequately secured to defeat attacks.
Fraud-detection algorithms are more likely to be effective, since they can examine patterns of transactions over all customers. However, these can only be deployed by the banks themselves.
Existing phishing schemes would be defeated by two-factor authentication, but UK banks have been notoriously slow to roll it out, even though it is widespread in many other European countries. Although not perfect, these defences might cause fraudsters to move to easier targets. Two-channel and transaction-authentication techniques additionally give protection against man-in-the-middle attacks.
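To see why transaction authentication resists a man-in-the-middle, consider a toy sketch (this is illustrative only, not any bank's actual scheme; the key, payee and amounts are made up). A token shared between the bank and the customer computes a short code over the transaction details themselves, so an attacker who silently alters the payee or amount invalidates the code:

```python
import hashlib
import hmac

def auth_code(key: bytes, payee: str, amount: str) -> str:
    """Compute an 8-digit code over the transaction details (toy scheme)."""
    msg = f"{payee}|{amount}".encode()
    digest = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return str(int(digest, 16) % 10**8).zfill(8)

# Hypothetical shared secret between the bank and the customer's token.
key = b"shared-secret-between-bank-and-token"

# The customer's token displays a code bound to this specific payment.
code = auth_code(key, "ACME Ltd 12-34-56 87654321", "250.00")

# The bank recomputes the code over the details it actually received:
# an unmodified transaction verifies, a tampered amount does not.
assert code == auth_code(key, "ACME Ltd 12-34-56 87654321", "250.00")
assert code != auth_code(key, "ACME Ltd 12-34-56 87654321", "9250.00")
```

The point is that the code authenticates the transaction, not merely the session, which is why a man in the middle cannot reuse it for a different payment.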
Until the banks are made liable for fraud, they have no incentive to make a proper assessment as to the effectiveness of these protection measures. The new banking code allows the banks to further dump the cost of their omission onto customers.
When the person responsible for securing a system is not liable for breaches, the system is likely to fail. This situation of misaligned incentives is common, and here we see a further example. There might be a short-term benefit to banks of shifting liability, as they can resist introducing further security mechanisms for a while. However, in the longer term, it could be that moves like this will degrade trust in the banking system, causing everyone to suffer.
The House of Lords Science and Technology committee recognized this problem in the banking industry and recommended a statutory change (8.17) whereby banks would be held liable for electronic fraud. The new Banking Code, by allowing banks to dump yet more costs on the customers, is a step in the wrong direction.
Security Economics and the EU
ENISA — the European Network and Information Security Agency — has just published a major report on security economics that was authored by Rainer Böhme, Richard Clayton, Tyler Moore and me.
Security economics has become a thriving field since the turn of the century, and in this report we make a first cut at applying it in a coherent way to the most pressing security policy problems. We very much look forward to your feedback.
(Edited Dec 2019 to update link after ENISA changed their website)
The two faces of Privila
We have discussed the Privila network on Light Blue Touchpaper before. Richard explained how Privila solicit links and I described how to map the network. Since then, Privila’s behavior has changed. Previously, their pages were dominated by adverts, but included articles written by unpaid interns. Now the articles have been dropped completely, leaving more room for the adverts.
This change would appear to harm Privila’s search rankings — the articles, carefully optimized to include desirable keywords, would no longer be indexed. However, when Google downloads the page, the articles re-appear and the adverts are gone. The web server appears to be configured to serve different pages depending on the “User-Agent” header in the HTTP request.
For example, here’s how soccerlove.com appears in Firefox, Netscape, Opera and Internet Explorer — lots of adverts, and no article:
In contrast, by setting the browser’s user-agent to match that of Google’s spider, the page looks very different — a prominent article and no adverts:
Curiously, the Windows Live Search and Yahoo! spiders are presented with an almost empty page: just a header, but neither adverts nor articles (see update 2). You can try this yourself, by using the User Agent Switcher Firefox extension and a list of user-agent strings.
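The server-side logic needed for this kind of cloaking is trivial. Here is a minimal sketch of how a server might branch on the User-Agent header; the function and the three categories are hypothetical, since Privila’s actual configuration is of course not public:

```python
def classify_agent(user_agent: str) -> str:
    """Decide which page variant to serve, based on the User-Agent header.

    Illustrative only: mirrors the behaviour observed above, where Google's
    spider gets the article, other spiders get a near-empty page, and
    ordinary browsers get the adverts.
    """
    ua = user_agent.lower()
    if "googlebot" in ua:
        return "article"    # Google's spider: article, no adverts
    if "msnbot" in ua or "slurp" in ua:
        return "empty"      # Live Search / Yahoo! Slurp: almost empty page
    return "adverts"        # ordinary browsers: adverts, no article

# The same URL yields different content for a browser and for Googlebot.
print(classify_agent("Mozilla/5.0 (Windows; U; en-GB) Firefox/2.0"))
print(classify_agent("Mozilla/5.0 (compatible; Googlebot/2.1)"))
```

Matching on the User-Agent string is exactly why the trick is easy to expose: anyone can impersonate a spider simply by changing the header their browser sends.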
I expect the interns who wrote these articles will be displeased that their work is hidden from view. Google will doubtless be interested too, since their webmaster guidelines recommend against such behavior. BMW and Ricoh were delisted for similar reasons. Fortunately for Google, I’ve already shown how to build a complete list of Privila’s sites.
Update 1 (2008-03-08):
It looks like Google has removed the Privila sites from their index. For example, searches of soccerlove.com, ammancarpets.com, and canadianbattery.com all return zero results.
Update 2 (2008-03-11):
Privila appear to have fixed the problem that led to Yahoo! and Windows Live Search bots being presented with a blank page. Both of these spiders are being shown the same content as Google’s — the article with no adverts. Normal web browsers are still being sent adverts with no article.
Update 3 (2008-03-11):
Shortly after The Register published an article about Privila’s browser tricks, Privila restored the articles on the pages shown to normal web browsers. The pages presented to search engines are still not identical — they don’t contain the adverts.
Chip & PIN terminals vulnerable to simple attacks
Steven J. Murdoch, Ross Anderson and I looked at how well PIN entry devices (PEDs) protect cardholder data. Our paper will be published at the IEEE Symposium on Security and Privacy in May, though an extended version is available as a technical report. A segment about this work will appear on BBC Two’s Newsnight at 22:30 tonight.
We were able to demonstrate that two of the most popular PEDs in the UK — the Ingenico i3300 and Dione Xtreme — are vulnerable to a “tapping attack” using a paper clip, a needle and a small recording device. This allows us to record the data exchanged between the card and the PED’s processor without triggering the tamper-proofing mechanisms, in clear violation of the devices’ supposed security properties. This attack can capture the card’s PIN because UK banks have opted to issue cheaper cards that do not use asymmetric cryptography to encrypt data between the card and PED.
In addition to the PIN, as part of the transaction, the PED reads an exact replica of the magnetic strip (for backwards compatibility). Thus, if an attacker can tap the data line between the card and the PED’s processor, he gets all the information needed to create a magnetic strip card and withdraw money from an ATM that does not read the chip.
We also found that the certification process of these PEDs is flawed. APACS has been effectively approving PEDs for the UK market as Common Criteria (CC) Evaluated, which does not equal Common Criteria Certified (no PEDs are CC Certified). What APACS means by “Evaluated” is that an approved lab has performed the “evaluation”, but unlike CC Certified products, the reports are kept secret, and governmental Certification Bodies do not do quality control.
This process causes a race to the bottom, with PED developers able to choose labs that will approve rather than improve PEDs, at the lowest price. Clearly, the certification process needs to be more open to the cardholders, who suffer from the fraud. It also needs to be fixed such that defective devices are refused certification.
We notified APACS, Visa, and the PED manufacturers of our results in mid-November 2007, but responses arrived only in the last week or so (Visa chose to respond only a few minutes ago!). The responses are the usual claims: that our demonstrations can only be done in lab conditions, that criminals are not that sophisticated, that the threat to cardholder data is minimal, and that their “layers of security” will detect fraud. There is no evidence to support these claims. APACS state that the PEDs we examined will not be de-certified or withdrawn, that no action will be taken against the labs that certified them, and that they will not even tell us which labs those are.
The threat is very real: tampered PEDs have already been used for fraud. See our press release and FAQ for basic points and the technical report where we discuss the work in detail.
Update 1 (2008-03-09): The segment of Newsnight featuring our contribution has been posted to Google Video.
Update 2 (2008-03-21): If the link above doesn’t work try YouTube: part1 and part 2.
Justice, in one case at least
This morning Jane Badger was acquitted of fraud at Birmingham Crown Court. The judge found there was no case to answer.
Her case was remarkably similar to that of John Munden, about whom I wrote here (and in my book here). Like John, she worked for the police; like John, she complained to a bank about some ATM debits on her bank statement that she did not recognise; like John, she was arrested and suspended from work; like John, she faced a bank (in her case, Egg) claiming that as its systems were secure, she must be trying to defraud them; and like John, she faced police expert evidence that was technically illiterate and just took the bank’s claims as gospel.
In her case, Egg said that the transactions must have been done with the card issued to her rather than with a card clone, and to back this up they produced a printout allocating a transaction code of 05 to each withdrawal, and a rubric stating that 05 meant “Integrated Circuit Card read – CVV data reliable”, with the explanatory phrase “(chip read)” in brackets. This seemed strange. If the chip of an EMV card is read, the reader will verify the signature on the certificate; if its magnetic strip is read (perhaps because the chip is unserviceable) then the bank will check the CVV, which is there to prevent magnetic strip forgery. The question therefore was whether the dash in the above rubric meant “OR”, as the technology would suggest, or “AND” as the bank and the CPS hoped. The technology is explained in more detail in our recent submission to the Hunt Review of the Financial Services Ombudsman (see below). I therefore advised the defence to apply for the court to order Egg to produce the actual transaction logs and supporting material so that we could verify the transaction certificates, if any.
The prosecution folded and today Jane walked free. I hope she wins an absolute shipload of compensation from Egg!
Financial Ombudsman losing it?
I appeared on “You and Yours” (Radio 4) today at 12.35 with an official from the Financial Ombudsman Service, after I coauthored a FIPR submission to a review of the service which is currently being conducted by Lord Hunt.
Our submission looks at three cases in particular in which the ombudsman decided in favour of the banks and against bank customers over disputed ATM transactions. We found that the adjudicators employed by the ombudsman made numerous errors both of law and of technology, and concluded that their decisions were an affront to reason and to justice.
One of the cases has already appeared here on lightbluetouchpaper; the other two cardholders appeared in an investigation into card fraud on “Tonight with Trevor McDonald”, and their case papers are included, with their permission, as appendices to our submission. These papers are damning, but the Hunt review’s staff declined to publish them on the somewhat surprising grounds that the information in them might be used to commit identity theft against the customers in question. Eventually they published our submission minus the two appendices of case papers. (If knowing someone’s residential address and the account number of a now-defunct bank account is enough for a criminal to steal money from you, then the regulatory failures afflicting the British banking system are even deeper than I thought.)
The Financial Ombudsman Service, and its predecessor the Banking Ombudsman, have for many years found against bank customers and in favour of the banks. In the early-to-mid 1990s, they upheld the banks’ outrageous claim that mag-stripe ATM cards were invulnerable to cloning; this led to the court cases described here and here. That position collapsed when ATM criminals started being sent to prison. Now we have another wave of ATM card cloning, which we’ve discussed several times: we’ve shown you a chip and PIN terminal playing Tetris and described relay attacks. There’s much more to come.
The radio program is online here (the piece starts 29 minutes and 40 seconds in). We clearly have them rattled; the ombudsman was patronising and abusive, and made a number of misleading statements. He also said that the “independent” Hunt review was commissioned by his board of directors. I hope it turns out to be a bit more independent than that. If it doesn’t, then consumer advocates should campaign for the FOS to be abolished and for customers to be empowered to take disputes to the courts, as we argue in sections 31–32 of our submission.
Relay attacks on card payment: vulnerabilities and defences
At this year’s Chaos Communication Congress (24C3), I presented some work I’ve been doing with Saar Drimer: implementing a smart card relay attack and demonstrating that it can be prevented by distance bounding protocols. My talk (abstract) was filmed and the video can be found below. For more information, we produced a webpage and the details can be found in our paper.
[ slides (PDF 9.6M) | video (BitTorrent — MPEG4, 106M) ]
Update 2008-01-15:
Liam Tung from ZDNet Australia has written an article on my talk: Bank card attack: Only Martians are safe.
How effective is the wisdom of crowds as a security mechanism?
Over the past year, Richard Clayton and I have been tracking phishing websites. For this work, we are indebted to PhishTank, a website where dedicated volunteers submit URLs from suspected phishing websites and vote on whether the submissions are valid. The idea behind PhishTank is to bring together the expertise and enthusiasm of people across the Internet to fight phishing attacks. The more people participate, the larger the crowd, the more robust it should be against errors and perhaps even manipulation by attackers.
Not so fast. We studied the submission and voting records of PhishTank’s users, and our results are published in a paper appearing at Financial Crypto next month. It turns out that participation is very skewed. While PhishTank has several thousand registered users, a small core of around 25 moderators perform the bulk of the work, casting 74% of the votes we observed. Both the distributions of votes and submissions follow a power law.
This leaves PhishTank more vulnerable to manipulation than would be the case if every member of the crowd participated to the same extent. Why? If a few of the most active users stopped voting, a backlog of unverified phishing sites might collect. It also means an attacker could join the system and vote maliciously on a massive scale. Since 97% of submissions to PhishTank are verified as phishing URLs, it would be easy for an attacker to build up reputation by blindly voting many times, and then sprinkle in malicious votes, for example to protect the attacker’s own phishing sites. Since over half of the phishing sites in PhishTank are duplicate rock-phish domains, a savvy attacker could build reputation by voting for these sites without contributing to PhishTank otherwise.
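A quick back-of-envelope simulation shows why blind voting builds reputation so cheaply. The 97% figure comes from our measurements; the submission count and the attacker model (always vote “is phish”, never inspect the URL) are illustrative assumptions:

```python
import random

random.seed(1)

N = 10_000
# True = the submission really is a phishing URL (97% of the time).
submissions = [random.random() < 0.97 for _ in range(N)]

# An attacker who votes "is phish" on everything, without ever looking
# at a single URL, agrees with the eventual verdict almost every time.
blind_votes_correct = sum(1 for is_phish in submissions if is_phish)
print(f"blind voter accuracy: {blind_votes_correct / N:.1%}")
```

With agreement rates this high, a reputation system that rewards voting with the majority cannot distinguish a diligent moderator from a bot that has never examined a site.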
So crowd-sourcing your security decisions can leave you exposed to manipulation. But how does PhishTank compare to the feeds maintained by specialist website take-down companies hired by the banks? Well, we compared PhishTank’s feed to a feed from one such company, and found the company’s feed to be slightly more complete and significantly faster in confirming phishing websites. This is because companies can afford employees to verify their submissions.
We also found that users who vote less often are more likely to vote incorrectly, and that users who commit many errors tend to have voted on the same URLs.
Despite these problems, we are not arguing against leveraging user participation in the design of security mechanisms, nor do we believe that PhishTank should throw in the towel. Some improvements could be made by automating the obvious categorizations, leaving only the hard decisions to PhishTank’s users. In any case, we urge caution before turning a security decision over to a crowd.
Infosecurity Magazine has written a news article describing this work.