Category Archives: News coverage

Media reports that may interest you

Stolen mobiles story

I was just on Sky TV to debunk today’s initiative from the Home Office. The Home Secretary claimed that more rapid notification of stolen phone IMEIs between UK operators would have a significant effect on street crime.

I’m not so sure. Most mobiles stolen in the UK go abroad – the cheap ones to the third world and the flash ones to developed countries whose operators don’t subsidise handsets. As for the UK secondhand market, most mobiles can be reprogrammed (even though this is illegal). Lowering their street price is, I expect, a hard problem – like raising the street price of drugs.

What the Home Office might usefully do is to crack down on mobile operators who continue to bill customers after they have reported their phones stolen and cancelled their accounts. That is a scandal. Government’s role in problems like this is to straighten out the incentives and to stop the big boys from dumping risk on their customers.

Health IT Report

Late last year I wrote a report for the National Audit Office on the health IT expenditure, strategies and goals of the UK and a number of other developed countries. This showed that our National Program for IT is in many ways an outlier, and high-risk. Now that the NAO has published its own report, we’re allowed to make public our contribution to it.

Readers may recall that I was one of 23 computing professors who wrote to Parliament’s Health Select Committee asking for a technical review of this NHS computing project, which seems set to become the biggest computer project disaster ever. My concerns were informed by the NAO work.

Growing epidemic of card cloning

Markus points us to a story on card fraud by German TV reporter Sabine Wolf, who reported some of our recent work on how cards get cloned. She reports a number of cases in which German holidaymakers had cards cloned in Italy. In one case, a sniffer in a chip and PIN terminal at a ski lift in Livigno sent holidaymakers’ card and PIN details by SMS to Romania. These devices, which apparently first appeared in Hungary in 2003, are now becoming widespread in Europe; one model sits between a card reader and the retail terminal. (I have always refused to use my chip card at stores such as Tesco and B&Q where they want to swipe your card at the checkout terminal and have you enter your PIN at a separate PIN pad – this arrangement is particularly vulnerable to such sniffing attacks.)

According to Hungarian police, the crooks bribe the terminal maintenance technicians, or send people round stores pretending to be technicians; the Bavarian police currently have a case in which 150 German cardholders lost 600,000 Euro; the Guardia di Finanza in Genoa have a case in which they’ve recovered thousands of SMSs from phone company computers containing card data; a prosecutor in Bolzano believes that crooks hide in supermarkets overnight and wire up the terminals; and there are also cases from Sweden, France, and Britain. Customers tend to get blamed unless there’s such a large batch of similar frauds that the bank can’t fail to observe the pattern. (This liability algorithm gives the bankers every incentive not to look too hard.)

In Hungary, banks now routinely confirm all card transactions to their customers by SMS. Maybe that’s what banks here will be doing in a year or two (Barclays will already SMS you if you make an online payment to a new payee). It’s not ideal though as it keeps pushing liability to the customer. I suspect it might take an EU directive to push the liability firmly back on the banks, along the lines of the US Federal Reserve’s Regulation E.

Powers, Powers, and yet more Powers …

Our beloved government is once again Taking Powers in the fight against computer crime. The Home Office proposes to create cyber-ASBOs that would enable the police to ban suspects from using such dangerous tools as computers and bank accounts. This would be done in a civil court against a low evidence standard; there are squeals from the usual suspects such as ZDNet.

The Home Office proposals will also undermine existing data protection law; for example by allowing the banks to process sensitive data obtained from the public sector (medical record privacy, anyone?) and ‘dispelling misconceptions about consent’. I suppose some might welcome the proposed extension of ASBOs to companies. Thus, a company with repeated convictions for antitrust violations might be saddled with a list of harm-prevention conditions, for example against designing proprietary server-side protocols or destroying emails. I wonder what sort of responses the computer industry will make to this consultation 🙂

A cynic might point out that the ‘new powers’ seem in inverse proportion to the ability, or will, to use the existing ones. Ever since the South Sea Bubble in the 18th century, Britain has been notoriously lax in prosecuting bent bankers; city folk are now outraged when a Texas court dares to move from talk to action. Or take spam; although it’s now illegal to send unsolicited commercial emails to individuals in the UK, complaints don’t seem to result in action. Now trade and industry minister ‘Enver’ Hodge explains this is because there’s a loophole – it’s not illegal to spam businesses. So rather than prosecuting a spammer for spamming individuals, our beloved government will grab a headline or two by blocking this loophole. I don’t suppose Enver ever stopped to wonder how many spam runs are so well managed as to not send a single item to a single private email address – cheap headlines are more attractive than expensive, messy implementation.

This pattern of behaviour – taking new powers rather than using the existing ones – is getting too well entrenched. In cyberspace we don’t have law enforcement any more – we have the illusion of law enforcement.

Ignoring the "Great Firewall of China"

The Great Firewall of China is an important tool for the Chinese Government in their efforts to censor the Internet. It works, in part, by inspecting web traffic to determine whether or not particular words are present. If the Chinese Government does not approve of one of the words in a web page (or a web request), perhaps it says “f” “a” “l” “u” “n”, then the connection is closed and the web page will be unavailable — it has been censored.

This user-level effect has been known for some time… but up until now, no-one seems to have looked more closely into what is actually happening (or when they have, they have misunderstood the packet level events).

It turns out [caveat: in the specific cases we’ve closely examined, YMMV] that the keyword detection is not actually being done in large routers on the borders of the Chinese networks, but in nearby subsidiary machines. When these machines detect the keyword, they do not actually prevent the packet containing the keyword from passing through the main router (this would be horribly complicated to achieve and still allow the router to run at the necessary speed). Instead, these subsidiary machines generate a series of TCP reset packets, which are sent to each end of the connection. When the resets arrive, the end-points assume they are genuine requests from the other end to close the connection — and obey. Hence the censorship occurs.

However, because the original packets are passed through the firewall unscathed, if both of the endpoints were to completely ignore the firewall’s reset packets, then the connection would proceed unhindered! We’ve done some real experiments on this — and it works just fine!! Think of it as the Harry Potter approach to the Great Firewall — just shut your eyes and walk onto Platform 9¾.

Ignoring resets is trivial to achieve by applying simple firewall rules… and has no significant effect on ordinary working. If you want to be a little more clever you can examine the hop count (TTL) in the reset packets and determine whether the values are consistent with them arriving from the far end, or if the value indicates they have come from the intervening censorship device. We would argue that there is much to commend examining TTL values when considering defences against denial-of-service attacks using reset packets. Having operating system vendors provide this new functionality as standard would also be of practical use because Chinese citizens would not need to run special firewall-busting code (which the authorities might attempt to outlaw) but just off-the-shelf software (which they would necessarily tolerate).
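The TTL heuristic described above can be sketched in a few lines. This is a simplified illustration of the idea, not code from our paper; the function name and the tolerance value are assumptions for the sake of the example:

```python
def looks_injected(rst_ttl: int, baseline_ttl: int, tolerance: int = 3) -> bool:
    """Heuristic: a TCP reset whose remaining TTL differs markedly from the
    TTL seen on genuine data packets from the same host probably came from
    an in-path injector (e.g. a censorship device) rather than the far end,
    because it has traversed a different number of hops."""
    return abs(rst_ttl - baseline_ttl) > tolerance

# Example: data packets from the server arrive with TTL 52; a burst of
# resets arrives with TTL 58, implying fewer hops traversed -- suspicious.
assert looks_injected(58, 52)
assert not looks_injected(51, 52)
```

In practice the baseline would be learned from the TTLs of the data packets on the connection itself, and resets failing the consistency check would simply be dropped.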

There’s a little more to this story (but not much) and all is revealed in our academic paper (Clayton, Murdoch, Watson) which will be presented at the 6th Workshop on Privacy Enhancing Technologies being held here in Cambridge this week.

NB: There’s also rather more to censorship in China than just the “Great Firewall” keyword detecting system — some sites are blocked unconditionally, and it is necessary to use other techniques, such as proxies, to deal with that. However, these static blocks are far more expensive for the Chinese Government to maintain, and are inherently more fragile and less adaptive to change as content moves around. So there remains real value in exposing the inadequacy of the generic system.

The bottom line, though, is that a great deal of the effectiveness of the Great Chinese Firewall depends on systems agreeing that it should work … wasn’t there once a story about the Emperor’s New Clothes?

Censoring science

I’ve written a rebuttal in today’s Guardian to an article that appeared last week by Martin Rees, the President of the Royal Society. Martin argued that science should be subjected to more surveillance and control in case terrorists do bad things with it.

Those of us who work with cryptography and computer security have been subjected to a lot of attempts by governments to restrict what we do and publish. It’s a long-running debate: the first book written on cryptology in English, by Bishop John Wilkins in 1641, remarked that ‘If all those useful Inventions that are liable to abuse, should therefore be concealed, there is not any Art or Science which might be lawfully profest’. (John, like Martin, was Master of Trinity in his day.)

In 2001–2, the government put an export control act through Parliament which, in its original form, would have required scientists working on subjects with possible military applications (that is, most subjects) to get export licences before talking to foreigners about our work. FIPR colleagues and I opposed this; we organised Universities UK, the AUT, the Royal Society, the Conservatives and the Liberals to bring in an amendment in the Lords creating a research exemption for scientists. We mustn’t lose that. If scientists end up labouring under the same bureaucratic controls as companies that sell guns, then both science and nonproliferation will be seriously weakened.

Some people love to worry: Martin wrote a whole book wondering about how the human race will end. But maybe we should rather worry about something a bit closer to hand — how our civilisation will end. If a society turns inwards and builds walls to keep the barbarians out, then competition abates, momentum gets lost, confidence seeps away, and eventually the barbarians win. Imperial Rome, Ming Dynasty China, … ?

Chip and skim 2

The 12:30 ITN news on ITV1 today featured a segment (video) on Chip and PIN, and should also be shown at 19:00 and 22:30. It included an interview with Ross Anderson and some shots of me presenting our Chip and PIN interceptor. The demonstration was similar to the one shown on German TV but this time we went all the way, borrowing a magstripe writer and producing a fake card. This was used by the reporter to successfully withdraw money from an ATM (from his own account).

More details on how the device actually works are on our interceptor page. The key vulnerabilities present in the UK Chip and PIN cards we have tested, which the interceptor relies on, are:

  • The entered PIN is sent from the terminal to the card in unencrypted form
  • It is still possible to use magstripe-only cards to withdraw cash, with the same PIN used in shops
  • All the details necessary to create a valid magstripe are also present on the chip
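To illustrate the first point, an interceptor that can see the terminal-to-card traffic can read the PIN directly from the VERIFY command data. EMV plaintext offline PIN verification uses an ISO 9564 format-2 PIN block; the sketch below decodes one. The byte layout shown is the standard format-2 structure, but real terminal traffic should be checked against the EMV specifications rather than taken from this example:

```python
def decode_format2_pin_block(block: bytes) -> str:
    """Decode an ISO 9564 format-2 PIN block: first nibble 0x2, second
    nibble the PIN length, then the PIN digits, padded with 0xF nibbles
    to a total of 8 bytes."""
    if len(block) != 8 or block[0] >> 4 != 0x2:
        raise ValueError("not a format-2 PIN block")
    pin_len = block[0] & 0x0F
    nibbles = []
    for byte in block[1:]:
        nibbles.append(byte >> 4)
        nibbles.append(byte & 0x0F)
    return "".join(str(n) for n in nibbles[:pin_len])

# A PIN of 1234 is carried as 24 12 34 FF FF FF FF FF
assert decode_format2_pin_block(bytes.fromhex("241234FFFFFFFFFF")) == "1234"
```

Since the block travels to the card unencrypted, no cryptography is needed to recover the PIN; the interceptor only has to record the bytes.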

This means that a crook could insert a miniaturised version of the interceptor into the card slot of a Chip and PIN terminal, without interfering with the tamper detection. The details it collects include the PIN and enough information to create a valid magstripe. The fake card can then be used in ATMs that are willing to accept cards which, from their perspective, have a damaged chip — known as “fallback”. Some ATMs, particularly ones abroad, might not be able to read the chip at all.

The fact that the chip also includes the magstripe details is not strictly necessary, since a skimmer could also read these, but the design of some Chip and PIN terminals, which only cover the chip, makes this difficult. One of the complaints against the terminals used in the Shell fraud was that they make it impossible to read the chip without reading the magstripe too. This led to suggestions that customers should not use such terminals, or even that they wipe their card’s magstripe to prevent skimmers from reading it.

While it is possible that the Shell fraudsters did read the magstripe, wiping it is no defence against them reading the communication between terminal and chip, which includes all the needed details. Even the CVV1, the code used to verify that a magstripe is valid, is on the chip (but not the CVV2, the three-digit code printed on the back and used in e-commerce). This was presumably a backwards-compatibility measure, as was magstripe fallback. As countless examples have shown, such features are frequently the source of security flaws.

The mythical tamper-proof PIN pad?

As reported in many places (BBC News and The Register amongst others), Shell have stopped accepting Chip and PIN transactions at all 600 of their directly owned petrol stations in the UK. It is reported that eight arrests have been made, but only a few details about the modus operandi of the fraudsters have reached the media.

Most reports contain a quote from Sandra Quinn, of APACS:

They have used an old style skimming device. They are skimming the card, copying the magnetic details – there is no new fraud here. They have managed to tamper with the PIN pads. These pads are supposed to be tamper resistant, they are supposed to shut down, and so that has obviously failed.

It is not clear from the information that has been released so far whether the “magnetic details” were obtained by the attackers through reading the magnetic stripe, or by intercepting the communication between the card and the terminal. Shell-owned petrol stations seem to use the Smart 5000 PIN pad, produced by Trintech. These devices are hybrid readers: it is impossible to insert a card (for a Chip and PIN transaction) without the magnetic stripe also passing through a reader. With this design, there seem to be two possible methods of attack.

  1. A hardware attack. Given the statement that “[the attackers] have managed to tamper with the PIN pads”, perhaps the only technical element of the fraud was the dismantling of the pads in such a way that the output of the magnetic card reader (or the chip reader) could be relayed to the bad guys by some added internal hardware. Defeating the tamper-resistance in this way might also have allowed the output from the keypad to be read, providing the fraudsters with both the magnetic stripe details and a corresponding PIN. It seems fairly unlikely that any “skimming” device could have been attached externally without arousing the suspicion of consumers; the curved design of the card receptacle, although looking ‘suspicious’ in itself, does not lend itself to the easy attachment of another device.
  2. A software-only attack. The PIN pads used by Shell run the Linux kernel, and so maybe an attacker with a little technical savvy could have replaced the firmware with a version that relays the output of the magstripe reader and PIN pad to the bad guys. The terminals can be remotely managed — a successful attack on the remote management might have allowed all the terminals to be subverted in one go.

The reaction to the fraud (the suspension of Chip and PIN transactions in all 600 stations) is interesting; it suggests that either Shell cannot tell remotely which terminals have been compromised, or perhaps that every terminal was compromised. The former case suggests a “hardware attack”; the latter a (perhaps remote) “software attack”.

Even if the only defeat of the tamper resistance was the addition of some hardware to “skim” the magstripe of all inserted cards, corresponding PINs could have been obtained from, for example, CCTV footage.

Attacks like this look set to continue, given the difficulty of enabling consumers to check the authenticity of the terminals into which they insert their cards (and type their PINs). Even the mythical tamper-proof terminal could be replaced with an exact replica, and card details elicited through a relay attack. Members of the Security Group have been commenting on these risks for some time, but the comments have sometimes fallen on deaf ears.

The Internet and Elections: the 2006 Presidential Election in Belarus

On Thursday, the OpenNet Initiative released their report, to which I contributed, studying Internet censorship in Belarus during the 2006 presidential election there. It has even managed a brief mention in the New York Times.

In summary, we did find suspicious behaviour, particularly in the domain name system (DNS), the area I mainly explored, but no proof of outright filtering. It is rarely advisable to attribute to malice what can just as easily be explained by incompetence, so it is difficult to draw conclusions about what actually happened solely from the technical evidence. However, regardless of whether this was the first instance the ONI has seen of a concerted effort to hide state censorship, or simply an unfortunate coincidence of network problems, it is clear that existing tools for Internet monitoring are not adequate for distinguishing between these cases.

Simply observing that a site is inaccessible from within the country being studied is not enough evidence to demonstrate censorship, because it is also possible that the server or its network connection is down. For this reason, the ONI simultaneously checks from an unrestricted Internet connection. If the site is inaccessible from both connections, it is treated as being down. Censorship is only attributed if the site can be reliably accessed from the unrestricted connection, but not by the in-country testers. This approach has been very successful at analysing previously studied censorship regimes but could not positively identify censorship in Belarus. Here sites were inaccessible (often intermittently) from all Internet connections tried.
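The decision procedure described above can be summarised as a small classifier. This is a deliberate simplification of the ONI methodology, with made-up function and label names; the Belarus difficulty is that intermittent failures from every vantage point fall into the “down” bucket even when interference is suspected:

```python
def classify(in_country_ok: bool, control_ok: bool) -> str:
    """Classify one site test using two vantage points: a volunteer tester
    inside the country under study, and an unrestricted control connection
    outside it."""
    if control_ok and not in_country_ok:
        return "censored"      # reachable outside, blocked inside
    if not control_ok and not in_country_ok:
        return "down"          # unreachable everywhere: server/network fault
    if in_country_ok and not control_ok:
        return "inconclusive"  # odd case: reachable inside only
    return "accessible"

assert classify(in_country_ok=False, control_ok=True) == "censored"
assert classify(in_country_ok=False, control_ok=False) == "down"
```

A single boolean per vantage point is clearly too coarse; distinguishing a DoS attack from an ordinary outage needs the packet-level evidence discussed below.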

Ordinarily this result would be assumed to stem simply from network or configuration errors; however, the operators of these sites claimed the faults were caused by denial of service (DoS) attacks, hacking attempts or other government-orchestrated efforts. Because many of the sites or their domain names were hosted in Belarus, and given the state stranglehold on communication infrastructure, these claims were plausible, but generating evidence is difficult. On the client side, the coarse results available from the current ONI testing software are insufficient to combat the subtlety of the alleged attacks.

What is needed is more intelligent software, which tries to establish, at the packet level, exactly why a particular connection fails. Network debugging tools exist, but are typically designed for experts, whereas in the anti-censorship scenario the volunteers in the country being studied should not need to care about these details. Instead the software should perform basic analysis before securely sending the low-level diagnostic information back to a central location for further study.

There is also a place for improved software at the server side. In response to reports of DoS and hacking attacks we requested logs from the administrators of the sites in question to substantiate the allegations, but none were forthcoming. A likely and understandable reason is that the operators did not want to risk the privacy of their visitors by releasing such sensitive information. Network diagnostic applications on the server could be adapted to generate evidence of attacks, while protecting the identity of users. Ideally the software would also resist fabrication of evidence, but this might be infeasible to do robustly.

As the relevance of the Internet to politics grows, election monitoring will need to adapt accordingly. This brings new challenges so both the procedures and tools used must change. Whether Belarus was the first example of indirect state censorship seen by the ONI is unclear, but in either case I suspect it will not be the last.