Category Archives: Security economics

Social-science angles of security

A dubious article for a dubious journal

This morning I received a request to review a manuscript for the “Journal of Internet and Information Systems”. That’s standard for academics — you regularly get requests to do some work for the community for free!

However this was a little out of the ordinary in that the title of the manuscript was “THE ASSESSING CYBER CRIME AND IT IMPACT ON INFORMATION TECHNOLOGY IN NIGERIA” which is not, I feel, particularly grammatical English. I’d expect an editor to have done something about that before I was sent the manuscript…

I stared hard at the email headers (after all I’d just been sent some .docx files out of the blue) and it seems that the Journals Review Department of academicjournals.org uses Microsoft’s platform for their email (so no smoking gun from a spear-phishing point of view). So I took some appropriate precautions and opened the manuscript file.

It was dreadful … and read like it had been copied from somewhere else and patched together — indeed one page appeared twice! However, closer examination suggested it had been scanned rather than copy-typed.

For example:

The primary maturation of malicious agents attacking information system has changed over time from pride and prestige to financial again.

Which, as some searches will show you, comes from page 22 of Policing Cyber Crime, written by Petter Gottschalk in 2010 — a book I haven’t read, so I’ve no idea how good it is. Clearly “maturation” should be “motivation”, “system” should be “systems” and “again” should be “gain”.

Much of the rest of the material (I didn’t spend a long time on it) was from the same source. Since the book is widely available for download in PDF format (though I do wonder how many versions were authorised), it’s pretty odd to have scanned it.

I then looked harder at the Journal itself — which is one of a group of 107 open-access journals. According to this report, at one time they misleadingly indicated an association with Elsevier, although they didn’t do that in the email they sent me.

The journals appear on “Beall’s list”: a compendium of questionable scholarly open-access publishers and journals. That is, publishing your article in one of these venues is likely to make your CV look worse rather than better.

In traditional academic publishing the author gets their paper published for free, while libraries pay (quite substantial amounts) to receive the journal; library users can then read it for free, but the article may not be available to anyone else. The business model of “open-access” is that the author pays to have their paper published, and the article is then freely available to everyone. There is now much pressure to ensure that academic work is widely available, so open-access is very much in vogue.

There are lots of entirely legitimate open-access journals with exceedingly high standards — but also some very dubious journals which are perceived as accepting almost anything and just collecting the money to keep the publisher in the style to which they have become accustomed (as an indication of the money involved, the fee charged by the Journal of Internet and Information Systems is $550).

I sent back an email to the Journal saying “Even a journal with your reputation should not accept this item”.

What does puzzle me is why anyone would submit a plagiarised article to an open-access journal with a poor reputation. Paying money to get your ripped-off material published in a dubious journal doesn’t seem to be a good tactic for anyone. Perhaps it’s just that the journal wants to list me (enrolling my reputation) as one of their reviewers? Or perhaps I was spear-phished after all? Time will tell!

Can we have medical privacy, cloud computing and genomics all at the same time?

Today sees the publication of a report I helped to write for the Nuffield Council on Bioethics on what happens to medical ethics in a world of cloud-based medical records and pervasive genomics.

Now that the information we give our doctors in private to help them treat us is collected and treated as an industrial raw material, there has been scandal after scandal. From failures of anonymisation through unethical sales to the care.data catastrophe, things just seem to get worse. Where is it all going, and what must a medical data user do to behave ethically?

We put forward four principles. First, respect persons; do not treat their confidential data as if it were coal or bauxite. Second, respect established human-rights and data-protection law, rather than trying to find ways round it. Third, consult people who’ll be affected or who have morally relevant interests. And fourth, tell them what you’ve done – including errors and security breaches.

“The collection, linking and use of data in biomedical research and health care: ethical issues” took over a year to write. Our working group included people from the medical profession, academia, insurers and drug companies. We had lots of arguments. But it taught us a lot, and we hope it will lead to a more informed debate on some very important issues. And since medicine is the canary in the mine, we hope that the privacy lessons can be of value elsewhere – from consumer data to law enforcement and human rights.

Financial Cryptography 2015

I will be trying to liveblog Financial Cryptography 2015.

The opening keynote was by Gavin Andresen, chief scientist of the Bitcoin Foundation, and his title was “What Satoshi didn’t know.” The main unknown six years ago when bitcoin launched was whether it would bootstrap; Satoshi thought it might be used as a spam filter or a practical hashcash. In reality the bootstrap was someone buying a couple of pizzas for 10,000 bitcoins. Another unknown when Gavin got involved in 2010 was whether it was legal; if you’d asked the SEC then, they might have classified it as a Ponzi scheme, but now their alerts are about bitcoin being used in Ponzi schemes. The third thing was how annoying people can be on the Internet; people will abuse your system for fun if it’s popular. An example was penny flooding, where you send coins back and forth between your sybils all day long. Gavin invented “proof of stake”; in its early form it meant prioritising payers who turn over coins less frequently. The idea was that scarcity plus utility equals value; in addition to the bitcoins themselves, another scarce resource emerges: the old, unspent transaction outputs (UTXOs). Perhaps these could be used for further DoS-attack prevention or as a pseudonymous identity anchor.
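To see why prioritising coins that turn over less frequently frustrates penny flooding, here is a rough Python sketch of the “coin age” priority heuristic that early Bitcoin clients used; the formula’s shape and the free-relay threshold are quoted from memory, so treat the details as illustrative rather than as the deployed constants.

```python
# Back-of-the-envelope sketch of early Bitcoin's "coin age" priority
# heuristic, which rewards coins that sit still and penalises churn.
# The constant below is the old free-relay threshold as I remember it;
# both it and the formula details are illustrative, not authoritative.

def priority(inputs, tx_size_bytes):
    """inputs: list of (value_in_satoshis, age_in_blocks) pairs."""
    coin_age = sum(value * age for value, age in inputs)
    return coin_age / tx_size_bytes

# ~1 BTC (1e8 satoshis) aged one day (~144 blocks) in a 250-byte tx
HIGH_PRIORITY = 57_600_000

# A saver spending a coin left alone for a day clears the bar...
print(priority([(100_000_000, 144)], 250) >= HIGH_PRIORITY)  # True
# ...while a penny-flooder churning the same coin every block does not.
print(priority([(100_000_000, 1)], 250) >= HIGH_PRIORITY)    # False
```

A sybil bouncing coins back and forth resets their age on every hop, so the flood drops to the back of the queue while ordinary payments sail past.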

It’s not even clear that Satoshi is or was a cryptographer; he used only ECC / ECDSA, hashes and SSL (naively), he didn’t bother compressing public keys, and comments suggest he wasn’t up on the latest crypto research. In addition, the rules for letting transactions into the chain are simple; there’s no subtlety about transaction meaning, which is mixed up with validation and transaction fees; a programming-languages guru would have done things differently. Bitcoin now allows hashes of redemption scripts, so that the script doesn’t have to be disclosed upfront. Another recent innovation is using invertible Bloom lookup tables (IBLTs) to transmit expected differences rather than transmitting all transactions over the network twice. Also, since 2009 we have had FHE, NIZKPs and SNARKs from the crypto research folks; the things on which we still need more research include pseudonymous identity, practical privacy, mining scalability, probabilistic transaction checking, and whether we can use streaming algorithms. In questions, Gavin remarked that regulators rather like the idea that there is a public record of all transactions; they might be more negative if it were completely anonymous. In the future, only recent transactions will be universally available; if you want the old stuff you’ll have to store it. Upgrading is hard though; Gavin’s big task this year is to increase the block size. Getting everyone in the world to update their software at once is not trivial. People say: “Why do you have to fix the software? Isn’t bitcoin done?”
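To make the IBLT idea concrete, here is a minimal Python sketch (my own illustration, not Bitcoin’s actual wire format; the parameters K and CELLS and all names are made up for the example). Each side folds its transaction IDs into a small fixed-size table; subtracting one table from the other cancels everything both sides know, leaving only the symmetric difference, which is recovered by “peeling” cells that hold exactly one item.

```python
# Minimal IBLT sketch: recover a small set difference from a fixed-size
# table, so two nodes need not send each other full transaction lists.
import hashlib
import os

K = 3        # hash functions per item (illustrative choice)
CELLS = 64   # table size; must exceed the expected difference by a margin

def positions(key: bytes):
    # Derive K cell indices for a key.
    return [int.from_bytes(hashlib.sha256(bytes([i]) + key).digest()[:4],
                           'big') % CELLS for i in range(K)]

def checksum(key: bytes) -> int:
    # Per-key fingerprint used to verify a cell really holds one item.
    return int.from_bytes(hashlib.sha256(b'chk' + key).digest()[:4], 'big')

class IBLT:
    def __init__(self):
        self.count = [0] * CELLS   # net insertions minus deletions
        self.keysum = [0] * CELLS  # XOR of keys hashed into the cell
        self.chksum = [0] * CELLS  # XOR of key fingerprints

    def _update(self, key: bytes, sign: int):
        k = int.from_bytes(key, 'big')
        for i in positions(key):
            self.count[i] += sign
            self.keysum[i] ^= k
            self.chksum[i] ^= checksum(key)

    def insert(self, key): self._update(key, +1)
    def delete(self, key): self._update(key, -1)

    def peel(self, keylen=32):
        # Repeatedly decode "pure" cells (exactly one item) and remove
        # that item everywhere, which may make further cells pure.
        out, progress = [], True
        while progress:
            progress = False
            for i in range(CELLS):
                if abs(self.count[i]) == 1:
                    key = self.keysum[i].to_bytes(keylen, 'big')
                    if self.chksum[i] == checksum(key):
                        out.append(key)  # count's sign says which side had it
                        self._update(key, -self.count[i])
                        progress = True
        return out

# Toy usage: two nodes share 1,000 transaction IDs and differ in five.
shared = [os.urandom(32) for _ in range(1000)]
mine = [os.urandom(32) for _ in range(2)]
yours = [os.urandom(32) for _ in range(3)]
t = IBLT()
for k in shared + mine:
    t.insert(k)       # my side of the reconciliation
for k in shared + yours:
    t.delete(k)       # subtract your side; shared items cancel exactly
print(len(t.peel()))  # -> 5, the symmetric difference, w.h.p.
```

The point of the trick is that the table’s size depends only on the expected difference, not on the sets themselves: a thousand shared transactions cancel out of a 64-cell table, so nothing crosses the network twice.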

I’ll try to blog the refereed talks in comments to this post.

Launch of security economics MOOC

TU Delft has just launched a massive open online course on security economics to which three current group members (Sophie van der Zee, David Modic and I) have contributed lectures, along with one alumnus (Tyler Moore). Michel van Eeten of Delft is running the course (Delft does MOOCs while Cambridge doesn’t yet), and there are also talks from Rainer Boehme. This was pre-announced here by Tyler in November.

The videos will be available for free in April; if you want to take the course now, I’m afraid it costs $250. The deal is that EdX paid for the production and will sell it as a professional course to security managers in industry and government; once that’s happened we’ll make it free to all. This is the same basic approach as with my book: rope in a commercial publisher to help produce first-class content that then becomes free to all. But if your employer is thinking of giving you some security education, you could do a lot worse than to support the project and enrol here.

Spooks behaving badly

Like many in the tech world, I was appalled to see how the security and intelligence agencies’ spin doctors managed to blame Facebook for Lee Rigby’s murder. It may have been a convenient way of diverting attention from the many failings of MI5, MI6 and GCHQ documented by the Intelligence and Security Committee in its report yesterday, but it will be seriously counterproductive. So I wrote an op-ed in the Guardian.

Britain spends less on fighting online crime than Facebook does, and only about a fifth of what either Google or Microsoft spends (declaration of interest: I spent three months working for Google on sabbatical in 2011, working with the click fraud team and on the mobile wallet). The spooks’ approach reminds me of how Pfizer dealt with Viagra spam, which was to hire lawyers to write angry letters to Google. If they’d hired a geek who could have talked to the abuse teams constructively, they’d have achieved an awful lot more.

The likely outcome of GCHQ’s posturing and MI5’s blame avoidance will be to drive tech companies to route all the agencies’ requests past their lawyers. This will lead to huge delays. GCHQ already complained in the Telegraph that they still haven’t got all the murderers’ Facebook traffic; this is no doubt due to the fact that the Department of Justice is sitting on a backlog of requests for mutual legal assistance, the channel through which such requests must flow. Congress won’t give the Department enough money for this, and is content to play chicken with the Obama administration over the issue. If GCHQ really cares, then it could always pay the Department of Justice to clear the backlog. The fact that all the affected government departments and agencies use this issue for posturing, rather than tackling the real problems, should tell you something.

WEIS 2015 call for papers

The 2015 Workshop on the Economics of Information Security will be held at Delft, the Netherlands, on 22-23 June 2015. Paper submissions are due by 27 February 2015. Selected papers will be invited for publication in a special issue of the Journal of Cybersecurity, a new interdisciplinary open-access journal published by Oxford University Press.

We hope to see lots of you in Delft!

Privacy with technology: where do we go from here?

As part of the Royal Society Summer Science Exhibition 2014, I spoke at the panel session “Privacy with technology: where do we go from here?”, along with Ross Anderson and Bashar Nuseibeh, with Jon Crowcroft as chair.

The audio recording is available and some notes from the session are below.

The session started with brief presentations from each of the panel members. Ross spoke on the economics of surveillance and in particular network effects, the topic of his paper at WEIS 2014.

Bashar discussed the difficulties of requirements engineering, as eloquently described by Billy Connolly. These challenges are particularly acute when it comes to designing for privacy requirements, especially for wearable devices with their limited ability to communicate with users.

I described issues around surveillance on the Internet, whether by governments targeting human rights workers or advertisers targeting pregnant customers. I discussed how anonymous communication tools, such as Tor, can help defend against such surveillance.


EMV: Why Payment Systems Fail

In the latest edition of Communications of the ACM, Ross Anderson and I have an article in the Inside Risks column: “EMV: Why Payment Systems Fail” (DOI 10.1145/2602321).

Now that US banks are deploying credit and debit cards with chips supporting the EMV protocol, our article explores what lessons the US should learn from the UK experience of having chip cards since 2006. We address questions like whether EMV would have prevented the Target data breach (it wouldn’t have), whether Chip and PIN is safer for customers than Chip and Signature (it isn’t), whether EMV cards can be cloned (in some cases, they can) and whether EMV will protect against online fraud (it won’t).

While the EMV specification is the same across the world, the way each country uses it varies substantially. Even individual banks within a country may make different implementation choices which have an impact on security. The US will prove to be an especially interesting case study because some banks will be choosing Chip and PIN (as the UK has done) while others will choose Chip and Signature (as Singapore did). The US will thus act as a natural experiment addressing the question of whether Chip and PIN or Chip and Signature is better – and from whose perspective.

The US is also distinctive in that the major tussle over payment card security is over the “interchange” fees paid by merchants to the banks which issue the cards. Interchange fees are about an order of magnitude higher than losses due to fraud, so while security is one consideration in choosing different sets of EMV features, the question of who pays how much in fees is a more important factor (even if the decision is later claimed to be justified by security). We’re already seeing results of this fight in the courts and through legislation.

EMV is coming to the US, so it is important that banks, customers, merchants and regulators know the likely consequences and how to manage the risks, learning from the lessons of the UK and elsewhere. Discussion of these and further issues can be found in our article.

Don’t shoot the demonstrators

Jim Graves, Alessandro Acquisti and I are giving a paper today at WEIS on Experimental Measurement of Attitudes Regarding Cybercrime, which we hope might nudge courts towards more rational sentencing for cybercrime.

At present, sentencing can seem somewhere between random and vindictive. People who commit a fraud online can get off with a tenth of what they’d get if they’d swindled the same amount of money face-to-face; yet people who indulge in political activism – as the Anonymous crowd did – can get hammered with much harsher sentences than they’d get for a comparable protest on the street.

Is this just the behaviour of courts and prosecutors, or does it reflect public attitudes?

We did a number of surveys of US residents and found convincing evidence that it’s the former. Americans want fraudsters to be punished on two criteria: for the value of the damage they do, with steadily tougher punishments for more damage, and for their motivation, where they want people who hack for profit to be punished more harshly than people who hack for political protest.

So Americans, thankfully, are rational. Let’s hope that legislators and prosecutors start listening to their voters.