Category Archives: Academic papers

Can we have medical privacy, cloud computing and genomics all at the same time?

Today sees the publication of a report I helped to write for the Nuffield Council on Bioethics on what happens to medical ethics in a world of cloud-based medical records and pervasive genomics.

The information we give to our doctors in confidence, so that they can treat us, is now collected and treated as an industrial raw material, and there has been scandal after scandal. From failures of anonymisation, through unethical sales, to the care.data catastrophe, things just seem to get worse. Where is it all going, and what must a medical data user do to behave ethically?

We put forward four principles. First, respect persons; do not treat their confidential data as if it were coal or bauxite. Second, respect established human-rights and data-protection law, rather than trying to find ways round it. Third, consult people who’ll be affected, or who have morally relevant interests. And fourth, tell them what you’ve done – including errors and security breaches.

“The collection, linking and use of data in biomedical research and health care: ethical issues” took over a year to write. Our working group came from the medical profession, academia, insurers and drug companies. We had lots of arguments. But it taught us a lot, and we hope it will lead to a more informed debate on some very important issues. And since medicine is the canary in the mine, we hope that the privacy lessons will be of value elsewhere – from consumer data to law enforcement and human rights.

Financial Cryptography 2015

I will be trying to liveblog Financial Cryptography 2015.

The opening keynote was by Gavin Andresen, chief scientist of the Bitcoin Foundation; his title was “What Satoshi didn’t know.” The main unknown six years ago, when bitcoin launched, was whether it would bootstrap; Satoshi thought it might be used as a spam filter or a practical hashcash. In reality, the bootstrap event was someone buying a couple of pizzas for 10,000 bitcoins. Another unknown when Gavin got involved in 2010 was whether it was legal; if you’d asked the SEC then, they might have classified it as a Ponzi scheme, but now their alerts are about bitcoin being used in Ponzi schemes. The third unknown was how annoying people can be on the Internet; if your system is popular, people will abuse it for fun. An example was penny flooding, where you send coins back and forth between your sybils all day long. Gavin invented “proof of stake”; in its early form it meant prioritising payers who turn over coins less frequently. The idea was that scarcity plus utility equals value; in addition to the bitcoins themselves, another scarce resource emerges: old, unspent transaction outputs (UTXOs). Perhaps these could be used for further DoS attack prevention, or as a pseudonymous identity anchor.

It’s not even clear that Satoshi is or was a cryptographer; he used only ECC/ECDSA, hashes and SSL (naively), he didn’t bother compressing public keys, and comments suggest he wasn’t up on the latest crypto research. In addition, the rules for letting transactions into the chain are simple; there’s no subtlety about transaction meaning, which is mixed up with validation and transaction fees; a programming-languages guru would have done things differently. Bitcoin now allows hashes of redemption scripts, so that the script doesn’t have to be disclosed upfront. Another recent innovation is using invertible Bloom lookup tables (IBLTs) to transmit the expected differences between two nodes’ transaction pools, rather than transmitting all transactions over the network twice. Also, since 2009 we have had FHE, NIZKPs and SNARKs from the crypto research folks; the things on which we still need more research include pseudonymous identity, practical privacy, mining scalability, probabilistic transaction checking, and whether we can use streaming algorithms. In questions, Gavin remarked that regulators rather like the idea that there is a public record of all transactions; they might be more negative if it were completely anonymous. In the future, only recent transactions will be universally available; if you want the old stuff you’ll have to store it. Upgrading is hard, though; Gavin’s big task this year is to increase the block size, and getting everyone in the world to update their software at once is not trivial. People say: “Why do you have to fix the software? Isn’t bitcoin done?”
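To make the IBLT trick concrete, here is a toy Python sketch of set reconciliation with an invertible Bloom lookup table. This is my own illustration, not code from the talk: keys are stand-in 64-bit integers rather than real transaction IDs, and the table sizes are arbitrary.

    import hashlib

    def h(key: int, salt: int) -> int:
        """Deterministic 64-bit hash of an integer key, varied by a salt."""
        data = salt.to_bytes(4, "big") + key.to_bytes(8, "big")
        return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

    class IBLT:
        """Invertible Bloom lookup table over integer keys (set reconciliation)."""

        def __init__(self, m=64, k=3):
            self.m, self.k = m, k
            self.count = [0] * m
            self.key_sum = [0] * m   # XOR of the keys mapped to each cell
            self.chk_sum = [0] * m   # XOR of key checksums, to recognise pure cells

        def _update(self, key, delta):
            chk = h(key, 0xBEEF)
            for i in {h(key, s) % self.m for s in range(self.k)}:
                self.count[i] += delta
                self.key_sum[i] ^= key
                self.chk_sum[i] ^= chk

        def insert(self, key):
            self._update(key, +1)

        def delete(self, key):
            self._update(key, -1)

        def decode(self):
            """Repeatedly peel 'pure' cells (count of +/-1, matching checksum).
            Succeeds with high probability when the symmetric difference is
            small relative to the table size m."""
            added, removed = set(), set()
            progress = True
            while progress:
                progress = False
                for i in range(self.m):
                    if self.count[i] in (1, -1):
                        key = self.key_sum[i]
                        if self.chk_sum[i] != h(key, 0xBEEF):
                            continue  # several keys share this cell; not pure
                        (added if self.count[i] == 1 else removed).add(key)
                        self._update(key, -self.count[i])
                        progress = True
            return added, removed

    # The relaying node inserts the transactions it is announcing; the
    # receiving node deletes the ones it already has, then decodes only
    # the difference instead of receiving every transaction twice.
    table = IBLT()
    for tx in [1, 2, 3, 4, 5, 6, 7, 8]:
        table.insert(tx)
    for tx in [3, 4, 5, 6, 7, 8, 9, 10]:
        table.delete(tx)
    print(table.decode())  # -> ({1, 2}, {9, 10})

The point of the structure is that its size depends on the expected difference between the two sets, not on the size of the sets themselves.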

I’ll try to blog the refereed talks in comments to this post.

Technology assisted deception detection (HICSS symposium)

The annual symposium “Credibility Assessment and Information Quality in Government and Business” was held this year on the 5th and 6th of January as part of the Hawaii International Conference on System Sciences (HICSS). The symposium on technology-assisted deception detection was organised by Matthew Jensen, Thomas Meservy, Judee Burgoon and Jay Nunamaker. During this symposium we presented our paper “To freeze or not to freeze”, which was posted on this blog last week, together with a second paper on “mining bodily cues to deception” by Dr Ronald Poppe. The talks were of very high quality, and researchers described a wide variety of techniques and methods to detect deceit, including mouse clicks to detect online fraud, language use on social media and in fraudulent academic papers, and a very impressive avatar that can screen passengers going through airport border control. I have summarised the presentations for you; enjoy!

Monday 5 January 2015, 09:00–09:05

Introduction Symposium by Judee Burgoon

This symposium is organised annually during the HICSS conference and functions as a platform for presenting research on the use of technology to detect deceit. Burgoon started off by describing the different types of research conducted within the Center for the Management of Information (CMI), which she directs, and within the National Center for Border Security and Immigration. Within these centers, members aim to detect deception on a multi-modal scale using different types of technology and sensors. Their deception research includes physiological measures such as respiration and heart rate; kinesics (i.e., bodily movement); eye movements such as pupil dilation, saccades, fixation, gaze and blinking; and research on timing, which is of particular interest for online deception. Burgoon’s team is currently working on the development of an Avatar (DHS-sponsored): a system with different types of sensors that work together for screening purposes (e.g., border control; see the abstracts below for more information). The Avatar is currently being tested at Reagan Airport. Sensors include a force platform, Kinect, HD and thermal cameras, oculometric cameras for eye tracking, and a microphone for Natural Language Processing (NLP) purposes. Burgoon also works with the European border management organisation Frontex.

To freeze or not to freeze

We think we may have discovered a better polygraph.

Telling truth from lies is an ancient problem; some psychologists believe that it helped drive the evolution of intelligence, as hominids who were better at cheating, or detecting cheating by others, left more offspring. Yet despite thousands of years of practice, most people are pretty bad at lie detection, and can tell lies from truth only about 55% of the time – not much better than random.

Since the 1920s, law enforcement and intelligence agencies have used the polygraph, which measures the physiological stresses that result from anxiety. This is slightly better, but not much; a skilled examiner may be able to tell truth from lies 60% of the time. However, it is easy for an examiner with a preconceived view of the suspect’s innocence or guilt to use the polygraph as a prop, intimidating the suspect into providing supporting “evidence”. Other technologies, from EEG to fMRI, have been tried, and the best that can be said is that it’s a complicated subject. The last resort of the desperate or incompetent is torture, where the interviewee will tell the interviewer whatever he wants to hear in order to stop the pain. The recent Feinstein committee inquiry into the use of torture by the CIA found that it was not just a stain on America’s values but also ineffective.

Sophie van der Zee decided to see if datamining people’s body movements might help. She put 90 pairs of volunteers in motion capture suits and got them to interview each other; half the interviewees were told to lie. Her first analysis of the data was to see whether you could detect deception from mimicry (you can, but it’s not much better than the conventional polygraph) and to debug the technology.

After she joined us in Cambridge we had another look at the data, and tried analysing it with a number of techniques, some suggested by Ronald Poppe. We found that total body motion is a reliable indicator of guilt, working about 75% of the time. Put simply, guilty people fidget more; and this turns out to be fairly independent of cultural background, cognitive load and anxiety – the factors that confound most other deception detection technologies. We believe we can improve that to over 80% by analysing individual limb data, and also by using effective questioning techniques (as our method detects truth slightly more dependably than lies).
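To give a flavour of the statistic, here is a toy Python sketch of total body motion. The data layout and the decision threshold are illustrative assumptions on my part, not the actual pipeline or the figures from our paper.

    import numpy as np

    def total_body_motion(frames):
        """Sum of per-joint Euclidean displacements between consecutive frames.

        frames: motion-capture data of shape (n_frames, n_joints, 3),
        one 3-D position per joint per frame.
        """
        deltas = np.diff(frames, axis=0)  # frame-to-frame motion vectors
        return float(np.linalg.norm(deltas, axis=2).sum())

    def classify(frames, threshold):
        """Toy decision rule: guilty people fidget more, so high total motion
        is read as deceptive. A real system would learn the threshold from
        labelled interviews rather than hard-coding it."""
        return "deceptive" if total_body_motion(frames) > threshold else "truthful"

    # Synthetic example: 600 frames (e.g. 10 s at 60 Hz) of 17 joints.
    rng = np.random.default_rng(0)
    interview = np.cumsum(rng.normal(0.0, 0.01, size=(600, 17, 3)), axis=0)
    print(classify(interview, threshold=100.0))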

Our paper is appearing at HICSS, the traditional venue for deception-detection technology. Our task for 2015 will be to redevelop this for low-cost commodity hardware and test it in a variety of environments. Of course, a guilty man can always just freeze, but that will rather give the game away; we suspect it might be quite hard to fidget deliberately at exactly the same level as you do when you’re not feeling guilty. (See also the press coverage.)

Systemization of Pluggable Transports for Censorship Resistance

An increasing number of countries implement Internet censorship, at different levels and for a variety of reasons. Consequently, there is an ongoing arms race in which censorship resistance schemes (CRS) seek to give users unfettered access to Internet resources while censors come up with new ways to restrict access. In particular, the link between the censored client and the entry point to the CRS has been a censorship flash point, and consequently the focus of circumvention tools. To foster interoperability and speed up development, Tor introduced Pluggable Transports – a framework for flexibly implementing schemes that transform the traffic flow between the Tor client and the bridge so that a censor fails to block it. Dozens of tools and proposals for pluggable transports have emerged over the last few years, each addressing specific censorship scenarios. As a result, the area has become so complex that it is hard to discern the big picture.

Our recent report cuts through some of this complexity by presenting a model of censor capabilities, together with an evaluation stack that supports a layered approach to evaluating pluggable transports. We survey 34 existing pluggable transports and highlight that their designs are often inflexible, making it hard for them to share features and so provide broader coverage against censors. This evaluation has led us to a new design for pluggable transports – the Tweakable Transport: a tool for efficiently building and evaluating a wide range of pluggable transports, so as to increase the difficulty and cost of reliably censoring the communication channel.
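As a toy illustration of the simplest layer of such a scheme – not one of the 34 transports we survey – here is a “look-like-nothing” payload transform in Python. The chained-hash keystream is a stand-in for a real stream cipher, and a deployable transport would also need an obfuscated handshake, length and timing cover, and authentication.

    import os
    import hashlib

    def keystream(seed: bytes, n: int) -> bytes:
        """Expand a seed into n pseudorandom bytes by chaining SHA-256
        (illustrative only - use a real stream cipher in practice)."""
        out, block = b"", seed
        while len(out) < n:
            block = hashlib.sha256(block).digest()
            out += block
        return out[:n]

    def obfuscate(key: bytes, record: bytes) -> bytes:
        """Make a Tor record look like uniform random bytes on the wire,
        so simple pattern-matching DPI has nothing to match on."""
        nonce = os.urandom(16)
        ks = keystream(key + nonce, len(record))
        return nonce + bytes(a ^ b for a, b in zip(record, ks))

    def deobfuscate(key: bytes, wire: bytes) -> bytes:
        """Inverse transform, run by the bridge-side proxy."""
        nonce, body = wire[:16], wire[16:]
        ks = keystream(key + nonce, len(body))
        return bytes(a ^ b for a, b in zip(body, ks))

    key = b"shared secret established at handshake"
    wire = obfuscate(key, b"Tor cell bytes...")
    assert deobfuscate(key, wire) == b"Tor cell bytes..."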


Our Christmas message for troublemakers: how to do anonymity in the real world

On the 5th of December I gave a talk at a journalists’ conference on what tradecraft means in the post-Snowden world. How can a journalist, or for that matter an MP or an academic, protect a whistleblower from being identified even when MI5 and GCHQ start trying to figure out who in Whitehall you’ve been talking to? The video of my talk is now online here. There is also a TV interview I did later, which can be found here, while the other conference talks are here.

Enjoy!

Ross

Why password managers (sometimes) fail

We are asked to remember far too many passwords. This problem is most acute on the web, and thus, unsurprisingly, it is on the web that technical solutions have had the most success in replacing users’ ad hoc coping strategies. One of the longest-established and most widely adopted technical solutions is a password manager: software that remembers passwords and submits them on the user’s behalf. But this isn’t as straightforward as it sounds. In our recent work on bootstrapping adoption of the Pico system [1], we’ve come to appreciate just how hard life is for the developers and maintainers of password managers.

In a paper we are about to present at the Passwords 2014 conference in Trondheim, we introduce our proposal for Password Manager Friendly (PMF) semantics [2]. PMF semantics are designed to give developers and maintainers of password managers a bit of a break and, more importantly, to improve the user experience.
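As a hypothetical illustration of the problem PMF semantics address (and not the PMF proposal itself, for which see the paper), here is the sort of guesswork a password manager must resort to when a login form carries no machine-readable hints:

    def guess_login_fields(fields):
        """Heuristically pick the username and password inputs from a form.

        fields: list of (name, input_type) pairs scraped from the page.
        The field names matched below are common conventions, not any
        standard - which is precisely the gap that machine-readable
        form semantics would close."""
        password = next((name for name, kind in fields
                         if kind == "password"), None)
        username = next((name for name, kind in fields
                         if kind in ("text", "email")
                         and any(hint in name.lower()
                                 for hint in ("user", "email", "login"))), None)
        return username, password

    # Works on a conventional form...
    print(guess_login_fields([("login_email", "email"), ("pwd", "password")]))
    # ...but fails when a site is creative with its field names.
    print(guess_login_fields([("field_7", "text"), ("field_8", "password")]))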


WEIS 2015 call for papers

The 2015 Workshop on the Economics of Information Security (WEIS) will be held in Delft, the Netherlands, on 22–23 June 2015. Paper submissions are due by 27 February 2015. Selected papers will be invited for publication in a special issue of the Journal of Cybersecurity, a new interdisciplinary, open-access journal published by Oxford University Press.

We hope to see lots of you in Delft!

Pico part II: What’s wrong with QR code password replacement schemes, and how to fix them!

Users don’t want to authenticate; they want to do useful or enjoyable things like sending emails, ordering groceries or playing games. To alleviate the burden of having to type passwords, Pico and several other schemes, such as SQRL and tiQR, let the user simply scan a QR code; a cryptographic protocol then authenticates the user behind the scenes and initiates a session. But users, unless they are on the move, may prefer to run their email or web browsing sessions on their full-size computer rather than on their smartphone, whose user interface is relatively limited. So they don’t want an authenticated session between their smartphone and the website, but between their computer and the website, even if it’s the smartphone that scans the QR code.

In the original 2011 Pico paper (footnote 37), the website kept track of which “page impression” from a web browser was related to which Pico authentication by including a nonce in each login page’s QR code and having the Pico sign and return it as part of the authentication. Since then, within the Pico team, there has been much discussion of the so-called Page Impression Nonce or PIN, infamous both for the attacks it enables and for its unfortunate, overloaded acronym. While other schemes may have called it something different, or not called it anything at all, it was always present in one form or another, because they all used it to solve the same problem of linking browser sessions to authentications.

For example, in the SQRL system each QR code contains a URL, part of which is a random nonce (the PIN in this system). The SQRL app must sign and return this URL, thus associating the nonce with the app’s per-verifier public key. The web browser then starts its session by making another request which includes the URL (and thus the PIN) and gets back a session cookie.
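Schematically, the nonce-based flow looks something like the following Python sketch. The endpoint, function and variable names are ours, for illustration; this is not SQRL’s actual wire format.

    import secrets

    pending = {}    # PIN -> authenticated user's public key (or None)
    sessions = {}   # session cookie -> user

    def new_login_page():
        """Website: mint a fresh nonce and embed it in the login QR code."""
        pin = secrets.token_urlsafe(16)   # the Page Impression Nonce
        pending[pin] = None               # nobody has authenticated yet
        return f"https://bank.example/sqrl?nonce={pin}"

    def app_authenticates(pin, user_pubkey, signature_valid):
        """Phone app: scans the QR code and signs the URL (nonce included)
        with its per-site key, tying the nonce to the user."""
        if signature_valid and pin in pending:
            pending[pin] = user_pubkey

    def browser_redeems(pin):
        """Browser: makes a second request quoting the nonce, and receives
        a session cookie. Note that ANYONE who knows the PIN can make this
        request - which is exactly the problem described below."""
        user = pending.pop(pin, None)
        if user is None:
            return None
        cookie = secrets.token_urlsafe(32)
        sessions[cookie] = user
        return cookie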

So what’s the problem?

The problem with this kind of mechanism is that anyone else who learns the PIN can also make that second request, thus logging themselves in as the user who scanned the QR code. For example, a bad guy can obtain a QR code and its PIN from the login page of bank.com and display it somewhere, like the login page of randomgameforum.com, for a victim to scan. Now, assuming the victim had an account at bank.com, the attacker obtains a bank.com session that the victim unsuspectingly initiated with their smartphone.

Part of the problem is that QR codes are not human-readable. Some have suggested that a simple confirmation step (“Do you really want to login to bank.com?”) might prevent such attacks, but we decided this wasn’t really good enough from a security or a usability perspective. We don’t want users to have to read the confirmation dialog and press the OK button every time they authenticate, and realistically they won’t, especially if they never normally do anything other than press OK.

Moreover, the confirmation step doesn’t help at all when the relaying of the QR code is combined with traditional phishing techniques. Consider receiving this email:

From: security@bank.com
To: victim@example.com
Subject: Urgent: Account security threat
---
Dear Customer

<compelling phishing mumbo jumbo>

To keep your account secure, please scan this QR code:

<login QR code with PIN known by the sender>

Kind regards,

Account security department

and if you oblige:

Do you really want to login to bank.com?

Now the poor user thinks “Well yes, I do, that’s exactly what the account security team asked me to do” and even worse: “I’m definitely not being phished, I remember what those security people kept telling me about checking the address of the website before logging in”.

How to fix it

The solution we came up with is called session delegation. Instead of putting in each QR code a nonce which anyone can later trade in for an authenticated session, we have the website return a session delegation token to the Pico (not to the web browser) as part of the authentication protocol. The Pico may then delegate the session to the browser on the bigger computer by sending it this token via a secure channel. (For further details see section 4.1 of our “lousy phish” paper.) The price to pay for this strategy is that it requires a channel from the Pico to the browser, which is much harder to provide than the one in the opposite direction (the visual “QR code” channel).
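Continuing the earlier sketch, again with illustrative names rather than the real Pico protocol messages, the fix changes who receives the token:

    import secrets

    sessions = {}   # delegation token -> authenticated user

    def authenticate_pico(user_pubkey, signature_valid):
        """Website: at the end of the Pico authentication protocol, the
        delegation token is returned to the Pico itself. It never appears
        in the QR code, so relaying the QR code gains an attacker nothing."""
        if not signature_valid:
            return None
        token = secrets.token_urlsafe(32)
        sessions[token] = user_pubkey
        return token

    def delegate_to_browser(token, channel):
        """Pico: pass the session to the user's own browser over a secure
        Pico-to-browser channel (the options are discussed below)."""
        channel.send("https://bank.example/delegate?t=" + token)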

We made a prototype which used Bluetooth for the delegation channel but, because Bluetooth was sometimes difficult to set up and not universally available, we even thought about using an audio cable plugged into the microphone jack of the computer. However, we were still worried about the availability and usability of these hardware-based solutions. We did a lot of research into NAT and firewall traversal techniques (such as STUN and TURN) to see if we could use peer-to-peer IP connectivity, but this is not possible in all cases without a separate signalling channel. In our latest prototype we’re using a “rendezvous point”, which is a very simple relay server we’ve designed, running in the public Internet. The rendezvous point is the most universal and usable solution, but does come with some privacy concerns, namely that the untrusted rendezvous server gets to see the Pico/computer IP address pairs which are communicating. So we still allow privacy-conscious users to adopt less convenient alternatives if they’re willing to pay the price of setting up Bluetooth, connecting cables or changing their firewall/NAT settings, but we don’t impose that cost on everyone.

The drawback of this approach is that the user’s computer requires some Pico software to receive the delegation tokens, whether via the rendezvous point or some other channel. Having to install this software hurts the “deployability” of the system as a whole, and could render it completely useless in situations where installing new software is not possible. But another innovation, making the delegation token take the form of a URL, means there is always a last-resort fallback channel: manual transcription. If a Pico user can’t install the software on a particular computer, or doesn’t want to trust it, they can always retype the token URL. There are other security concerns with URLs that log your browser into someone else’s account, but you’ll have to read the lousy phish paper for a more detailed discussion of this topic.

There is clearly much interest in finding a replacement for passwords, and several schemes (such as US 8261089 B2, Snap2Pass, tiQR, US 20130219479 A1, QRAuth and SQRL) propose using QR codes. But on close inspection, all of the above use a page impression nonce, making them vulnerable to session hijacking attacks. We rejected the idea that this could be solved simply by getting the user to carry out more checks, and instead propose an architectural fix which provides a more secure basis for the design of Pico.

For more information about Pico, have a look at our website, sign up to our mailing list and stay tuned for more Pico-related posts on Light Blue Touchpaper in the near future.

Pico part I: Russian hackers stole a billion passwords? True or not, with Pico you wouldn’t worry about it.

In last week’s news (August 2014) we heard that Russian hackers stole 1.2 billion passwords. Even though such claims sound somewhat exaggerated, and are not correlated with a proportional amount of fraudulent access to user accounts, password compromise is always a pain for the web sites involved – more so when it causes direct reputation damage by having the company name plastered on the front page of the Financial Times, as happened to eBay on 22 May 2014 after they lost the passwords of over 100 million users to cybercriminals. Shortly before that, in April 2014, it was the Heartbleed bug that forced password resets on allegedly 66% of all websites. And last year, in November 2013, it was Adobe who lost the passwords of 150 million users. Keep going back and you’ll find many more incidents. With alarming frequency we hear of some major security exploit that compromises an enormous number of passwords and embarrasses web sites into asking their users to pick a new password.

Note the irony: despite complaints from some arrogant security experts that users are too lazy or too dumb to pick strong passwords, when such attacks take place, all users must change their passwords, not just those with weak ones. Even the diligent users who went to the trouble of following complicated instructions and memorising “avKpt9cpGwdp”, not to mention typing it every day, are punished for a sin they didn’t commit (the insecurity of the web site), just as much as the allegedly lazy ones who picked “p@ssw0rd” or “1234”. This is fundamentally unfair.

My team has been working on Pico, an ambitious project to replace passwords with a fairer system that does not require remembering secrets. The primary goal of Pico is to be easier to use than remembering a bunch of PINs and passwords; but, incidentally, it’s also meant to be much more secure. Because Pico uses public key cryptography, if a Pico-based web site is compromised, its users do not need to change their login credentials: the attackers can only steal the users’ public keys, not their private keys, and therefore cannot impersonate them, either at that site or anywhere else. (Besides, to protect your privacy, your Pico uses a different key pair for every one of your accounts, so credentials cannot be linked across sites.) This alone, even aside from any usability improvements, should be a good enough reason for web sites to convert to Pico.
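Here is a minimal sketch of that property, assuming the Python cryptography library and glossing over key storage and the rest of the Pico protocol: the server keeps only public keys, so a database thief learns nothing that allows impersonation.

    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # On the Pico: a different key pair for every account, so a user's
    # accounts can't even be linked across sites.
    keys = {"bank.example": Ed25519PrivateKey.generate(),
            "forum.example": Ed25519PrivateKey.generate()}

    # On each server: only the public key is stored - this is all that
    # a database breach can reveal.
    registered = {site: key.public_key() for site, key in keys.items()}

    def login(site):
        challenge = os.urandom(32)              # fresh challenge from the server
        signature = keys[site].sign(challenge)  # computed on the Pico
        registered[site].verify(signature, challenge)  # raises if forged
        return "session granted"

    print(login("bank.example"))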

We didn’t blog it then, but a few months ago we produced a short introductory video of our vision for Pico. On the Pico web site, besides that video and others, there are also frequently asked questions and, for those wanting to probe more deeply, a growing collection of technical papers.


This is the first part of a series on the Pico project: my research associates will follow it up with further developments. Pico was recently featured in The Observer and on Sophos’s Naked Security blog, and featured on BBC Radio 4’s PM programme (scheduled for Tuesday 19 August at 17:00, and broadcast, with a slight cut, on Thursday 21 August 2014; currently on iPlayer, starting at 46:28. The full version was broadcast on BBC World Service and is downloadable, for a while, from the BBC Global News Podcast, starting at 21:37).

Update: the Pico web site now has a page with press coverage.