Technology assisted deception detection (HICSS symposium)

The annual symposium “Credibility Assessment and Information Quality in Government and Business” was held this year on the 5th and 6th of January as part of the Hawaii International Conference on System Sciences (HICSS). The symposium on technology-assisted deception detection was organised by Matthew Jensen, Thomas Meservy, Judee Burgoon and Jay Nunamaker. During this symposium, we presented our paper “To freeze or not to freeze”, which was posted on this blog last week, together with a second paper, “Mining bodily cues to deception”, by Dr Ronald Poppe. The talks were of very high quality, and researchers described a wide variety of techniques and methods to detect deceit, including mouse clicks to detect online fraud, language use on social media and in fraudulent academic papers, and the very impressive avatar that can screen passengers going through airport border control. I have summarised the presentations for you; enjoy!

 Monday 05-01-2015, 09.00-09.05

Introduction Symposium by Judee Burgoon

This symposium is organised annually during the HICSS conference and functions as a platform for presenting research on the use of technology to detect deceit. Burgoon started off by describing the different types of research conducted within the Center for the Management of Information (CMI), which she directs, and within the National Center for Border Security and Immigration. Within these centers, members aim to detect deception on a multi-modal scale using different types of technology and sensors. Their deception research includes physiological measures such as respiration and heart rate, kinesics (i.e., bodily movement), eye movements such as pupil dilation, saccades, fixations, gaze and blinking, and research on timing, which is of particular interest for online deception. Burgoon’s team is currently working on the development of an Avatar (DHS-sponsored): a system with different types of sensors that work together for screening purposes (e.g., border control; see abstracts below for more information). The Avatar is currently being tested at Reagan Airport. Its sensors include a force platform, a Kinect, HD and thermal cameras, oculometric cameras for eye tracking, and a microphone for Natural Language Processing (NLP) purposes. Burgoon also works with the European border management organisation Frontex.
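To make the multi-sensor idea concrete, here is a minimal, purely illustrative sketch of how normalised scores from several sensors might be fused into a single screening decision. The sensor names, weights and threshold are all hypothetical; this is not the Avatar’s actual algorithm.

```python
# Illustrative sketch only, NOT the Avatar's actual algorithm: each sensor is
# assumed to produce a normalised anomaly score in [0, 1], and a weighted sum
# is compared against a referral threshold. Sensor names, weights and the
# threshold are hypothetical.

SENSOR_WEIGHTS = {
    "eye_tracking": 0.3,  # pupil dilation, saccades, fixations, blinks
    "kinesics": 0.3,      # bodily movement from the Kinect / force platform
    "voice_nlp": 0.2,     # linguistic cues picked up by the microphone
    "thermal": 0.2,       # facial thermal imaging
}

def refer_for_screening(scores: dict, threshold: float = 0.6) -> bool:
    """Return True if the fused score suggests secondary screening."""
    fused = sum(weight * scores.get(name, 0.0)
                for name, weight in SENSOR_WEIGHTS.items())
    return fused >= threshold

# Example: a traveller whose eye and body cues score high.
print(refer_for_screening({"eye_tracking": 0.8, "kinesics": 0.7,
                           "voice_nlp": 0.4, "thermal": 0.5}))  # True (fused score 0.63)
```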

Commercialising academic research

At the 2014 annual conference of the Academic Centres of Excellence in Cyber-Security Research, I was invited to give a talk on commercialising research from the viewpoint of an academic. I did that by distilling the wisdom and experience of five of my Cambridge colleagues who had started a company (or several). The talk was well received at the conference and may be instructive both for academics with entrepreneurial ambitions and for other universities that aspire to replicate the “Cambridge phenomenon” elsewhere.


A recording of the presentation, Commercialising research: the academic’s perspective from Frank Stajano Explains, is available on Vimeo.

To freeze or not to freeze

We think we may have discovered a better polygraph.

Telling truth from lies is an ancient problem; some psychologists believe that it helped drive the evolution of intelligence, as hominids who were better at cheating, or detecting cheating by others, left more offspring. Yet despite thousands of years of practice, most people are pretty bad at lie detection, and can tell lies from truth only about 55% of the time – not much better than random.

Since the 1920s, law enforcement and intelligence agencies have used the polygraph, which measures the physiological stresses that result from anxiety. This is slightly better, but not much; a skilled examiner may be able to tell truth from lies 60% of the time. However, it is easy for an examiner who has a preconceived view of the suspect’s innocence or guilt to use a polygraph as a prop to help find supporting “evidence” by intimidating them. Other technologies, from EEG to fMRI, have been tried, and the best that can be said is that it’s a complicated subject. The last resort of the desperate or incompetent is torture, where the interviewee will tell the interviewer whatever he wants to hear in order to stop the pain. The recent Feinstein committee inquiry into the use of torture by the CIA found that it was not just a stain on America’s values but also ineffective.

Sophie van der Zee decided to see if datamining people’s body movements might help. She put 90 pairs of volunteers in motion capture suits and got them to interview each other; half the interviewees were told to lie. Her first analysis of the data was to see whether you could detect deception from mimicry (you can, but it’s not much better than the conventional polygraph) and to debug the technology.

After she joined us in Cambridge we had another look at the data, and tried analysing it using a number of techniques, some suggested by Ronald Poppe. We found that total body motion was a reliable indicator of guilt, and works about 75% of the time. Put simply, guilty people fidget more; and this turns out to be fairly independent of cultural background, cognitive load and anxiety – the factors that confound most other deception detection technologies. We believe we can improve that to over 80% by analysing individual limb data, and also using effective questioning techniques (as our method detects truth slightly more dependably than lies).
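For readers who want a feel for the measure, here is a minimal sketch of the total-body-motion idea: sum the frame-to-frame displacement of every tracked joint over the interview and compare it with a threshold learned from labelled recordings. The data layout and threshold are assumptions for illustration; the paper itself describes the actual features and analysis.

```python
# Minimal sketch of a total-body-motion measure from motion-capture data.
# The array layout and the threshold are assumptions for illustration; the
# paper's actual feature extraction and analysis may differ.
import numpy as np

def total_body_motion(frames: np.ndarray) -> float:
    """frames has shape (n_frames, n_joints, 3): joint positions per frame, in metres."""
    step = np.diff(frames, axis=0)            # displacement between consecutive frames
    per_joint = np.linalg.norm(step, axis=2)  # Euclidean distance moved by each joint
    return float(per_joint.sum())             # total distance moved by the whole body

def classify(frames: np.ndarray, threshold: float) -> str:
    """Guilty interviewees fidget more, so higher total motion suggests deception."""
    return "deceptive" if total_body_motion(frames) > threshold else "truthful"
```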

Our paper is appearing at HICSS, the traditional venue for deception-detection technology. Our task for 2015 will be to redevelop this for low-cost commodity hardware and test it in a variety of environments. Of course, a guilty man can always just freeze, but that will rather give the game away; we suspect it might be quite hard to fidget deliberately at exactly the same level as you do when you’re not feeling guilty. (See also press coverage.)

Systemization of Pluggable Transports for Censorship Resistance

An increasing number of countries implement Internet censorship at different levels and for a variety of reasons. Consequently, there is an ongoing arms race where censorship resistance schemes (CRS) seek to enable unfettered user access to Internet resources while censors come up with new ways to restrict access. In particular, the link between the censored client and the entry point to the CRS has been a censorship flash point, and consequently the focus of circumvention tools. To foster interoperability and speed up development, Tor introduced Pluggable Transports — a framework to flexibly implement schemes that transform traffic flows between the Tor client and the bridge so that a censor fails to block them. Dozens of tools and proposals for pluggable transports have emerged over the last few years, each addressing specific censorship scenarios. As a result, the area has become so complex that it is hard to discern the big picture.
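As a purely conceptual illustration of what such a transform does (this is not Tor’s actual Pluggable Transport API), the toy class below reversibly re-encodes the byte stream between client and bridge so that it no longer carries an obvious fingerprint; real transports such as obfs4 or meek use cryptographic obfuscation or protocol mimicry rather than the trivial XOR shown here.

```python
# Toy illustration of the pluggable-transport idea, not Tor's real PT API:
# the client wraps each chunk of Tor traffic in a reversible transform and the
# bridge unwraps it, so the bytes on the wire no longer look like Tor. Real
# transports use proper cryptographic obfuscation, not this XOR placeholder.
class ToyTransport:
    def __init__(self, shared_key: bytes):
        self.key = shared_key

    def _xor(self, data: bytes) -> bytes:
        return bytes(b ^ self.key[i % len(self.key)] for i, b in enumerate(data))

    def wrap(self, tor_bytes: bytes) -> bytes:
        """Client side: obfuscate traffic before it crosses the censored link."""
        return self._xor(tor_bytes)

    def unwrap(self, wire_bytes: bytes) -> bytes:
        """Bridge side: recover the original Tor traffic."""
        return self._xor(wire_bytes)

transport = ToyTransport(b"shared-secret")
assert transport.unwrap(transport.wrap(b"tor cell bytes")) == b"tor cell bytes"
```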

Our recent report takes away some of this complexity by presenting a model of censor capabilities and an evaluation stack that offers a layered approach to evaluating pluggable transports. We survey 34 existing pluggable transports and highlight how inflexible they are when it comes to sharing features for broader defense coverage. This evaluation has led us to a new design for Pluggable Transports – the Tweakable Transport: a tool for efficiently building and evaluating a wide range of Pluggable Transports, so as to increase the difficulty and cost of reliably censoring the communication channel.


Our Christmas message for troublemakers: how to do anonymity in the real world

On the 5th of December I gave a talk at a journalists’ conference on what tradecraft means in the post-Snowden world. How can a journalist, or for that matter an MP or an academic, protect a whistleblower from being identified even when MI5 and GCHQ start trying to figure out who in Whitehall you’ve been talking to? The video of my talk is now online here. There is also a TV interview I did later, which can be found here, while the other conference talks are here.

Enjoy!

Ross

Curfew tags – the gory details

In previous posts I told the story of how Britain’s curfew tagging system can fail. Some prisoners are released early provided they wear a tag to enforce a curfew, which typically means that they have to stay home from 7pm to 7am; some petty offenders get a curfew instead of a prison sentence; and some people accused of serious crimes are tagged while on bail. In dozens of cases, curfewees had been accused of tampering with their tags, but had denied doing so. In a series of these cases, colleagues and I were engaged as experts, but when we demanded tags for testing, the prosecution was withdrawn and the case collapsed. In the most famous case, three men accused of terrorist offences were released; although one has since absconded, the other two are now free in the UK.

This year, a case finally came to trial. Our client, to whom we must refer simply as “Special Z”, was accused of tag tampering, which he denied vigorously. I was instructed as an expert along with my colleague Dr James Dean of Materials Science. Here is my expert report, together with James’s report and addendum, as well as a video of a tag being removed using much less than the amount of force required by the system specification.

The judge was not ready to set a precedent that could have thrown the UK tagging system into chaos. However, I understand our client has now been released on other grounds. Although the court did order us to hand back all the tags, and fragments of broken tags, so as to protect G4S’s intellectual property, it did not make a secrecy order on our expert reports. We publish them here in the hope that they might provide useful guidance to defendants in similar cases in the future, and to policymakers when tagging contracts come up for renewal, whether in the UK or overseas.

On the measurement of banking fraud

Kidnapping is not an easy crime to be successful at…

… it is of course easy to grab the heiress from outside the nightclub at 3am. It’s easy to incarcerate her at the remote farmhouse. If you pick the right henchmen then it’s easy to cut off her ear and post it off to the frantic family.

Thereafter it gets very difficult — you must communicate directly several times and you must physically go and pick up the bag of money. These last two tasks are extremely difficult to manage successfully, which is why police forces solve kidnap cases so often (in its first 5 years the Metropolitan Police Kidnap Unit solved 100% of its cases).

Theft from online bank accounts also has its difficulties. It remains relatively easy to gain access to a victim’s bank account and to issue instructions on their behalf. Last decade this was all about “phishing” — gathering credentials by creating fake websites; more recently credentials have been compromised by means of “man-in-the-browser” malware: you think you are paying your gas bill and that’s what your browser tells you is occurring. In practice you’re approving a money transfer to a criminal.

However, moving the money to another account does not mean that the criminal has got away with it. If the bank notices a suspicious pattern of transfers then it can investigate, and when it sees the tell-tale signs of fraud then the transfers (which were only changes to computer records) can be trivially reversed. It is only when the criminal can extract folding money from an ATM, or move the money abroad in such a way that it will never be repatriated, that they have been truly successful. So, like kidnap, theft from bank accounts is somewhat harder to pull off than one might initially think.
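To illustrate the kind of check that stands between the criminal and the cash, here is a deliberately simple, hypothetical rule of the sort a bank’s fraud engine might apply before money leaves its control; real systems combine many more signals, and the field names and thresholds below are invented for the example.

```python
# Hypothetical, deliberately simple fraud rule; real banking systems combine
# many more signals. Field names and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Transfer:
    amount_gbp: float
    payee_age_days: int        # how long the payee has been set up on the account
    device_seen_before: bool   # has the customer used this browser/device before?

def looks_suspicious(t: Transfer) -> bool:
    """Flag a large transfer to a newly added payee from an unfamiliar device."""
    return t.amount_gbp > 500 and t.payee_age_days < 2 and not t.device_seen_before

print(looks_suspicious(Transfer(995, 0, False)))  # True: hold for review, reverse if fraudulent
```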

This has turned out to be a surprise to the Treasury Select Committee.

Last month I was asked to give oral evidence to them, and the very first question was about how much fraud there is relating to online banking. I explained that the banks collate figures showing how much money is actually “lost” (viz. the amount that the banks end up, usually anyway, reimbursing to the unfortunate customers who have been defrauded).

However, industry insiders say that about twice this amount is moved to another account but — and this is basically Very Good News — it is then transferred back so there is no actual loss to anyone. We don’t know the exact figures here, because they are not collated and published.

Furthermore, the banks should also be measuring “money at risk”, that is, the total amount in the compromised accounts. If their security measures failed and criminals stole every last penny, then these would be actual losses — an order of magnitude more, perhaps, than the published figures.

The Select Committee chairman is now writing to the banks to ask if this is all true and what the “true” fraud figures might be. If the banks reply with detailed information then we might finally understand quite how difficult bank fraud is. I fully expect the story will run something along the lines that <n> accounts with 10,000 pounds in them are compromised, that the crooks fraudulently transfer 995 pounds from most, but not all, of these <n> — but that half the time the fraudulent transaction is reversed.
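To see how far apart the candidate measures can be, here is a back-of-the-envelope calculation using entirely hypothetical numbers along the lines just described (the real number of compromised accounts is not public).

```python
# Back-of-the-envelope comparison of the three candidate fraud measures,
# using entirely hypothetical numbers in the spirit of the story above.
n_accounts = 1_000            # hypothetical stand-in for <n>
balance_per_account = 10_000  # pounds sitting in each compromised account
transfer_per_account = 995    # pounds the crooks try to move out
reversal_rate = 0.5           # roughly half the fraudulent transfers get reversed

money_at_risk = n_accounts * balance_per_account   # 10,000,000 pounds exposed
money_moved = n_accounts * transfer_per_account    # 995,000 (an upper bound: not every account is hit)
actual_loss = money_moved * (1 - reversal_rate)    # 497,500 permanently lost

print(money_at_risk, money_moved, actual_loss)
```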

If this analysis is correct then online banking fraud is still, on average, much more lucrative than kidnapping — but we must make up our minds as to whether to measure it using 10,000, or 995, or “about half of 995 is permanently lost”. There is justification for every way of measuring the problem — but it’s important to understand the limitations of any single measurement; failure to do so will mean that the banks will not deploy the right level of security measures — and the politicians will fail to give the issue an appropriate level of consideration.

Why password managers (sometimes) fail

We are asked to remember far too many passwords. This problem is most acute on the web. And thus, unsurprisingly, it is on the web that technical solutions have had most success in replacing users’ ad hoc coping strategies. One of the longest established and most widely adopted technical solutions is a password manager: software that remembers passwords and submits them on the user’s behalf. But this isn’t as straightforward as it sounds. In our recent work on bootstrapping adoption of the Pico system [1], we’ve come to appreciate just how hard life is for developers and maintainers of password managers.

In a paper we are about to present at the Passwords 2014 conference in Trondheim, we introduce our proposal for Password Manager Friendly (PMF) semantics [2]. PMF semantics are designed to give developers and maintainers of password managers a bit of a break and, more importantly, to improve the user experience.


Spooks behaving badly

Like many in the tech world, I was appalled to see how the security and intelligence agencies’ spin doctors managed to blame Facebook for Lee Rigby’s murder. It may have been a convenient way of diverting attention from the many failings of MI5, MI6 and GCHQ documented by the Intelligence and Security Committee in its report yesterday, but it will be seriously counterproductive. So I wrote an op-ed in the Guardian.

Britain spends less on fighting online crime than Facebook does, and only about a fifth of what either Google or Microsoft spends (declaration of interest: I spent three months working for Google on sabbatical in 2011, working with the click fraud team and on the mobile wallet). The spooks’ approach reminds me of how Pfizer dealt with Viagra spam, which was to hire lawyers to write angry letters to Google. If they’d hired a geek who could have talked to the abuse teams constructively, they’d have achieved an awful lot more.

The likely outcome of GCHQ’s posturing and MI5’s blame avoidance will be to drive tech companies to route all the agencies’ requests past their lawyers. This will lead to huge delays. GCHQ already complained in the Telegraph that they still haven’t got all the murderers’ Facebook traffic; this is no doubt due to the fact that the Department of Justice is sitting on a backlog of requests for mutual legal assistance, the channel through which such requests must flow. Congress won’t give the Department enough money for this, and is content to play chicken with the Obama administration over the issue. If GCHQ really cares, then it could always pay the Department of Justice to clear the backlog. The fact that all the affected government departments and agencies use this issue for posturing, rather than tackling the real problems, should tell you something.

WEIS 2015 call for papers

The 2015 Workshop on the Economics of Information Security will be held in Delft, the Netherlands, on 22-23 June 2015. Paper submissions are due by 27 February 2015. Selected papers will be invited for publication in a special issue of the Journal of Cybersecurity, a new, interdisciplinary, open-access journal published by Oxford University Press.

We hope to see lots of you in Delft!