Category Archives: Security psychology

Decepticon: International Conference on Deceptive Behavior

Call for papers

We are proud to present DECEPTICON 2015 – International Conference on Deceptive Behavior, to be held 24-26 August 2015 at the University of Cambridge, UK. Decepticon brings together researchers, practitioners, and like-minded individuals with a taste for interdisciplinary science in the detection and prevention of deception.

We are organising two panel sessions: one on Future Directions in Lie Detection Research with Aldert Vrij, Par-Anders Granhag, Steven Porter and Timothy Levine, and one on Technology-Assisted Lie Detection with Jeff Hancock, Judee Burgoon, Bruno Verschuere and Giorgio Ganis. We warmly welcome people from a broad range of scientific backgrounds. To cover the diversity of approaches to deception research, our scientific committee members are experts in fields from psychology to computer science, and from philosophy to behavioural economics. The scientific committee members from the University of Cambridge, for example, are Ross Anderson, Nicholas Humphrey, Peter Robinson and Sophie Van Der Zee.

We strongly encourage practitioners, academics and students alike to submit abstracts that touch on the topic of deception. The extended deadline for abstract submissions (max. 300 words) for an oral, panel or poster presentation is 8 APRIL 2015. Interested in attending, but don’t feel like presenting? You can register for the conference here.

Please visit our webpage for more information. We are happy to answer any questions!

We hope to see you in Cambridge,

DECEPTICON TEAM


Talk in Oxford at 5pm today on the ethics and economics of privacy in a world of Big Data

Today at 5pm I’ll be giving the Bellwether Lecture at the Oxford Internet Institute. My topic is Big Conflicts: the ethics and economics of privacy in a world of Big Data.

I’ll be discussing a recent Nuffield Bioethics Council report of which I was one of the authors. In it, we asked what medical ethics should look like in a world of ‘Big Data’ and pervasive genomics. It will take the law some time to catch up with what’s going on, so how should researchers behave meanwhile so that the people whose data we use don’t get annoyed or surprised, and so that we can defend our actions if challenged? We came up with four principles, which I’ll discuss. I’ll also talk about how they might apply more generally, for example to my own field of security research.

Media coverage of the “to freeze or not to freeze” paper

On the 5th of January this year we presented a paper on the automatic detection of deception based on full-body movements at HICSS (Hawaii), which we blogged about here at LBT. We measured the movements of truth tellers and liars using full-body motion capture suits and found that liars move more than truth tellers. When combined with interviewing techniques designed to increase the cognitive load of liars, but not of truth tellers, liars moved almost twice as much as truth tellers. These results indicate that absolute movement, when measured automatically, may be a reliable cue to deceit.

We are now aiming to find out whether this increase in body movement when lying is stable across situations and people. At the same time, we are developing two lines of technology that will make this method more usable in practice. First, we are building software to analyse behaviour in real time, i.e. during the interview rather than afterwards. Second, we are investigating ways to analyse behaviour remotely, so interviewees will not have to wear a body suit when being interviewed. We will keep you updated on new developments.

In the meantime, we received quite a lot of national and international media attention. Here is some TV and radio coverage of our work by Dailymotion, Fox (US), BBC World radio, Zoomin TV (NL), WNL Vandaag de Dag (NL, deel 2, starts at 5:20min), RTL Boulevard (NL), Radio 2 (NL), BNR (NL) and Radio 538 (NL). Our work was also covered by newspapers, websites and blogs, including the Guardian, the Register, the Telegraph, the Telegraph (incl. polygraph), the Daily Mail, Mail Online, Cambridge News, King’s College Cambridge, Lancaster University, Security Lancaster, Bruce Schneier’s blog, International Business Times, RT, PC World, PC Advisor, Engadget, News Nation, Techie News, ABP Live, TweakTown, Computer World, MyScience, King World News, La Celosia (Spanish), de Morgen (BE), NRC (NL), Algemeen Dagblad (NL), de Volkskrant (NL), KIJK (NL), and RTV Utrecht (NL).


Can we have medical privacy, cloud computing and genomics all at the same time?

Today sees the publication of a report I helped to write for the Nuffield Bioethics Council on what happens to medical ethics in a world of cloud-based medical records and pervasive genomics.

As the information we gave to our doctors in private to help them treat us is now collected and treated as an industrial raw material, there has been scandal after scandal. From failures of anonymisation through unethical sales to the care.data catastrophe, things just seem to get worse. Where is it all going, and what must a medical data user do to behave ethically?

We put forward four principles. First, respect persons; do not treat their confidential data as if it were coal or bauxite. Second, respect established human-rights and data-protection law, rather than trying to find ways round it. Third, consult people who’ll be affected or who have morally relevant interests. And fourth, tell them what you’ve done – including errors and security breaches.

The report, “The collection, linking and use of data in biomedical research and health care: ethical issues”, took over a year to write. Our working group drew members from the medical profession, academia, insurers and drug companies. We had lots of arguments. But it taught us a lot, and we hope it will lead to a more informed debate on some very important issues. And since medicine is the canary in the mine, we hope that the privacy lessons can be of value elsewhere – from consumer data to law enforcement and human rights.

Launch of security economics MOOC

TU Delft has just launched a massive open online course on security economics, to which three current group members (Sophie van der Zee, David Modic and I) have contributed lectures, along with one alumnus (Tyler Moore). Michel van Eeten of Delft is running the course (Delft does MOOCs, while Cambridge doesn’t yet), and there are also talks from Rainer Boehme. This was pre-announced here by Tyler in November.

The videos will be available for free in April; if you want to take the course now, I’m afraid it costs $250. The deal is that EdX paid for the production and will sell it as a professional course to security managers in industry and government; once that’s happened we’ll make it free to all. This is the same basic approach as with my book: rope in a commercial publisher to help produce first-class content that then becomes free to all. But if your employer is thinking of giving you some security education, you could do a lot worse than to support the project and enrol here.

Technology assisted deception detection (HICSS symposium)

The annual symposium “Credibility Assessment and Information Quality in Government and Business” was held this year on the 5th and 6th of January as part of the “Hawaii International Conference on System Sciences” (HICSS). The symposium on technology assisted deception detection was organised by Matthew Jensen, Thomas Meservy, Judee Burgoon and Jay Nunamaker. During this symposium, we presented our paper “to freeze or not to freeze”, which was posted on this blog last week, together with a second paper on “mining bodily cues to deception” by Dr. Ronald Poppe. The talks were of very high quality and researchers described a wide variety of techniques and methods to detect deceit, including mouse clicks to detect online fraud, language use on social media and in fraudulent academic papers, and the very impressive avatar that can screen passengers going through airport border control. I have summarised the presentations for you; enjoy!

Monday 05-01-2015, 09.00-09.05

Introduction Symposium by Judee Burgoon

This symposium is organised annually during the HICSS conference and functions as a platform for presenting research on the use of technology to detect deceit. Burgoon started off describing the different types of research conducted within the Center for the Management of Information (CMI) that she directs, and within the National Center for Border Security and Immigration. Within these centers, members aim to detect deception on a multi-modal scale using different types of technology and sensors. Their deception research includes physiological measures such as respiration and heart rate; kinesics (i.e., bodily movement); eye measures such as pupil dilation, saccades, fixation, gaze and blinking; and research on timing, which is of particular interest for online deception. Burgoon’s team is currently working on the development of an Avatar (DHS-sponsored): a system with different types of sensors that work together for screening purposes (e.g., border control; see abstracts below for more information). The Avatar is currently being tested at Reagan Airport. Sensors include a force platform, Kinect, HD and thermal cameras, oculometric cameras for eye-tracking, and a microphone for Natural Language Processing (NLP) purposes. Burgoon works together with the European border management organisation Frontex.

To freeze or not to freeze

We think we may have discovered a better polygraph.

Telling truth from lies is an ancient problem; some psychologists believe that it helped drive the evolution of intelligence, as hominids who were better at cheating, or detecting cheating by others, left more offspring. Yet despite thousands of years of practice, most people are pretty bad at lie detection, and can tell lies from truth only about 55% of the time – not much better than random.

Since the 1920s, law enforcement and intelligence agencies have used the polygraph, which measures the physiological stresses that result from anxiety. This is slightly better, but not much; a skilled examiner may be able to tell truth from lies 60% of the time. However, it is easy for an examiner who has a preconceived view of the suspect’s innocence or guilt to use a polygraph as a prop to help find supporting “evidence” by intimidating the suspect. Other technologies, from EEG to fMRI, have been tried, and the best that can be said is that it’s a complicated subject. The last resort of the desperate or incompetent is torture, where the interviewee will tell the interviewer whatever he wants to hear in order to stop the pain. The recent Feinstein committee inquiry into the use of torture by the CIA found that it was not just a stain on America’s values but ineffective.

Sophie van der Zee decided to see if datamining people’s body movements might help. She put 90 pairs of volunteers in motion capture suits and got them to interview each other; half the interviewees were told to lie. Her first analysis of the data was to see whether you could detect deception from mimicry (you can, but it’s not much better than the conventional polygraph) and to debug the technology.

After she joined us in Cambridge we had another look at the data, and tried analysing it using a number of techniques, some suggested by Ronald Poppe. We found that total body motion was a reliable indicator of guilt, and works about 75% of the time. Put simply, guilty people fidget more; and this turns out to be fairly independent of cultural background, cognitive load and anxiety – the factors that confound most other deception detection technologies. We believe we can improve that to over 80% by analysing individual limb data, and also using effective questioning techniques (as our method detects truth slightly more dependably than lies).
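The cue described above can be pictured very simply: total body motion is just the summed frame-to-frame displacement of every tracked joint, compared against a threshold fitted on labelled interviews. The sketch below is our illustration, not the paper’s code; the data layout, function names and threshold are assumptions.

```python
import numpy as np

def total_body_motion(frames):
    """Total movement over an interview: the summed frame-to-frame
    displacement of every tracked joint.

    frames: array of shape (n_frames, n_joints, 3), joint positions in
    metres as a motion-capture suit might record them (assumed layout).
    """
    displacements = np.diff(frames, axis=0)        # (n_frames-1, n_joints, 3)
    return float(np.linalg.norm(displacements, axis=2).sum())

def classify(frames, threshold):
    """Guilty people fidget more: flag an interview as deceptive when
    total movement exceeds a threshold fitted on labelled data."""
    return "liar" if total_body_motion(frames) > threshold else "truth teller"
```

In the study itself the decision rule was fitted and validated statistically, of course; the point of the sketch is only that the cue reduces to a single, automatically measurable number per interview.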

Our paper is appearing at HICSS, the traditional venue for deception-detection technology. Our task for 2015 will be to redevelop this for low-cost commodity hardware and test it in a variety of environments. Of course, a guilty man can always just freeze, but that will rather give the game away; we suspect it might be quite hard to fidget deliberately at exactly the same level as you do when you’re not feeling guilty. (See also press coverage.)

WEIS 2015 call for papers

The 2015 Workshop on the Economics of Information Security will be held at Delft, the Netherlands, on 22-23 June 2015. Paper submissions are due by 27 February 2015. Selected papers will be invited for publication in a special issue of the Journal of Cybersecurity, a new, interdisciplinary, open-access journal published by Oxford University Press.

We hope to see lots of you in Delft!

Pico part III: Making Pico psychologically acceptable to the everyday user

Many users are willing to sacrifice some security to gain quick and easy access to their services, often in spite of advice from service providers. Users are somehow expected to use a unique password for every service, each sufficiently long and consisting of letters, numbers, and symbols. Since most users do not (indeed, cannot) follow all these rules, they rely on unrecommended coping strategies that make passwords more usable, including writing passwords down, using the same password for several services, and choosing easy-to-guess passwords, such as names and hobbies. But usable passwords are not secure passwords, and users are blamed when things go wrong.

This isn’t just unreasonable, it’s unjustified, because even secure passwords are not immune to attack. A number of security breaches have had little to do with user practices and password strength, such as the Snapchat hacking incident, theft of Adobe customer records and passwords, and the Heartbleed bug. Stronger authentication requires a stronger and more usable authentication scheme, not longer and more complex passwords.

We have been evaluating the usability of our more secure, token-based system: Pico, a small, dedicated device that authenticates you to services. Pico is theft-resistant because it only works when it is close to its owner, which it detects by communicating with other devices you own – Picosiblings. These devices are smaller and can be embedded in clothing and accessories. They create a cryptographic “aura” around you that unlocks Pico.
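The unlocking behaviour can be pictured as a k-of-n policy: Pico stays usable only while enough of its paired Picosiblings are detected nearby. The real design reconstructs key material from cryptographic shares held by the siblings; the sketch below, with made-up identifiers and an illustrative threshold, models only the policy, not the cryptography.

```python
def pico_unlocked(nearby_ids, paired_ids, k=2):
    """Return True while at least k of Pico's paired Picosiblings are
    in radio range (k=2 is an illustrative default, not the actual
    Pico parameter)."""
    present = set(nearby_ids) & set(paired_ids)  # siblings both paired and nearby
    return len(present) >= k
```

In the full scheme, falling below the threshold re-locks Pico automatically, since the key material can no longer be reconstructed from the remaining shares.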

For people to adopt this new scheme, we need to make sure Pico is psychologically acceptable – unobtrusive and easily and routinely used, especially relative to passwords. The weaknesses of passwords have not been detrimental to their ubiquity because they are easy to administer, well understood, and require no additional hardware or software. Pico, on the other hand, is not currently easy to administer, is not widely understood, and does require additional hardware and software. The onus is on the Pico team to make our solution convenient and easy to use. If Pico is not sufficiently more convenient to use than passwords, it is likely to be rejected, regardless of improvements to security.

This is a particular challenge because we are not merely attempting to replace passwords as they are supposed to be used, but as they are actually used by real people – a pattern of use that is more usable, though much less secure, than our current conception of Pico.

Small electronic devices worn by the user – typically watches, wristbands, or glasses – have had limited success (e.g. the Rumba Time Go Watch and the Embrace+ Wristband). Reasons include issues with the accuracy of the data they collect, time spent having to manage data, the lack of control over appearance, the sense that these technologies are more gimmicks than useful, the monetary cost, and battery life. All of these issues need to be carefully considered if a device is to pass the user’s cost-benefit analysis of adoption.

To ensure the psychological acceptability of Pico, we have been conducting user studies from the very early design stages. The point of this research is to make sure we don’t impose any restrictive design decisions on users. We want users to be happy to adopt Pico, and this requires a user-centred approach to research and development based on early and frequent usability testing.

Thus far, we have qualitatively investigated user experiences of paper prototypes of Pico, which eventually informed the design of three-dimensional plastic prototypes (Figure 1).

Figure 1a.
Figure 1. Left: Early paper and plasticine Pico prototypes; Right: Plastic (Polymorph) low-fidelity Pico prototypes

This exploratory research provided insight into whether early Pico designs were sufficient for allowing the user to navigate through common authentication tasks. We then conducted interviews with these plastic prototypes, asking participants which they preferred and why. In the same interviews, we presented participants with a range of pseudo-Picosiblings (Figure 2) to get an idea of the feasibility of Picosiblings from the end-user’s perspective.

Figure 2.
Figure 2. The range of pseudo-Picosiblings including everyday items (watch, keys, accessories, etc.) and standalone options (magnetic clips and free-standing coins)

The challenge seems to be in finding a balance between cost, style, and usefulness. Consider, for example, the usefulness of a watch. While we expect a watch to tell us the time (to serve a function), what we really care about is its style and cost. This is the difference between my watch and your watch, and it is where we find its selling power. Most wearable electronic devices, such as smart-watches and fitness gadgets, advertise function and usefulness first, and then style, which is often limited to one, or maybe two, designs. And cost? Well, you’ll just have to find a way to pay for it. Pico, like these devices, could provide the potential usefulness required for widespread and enduring adoption, which, if paired with low cost and user style, should have a greater degree of success than previous wearable electronic devices.

Initial analysis of the results reveals polarised opinions on how Pico and Picosiblings should look, ranging from fashionable and personalisable to disguised and discreet. Interestingly, users seemed more concerned about the security of Pico than about the security of passwords. Generally, however, the initial research indicates that users do see the usefulness of Pico as a standalone device, provided it is reliable and can be used for a wide range of services; the hardware is of no benefit unless it replaces most, if not all, passwords – otherwise it becomes just another thing people have to remember.

A legitimate concern for users is loss or theft; we are working to ensure that such incidents do not cause inconvenience or pose a threat to the user, by making the system easily recoverable. Related concerns relevant to possessing physical devices are durability, physical ease-of-use, the awkwardness of having to handle and aim the Pico at a QR code, and the everyday convenience of remembering and carrying several devices.

Interviews revealed that, to make remembering and carrying several devices easier and more worthwhile, Picosiblings should have more than one function (e.g. watches, glasses, ID cards). By making Picosiblings practical, users are more likely to remember to take them, and to perceive the benefit as outweighing the effort of carrying them around. Typically, 3-4 items were the maximum number of Picosiblings that users said they would be happy to carry; the aim would be to reduce the required number to 1 or 2 (depending on what they are), allowing users to carry the rest as “backups” if they were going to wear them anyway.

Though suggested by some, the same emphasis on dual-function was not observed for Pico, since this device serves a sufficiently valuable function in itself. However, while many found it perfectly reasonable to carry a dedicated, secure device (given its function), some did express a preference for the convenience of an App on their smartphone. To create a more streamlined experience, we are currently working on such an App, which should give these potential Pico users the flexibility they seem to desire.

By taking into account these and other user opinions before committing to a single design and implementation, we are working to ensure Pico isn’t just theoretically secure – secure only if we can rely on users to implement it properly despite any inconvenience. Instead, we can make sure Pico is actually secure, because we will be creating an authentication scheme that requires users to do only what they would do (or better, want to do) anyway. We can achieve this by taking seriously the capabilities and preferences of the end-­user.


First Global Deception Conference

Global Deception conference, Oxford, 17–19th of July 2014

Conference introduction

This deception conference, part of the Hostility and Violence series, was organised by Inter-Disciplinary.Net, which runs about 75 conferences a year and was set up by Rob Fisher in 1999 to facilitate international dialogue between disciplines. Conferences are organised on a range of topics, such as gaming, empathy, cyber cultures, violence, and communication and conflict. Not only are the topics of the different conferences interdisciplinary; each conference is interdisciplinary within itself as well. During our deception conference we approached deception from very different angles: from optical illusions in art and architecture, via literary hoaxes, fiction and spy novels, to the role of the media in creating false beliefs in society, ending with a more experimental approach to detecting deception. Even a magic trick was part of the (informal) programme, and somehow I ended up being the magician’s assistant. You can find my notes and abstracts below.

Finally, if you (also) have an interest in more experimental deception research with high practical applicability, then we have good news. Aldert Vrij, Ross Anderson and I are hosting a deception conference to bring together deception researchers and law enforcement people from all over the world. This event will take place at Cambridge University on August 22-24, 2015.

Session 1 – Hoaxes

John Laurence Busch: Deceit without, deceit within: British Government behaviour in the secret race to claim steam-powered superiority at sea. Lord Liverpool became Prime Minister in 1812 and wanted to catch up with the Americans in steam-powered shipping. The problem, however, was that the Royal Navy did not know how to build such vessels, so in 1820 it joined forces with the British Post Office, which wanted steam-powered boats to deliver post to Ireland more quickly. The Post Office was glad the Navy wanted to collaborate, but the Navy was being deceptive: it kept quiet – to the Post Office, the public and other countries – about the fact that it did not know how to build these vessels and was hoping to learn how from the project. This succeeded, and importantly it also masked from the French and the Americans that the British Navy was working on steam vessels to catch up with the US. The Navy was thus hiding something questionable (military activity) behind something innocent (the post): a deceptive public face.

Catelijne Coopmans & Brian Rappert: Revealing deception and its discontents: Scrutinizing belief and skepticism about the moon landing. The moon landing in the 1960s is a possibly deceptive situation in which the stakes were high and the symbolic value enormous. A 2001 Fox documentary, “Conspiracy theory: Did we land on the moon or not?”, based its suspicions mainly on photographic and visual evidence, such as shadows where they shouldn’t be, a “C” shape on a stone, a flag moving in a breeze, and pictures with exactly the same background but different foregrounds. In response, several people have explained these inconsistencies (e.g., the C was a hair). The current authors focus more on the paradoxes that surround, and perhaps even fuel, these conspiracy theories, such as disclosure vs. non-disclosure, and secrecy that fuels suspicion – like the US government’s secrecy around Area 51. Can you trust, and at the same time not trust, the visual proof of the moon landing presented by NASA? Although the quality of the pictures was really bad, the framing was really well done. Apollo 11 tried to debunk this conspiracy theory by showing a picture of the flag still standing on the moon. But then, that could be photoshopped…

Discussion: How can you trust a visual image, especially when it is used to prove something, when we live in a world where technology makes it possible to fake anything to a high standard?