Category Archives: Security psychology
BBC Horizon documentary: A week without lying – the honesty experiment
Together with Ronald Poppe, Paul Taylor, and Gordon Wright, Sophie van der Zee (previously of the Cambridge Computer Laboratory) took the plunge and tested their automated lie-detection methods in the real world. How well do the lie-detection methods that we develop and test under tightly controlled conditions in the lab perform in the real world? What happens to you and your social environment when you constantly feel monitored and try to live a truthful life? And is living a truthful life actually something we should desire?
Cybercrime conference
I’m at the Cambridge Cybercrime Centre’s Third Annual Cybercrime Conference. I will try to liveblog the event in followups to this post.
WEIS 2018 – Liveblog
I’m at the seventeenth workshop on the economics of information security, hosted by the University of Innsbruck. I’ll be liveblogging the sessions in followups to this post.
Security and Human Behavior 2018
I’m at the 2018 Workshop on Security and Human Behavior which is being held this year at Carnegie Mellon University. For background, the workshop liveblogs and websites from 2008–17 are linked here.
As usual, I will try to liveblog the sessions in followups to this post.
New security lecturer
We’re delighted to announce that the new security lectureship we advertised has been offered to Alice Hutchings, and she’s accepted. We had 52 applicants, of whom we shortlisted three for interview.
Alice works in the Cambridge Cybercrime Centre and her background is in criminology. Her publications are here. Her appointment will build on our strengths in research on cybercrime, and will complement and extend our multidisciplinary work in the economics and psychology of security.
Ethics of mathematics
I’m at the world’s first conference on ethics in mathematics and will be speaking in half an hour. Here are my slides. I will be describing the course I teach to second-year computer scientists on Economics, Law and Ethics. Courses on ethics are mandatory for computer scientists while economics is mandatory for engineers; my innovation has been to combine them. My experience is that teaching them together adds real value. We can explain coherently why society needs rules via discussions of game theory, and then of network effects, asymmetric information and other market failures typical of the IT industry; we can then discuss the limitations of law and regulation; and this sets the stage for both principled and practical discussions of ethics.
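To make the game-theoretic point concrete, here is a minimal sketch (my own illustration, not material from the course): in a one-shot prisoner's dilemma, defection is each player's dominant strategy even though mutual cooperation pays more, which is the classic argument for externally enforced rules.

```python
# Payoffs (row player, column player) for strategies C(ooperate)/D(efect).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_move: str) -> str:
    """Return the row player's payoff-maximising reply to a fixed opponent move."""
    return max("CD", key=lambda mine: PAYOFFS[(mine, opponent_move)][0])

# Defection is the best reply whatever the opponent does...
assert best_response("C") == "D" and best_response("D") == "D"
# ...yet mutual defection leaves each player worse off than mutual cooperation.
assert PAYOFFS[("D", "D")][0] < PAYOFFS[("C", "C")][0]
print("Dominant strategy: defect; mutual defection payoff:", PAYOFFS[("D", "D")])
```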
We will make you like our research
This is the title of a paper that appeared today in PLOS One. It describes a tool we initially developed to assess the gullibility of cybercrime victims, and which we now present as a general-purpose psychometric measure of individual susceptibility to persuasion. An early version was described three years ago here and here. Since then we have developed it significantly and used it in experiments on cybercrime victims, Facebook users and IT security officers.
We investigated the effects on persuasion of a subject’s need for cognition, need for consistency, sensation seeking, self-control, consideration of future consequences, need for uniqueness, risk preferences and social influence. The strongest factor was consideration of future consequences, or “premeditation” for short.
We offer a full psychometric test, STP-II, with 54 items spanning 10 subscales, and a shorter STP-II-B with 30 items that measures the first-order factors but, for brevity, omits the second-order constructs. The scale is here with the B items marked, and here is a live instance of the survey for you to play with. Once you complete it, there’s an on-the-fly interpretation at the end. You don’t have to give your name and we don’t record your IP address.
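For the curious, here is a rough sketch of how a Likert scale of this kind is typically scored. The item numbers, the item-to-subscale mapping and the reverse-coded items below are invented for illustration; the real assignments are given in the paper and the published scale.

```python
# Each item is a 1-7 Likert response; reverse-coded items are flipped,
# and a subscale score is the mean of its items.
LIKERT_MAX = 7  # assumed 1-7 response format

# Hypothetical mapping: subscale -> (item numbers, reverse-coded item numbers)
SUBSCALES = {
    "premeditation": ([1, 2, 3], [3]),
    "self-control":  ([4, 5, 6], [5, 6]),
}

def score(responses: dict[int, int]) -> dict[str, float]:
    """Return the mean score per subscale, with reverse-coded items flipped."""
    results = {}
    for name, (items, reversed_items) in SUBSCALES.items():
        values = [
            (LIKERT_MAX + 1 - responses[i]) if i in reversed_items else responses[i]
            for i in items
        ]
        results[name] = sum(values) / len(values)
    return results

print(score({1: 6, 2: 7, 3: 2, 4: 4, 5: 3, 6: 1}))
# -> {'premeditation': 6.33..., 'self-control': 5.33...}
```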
We invite everyone to use our STP-II scale – not just in security contexts, but also in consumer and marketing psychology and anywhere else it might possibly be helpful. Do let us know what you find!
Security economics MOOC running once more
Is this research ethical?
The Economist features face recognition on its front page, reporting that deep neural networks can now tell whether you’re straight or gay better than humans can just by looking at your face. The research they cite is a preprint, available here.
Its authors Kosinski and Wang downloaded thousands of photos from a dating site, ran them through a standard feature-extraction program, then classified gay vs straight using a standard statistical classifier, which they found could tell the men seeking men from the men seeking women. My students pretty well instantly called this out as selection bias; if gay men consider boyish faces to be cuter, then they will upload their most boyish photo. The paper authors suggest their finding may support a theory that sexuality is influenced by fetal testosterone levels, but when you don’t control for such biases your results may say more about social norms than about phenotypes.
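To show how little machinery such a study needs, here is a sketch of the general two-stage pipeline the paper describes. The feature extractor below is a random-projection stand-in rather than the network the authors used, and the data are synthetic; the point is that nothing in the pipeline itself guards against selection bias in how the photos were chosen.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def extract_features(images: np.ndarray) -> np.ndarray:
    """Stand-in feature extractor: project flattened images to 128 dimensions."""
    flat = images.reshape(len(images), -1)
    projection = rng.standard_normal((flat.shape[1], 128))
    return flat @ projection

# Toy data standing in for the scraped dating-site photos and labels.
images = rng.standard_normal((200, 32, 32))   # 200 fake 32x32 "photos"
labels = rng.integers(0, 2, size=200)         # fake binary labels

features = extract_features(images)
classifier = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(classifier, features, labels).mean())
# With random data this hovers around 0.5; with biased photo selection it can
# rise well above chance without measuring anything about phenotype.
```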
Quite apart from the scientific value of the research, which is perhaps best assessed by specialists, I’m concerned with the ethics and privacy aspects. I am surprised that the paper doesn’t report having been through ethical review; the authors consider that photos on a dating website are public information and appear to assume that privacy issues simply do not arise.
Yet UK courts decided, in Campbell v Mirror, that privacy could be violated even by photos taken on the public street, and European courts have come to similar conclusions in I v Finland and elsewhere. For example, a Catholic woman is entitled to object to the use of her medical record in research on abortifacients and contraceptives even if the proposed use is fully anonymised and presents no privacy risk whatsoever. The dating site users would be similarly entitled to object to their photos being used in research to which they might have an ethical objection, even if they could not be identified from their photos. There are surely going to be people who object to research in any nature vs nurture debate, especially on a charged topic such as sexuality. And the whole point of the Economist’s coverage is that face-recognition technology is now good enough to work at population scale.
What do LBT readers think?
Testing the usability of offline mobile payments
Last September we spent some time in Nairobi figuring out whether we could make offline phone payments usable. Phone payments have greatly improved the lives of millions of poor people in countries like Kenya and Bangladesh, who previously didn’t have bank accounts at all but who can now send and receive money using their phones. That’s great for the 80% who have mobile phone coverage, but what about the others?
Last year I described how we designed and built a prototype system to support offline payments, with the help of a grant from the Bill and Melinda Gates Foundation, and took it to Africa to test it. Offline payments require both the sender and the receiver to enter some extra digits to ensure that the payer and the payee agree on who’s paying whom how much. We worked as hard as we could to minimise the number of digits and to integrate them into the familiar transaction flow. Would this be good enough?
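To give a flavour of the underlying idea, here is a sketch of the general approach, not the actual DigiTally protocol, whose details are in the paper: a short numeric code derived from the transaction details lets both parties confirm they agree on who is paying whom how much. The key, phone numbers and code length below are illustrative assumptions.

```python
import hmac
import hashlib

def confirmation_code(key: bytes, payer: str, payee: str,
                      amount_cents: int, counter: int, digits: int = 8) -> str:
    """Derive a short decimal code binding the payer, payee and amount."""
    message = f"{payer}|{payee}|{amount_cents}|{counter}".encode()
    mac = hmac.new(key, message, hashlib.sha256).digest()
    return str(int.from_bytes(mac[:8], "big") % (10 ** digits)).zfill(digits)

# Both phones derive the same code from the same transaction details...
shared_key = b"demo key shared by the two phones"   # assumed pre-shared
code = confirmation_code(shared_key, "+254700000001", "+254700000002", 50_00, 1)
print("code to read out and type in:", code)
# ...and any disagreement about the amount or the parties changes the code.
assert code != confirmation_code(shared_key, "+254700000001", "+254700000002", 60_00, 1)
```

The design tension the post mentions is visible in the `digits` parameter: fewer digits mean less typing for the users, but a better chance for an attacker to guess a valid code.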
Our paper setting out the results was accepted to the Symposium on Usable Privacy and Security (SOUPS), the leading security usability event. This has now started and the paper’s online; the lead author, Khaled Baqer, will be presenting it tomorrow. As we noted last year, the DigiTally pilot was a success. For the data and the detailed analysis, please see our paper:
DigiTally: Piloting Offline Payments for Phones, Khaled Baqer, Ross Anderson, Jeunese Adrienne Payne, Lorna Mutegi, Joseph Sevilla, 13th Symposium on Usable Privacy & Security (SOUPS 2017), pp 131–143