I’m liveblogging the Workshop on Security and Human Behaviour which is being held here in Cambridge. The participants’ papers are here and the programme is here. For background, see the liveblogs for SHB 2008-13 which are linked here and here. Blog posts summarising the talks at the workshop sessions will appear as followups below, and audio files will be here.
Mark Frank kicked off SHB 2014 with “Mythperceptions: deception and security”. There are many misperceptions about airport security; actual terrorists are people with grievances, so the human element is central and should be considered at each layer. Lab results are not much use: in the lab, observers can’t even distinguish real from fake pain. The context is often wrong too: odds ratios climb from 1.44 for lab research to 2.22 for high-stakes lab research to 9.0 for actual criminal behaviour; we don’t know how high they go for terrorist acts, as there’s not enough data. The best we can do is use the available behavioural science to decide whether to select people for secondary screening.
Frank Stajano gave a “femto-talk” on understanding scam victims; it’s arrogant to whinge that “victims are gullible”, as scammers exploit human nature; if we had Plato’s “ring of Gyges” that made people invisible, would we still be honest? This was followed by a “pico-talk” on making security desirable. Jewelry designers are as important as cryptographers in making security artefacts “cool”.
Sophie van der Zee’s topic was “When lying feels like the right thing to do.” Researchers usually think of lies as negative, but lies are a vital element of politeness; see Bella DePaulo’s 1996 study. So when are people more likely to lie? Sasse found that perceived fairness was the best predictor of people giving up personal details; so would unfairness make people cheat? She ran a language-proficiency experiment with 156 mTurkers, half of whom got unfair negative feedback; all subsequently got an opportunity to cheat. The subjects who were treated unfairly cheated more, and were also significantly less happy. A further experiment on insurance claims gave the same result and is more directly applicable to e-commerce.
Aldert Vrij has been working on deterring deception, with separate experiments on social influence and imposed cognitive load. Observing oneself can lead to more normative behaviour, just as being observed by others can. His first study was about lying to get a health insurance quote, with a cover story of testing lie-detection software; if subjects could get away with lying to the machine they’d get a bonus. The manipulations were being observed via a Skype link, and seeing yourself on a webcam. There were 82 subjects; 40% of the observed participants lied, as did 60% of the others; seeing yourself had no effect at all. The second study imposed cognitive load: 84 subjects had to lie to get a job as a spy, ostensibly to test interviewing software, and half of them also had to show they could multi-task by memorising on-screen popups during the interview. Almost everyone lied in response to cued questions, but the proportion who lied during open-ended questions was 50% in the control group and 25% in the treatment group. So cognitive load can deter deception in contexts where lying involves some effort.
Jussi Palomaki has been working on “Deception, Machiavellianism and poker”. He recruited 502 online poker players to decide whether or not to bluff in online poker tasks; all had four opponents, and there were three groups in which the avatars were all male, all female or two of each. Bluffing was significantly more frequent at tables with only female avatars; the subjects were mostly young males, and men in particular were more likely to bluff at all-female tables. However, almost all participants disagreed with the proposition that opponent gender had an effect. Machiavellianism predicts bluffing, but only in inexperienced players. Perhaps men see bluffing as a competitive display, and see women as gullible.
Discussion touched on the differing levels of acceptable lying across cultures; when does a social lubricant become a social grease-fire? A more detailed understanding of when people lie can’t hurt – including the extent to which these results vary across cultures. The distinction between fair and unfair rejection may not be enough; unfair rejection can amount to demeaning the subject, and it might matter whether the mistreatment is intentional or not. Material incentives matter too; the bigger a poker pot is, the more likely people are to bluff. A formal approach to human-factors security might be based on Reason’s Swiss cheese model, where protection is about ensuring that the holes in the various layers of security never line up; however the top-down approach favoured by governments tends to freeze things while threats are actually dynamic, so we end up with pointless unfair bullying rather than effective dynamic response.
Bonnie Anderson works on NeuroIS, using fMRI, EEG, eye tracking and cortisol measurements to find out what’s really going on with risk behaviour. The repetition suppression effect was first discovered in the California sea slug, and Eric Kandel won a Nobel prize for it; fMRI work shows this is exactly how people come to ignore security warnings. Can we counteract it with polymorphic warnings? She tested 13 treatments and found from fMRI that jiggling the message works best, then zooming it. She’s now doing a series of larger-scale field studies with Google and LinkedIn. Her takeaway message is that habituation is not laziness but something innate, found even in invertebrates.
Michelle Baddeley’s subject is “A behavioural analysis of online privacy and security: understanding deception”. She sees deception as much wider than outright lies: it includes mis-selling, “free” services, payday loans and so on. She’s working on a taxonomy of deception, much (but not all) of which can be understood in terms of standard economics: from opportunism to vengefulness, and from paternalistic and social lies, cheap talk, exaggeration and minimisation, through bad-habit lies to self-deception. Even where motives are rational, perpetrators can have motivational biases and exploit victims’ cognitive biases. If sender and receiver both gain, a lie is Pareto-improving; if the sender gains and the receiver loses, it’s selfish; there are even spiteful lies where both lose (and we shouldn’t ignore altruistic lies, where the sender helps the receiver). Some means of deterring deception will involve standard economic insights; behavioural economics can contribute ideas about reputation and social capital, shifting time preferences, and enabling victims to gain insight.
Brian Glass followed with a talk on “Modelling Misrepresentation in Online Seller-Buyer Interactions”. The second most reported problem on eBay (after slow seller communication) is misrepresentation of goods sold; this is serious, as 3m items are sold a day by 25m sellers with 200m live listings. He asked subjects to write descriptions of jewelry items, either from a written description or from handling the items themselves; selling is a repeated process with immediate rewards and delayed feedback. This lets him try different types of feedback (rewards, fines, reputation, observation) and see which learning models fit best.
Bhismadev Chakrabarti’s title was “Wh(o/y) do we mimic?” Spontaneous mimicry is a fundamental component of empathy; social contagion of emotion underlies learning and is present from birth. Contagion in online social behaviour can cause a video, an unsafe practice, or even a threat to go viral. Mimicry is driven by similarity, status and (lack of) social distance – all of which change the reward value of the social target. He set out to study how reward modulates mimicry using facial EMG and fMRI. There was no effect in individuals with high autistic traits; in neurotypical individuals the reward value modulated activity in the striatum and the anterior insula / putamen. Yet emotional contagion is not everything; perspective taking also matters in empathy, and he measures this by seeing how much time you spend looking at a distractor that satisfies a request from your own perspective rather than the requester’s. It turns out that perspective taking and emotional contagion are substitutes in empathic individuals, and less rewarding distractors interfere less with perspective taking. Follow-up research questions: does online contagion also get modulated by reward, and are some people more susceptible to it?
Diego Gambetta has been studying trust and distrust for a quarter of a century, linking trust with signalling theory. What makes a signal credible, even where there is an incentive to misrepresent? Signals that work are signals that a deceiver cannot afford; a poisoner will not usually drink from a poisoned chalice. This embraces Aldert’s work: increasing the cognitive cost of lying will make liars lie less. The criminal underworld is an interesting study: how, for example, do you ensure underlings are loyal? One answer is to promote incompetents (Machiavelli said as much: if you promote people who deserve it, they will never be grateful). How do you inspire trust? Burn bridges, tie hands and reduce outside options; or share compromising information with partners. The hardest question is: how do you know I’m a bona fide criminal, rather than an undercover cop? Get a prison sentence, or some other costly signal. (The better the legal system, the more trustworthy prison is as a signal of criminality.) In effect, criminals exploit the law to enforce their own cooperation. All this is written up in his book “Codes of the Underworld”; he’s now trying to replicate these strategies in the lab.
In discussion, the credentialling function of prison sets an optimal level of enforcement: if you imprison too many young men, as Brazil does, you draw many of them into crime. But loyalty is rare, and therefore precious. An interesting thing about studying crime is that we see the world deprived of the institutions we live by every day. But there’s a spectrum. Shared knowledge of deviance can be cheap; where everyone cheats on their taxes, you might not need Mafia bullets to enforce social solidarity. Might warnings be made interactive? That’s one possible way forward; but bear in mind that not all habituation is bad – it’s not just that it’s energy-efficient for the brain, but there’s the economic aspect that many warnings exist for the benefit of the warner rather than the warnee. It would be great if warnings could be quantified, but ambiguity is often necessary, and there are issues of subject autonomy. There’s also a literature on prediction and habituation by Schulz and others. Experts and novices react to warnings differently; and there are brain correlates of expertise, which present differently depending on the nature of the task (perceptual, social, …) – though the social brain is really the whole brain! There are also learned trade-offs: smart users tune out “shady” flashing popups, so companies put a lot of effort into developing attention-getters that stay just short of the alarm threshold. The complementarity of emotional contagion and perspective taking is supported by TMS data on the right temporoparietal junction acting as an “xor gate” between the two.
Harold Thimbleby kicked off the afternoon session with a discussion of how safety fails in clinical information systems. A system change that causes doctors to spend an extra half-hour a day feeding it can make a significant difference to mortality, even in the absence of definite harm. User interfaces are dreadful and constantly changing, even in devices from the same vendor with the same serial number. Safety is very poorly done in medical devices because of regulatory incompetence; the response to poor interface design is “train the nurses”, and the response to fatal accidents is “blame the nurse”. That at least lets the investigation be closed. There are individuals in the FDA who understand this, but the regulatory process doesn’t let them do anything useful. Once the regulatory logjam is broken, the technical solution involves user simulation and error-management user interfaces.
Nicolas Christin paid people to download and run an unknown executable, in return for a payment that escalated each week. The software advertised itself as the “CMU Distributed Computing Project” but was hosted offsite and not found on search engines. Half the subjects got a UAC prompt about running a program from another computer. In low-paying conditions (1c – 10c) about half downloaded it and half executed the remote program. In high-paying conditions (50c or $1) two-thirds downloaded, and again about half ran it. 1.8% used a VM (most probably Mac users), 16.4% had malware and 79.4% had security software – and people with AV were more likely to have malware than those without (18% vs 12%). The percentage of users from the developed world increased sharply with price. 70% of participants knew it was dangerous to run unknown programs; they were greedy rather than stupid. Comments included “it looks shady but he pays on time so must be OK.” There seems to be a risk thermostat effect, in that many people thought they were safe because they had AV. Maybe this approach is serviceable for some botnets.
Cormac Herley talked on “Optimizing the right objective function”. Normally, people who do risk-based security try to minimise expected loss; but users care about expected loss plus effort. This gives finite results, is more realistic for portfolios of risks, and mirrors actual user behaviour better. Thus the supposedly “perverse” behaviour of users may just be a result of their having a different objective function from ours.
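To make the contrast concrete, here is a minimal way of writing it down (my notation, not Cormac’s slides): let S be the set of mitigations a user skips, p_i the probability of suffering loss L_i if mitigation i is skipped, and e_i the effort of applying it.

```latex
% Minimise expected loss (the usual framing)
% versus minimise expected loss plus effort (what users appear to do).
\min_{S} \sum_{i \in S} p_i L_i
\qquad\text{versus}\qquad
\min_{S} \Bigl( \sum_{i \in S} p_i L_i \;+\; \sum_{i \notin S} e_i \Bigr)
```

The first objective pushes towards applying every mitigation however marginal; the second stays finite and predicts that users will rationally skip any mitigation whose effort exceeds the expected loss it averts.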
Richard Harper has been agitating about trust. Thinking about security as “good” programmers against “bad” outsiders who are idiots or evil can easily crowd out subtleties. For example, he was once hired to try to understand why any honest person would want a prepaid mobile phone; the vendors didn’t understand that some people didn’t trust themselves to be frugal. Lack of self-trust is wider; people use some services because they don’t trust themselves to be able to find their stuff. We need calmness to think about broader problems, and avoid introducing fragility as a result of too narrow a scope.
John Lyle fights spam for Facebook, which is seeing attacks in which people are social-engineered into downloading some javascript and pasting it into their browser. The dangle is something Facebook doesn’t do, such as seeing who viewed your profile or getting a thousand likes; the output is subscribing the victim to pages that are sold to spammers. 80% of people who go to a script page (e.g. from a video page link) won’t run the script, but only 8% of people who paste code into their javascript console will not run the script when they get a warning. What can we do? Should there be a fancy warning on the javascript console? If you block, the bad guys tell victims to install greasemonkey and run the script there. In such circumstances people know they’re doing something Facebook doesn’t allow, so they’re prepared to break rules. The fundamental problem is that the victim is an accomplice in their own compromise, and their incentives are aligned with the scammer’s until the very last step.
Tyler Moore’s topic was “Increasing the impact of voluntary action against cybercrime”. Most action against cybercrime is down not to law enforcement but to private firms and individuals; Tyler wants to measure what’s happening and what works. Communications to end users and to ISPs are very different; the latter respond well to benchmarking studies which show them to be behind their peers, while behavioural techniques are more useful with end users. The value chain can be complex, and intermediary remediation is the most common way of dealing with the problem, with incentive issues at each step of the chain: abuse report contributor -> abuse database manager -> abuse notifier -> intermediary -> resource owner.
Discussion started around antinormative behaviour; being mischievous and tweaking Facebook is different from messing your friends around, so there may be value in peer policing. The US government is putting a lot of effort into letting law enforcement agencies authenticate each other online, but this doesn’t fix the competence problem, which leads to cybercrime operators working on a basis of personal trust. Might we use social-network mechanisms to spread warnings, such as by getting people to tell their friends how they got hacked? What are people’s attitudes to that? Is it OK to admit that you gave social malware to your friends? How do we deal with thrill-seekers who don’t mind the possibility of getting infected? And is it reasonable to have a really heavyweight warning for actions such as turning on developer mode in your browser?
Scott Atran’s subject is the devoted actor, who differs from the rational actor in being driven by sacred values that are out of proportion to expected benefits. The Americans had a higher standard of living in the 1770s than Britain but still opted for “Liberty or death” – something no history of the world would have encouraged. Augustine asked what commonwealth remained standing now that the Roman Empire had fallen. Costly ritual commitment to apparently absurd beliefs deepens trust, galvanises group solidarity and blinds members to exit strategies. Fully reasoned social contracts are much more fragile because of opportunities to defect later. Sacred values have a privileged link to emotions, are immune to material tradeoffs and are bound to notions of collective and personal identity. They are insensitive to discounting and social influence, at least once internalised. His approach is to interview political and religious leaders to get hypotheses and then, following lab experiments, to use surveys to explore likely effects of policies in the field. This led to Iranian President Rouhani’s gesture on the Holocaust, in return for which Israel allowed Iran to enrich uranium (but not stockpile it). One interesting angle is that while parochial people who’re fused with their religion are more likely to resort to violence, less parochial believers (such as religious libertarians) are less so. The former don’t respond to cost-benefit analysis anyway.
Chris Cocking talked on collective insecurity. He’s interested in how crowds behave; there are deeply embedded myths about crowds being bad or mad, but the evidence is largely in the other direction. Resilience emerges from a sense of shared identity that encourages cooperation. Indiscriminate policing escalates disorder; it creates crowd cohesion and causes people to identify fire and ambulance crews with the police. The pressure to introduce water cannon after the 2011 riots was political rather than based on evidence. In those riots the police didn’t let the fire brigade in to tackle the fire at the Reeves furniture store, but it’s unlikely the crowd would have attacked the fire service if they’d arrived on their own. Whatever you do, don’t put paramedics in riot gear! Crime rates usually drop in natural emergencies (San Francisco 1906, Katrina 2005), and most of the problems tend to come from a militarised response. The real panic is among the elite.
Shannon French works in military ethics, where the core issue is often overcoming a belief among cadets that military ethics was invented recently; young men who have not seen combat focus exclusively on physical risks and on being held responsible for the deaths of colleagues. They may say “better to be judged by twelve than carried by six”. They do not yet understand psychological and moral risks. Her most effective guest speaker was Sergeant Sammy Davis, a Medal of Honor winner, who told an Annapolis class forcefully that there are fates worse than death and that people who did not understand this were not fit for command. Losing one’s sacred identity as part of a unit that behaves honourably means losing meaning in life; some of her alumni in the Navy SEALs are now uncomfortable with missions they’re asked to perform. Senator Bob Kerrey, who killed civilians in Vietnam, said it had haunted him for thirty years, and he’d wondered constantly whether things might have happened differently; killing for your country can be worse than dying for it.
David Modic has been studying susceptibility to persuasion. His scale pulls together questions from many previous scales on aspects of persuasion, and it turns out that premeditation is by far the largest factor, accounting for about a quarter of the effect. There is a long version and a short one; the questionnaire is free and researchers are encouraged to play with it. The link is on the SHB website.
The discussion started on drones; every time a new military technology comes along, people call Shannon and her colleagues asking whether a new military ethic is required. The answer is no, as it’s just distance warfare (like the longbow); the problem is to find the right analogy. She suggests the sniper, as drone operators see the person they kill and suffer high rates of PTSD. But drone warfare is deskilling the job and moving it to civilians – which profoundly disturbs her. There is a similar issue with the NSA and contractors; the shift to a corporate model is also troubling. The military hated the move towards torture, and it eroded their sacred values. The adoption of military tactics by the CIA and the police, without the right ethos and values, can lead to bad things too. Yet professional troops do commit more atrocities than “citizen soldier” draftees. The ferocity of a struggle makes a difference. Broad-based church groups have made huge strides in gentle pursuits such as the civil rights movement; yet in a sharp conflict environment the membership rituals become ever more absurd. Basically we have two registers for beliefs: normal logical behaviour is calm, but the fiercest beliefs come through emotional bonding, and all involve rituals which ratchet up the contagiousness of ideas. In the face of a perceived threat, this can motivate large numbers of people quickly; look at Leni Riefenstahl’s movies or the Iranian government’s use of the nuclear issue to mobilise its population. Values usually don’t travel beyond their social networks, so breaking up the networks might help. But we don’t yet have a prospect theory of sacred values, and most work on economic and moral reasoning hasn’t considered sacred values at all. The main attempted way of changing sacred values at scale is revolution, though they usually snap back; things can however change slowly over time, as with the shift from a society based on ethnic and confessional groups to one based on rights, or the shift in the US climate brought about by the national leadership in the years after 9/11.
> “.. theory of sacred values”
One interesting starting point for such a theory might be Victor Davis Hanson’s “The Western Way of War: Infantry Battle in Classical Greece” (New York, 1989). He has an interesting discussion of the (complex) relationship of agricultural society, values such as the importance of defending grain fields, orchards and vineyards, and Greek citizen warfare.
Serge Egelman was Tuesday’s first speaker, in the session on usability; he has worked with Google on browser warnings. The best broad-brush techniques we’ve come up with will only persuade about half the users to pay attention, so the next step is probably the personalisation of warnings. He looked at the main privacy scales in the literature (Westin, the PCS and the IUIPC) and found some very small correlations, such as an inverse correlation between agreeableness and the Westin score; he’s now looking more broadly at decision-making style, risk aversion and even how people use Facebook. One opportunity might be to set default privacy settings based on inferred preferences.
Yuliy Pisetski described work that Facebook has been doing on OAuth. There are many things that can go wrong with both browser and mobile app implementations, and there are versions of the protocol for clients that can’t keep secrets. The usability issue here is usability for a large number of non-expert developers; passing around secrets is hard, and asking thousands of the uninitiated to do it is fraught. Predictable mistakes such as session fixation and improper handling of redirect URIs keep on happening, week after week. Protocol designers must make a much greater effort to idiot-proof their designs, and must realise that software will ship and developers will move on as soon as an implementation works at all.
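To illustrate the sort of predictable redirect-URI mistake Yuliy described, here is a generic sketch (my own, not Facebook’s code; the client URI is hypothetical): a check that matches on a prefix rather than exactly against the registered whitelist lets an attacker steer the authorization code to a page they control.

```python
# Sketch only: why loose redirect-URI validation in OAuth keeps going wrong.
# The registered URI below is a hypothetical client, not a real endpoint.

REGISTERED = {"https://example-client.app/oauth/callback"}

def naive_redirect_ok(uri):
    # Common mistake: prefix matching against the registered value.
    return any(uri.startswith(r) for r in REGISTERED)

def strict_redirect_ok(uri):
    # Safer: exact string match against the pre-registered whitelist.
    return uri in REGISTERED

evil = ("https://example-client.app/oauth/callback/../open-redirect"
        "?next=https://attacker.example")
print(naive_redirect_ok(evil))   # True  -- the authorization code leaks
print(strict_redirect_ok(evil))  # False
```

Registering complete redirect URIs and comparing them exactly is the generally recommended defence; the sketch just shows why anything looser is fragile in the hands of thousands of non-expert developers.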
Jeff Yan talked on breaking a family of graphical passwords. These are becoming widely deployed, with Android and Windows 8 being examples. Jeff tackled PassPoints, where a password is five click points on an image (the “cued click points” variant uses multiple images). As users don’t click on exactly the same point each time, such schemes discretize the background image into tolerance squares. There’s an edge problem when click points are near the grid lines; “robust discretization” uses three overlapping, offset grids. So: does grid information stored in the clear leak passwords? Indeed it does: people choose salient points, which can be found automatically, and combining these with the grid information cuts the entropy severely, leading to 40% success in password guessing on PassPoints and Cued Click Points.
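As a rough illustration of the leak (my own sketch with assumed parameters, not Jeff’s actual attack code): in robust discretization each click point is stored along with the identity of the offset grid in whose squares it sits safely, i.e. at least the tolerance r from the grid lines. Knowing that grid identifier lets an attacker discard any automatically detected salient point that lies too close to that grid’s lines, shrinking the guessing space for each of the five clicks.

```python
# Minimal sketch of the "grid leak" idea, under assumed parameters:
# three offset grids of square side S = 6*r, offsets 0, 2r, 4r,
# where r is the click tolerance.  Candidate points and the leaked
# grid id below are hypothetical values for illustration.

R = 10                        # click tolerance in pixels (assumed)
S = 6 * R                     # grid square side (assumed)
OFFSETS = [0, 2 * R, 4 * R]   # the three offset grids

def safe_in_grid(coord, offset):
    """True if a 1-D coordinate is at least R from the nearest grid line."""
    d = (coord - offset) % S
    return R <= d <= S - R

def consistent(point, grid_id):
    """A candidate click point fits a leaked grid id only if it is safe
    (>= R from the grid lines) in both dimensions of that grid."""
    x, y = point
    off = OFFSETS[grid_id]
    return safe_in_grid(x, off) and safe_in_grid(y, off)

# Salient points found automatically by image processing (hypothetical).
candidates = [(57, 103), (120, 82), (233, 310), (301, 25)]
leaked_grid_id = 1            # read from the cleartext per-click grid data

remaining = [p for p in candidates if consistent(p, leaked_grid_id)]
print(remaining)              # guessing space shrinks from four points to two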
Stuart Schechter gave a talk on his forthcoming Usenix security paper on storing 56-bit keys in human memory but asked us not to blog the details.
Angela Sasse argued it’s time for a reboot of security usability. She’s collected over 200 interviews and 1,800 survey responses on noncompliance across a number of organisations. Most people break most rules most of the time, most of the organisation is complicit in it, and things are getting worse all the time. People re-organise their work to avoid security; they return devices, refuse services, forego innovation and indulge in extensive shadow security practices. Usable-security geeks demand 12-15 character passwords (as the previous speaker did), but this is simply incompatible with human memory and is an intolerable intrusion into the working day. CAPTCHAs have a 40% failure rate and waste 17 years of human effort every day. There are no unclaimed pools of human effort to be tapped. People must get real and accept that the user compliance budget is maybe 3% of user time and attention; there should be compulsory assessment of workload and cognitive complexity (maybe using GOMS or NASA-TLX), in line with Harold’s recommendations yesterday.
Discussion started on authentication, ninety percent of which should just go; most of it is just marketing rather than risk reduction. Also, tens of thousands of normal users get compromised every day; the important thing is to give the accounts back to the right people (and to prevent the bad guys accumulating too many accounts, whether compromised or just sybils). There was disagreement on whether social network sites gain or lose from interpersonal privacy, as opposed to privacy against advertisers; both oversharing and undersharing are problems, but the former is worse. There are issues of user motivation, which varies widely, so study designs should be conservative. A shocking case study is patient-controlled analgesia machines, where safety and usability really matter, but where all the PINs are 1234 because getting hundreds of nurses to manage hundreds of pumps is otherwise infeasible. Change may come from the move to phones and tablets, where people expect to enter their gmail password once when they buy the device rather than once a day; this pushes authentication under the hood, and exchanges the problems Angela’s worried about for the problems Yuliy is working on.
Alessandro Acquisti started the sixth session, reporting work with Jeff Hancock, who could not be here. In 1966, Hall defined a hierarchy of space, from public to social to private to intimate. Does this have evolutionary roots? Alessandro wondered whether external threats affect privacy attitudes. 800+ participants took part in four experiments. They had a human or a moving fan visible through a window in an adjacent room (the visual experiment); a human present or absent in the room (the physical experiment); either a fax or a muddled phone conversation coming from the next room (the auditory experiment); or finally either an open vial of clove oil or a vial of androstenone (the olfactory experiment). The Moon questions were used to determine privacy sensitivity. There were strong effects from physical intrusion (unlike a previous experiment 20 years ago), with subjects giving shorter answers with fewer pronouns; slightly less strong results for visual and auditory intrusions (the olfactory results haven’t been analysed yet). Reported privacy concerns were higher in the treatment groups. These sensory cues may of course be absent, subdued, or manipulated in cyberspace.
Laura Brandimarte talked on privacy concerns and geolocation. The US Census Bureau is thinking of using geolocation for the 2020 census; will this raise privacy concerns, or is the government not trusted enough? She did four experiments; in one of them, for example, 694 mTurkers were asked census and sensitive questions across three location conditions (control, requested location, geolocation) and three different surveyors (researchers, the Census Bureau and government in general). There was no effect on the census questions, but people were less likely to disclose sensitive information if geolocated. They trusted researchers and the Census Bureau more than the government generally. In another experiment, subjects were primed to think of “Clinton” (control) or “Snowden” (surveillance prime) by solving an anagram; the results were broadly the same, with no effect on the census questions but a significant effect on the sensitive ones. (The Snowden prime had no effect in the government condition, where it seems people are already primed to think of surveillance.)
David Murakami Wood’s subject was “Vanishing Security and Ambient Government” by which he means that surveillance apparatus from CCTV through nanohummingbirds to smart dust is becoming invisible. New sensor tech such as terahertz “x-ray spex” will even see through our clothes. How do people think about this? Historically, urbanism and architecture were often about control and security. There are extreme, hard cases: EADS Cassidian’s automated border system for Saudi Arabia has automated killing systems, like the Berlin Wall. The reality in most places may be “ambient government” that manages populations in a distributed way with no obvious interface or “place”. Will privacy be useful as a basis for theory or action any more? It’s not that we shouldn’t have it, but we need more. Do we use sousveillance, watching the watchers, as in Brin’s book?
Masashi Crete-Nishihata was next on “Targeted Threats against Human Rights Groups”. How can civil society organisations defend themselves against targeted malware attacks? They have a four-year project monitoring this in China and Tibet, which found relatively low levels of technical sophistication but effective social engineering (classified by the effort put into personalisation). A second study looked at commercially available law enforcement malware such as Gamma’s Finfisher and HackingTeam’s RCS; they sell to the governments of countries like Bahrain, the UAE and Morocco who use them against journalists, refugees and others resident in the UK and the USA. There will be papers on both of these at Usenix.
Peter Swire, the last speaker in this session, talked about “The Declining Half-life of Secrets and the Future of Signals Intelligence.” He asked us not to blog beyond a sentence or two of description. Peter was on Obama’s review group on the Snowden revelations, whose report “Liberty and Security in a Changing World” came out last year and recommended that the government cease wholesale metadata collection, as it hadn’t helped stop any attacks. The report was about protecting both national security and privacy, maintaining public trust – and figuring out how to prevent unauthorised disclosures. His talk was on the fact that the half-life of secrets is declining rapidly because of technology: more people have access to more stuff, leaks now happen by USB rather than by printer, and a leaker doesn’t need the New York Times to publish. For more see chapter VI of the report.
Discussion started on the evidence needed to stand up a hypothesis of evolutionary as opposed to cultural origins for some privacy preferences; there is a significant literature in related fields. However, the empirical work offers new insights into the real problem of how people react differently to privacy online. The NSA’s problems are not unique to it: private firms can’t keep secrets any more either, any more than individuals can. A dissenting view comes from Yochai Benkler, whose survey suggests that leakers are not getting more common over time; he sees Manning and Snowden as outliers. Keeping secrets is cultural as well as technological; the real change may be contractors who are no longer in the “club” with jobs for life. Chris Soghoian found numerous secret programmes on contractors’ resumes on LinkedIn. Pervasive surveillance also raises unrealistic expectations: politicians expect that if they toss all the “big data” in the world into an analytics engine, it will output the names and addresses of all the terrorists. It’s more likely that we’ll suffer 100% of the privacy harm for 100% of the people and get 0% of the expected output. What are the consequences for justice and fairness, never mind privacy? The government secrecy culture was driven by the specific desire not to have rivals realise how big an advantage we had in 1945; what might replace it post-Snowden? If the Germans had found out that we’d broken Enigma, that would have been tragic, but the fact of the break would not have embarrassed the analysts; many of the modern methods fail the front-page test, not least because we’re the targets rather than German or Russian military communications.
David Livingstone Smith’s topic was “Making Monsters – Addressing Appiah’s Challenge”, work he’s been doing in the last couple of weeks. He’s interested in dehumanisation, which facilitates mass genocidal violence by treating others as subhuman creatures that evince fear, horror or disgust. His texts were Morgan Godwyn’s 1685 “The Negro’s & Indians Advocate” and Himmler’s “Der Untermensch”. Godwyn describes the views of his slaveholding fellow-colonists in much the same terms as Himmler: not all who appear to be human are so, some are beasts in human appearance and without souls, and so on. These depend on psychological essentialism and the “great chain of being”. Kwame Appiah pointed out that mere subhumanity is not enough to explain the antipathy and extreme cruelty observed in genocides. Freud described as “unheimlich” (creepy) the uncertainty about whether something is alive or inanimate; compare the uncanny valley observed by Masahiro Mori in robotics. David hypothesises that dehumanised people are felt to be uncanny because they violate the human/subhuman boundary: their appearance pulls the mind one way and their believed essence pulls it the other.
Wojtek Przepiorka talked on reputation and signals of trustworthiness in anonymous online markets. About half the transactions in European online markets involve buyers and sellers in different countries, and reputation has a measurable market value: buyers pay for it and sellers invest in it. Wojtek has been looking into whether herding can undermine reputation systems, as bargain hunters and sellers imitate successful peers. His preliminary work suggests not.
Jodok Troy’s subject is the urbanisation of security. Many of our conflicts have their origin in cities, and cities have often evolved under the pressure of civic conflict; we live in a world of slums, and a liberal American can be shocked on getting off the train at the Gare du Nord in Paris to see armed soldiers on patrol. And authoritarian states riff off each other. Human conduct is not just about patterns of behaviour but about the imitation of desire; in international relations this mimesis leads people to fight because they desire the same goals, whatever the meme of the day might be, and competition becomes an end in itself.
Rick Wash wants to challenge one of the foundational ideas of security usability: minimising the number of decisions users have to make. Windows 95 updates required the user to visit a website manually; by XP SP2 updates were automatic, and compliance exceeded 95%. Rick investigated what people thought updates were doing; most had no idea. But the more you remove the human from the loop, the more you remove the opportunity to practise. If we follow Lorrie Cranor’s advice – “Whenever possible, secure system designers should find ways of keeping humans out of the loop” – we remove the easy decisions; it’s like expecting people to play Beethoven when they first sit down at a piano rather than trying chopsticks first.
Discussion covered different hierarchical ideas of humanity, touching on gender and religion and their effect on the uncanny reaction, and whether there might be other ways of shifting away from it. In any case, the boundaries between in-group and out-group are incredibly flexible. The uncanny reaction is complex: the Nazis likened Jews to rats but only wanted to demean and humiliate the former. There is research showing that moral decisions made by uncanny robots are seen as less valid than decisions made by ordinary robots. Military educators (such as Tony Jack and Shannon French) research ways in which soldiers can be given the necessary psychological distance to kill, but without precipitating the more pathological forms of dehumanisation. Other topics included the differences between reputation systems, and within eBay over time. On the upgrade front, there was confusion over the Heartbleed warnings, when people did not know whether to update right away or later; the fix came out at the same time as iTunes 11, which many people disliked to the extent that they didn’t want to install the next iTunes (which contained WebKit, a whole browser), let alone the Heartbleed fix. People just don’t want systems to change in ways they can’t fix.
I talked about reciprocity, transparency and control. Given that intelligence and law enforcement surveillance networks are merging and globalising, what does local control mean? The city of Cambridge has a network of CCTVs and shared radio bands between police, store security and the university’s patrol staff; will we retain control? Even if our personal control over information is progressively compromised, surely a university should provide an environment where information collected about its students can’t be handed over to their countries of origin if they might as a result be executed for rebellious utterances? This leads to a challenge: can we redefine the concept of privacy and control from the user interface outwards? What would be the ideal – a latter-day Ring of Gyges that would enable us to disappear? We should try and dream up something radically new.
Andrew Adams talked on ownership, neutrality, privacy and security: the right to choose one’s partners, or feudal overlords. Bruce has remarked that we’re moving to a feudal system where we give allegiance to a service firm in return for protection. Some philosophical schools don’t allow people to own land, merely to use it under some rules; even in countries like the USA where land ownership was nearly absolute, that changed when railways and aircraft came along. Modern technological industries have come up with many restricted concepts of ownership on which modern business models depend. How should this be regulated? It’s objectionable, for example, that a network operator can stop us from using one device as a gateway for others to go online.
Jean Camp believes passwords will remain as they allow circumvention: fancy authentication tokens won’t work if a C-level executive can’t give one to her personal assistant. We should focus on making them work better using a variety of techniques that help a bit, such as by giving positive feedback when someone gets one right, using memory cues to help recall, and using pictorial cues to reassure people they’re not at a phishing site.
John Kaag’s subject is the moral hazard of drones. Plato reckoned we should want to die well, and that can’t be put in a nutshell. You don’t motivate a user by talking about passwords but by tapping into existential issues. Plato was concerned about people in power, who have no incentive to be moral or just, at the time of the Peloponnesian wars. Now clandestine military action has always taken place, but never has it been so ubiquitous or so low-cost; Obama’s explanations often turn on not putting troops in harm’s way. That’s one moral hazard; another is the distancing between drone operators and victims. However that’s not complete, and many drone operators face an “existential crisis”, often airbrushed as PTSD. There is also a moral hazard to the public: most don’t know where drone strikes take place, as our citizen soldiers stay at home.
Bruce Schneier noted two underreported aspects of the Snowden affair: little is said about what is done with the data, or about the bulk streams such as location and traffic data. The emergent stuff is all our location data plus who’s talking to whom, and thus who’s travelling with whom. The fact that all this is piggybacking on commercial systems makes it acceptable; we’d never accede to a government demand that we carry tracking devices or report all our friends to the police. The big changes are from government-on-government surveillance to governments-on-populations; and that everyone’s now using the same technology and infrastructure – so that to tap foreign leaders the spooks have to tap everybody. Governments therefore see it as a simple choice between security and surveillance; it’s easier to eavesdrop on everybody than to target. Nothing will change, though, unless we can change the fundamental social issues: fear on the government side, and convenience on the commercial side.
Discussion (blogged by Sophie van der Zee):
Question to Ross, Bruce and Andrew from David Murakami Wood: Your argument on surveillance needs more sophistication, and importantly, it isn’t new but dates from the 70s. In the surveillance studies field we’ve talked about combining things together for years. And although network effects are important, they are not everything; profit and capitalism matter too. Bruce: True, I’ve told a simplified story (the 70s are important). Ross: There was little awareness of network effects among the IR people I spoke to, so there is a gap in how our two communities understand economics, and we will have better conversations when the left coast and the right coast understand it the same way.
Peter Swire: Let’s look at the problems of the NSA, but from their perspective. The US military budget is going down. If their secrets are coming out, their weaknesses will too. Let’s think about their responses in the future; they will feel they can’t do things they used to be able to do. Ross: Back in the crypto wars, there were misconceptions that privacy and cryptography were synonymous. Yet privacy compromises are usually abuses of authorised access, and the rest is more about traffic data than content. This is still applicable. So the spooks are afraid of the world going dark, but instead it is getting brighter and brighter. Our problem at Cambridge is how we create a (semi-)safe environment for students who come from countries where they can get punished for speaking their minds here.
(Bhismadev:) I thought about vaccination risks. You vaccinate an entire nation, although some people will never come across the disease. Should we treat this as a disease? Bruce: That is happening; people talk about it in terms of herd immunity. But fear is the important driver. If another speaker says that terrorists will kill your children, it doesn’t matter what I say afterwards. Ross: We are creating a global intelligence architecture which will last for decades, even when the US is no longer in charge. I can warn Obama that the privacy he gives an Indian farmer today will be what the ruler of India gives his great-grandchildren in 100 years’ time. Would that work? Or is that too far in the future to make a change? Jean: I cannot save the NSA from itself.
Harold Thimbleby: It may become illegal not to take drugs when your doctor prescribes them, because stopping a course of antibacterials early promotes resistance. A company, Proteus, makes a chip that can be used to monitor whether people take their medication.
Adam spoke about a book: Intrusion (the fix).
Jean: This is not the future; there are pregnant women in prison in the States right now, because you have to take care not just of yourself but of others as well (the herd principle, pregnancy and the baby).
Serge: Cultural norms are missing from this discussion.
Frank to John about drones: You said that because there were no troops on the ground, it was easier to get authorisation; did I understand that correctly? John: There might be a correlation between technology and mission …. Bruce: Drones are interesting because, as they get cheaper and more diverse, they are the first type of warfare where you don’t need the population on your side. It can now be done by small groups and the population doesn’t need to know; therefore it is fundamentally different. Ross: Marginal cost goes down, capital cost goes up. Adam: With drones we are not seeing the aftermath. Snipers do, but the drone pilot doesn’t see the impact. John disagreed: they do see the aftermath. Jean: Drones don’t replace full invasions; they replace deniable CIA activities. And these people have an impact on their community – they go to football, go home, and so on – and the drones are bringing this effect home. Ross: The scale is too small compared with WWI and cannot overturn the government.
Diego to Bruce: About privacy and the collection of big data – we change our behaviour with technological advances. We are generating new resources that are public, but who owns what? Adam: Ownership is what my talk was about, including data ownership. Bruce is not a fan of property rights in data; they won’t work in a lot of applications. Diego: Shouldn’t judges be in charge? Bruce: That’s better. But the UK doesn’t even have a secret court (the US does). Ross: We do have secret courts now, but they are for different purposes.
Adam: There was a German court that allowed a woman to make her ex-boyfriend delete her naked pictures, even though he hadn’t made them public. Ross: Jackie Kennedy hid behind sunglasses when she didn’t want to be recognised. Can something like this give us a socially embedded, modern version of the ring of Gyges? With Google Glass coming, can there be social signs to show you don’t want to be photographed? There used to be a privacy T-shirt with a copyright mark on it as a notice to CCTV operators. If software firms agreed not to store images of people wearing sunglasses, you could have a social privacy signal with some technological support. Of course you’d then need a law to stop the police wearing shades.
> “With Google Glass coming, can there be social signs to show you don’t want to be photographed?”
Exactly the idea behind Offlinetags (http://www.offlinetags.net/en/) which were in the ConfBag at CPDP2014 and just presented at CHI (http://dl.acm.org/citation.cfm?id=2581195).