I’m liveblogging the Workshop on Security and Human Behaviour which is being held here in Georgetown. The programme is here. For background, see the liveblogs for SHB 2008–14 which are linked here and here. Blog posts summarising the talks at the workshop sessions will appear as followups below.
Arvind Narayanan opened the eighth workshop on security and human behaviour with a discussion of how Bitcoin opens up new possibilities for security behaviour. The security of bitcoin depends on that of your private key, which makes it startlingly like having everyone keep their cash under their mattress. Software on personal machines has never been the main line of defence before. The first thing many malware variants do nowadays is look for bitcoin logs. There have been at least ten thefts of more than 10,000 bitcoins. Security technology can sometimes drive innovation in crime.
Matthew Cronin is interested in creativity, problem solving and collaboration. Trust decisions depend crucially on context and trying to understand them out of context is problematic, just as it’s hard to study ethology outside of a valid ecological context.
Tom Finan is a cybersecurity strategist at the DHS. The last cybersecurity bill in Congress acquired so many features that it became a “Christmas tree” bill with little chance of passing; could market mechanisms work better? So Tom has been investigating whether cyber-risk insurance could improve incentives, and has run several workshops on this. Underwriters are now comfortable they understand crisis management costs, but it’s less clear how you get control system operators to pay attention and there’s not much data. Most underwriters rely on risk culture determinations to work out whether one company’s a safer bet than others. As underwriters become less cocksure, cyber risks get carved out of general corporate insurance. Some CISOs see insurance as a substitute rather than a complement, and thus as a rival for security budgets. He set up CIDAWG, the cyber incident data analysis working group, to accumulate a cyber incident data repository.
Bruce Schneier omitted a chapter on catastrophic risks from his latest book; what might have been in there? More technology lets an individual do more damage, just as gunpowder can do more damage than a sword, and this is likely to continue. What’s the trade-off between criminal efficiency and social welfare? We need fewer criminals as they get more powerful; where does it end? As the big red “do maximum damage” button becomes nastier, so the clamour to stop anyone pushing it will grow. If we get to the point that one guy can kill everybody, then what could possibly be done? Surveillance can detect actions that require a conspiracy, but is less effective against a single individual. Technology restrictions don’t really work, as the history of DRM shows. Morals and social norms break down when you start to have nonhuman actors such as autonomous software agents.
Discussion was led by Andrew Adams. There’s a huge gulf in the definition of “openness” between the extreme openness of bitcoin and the more cautious protocols around vulnerability disclosure. What induces people to take the risks involved in, for example, using bitcoin? Often direct benefits such as defeating exchange controls or even anonymity for downright criminal transactions. People also have protocols (and concepts) of slow versus fast money; you don’t expect to buy a house on impulse. Faster payments can make fraud easier, whereas slow diseases can do more damage; AIDS kills more than Ebola. In the army, you often try to slow down reaction times to forestall traps and errors. Online there are complex nested games between attackers, large-scale defenders such as the developers in big service firms, and individual defenders, which game theory can start to model. Technology policy paths can be complex; for example, the fight against backscatter X-ray devices in airports started with privacy concerns but then expanded to defects in their security policy model and capabilities. This led the vendors and the Director of National Security to argue that it was part of a layered security approach, which may in turn be one approach to future catastrophic technology risks. But how is this squared with constitutional limits on the concentration of power? There are also prudential limits: technology often concentrates power, which in turn creates new risks.
The second session, on fraud, started with Stuart Schechter talking on surveillance and power. What would happen if face recognition was used in door locks? In one of his studies, most (7 of 13) teens did not want surveillance to log their photos on entering/leaving home, while all (11) parents thought it would be negligent not to monitor them; however half of the parents thought it sensible to negotiate. Most (but not all) parents allowed their partner to be an admin or auditor; most (14/19) allowed themselves to audit their teenage children’s access surreptitiously while the rest set the interface so that the children would see when they were seen. This raises many interesting questions about the design of the future “home surveillance state”; without strong privacy defaults, the people who set up a domestic installation may often take a lot of power.
Vaibhav Garg works for Visa and asked for his talk to not be blogged.
Joe Bonneau’s challenge is “Why aren’t all comms end-to-end encrypted in 2015?” He’s been studying this at EFF. User expectations are different for email and chat, with the latter being much simpler and easier to protect end-to-end. A billion people use iMessage, WhatsApp or other secure chat services, with WhatsApp being the biggest ever deployment of end-to-end crypto; no-one expects to be able to retrieve or search old chats, or fancy threading. Another pain point is that 1% of users change their key every day, because they reinstalled the app or lost their device, and their contacts get an obscure warning message. A coming issue is whether people will be able to chat between the different walled gardens. Spam will also be harder to deal with, and search. Do we try to bring chat encryption to email, perhaps with less functionality, or just leave them as separate systems? At present people decide which to use for largely social reasons. There are complex issues around managing multiple identities in ways that are less vulnerable to the coercion or compromise of a particular service; projects like Keybase and Coniks are working on this. However we need a consistent user interface; perhaps an append-only log presented as a sports trophy with a growing list of names on it.
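To give a flavour of what an append-only identity log involves (a toy sketch I’ve added for illustration, not the Keybase or Coniks design), each entry can commit to everything before it via a hash chain, so a service that tries to rewrite an old name-to-key binding changes every later hash and can be caught by anyone auditing the log:

```python
import hashlib

class AppendOnlyLog:
    """Toy hash-chained directory: every entry commits to the whole history before it."""

    def __init__(self):
        self.entries = []          # (user, key, chain_hash) records
        self.head = b"\x00" * 32   # hash value representing the empty log

    def append(self, user, key):
        record = f"{user}:{key}".encode()
        self.head = hashlib.sha256(self.head + record).digest()
        self.entries.append((user, key, self.head))
        return self.head           # a client can remember this value and audit it later

    def verify(self):
        """Recompute the chain; rewriting any old entry changes every later hash."""
        h = b"\x00" * 32
        for user, key, stored in self.entries:
            h = hashlib.sha256(h + f"{user}:{key}".encode()).digest()
            if h != stored:
                return False
        return True

log = AppendOnlyLog()
log.append("alice", "pk-alice-v1")
log.append("alice", "pk-alice-v2")   # a key change shows up as a new entry, not an overwrite
assert log.verify()
```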
David Modic is interested in the psychological mechanisms that make some people more vulnerable to fraud. He picked ten fraud types from victimisation studies and got 6,600 self-reported victims via a media organisation’s crime study. Different mechanisms apply at different stages; to make it initially plausible it should be logical, appealing to victims’ need for cognition (low levels of premeditation and susceptibility to informative social influence also matter). To get people to respond, it should be unique, novel and apparently ethical. To get people to actually lose money, it needs most of the above, and also appeal to consistency and normative social influence. This last is important; scammers should appear to be part of the in group and not stand out.
The morning’s last talk was from Tyler Moore, explaining why there’s no free lunch, even with Bitcoin. He analyses the blockchain to estimate the amounts of money flowing into and out of various scams. He focuses on schemes that are clearly fraudulent by design, from high-yield investment programs through mining and exchange scams to fraudulent wallets that just steal all the money put in them (see his Financial Crypto paper for a full taxonomy). The scams that make the most money get most of it from a small number of victims; the Gini coefficient is correlated with log earnings.
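For readers unfamiliar with the statistic, here is a minimal sketch (with invented payment figures, not Tyler’s data) of the Gini coefficient of per-victim payments into a scheme; it is near zero when victims pay similar amounts and approaches one when a handful of big victims supply most of the money:

```python
def gini(payments):
    """Gini coefficient of non-negative payments (0 = everyone pays the same,
    values near 1 = a few victims pay almost everything)."""
    xs = sorted(payments)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula using the rank-weighted sum of the sorted values.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Hypothetical payment records for two scams: one broad-based, one driven by a few whales.
broad = [100] * 50
concentrated = [10] * 45 + [5000] * 5

print(round(gini(broad), 2))         # 0.0  - money comes in evenly
print(round(gini(concentrated), 2))  # ~0.88 - most of the take comes from a handful of victims
```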
Discussion started on how victim motivation varies with the size of the take; are the big victims driven by greed? Leff’s “Swindling and selling” examines how the structure of swindles and sales are the same, the difference being in sociology as much as anything else. Marketing academics and phishing researchers should be able to learn a lot from each other! The key thing for many scams (as with sales) is the credit card number; the scammer only needs you to agree to pay $1.95 for shipping and handling, and then they’ve got your number and can loot you. This is tied up with the “guarantee” from Visa and Mastercard which makes people comfortable with taking this first step. The transactions, and guarantees, are often too complex for the marks to understand. When it comes to crypto, the added complexity of email over chat is not just that the latter has a central identity provider but also that the former has to worry more about active attacks. This has usability issues as well as technical ones. And there are corporate usability issues; how, for example, does a company stop training users to click on links? In the family context, people will always demand all the features if they’re presented as such, but will act differently if the defaults are different. There the problem interacts with economics in that firms sell to the person with the money, so the kids’ interests will take second place to the parents’. An interesting set of questions will be how control adapts as kids grow up; what’s the digital equivalent of “getting the keys to the car”?
John Chuang is interested in the proliferation of consumer-grade neurosensing devices which now sell for under $100 and do everything from triggering cameras to measuring stress. In 2005, Julie Thorpe showed she could authenticate users using multi-electrode clinical-grade EEGs; John wondered whether it would be possible to use a single-channel $100 device. He found he could, by matching the tests to the subjects, in which case you can get equal error rates of a few percent. In fact, by asking people to imagine their favourite song, you can meld in some secret data and accomplish two-factor authentication in a single pass. He is also experimenting with synchronised brain wave recordings, getting traces from 60 students at once in lecture theatres, and with anonymisation techniques.
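As an aside on the metric, an equal error rate is the point where the false accept and false reject rates cross as you sweep the matcher’s decision threshold; the sketch below (with made-up similarity scores, not John’s data) shows the calculation:

```python
def equal_error_rate(genuine, impostor):
    """Sweep thresholds over the observed scores and return the one where
    false-accept and false-reject rates are closest, plus the rate there."""
    best = None
    for t in sorted(set(genuine + impostor)):
        frr = sum(g < t for g in genuine) / len(genuine)     # genuine users rejected
        far = sum(i >= t for i in impostor) / len(impostor)  # impostors accepted
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, t, (far + frr) / 2)
    _, threshold, eer = best
    return threshold, eer

# Hypothetical similarity scores from an EEG matcher (accept if score >= threshold).
genuine = [0.91, 0.88, 0.84, 0.66, 0.95, 0.73, 0.86]
impostor = [0.41, 0.55, 0.62, 0.38, 0.70, 0.47, 0.76]

threshold, eer = equal_error_rate(genuine, impostor)
print(f"threshold {threshold:.2f}, equal error rate about {eer:.0%}")
```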
Tony Vance is using fMRI to explore dual-task interference with security behaviour, and in particular security message disregard. People often have to complete two tasks at once, such as clearing email when interrupted by a security warning, and pay less attention to the secondary task. How can performance change if tasks are tackled in a more rational order, for example? They tested volunteers in an fMRI machine and found they could use brain data to predict performance on a warning task. He concludes that security tasks which interrupt users make us less secure, as the interruption makes them harder to deal with; wherever possible, warnings should be dealt with later.
Catherine Tinsley is a decision scientist interested in near-miss events and risk. We mostly interpret near misses as disasters that didn’t happen, and Catherine explored this by getting subjects to be oil drilling managers who take production decisions based on weather forecasts with a known error rate; subjects who had read about near-misses were more than four times more likely to drill. In other words, near misses can make us more comfortable with the risk rather than more wary. She thinks this is because the experience can make people feel that “40% risk” is less serious. This may explain “gateway” behaviour; if kids get away with running DDoS attacks in a game environment they may be more willing to do it in other contexts too.
Alex Imas investigates the realization effect – whether realised losses have a different effect from paper losses on future risk behaviour. On paper, there should be no difference, as a loss is a loss. The previous literature reported mixed effects of losses on risk appetite, but results seem to depend on whether losses were realised at the end of the experiment. So he designed an experiment to test this, and found that people claimed that they’d decrease their investment after a loss but acted otherwise unless money was actually transferred. He concludes that mental accounting matters, and thus transaction speed (as with credit versus debit cards) really matters.
Andrew Adams is concerned that people’s behaviour once in a context such as Facebook is different from their preferences expressed beforehand. Context changes not just what people do, but how they feel about it, and the literature on this goes back to Brehm (1955) on post-decision bias; and the less overt the external pressure, the more people will adjust their attitudes in order to justify themselves. In Japan, where Andrew lives, people initially used a pseudonymous social network, but switched to Facebook once it had a Japanese interface and global network effects kicked in. Attitudes to information sharing changed significantly between 2008 and 2011 thanks to Facebook’s rise, with network effects, group conformity and other effects leading to pressure, particularly on minorities, to reveal information that is to their detriment.
In discussion, affect also makes a difference, so inducing fear makes people more risk-averse. However there is little immediate feedback on cybersecurity behaviour; a bad outcome is seeing an unfamiliar transaction on your account a month later. For the most part, you only get negative feedback for positive security actions (e.g. you turn off javascript and websites stop working). As for mental accounting, the realisation of losses is the point where a reference point is updated, from the viewpoint of the (intertwined) endowment effect; for the effects of emotion on this, see Lerner’s “Heartstrings and purse strings”. Similarly, if you lose at poker, then so long as the chips remain on the table you’ll gamble harder to get them back; the reference point update happens when you switch tables, or the chips are taken away. More work is needed on when people imbue particular tokens and symbols (such as bitcoins) with the value of money; would people really put real dollars in a Ponzi scheme? People lived in a world without credit, where all losses were immediately realised, until very recently, so this should surprise no-one; and the same applies to near misses. Near-miss effects interact badly with corporate culture, as people won’t take responsibility; as noted earlier, companies invest in safety culture as an act of theatre.
Karen Levy studies how surveillance interacts with sex, relationships and family life. One topic is how surveillance is marketed to families as a care product and indeed an obligation on a loving parent. One company wants you to feel empowered to track everything in your world, from your corporate assets to your offender caseload to your mother; another says you shouldn’t watch your drivers but watch out for your drivers. Everyone’s adopting the language of care.
Catherine Tucker has been studying the effects of privacy regulation, and found for example that it reduces the effectiveness of ads in Europe by two thirds. (This is due to a large baseline effect as advertising generally doesn’t work well anyway.) On the other side, privacy policies that give the illusion of control boost clickthrough rates; privacy laws that give patients rights over their data improve rates of both genetic testing and HIV testing; and there were chilling effects from Ed Snowden’s disclosures about government surveillance. She’s now working on whether big data can lead to persistent income inequality or entrench police profiling.
Sandra Petronio explores privacy dilemmas. The reason privacy is hard is that there are other people in our lives, and this is often overlooked. She has a framework to reason about privacy that puts the interests of others before formal semantics or rules; turbulence happens when rules don’t work. Risk factors include unsolicited disclosures, degree of responsibility for disclosed information, relational closeness, dual loyalties and expectations that you’ll abide by implicit rules. In a survey, couples agreed that wives snoop more than their husbands, but that this is a reaction to limited access to information, and trust can stop it.
Annie Anton is trying to figure out what people mean by “privacy by design”, and has run interdisciplinary workshops at both Berkeley and Georgia Tech. She has also worked on privacy policies; what do they mean in an “Internet of things”? They may still be needed as a basis of discussion between lawyers and engineers. Software engineers get almost trivial amounts of tuition (1–2 hours) on privacy law, and so it’s hardly surprising they often have difficulty figuring out whether requirements comply with the law. Subject matter experts do best. Training people to develop software that respects privacy is a challenge! She and Peter Swire teach a class on privacy technology and law to students from eight different majors, who do projects on topics in the news such as implementing a right to be forgotten.
Laura Brandimarte was Monday’s last speaker, and her topic was the nature of privacy decision making: are privacy preferences culturally universal or a function of rational anticipation of consequences or what? Is the decision function a linear combination of innate preferences and rational trade-offs? She got people to decide whether to adopt a health app with a range of possible consequences, both within and between subjects; the latter showed no effect, suggesting anchoring effects are small, but there is a lot of variation in intention to adopt with the likely consequences. There are (weak) order effects but the valence of consequence is dominant. So perhaps the mean innate privacy preference is small; but do preferences converge given similar consequences? Not always; they do for negative consequences but not for positive ones. The discovery of an innate privacy preference might require a more subtle experiment.
Discussion started on economic topics such as the market since Snowden for systems that purport to detect insider threats. How can we push back on ever-more pervasive employee surveillance? But the cultural change isn’t just driven by technology; recently a Florida mom was arrested for neglect after letting her seven-year-old walk half a mile to the park. Another issue is that you can control everything about an eleven-year-old’s life except whether she can talk to the non-custodial parent, in the USA at least. Yet another is that if you cosset kids against very low-probability events, when they go to college they are completely out of their depth, and now face much more significant risks with much less experience. Sandra’s ideas of group privacy have been adopted to some extent in social media but the demarcation with individual privacy is hard, and social networks present friend groups as friends when often they aren’t really. Even legitimate business models are increasingly exploitative and obscure, which doesn’t help.
Tuesday’s first speaker was Frank Stajano, discussing what’s really wrong with passwords. Many researchers assume that online passwords are a pain point, but they are much less of a problem than they used to be because of cookies, password storage in browsers, and websites’ use of analytics to detect account takeover and thus mitigate weak password choice. The pain points are infrequently used passwords, unexpected or coerced account creation, passwords on touchscreens, and even unlocking your computer.
Sophie van der Zee was next on lie detection using body movement. Deception researchers who code body movements manually tend to analyse only large movements, and of the hands and face. Automatic measurement can do much better. Her first experiments were with the motion-capture suits used by actors who perform in cartoons and computer games, and showed that liars moved significantly more than truth tellers. Unaided humans can tell lies from truth only slightly better than chance (53–54%) while polygraphs manage 61–83% in experimental studies; motion capture scores at the top end of that, at 82%, despite being a new technique while polygraphs have been under development for almost a century. Her research now is on whether she can use remote sensing, using a Kinect and radar, rather than needing to have interviewees wear a clunky body suit.
Brian Glass has been studying the creation and detection of deception. He runs a mock eBay where subjects offer items for sale and can decide whether to omit information about flaws; feedback and whether items sold can be used to manipulate sellers. People start off not being very honest, with 84% of flaws being omitted from initial ads; honesty increased most when the item sold for less than expected and also attracted negative feedback, which together had the strongest effect. On the other hand, when items didn’t sell, sellers persisted in their dishonesty, and negative feedback had much less effect. He also has a tool for analysing seller feedback and enhancing buyers’ situational awareness; it can detect various common frauds such as people who sell trinkets reliably to build reputation but then burn customers on more expensive items.
Jeff Hancock studies language, deception and context. There are many known linguistic cues to deception, including increased use of the first person pronoun. He studied 253 retracted articles from PubMed, containing 2m words, for obfuscation; it was higher in the fake articles, particularly in the introduction and discussion sections where there’s more room to manoeuvre. Fake papers also include more references. In general, though, deceptive behaviour is rather context dependent.
The last speaker was Ashkan Soltani who now works at the FTC. Deception is a core focus, as it typically takes enforcement action against US firms that are deceptive about their privacy practices; firms that say nothing are generally left alone, as are firms whose privacy policies admit that they sell your data to all comers. The commission’s focus is consumer protection not citizens’ privacy rights. The sort of questions he researches is whether we have any way to know whether Google Maps took us here along an optimal route, or brought us past a store that pays them a kickback. As business models become more complex and opaque, questions like these become ever more important.
Discussion started on how people might game motion-based deception detection or be trained to evade it; it turns out that body motion is related to guilt rather than anxiety, so it’s somewhat different from the polygraph. On fake reviews: people who work at a hotel are better at writing fake reviews than professional review writers, as they know all the local context. Methodological issues include whether people who lie for real stakes behave differently from people who lie because they were told to; a recent meta-analysis suggests not. What about desirable deception, from smoothing social relationships to defensive behaviour by the less powerful? Deception detection apps already exist, and ultimately their use and control requires social or legal decisions. People have sought lie detection technology since ancient Greece and China, and there are multiple mechanisms that might be exploited (anxiety, guilt, cognitive load). But different people react differently; Johnson used fewer first-person pronouns over Tonkin, for example, as did Nixon over Watergate and Bush over WMD in Iraq, while Clinton used more over Lewinsky. Text messages are different again; people use small lies (such as a low battery) as a privacy buffer. Yet the biggest effect in deception research is still truth bias: language depends on trust, on the assumption that the partner is cooperative. The LGBT community might be a good place for deception research; a girl who’s spent four years trying not to have to explain to army colleagues why she doesn’t have a husband or a boyfriend acquires great skill at making people look somewhere else.
Bill Burns studies the role of deterrence in commercial aviation security. He’s built a probabilistic model based on vulnerabilities and the number of credible threats. Feelings of efficacy are important; people won’t make an effort if they don’t believe they can affect the outcome. There is also the “collapse of compassion”: people are prepared to donate to help one child in distress, but not if there are a million children in a famine area. Might it be similarly possible to deter terrorists by getting across that killing a few dozen people will have no effect? Or should terrorist acts be reframed as simply criminal? One way or another, the goal should be to reduce the meaningfulness of such acts.
John Scott-Railton has looked for some years at repression in Syria and is now studying ISIS too. Cyber militias are using repurposed crimeware rather than military-industrial tools like FinFisher. All sorts of ruses are used; he showed an example of a hacker pretending to be a young widow. They collect large amounts of contact data, and hack the person rather than the device. The tools are the minimum that will do the job; the psychology is good and constantly evolving. Teens in Arab militias are the easiest and softest targets in the world. ISIS is now doing the same things. Both Assad and ISIS can undermine NGOs and extend their reach into the diaspora. For more see his report.
Lora Ballard joined the US army in 1992 and served four years. She described the lack of privacy in military life; for a lesbian it was terrorising and exhausting, despite “don’t ask, don’t tell”. The gay exclusion policy only started in world war two after psychologists started to consider homosexuality a disease rather than a crime; San Francisco is a gay city in part because it was a place where WW2 conscripts were discharged if found to be gay. Whites forget they have a skin colour, and similarly straight people forget they have an orientation. Post-9/11, America discharged 58 gay Arabic linguists – not the smartest move from the viewpoint of national security. The people getting hassle nowadays are trans and intersex people; thanks to the growing ID card culture, someone who presents at the airport as female but whose photo ID says “M” will often get pulled for secondary screening, or even beaten up.
Peter Carnevale is investigating how awareness of video surveillance affects behaviour in security contexts. Many US campuses are going the way of London with massive video installations. He wants to develop a framework for measuring the effectiveness of video surveillance as well as the unintended effects, including the psychological dynamics of public responses and the legal issues.
Richard John is interested in whether some aspects of privacy are a sacred value, namely one we don’t trade. The fourth amendment to the US constitution would have it so. He ran a study of conflicting trade-offs between privacy, ease of use, speed and cost in mobile devices. He made surveillance by the government, hackers, marketers and family members salient; devices had a single trade-off (e.g. pay an extra $100 for a phone with encryption). On average people will pay $62, or take a further 12s to download a photograph, or lose 16 of 20 apps, or spend a further four hours learning to use the device. Finally, conservatives were prepared to pay 2.4 times more than liberals. People would sacrifice more privacy to the government than anyone else, and would pay more to avoid profiling than anything else. He concludes that privacy is not a sacred value.
Discussion started out with the fact that other sacred values can be traded off in some contexts and moved to deterrence, where it’s hard to measure what the opponents care about. Is there any moral reasoning that might help? We don’t know how to target the terrorists without affecting others; maybe we should go for the doctrines designed to support it, such as the virgins available for martyrs, but how do you do that? How would you deter white male school shooters? (Politicians and the media try to play them down.) And there are many incidents in our past when improved surveillance would have been a bad thing, from gays in the military to the underground railroad. We are probably doing something right now that in 20 or 30 years we will really regret. There are significant differences in status in society which might be a place to look; another is our failure to properly consider the ethical and societal consequences of the new technologies we introduce. The stigmatisation of particular offender groups may also be an issue, such as the use of polygraph tests to recall parolees to jail. The NGOs targeted by the Syrian and ISIS militias are using the same tools that we all do, but their exposure is high consequence while ours is almost trivial in comparison; so our tools are designed on the assumption that hacks don’t matter much. Diversity in tech employment might help.
Serge Egelman studies whether people understand app permissions, and how this might be improved. The biggest problem is that users get habituated and desensitised; another is that presenting a dialogue at install time is too late; a third is that app permissions are now too complex for nonexperts to understand. He thinks 55% of permission requests (the low-risk and reversible ones) could be granted by default while many of the others could be asked for at run-time so they could be placed in context. He instrumented phones, gave them to subjects for a week, and analysed the logs. The lesson is that human attention is a finite resource, and should be focused on unexpected events.
Rick Wash looks at how everyday people deal with computer security issues. Know-how basically gets spread by gossip, so he collected about 300 stories; web pages may also have influence, so he collected 500 security advice pages; and finally there’s the press, so he got 1000 articles on security. Topic analysis showed that phishing and spam was top, then data breaches, then viruses and malware. Expert pages organise advice in terms of specific attacks and the defences against them; stories tended to be about hackers and more about who was attacking than the precise modus operandi.
Sunny Consolvo works at Google on browser warnings; 68% of people ignore SSL warnings in Chrome and 23% ignore even malware warnings. Firefox has lower clickthrough rates, so they’ve been experimenting. They tried the Firefox text, with and without Google style; they tried adding eyes, removing branding … and almost nothing worked (a picture of a criminal had a very small effect; one of a cop had none). People would even ignore a Chrome warning against YouTube despite the fact that Google owns both. If the user had visited that site before, they were twice as likely to click through. The one improvement they’ve found is that moving the text to the company’s new lighter type style reduced click-through by a third.
Jean Camp hates warnings as they show a lack of respect for users’ time. Serious messages like “smoking kills” are short and to the point; security messages tend to be verbose. We need to make messages helpful and easy. She has built prototypes of systems that do risk communications for desktop and mobile; she too has found that eyeballs don’t work, but in her experiments a lock did. She even has a video to communicate a “germ theory” of computer misuse.
The discussion started off on the pointlessness of certificate warnings; perhaps a useful default would be to just ignore self-signed certificates, or fail to warn on them. Google at least makes the SSL warning less scary than the malware one. Might warnings be customised to the user, say high, medium and low sensitivity, with high for dissidents in repressive countries? Might we include social aspects, such as “most other people heeded this warning”? The biggest challenge is that we’re interrupting the user’s task and stopping them getting to the site. Another is that phishing kits are often hosted on hacked sites, which the user might have visited before without harm. (Google tried a specific warning for this, but it didn’t work.) There are also threat misconceptions, for example when a victim ignored a warning that his computer might be damaged because he wasn’t using his own computer, and entered his credentials into a phishing site. Some sites are blocked completely, but if too many are blocked then users might use a different browser, or the liability issues might dominate; similarly if more active filtering techniques were used. There is a big question about whether users don’t care enough, or whether it’s the lawyers’ fault. There’s another about whether users have any real discretion; if they turn off scripting, they can’t book flights or hotels. They may also just be fed up, and not want to participate in the liability dumping that’s going on.
The session on foundations started with Tony De Tomaso talking of self-nonself recognition in immunology; genetic conflicts exist between organisms, and within organisms as they evolve. Allorecognition systems use polymorphism to ensure that the major histocompatibility complex of two random individuals will fail to match. Corals that are compatible can fuse together forming chimeras, while incompatible ones have scars where they meet. Slime molds are another example of a unicellular-multicellular transition organism, and can tolerate as much as 15 or 20% cheater cells. Germline chimerism gives real evolutionary advantages to the cheater. Fusion and parasitism must have evolved early, and are involved in cancers as well as providing the basis of our immune system. One conclusion of Tony’s is that competition and altruism are context dependent.
Dennis Egan is an HCI guy from Bell Labs who now runs Rutgers’ security research centre. NIST predicted in 2011 that by now we’d need an extra 700,000 cybersecurity professionals and he did a report on this topic. He believes it’s not just for IT or compsci people, or even STEM majors, but should be taught more generally; we should develop and value the many subspecialities. But how should you measure its effectiveness? When do you start it? (He thinks middle school.)
Bonnie Anderson works in neurosecurity, including how technostress affects the response to security warnings. Subjects think they’re evaluating weather extensions in Chrome; in the treatment conditions, various security warnings are given, and the app asks for access to all your data. Saliva samples measure cortisol levels and subjects were also asked about stress. Curiously, the reports and measurements turned out to be in conflict for most subjects.
Last up in the mid-afternoon session was Robin Dillon-Merrill, our host. In the Anthem breach 80 million records, including hers, were compromised; she found she couldn’t file her taxes as someone else had filed for her. She was also part of the breach where the bad guys got her old tax form. She didn’t know whether to trust the communications from the IRS, and ended up freezing all her credit, so she can’t get any new credit cards. This was the kind of warning that just keeps on getting worse, like a fire alarm that turns out to be a real fire. So: how do we design warnings so they’ll work when the warning is real? It’s vital to keep false alarms low! In the Deepwater Horizon incident, alarms were turned off so people could sleep, and eleven people died. And the parking sensors on her car are so sensitive she ignores them; eventually she’ll hit something in the garage. Risk communication is also hosed by stuff between false positives and true positives, such as the large tube of toothpaste that the DHS confiscates from your bag. This is annoying to the passenger, and habituates the screener to believe that their job is to find toothpaste not bombs.
Discussion turned to immunity, which can be adaptive or innate; there’s also a redundancy between anticipatory immunity, which looks for new stuff, and another system which detects self. These different mechanisms are needed to deal with microbes that can breed way faster than us; perhaps some strategic concepts from immunology might be useful to security engineers. It’s particularly useful how rapidly second and subsequent infections with the same pathogen are dealt with. On warnings and risk, the problem is to set the threshold so that only real problems get taken to the human user. The social component matters; if there’s a disaster and you’re asked to evacuate, the first thing to do is ask your neighbours if they believe it. What is it that causes cultural change, such as the way people have started locking their doors and driving their children everywhere over the past 30 years? Well, you can figure out how to be safe in a car (wear a seatbelt, and don’t drink) but what do you do to practice safe computing? CERT has a list of 594 pieces of advice. There just isn’t any “actionable intelligence”. To get users to do stuff, you need simple and unambiguous advice, such as “get anti-virus software”; but most of the time users can do little; it’s mostly down to the websites they visit. So we have to try to educate the IRS and other website operators, but that’s hard. As for habituation, there’s a lot of literature; see the sociological literature on disasters, which has debunked the “cry wolf” effect, the decision science work on how people react differently to probability if they experience it case by case (even second-hand via a celebrity) rather than being told a numeric probability, and the literature on taking risk decisions for self versus others. Finally, most of the time trusting people is the right call to make. In real life we outsource the exceptions to law enforcement; how come online is different?
Bob Axelrod started the last session by asking what the big problems are. Probably climate change and war, and cyber can be relevant to both. When nuclear weapons came, it took fifteen years to figure out how this changed strategy; a conventional military commander wants to maximise flexibility, while nuclear command relies on commitment. Wars have a power law distribution; as the number of casualties goes up by a factor of ten, the frequency is cut by a third. As a result, it’s the big wars where most people die, even though they’re rarer. So if we’re to “fix the world”, the topic of this panel, we need to pay attention to the effect of cyber on major conflict. Where is the escalation gap, as there is between conventional and nuclear weapon use? For example, a cyber initiative to communicate with Chinese dissidents might be seen as free speech in the west but as a regime threat in Beijing. In addition to such asymmetries, cyber weapons are insidious, act at a distance, can be seen as cowardly, and may thus give rise to a disproportionate desire for vengeance.
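A quick back-of-the-envelope illustration of why the heavy tail dominates (my numbers, and I read “cut by a third” as “falls to roughly a third”; the qualitative conclusion holds on either reading): if each tenfold increase in severity only divides frequency by about three, the expected deaths in each size band still grow, so the rare huge wars account for most of the dead:

```python
# One reading of the scaling Bob described: each tenfold jump in casualties
# divides the frequency of wars of that size by roughly three.
frequency = 1000.0    # hypothetical number of wars per era in the smallest band
casualties = 1_000    # typical deaths per war in that band

for band in range(5):
    expected = frequency * casualties
    print(f"wars of ~{casualties:>11,} deaths: {frequency:8.1f} expected, "
          f"{expected:13,.0f} total deaths")
    frequency /= 3     # frequency falls to about a third...
    casualties *= 10   # ...while each war is ten times deadlier, so the band total rises ~3.3x
```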
David Clark noted that most of the problems we’ve discussed over the past two days arise at the application layer, so that’s where the problems will have to be fixed – not at the plumbing level. Apps nowadays are insecure by design; they have affordances that promote insecure behaviour. In the old days we taught kids not to take candy from strangers; now we let our kids download javascript from strangers and run it on their machines. It’s not as if there was no warning; when people started doing this, the security guys said “Stupid! Don’t do it! You’re all going to die!” and they did it anyway. The tradeoff between security, usability and features will always be with us, but let’s try to be optimistic. What engineering methods could we use for getting to better tradeoffs? Relying just on engineers to create the affordances through which people interact, and thus create the new social conventions, is imprudent. How can we get to a better place medium term?
Jeremy Epstein is from the government, and is here to help us! Within NSF, computer science reviewers grade a letter below everybody else, and the security community about half a letter lower still. This pessimism harms security research funding, and may also make it less likely that we come up with workable stuff. Anyway, the NSF now has a programme to encourage computer scientists to apply for grants for joint work with social scientists with whom they haven’t worked before. He’s looking for projects where the two will be reasonably equal partners, and encourages us to look out for the next call. As an example of where progress might be useful, his NSF predecessor Carl Landwehr is now developing a course entitled “cybersecurity for future presidents”. What should we teach young people now that might be useful to them as senior executives in 2050? It’s not access control lists and RSA cryptography! And finally, what should the boundaries of cybersecurity be?
Peter Swire talked about the law Congress passed last week, the USA Freedom Act. He was on the NSA review group whose recommendations matched the act reasonably well, and takes the view that America is coming out of the fever that descended after 9/11. Their brief included protecting both national security and privacy, and stopping Snowden-type leaks happening again. They found that the phone record collection done under S215 had been unhelpful; the bill allows S215 orders only with judicial approval in future, and ends government bulk storage of phone records. Not all bulk collection was stopped, but agency lawyers will be more cautious in future; there will be more transparency, and public-interest advocates at the FISC. Other recommendations came in by administrative order, such as a three year limit on NSL secrecy orders. The White House took charge of the equities process and zero-days, as well as the surveillance of foreign leaders, and foreign nationals will have their privacy respected to some extent. This amounts to the biggest reform since FISA was enacted in 1978.
I talked about accommodation frauds. Dozens of people who come to Cambridge as grad students or postdocs every year rent apartments that turn out not to exist. We investigated this by responding to a thousand scammy ads and studying the persuasion techniques used by the scammers to part people from their money. It turns out that they operate just like any high-pressure legitimate sales operation. More and more, bad stuff online won’t be clearly “cyber” but often just abuse of facilities and business methods that everyone uses. How do you get the police to take an interest? Perhaps only the biggest forces will, and in periodic crackdowns; the process of marketing such crimes to the police is not straightforward. Local forces don’t want to tackle globalised crimes. This is in the context of a business environment that’s becoming more complex and opaque, as some speakers had noted; this community should start thinking of consumer protection as a significant research goal. A couple of dozen of the last two days’ talks were relevant.
Discussion touched on whether cracking down on Western Union might help, and whether the anti-money laundering regulations are effective. To what extent might consumer protection be crowdsourced? It already sort of works with Tripadvisor, Uber and so on. Perhaps the social side of marketing can help too. The FTC sees so many scam sites like the apartment rentals, in fields from weight loss to supplements, that perhaps someone is selling the website and sales technology. What is the right regulatory response? No-one really knows how to deal with globalised petty crime, as well as with privacy and antitrust. Institutional incentives lead to more work on preventing frauds by individuals against firms than the other way round; what could we do about such collective action problems? The precedents of spam, robocalls and so on suggest we need new funding streams or new policies, but not right away as technology outpaces law. For example, the blacklisting that stopped robocalling has failed now that VoIP makes caller reputation irrelevant. In the EU, the single digital market might lead to Europe-wide consumer protection. On the law enforcement front, is it helpful for countries to have a unified domestic surveillance agency, like the FBI? The key risk is the “Watergate” one of the government targeting political opponents. As for war prevention, wars seem to be prevented by the spread of democracy and wealth. As for leak prevention, what Snowden did was so outside the norms of the people who live in SCIFs that it was inconceivable to see him as a hero. But you can’t let the cops make the rules for how the cops get access to stuff and how it’s regulated. And what happens when sensors in the Internet of Things start being useful to law enforcement, and fall outside the current rules?