I’m liveblogging the Workshop on Security and Human Behaviour which is being held at CMU. For background, see the liveblogs for SHB 2010, SHB 2009 and SHB 2008. The papers are here and the session reports will appear as follow-ups to this post.
The first speaker was Frank Stajano, describing Pico, a password replacement device with a camera, display and a couple of buttons that can be incorporated in a watch or keyfob; it can capture challenges or certificates from a screen and communicate with a terminal device by radio. The novel aspect of the Pico is how it authenticates the user. It ensures it’s not been lost or stolen by communicating with “siblings” – other RF devices carried by the user, which can include other consumer electronic devices and even jewellery. The goal is replacing passwords altogether, not just moving from password to password plus token.
Matt Blaze was next with a talk he asked me not to liveblog; it will appear later as a paper “Why Special Agent Johnny Still Can’t Encrypt”.
Peter Robinson works on measuring human behaviour and the use of affect in human-computer interaction. We can interpret basic emotions such as fear, anger and surprise from a still picture; more complex emotions require several seconds of evidence. The dynamics and amplitude of facial expressions differ somewhat between acted and real-world expressions, which can pose methodological difficulties. Many features can be extracted from voice; inferring mental states is complex but doable. Body posture is a new frontier; the Kinect does for $100 what used to require fancy equipment costing $50,000. Peter’s team is particularly interested in studying car drivers (and Grand Theft Auto at $100 is much better than commercial simulators costing $10,000). One problem was getting subjects engaged: some people were unbothered about crashing cars. The fix was to get them to fly model helicopters round the lab instead. The lesson is that experiments on human behaviour may require an innovative and indirect approach.
Pam Briggs is interested in applied mistrust and social embarrassment, and works with computer scientists at Newcastle. Her interests include how authentication methods become less usable as you age; user comfort with authentication; and socially-mediated authentication. Asking users to shield a PIN pad makes them signal social distrust. They have been exploring options for multi-touch screens, such as forcing people to shield a PIN pad with their left hand before entering a PIN with the right; obfuscating which screen object you’re selecting by moving rings that select multiple objects; and using combinations of finger pressure from both hands to select digits from a keypad. The goal is to enable one person to authenticate at a multi-touch shared tabletop without other people at the tabletop being able to shoulder-surf and without creating social discomfort.
Markus Jakobsson is concerned with app spoofing, where authentication events in games (for example, to buy a magic sword) can be spoofed by an attacker. The test is whether naive users get logged in at good sites but not at bad sites; the method is to use an interrupt mechanism (such as shaking a mobile phone or pressing a home button) to cause the device to check a site certificate. His work is at http://www.spoofkiller.com and the trick is to train the user to perform the interrupt sequence as a conditioned reflex. He’s also interested in password entry speeds and error rates on mobile versus desktop devices (see http://www.fastword.me ).
In the discussion people wondered how you’d use Pico in bed or other places when your clothes and other devices aren’t nearby; why we haven’t learned the lessons from “Why Johnny Can’t Encrypt”, and in particular why people who build things seem to be falling ever further behind the research community; how people could use modern mechanisms like Pico in environments such as the DoD where people aren’t allowed to use cameras; on how facial expressions and voice features generalise well across persons and cultures while body gestures don’t; on age- and sex-related differences; on the feasibility of trying to train users versus exploiting natural behaviour (which is much easier); on whether we use folk taxonomies of psychological states or something more refined (Peter Robinson uses categories from natural language, as being those that people grasp culturally, and allows for multiple overlapping emotions).
The second session, on foundations, was kicked off by Shari Pfleeger. In her own multidisciplinary work she’s learned to explore and document assumptions, as different disciplines have different starting points. It’s also important to think in series of studies rather than one-off projects, to think of effective training, and to understand scale: for example New York has over a thousand times as many first responders as North Dakota.
Terry Taylor has a group working on Darwinian security. Nature doesn’t plan: biological systems don’t waste resources trying to predict future states of a complex and changing environment but rather invest in adaptability. But how do we operationalise this in cybersecurity, emergency response or the resilience of commercial organisations? Responding is what you do immediately; adapting is medium-term. The best solutions tend to be recursive; according to Geerat Vermeij, a modular structure of semi-autonomous parts under weak central control is often best. The DHS and FEMA are exactly the opposite, and their response to Hurricane Katrina was not impressive. Managing uncertainty also matters; living things try to decrease uncertainty for themselves while increasing it for their predators, prey or rivals, while many of our anti-terrorism measures simply increase our own uncertainty. Symbiosis, or cascading adaptation, is also worth study.
Michelle Baddeley is a behavioural economist interested in the role of learning and emotions in online systems. Drivers for online behaviour include risk attitudes, time inconsistency, peer pressure, and visceral factors such as fear, greed and impulsivity. Explanations of time inconsistency are perhaps the most developed contribution from behavioural economics; the procrastination literature should help anyone developing systems for backup. Michelle’s work is largely in peer pressure and herding, and interaction with emotions. The literature on addiction, gambling and speculation is probably worth mining, as is the link with evolutionary biology: fear is a proximate mechanism for surviving predators, and leads to social learning in markets. Models of belief learning and reinforcement learning may help explain online behaviour.
Dylan Evans disagrees with Terry Taylor’s view that nature doesn’t make predictions: we are all good at making (probabilistic) predictions. He’s interested in identifying good forecasters and has devised calibration tests of people’s risk intelligence (at http://www.projectionpoint.com ) taken by over 40,000 people last year. Risk intelligence appears to be highly domain specific; the high-RQ people such as expert weather forecasters or horse handicappers build up their knowledge of a domain slowly and often unconsciously. Curiously, US weather forecasters are better than British ones; in America, forecasters are required to give numerical estimates of outcome probabilities while their counterparts in Britain are allowed to waffle. Some bridge players are also expert at estimating the probability of making a contract. He has an article on this at http://bit.ly/jIhWzN .
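Scoring a calibration test of this kind is mechanical. Here is a minimal sketch of the idea – my own illustration, not ProjectionPoint’s actual scoring method: a Brier score over stated probabilities and outcomes, plus a binned calibration curve.

```python
# Minimal calibration scorer. My own sketch of the general technique,
# not ProjectionPoint's actual method. Each judgement is a pair:
# (stated probability that a claim is true, whether it was in fact true).

def brier_score(judgements):
    """Mean squared error between stated probabilities and outcomes.
    0.0 is perfect; always answering 50% scores 0.25."""
    return sum((p - (1.0 if outcome else 0.0)) ** 2
               for p, outcome in judgements) / len(judgements)

def calibration_curve(judgements, bins=10):
    """A well-calibrated forecaster's 70% claims should come true about
    70% of the time; bin judgements by stated probability and compare."""
    buckets = [[] for _ in range(bins)]
    for p, outcome in judgements:
        buckets[min(int(p * bins), bins - 1)].append(outcome)
    return [(i / bins, sum(b) / len(b))
            for i, b in enumerate(buckets) if b]

if __name__ == "__main__":
    sample = [(0.9, True), (0.8, True), (0.7, False), (0.6, True),
              (0.3, False), (0.2, False), (0.1, True), (0.5, True)]
    print("Brier score:", round(brier_score(sample), 3))
    print("Calibration:", calibration_curve(sample, bins=5))
```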
The last speaker of the morning was Milind Tambe, who’s been working on game theory for security, and in particular on using Bayesian Stackelberg games to model interaction between adversaries and defenders under uncertainty. Solutions are in general NP-hard; he’s designed a system that’s now used to schedule deployment at LAX (called ARMOR) and another used to allocate air marshals to flights (IRIS). Systems under development are for the allocation of guards at airports (GUARDS) and for deployment of coastguard patrols (PROTECT). These are big systems that chew a lot of data – provided by analysts who try to figure out how much damage could be done where by different types of attack, in the case of airports – and that tackle huge numbers of possible schedules, in the case of air marshals.
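For flavour, here is a toy Stackelberg security game – my own illustrative sketch with made-up payoffs, not the ARMOR or IRIS algorithms, which solve far larger mixed-integer programs. The defender commits to a randomised coverage of targets; the attacker observes the mixed strategy and picks the target with the best expected payoff.

```python
# Toy Stackelberg security game, solved by brute force on a grid.
# My own sketch; payoffs and target names are invented for illustration.
from itertools import product

# Per-target payoffs when the attacked target is covered / uncovered.
targets = {
    "terminal_A": {"def_cov": 0, "def_unc": -10, "att_cov": -5, "att_unc": 10},
    "terminal_B": {"def_cov": 0, "def_unc": -6,  "att_cov": -5, "att_unc": 6},
    "car_park":   {"def_cov": 0, "def_unc": -3,  "att_cov": -5, "att_unc": 3},
}

def attacker_best_response(coverage):
    """The attacker observes the coverage probabilities and attacks the
    target with the highest expected attacker payoff."""
    def att_value(name):
        t, c = targets[name], coverage[name]
        return c * t["att_cov"] + (1 - c) * t["att_unc"]
    return max(targets, key=att_value)

def defender_value(coverage):
    name = attacker_best_response(coverage)
    t, c = targets[name], coverage[name]
    return c * t["def_cov"] + (1 - c) * t["def_unc"]

# One guard's worth of coverage probability to spread over three targets;
# search a coarse grid for the defender-optimal commitment.
GRID = [i / 10 for i in range(11)]
best = max(
    (dict(zip(targets, probs))
     for probs in product(GRID, repeat=len(targets))
     if abs(sum(probs) - 1.0) < 1e-9),
    key=defender_value,
)
print("coverage:", best, "-> attacker hits", attacker_best_response(best))
```

The real systems add Bayesian uncertainty over attacker types and scheduling constraints, which is where the NP-hardness bites.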
There was discussion of the value for deterrence of decoy protection measures such as fake CCTV; on the stupidity of attackers; on whether we can learn anything useful about countering terrorism (where there are too few attacks to do statistics) from data on types of attack that are common (such as fare dodging and poaching); on the effect of risk thermostats and attack displacement; the role of principal-agent theory, which has been applied to information security by Frank Pallas (see http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1471801 ); that people with high risk intelligence seem to be exactly those who have practised for a long time, with detailed numerical feedback on their performance, regardless of their IQ, and that thinking of reasons why you might be wrong is a way to improve risk intelligence quickly; and the relevance of developmental psychology for understanding attitudes to risk.
The afternoon session started with Eric Johnson talking on what we might do about spear phishing. Even senior people at tech firms can be bizarrely naive about the technical aspects of infosec; Anup Ghosh reckons educating users is a myth. He’s been spear phishing people at Dartmouth, taking them to one of five experimental pages on phishing education, then following up with a second attack five weeks later from a different pretext source. In the treatment group, the hit rate fell from over 70% on the first round to 52% on the second. It’s still work in progress, but it seems education does have some effect.
Richard Clayton then discussed the various URL formats used by phishermen to confuse users who try to understand them. Why do phishermen still embed bank names? Well, most of them use kits, so their URL choices may simply be inherited; for variation, Richard studied instant messaging worms instead. These use either adaptations of target service names, or URL shorteners, thus providing us with a natural experiment. It appears that URL shorteners get fewer people to click than even fairly crummy URLs. In conclusion, criminals appear to believe that having bank names in URLs helps their click-through rate, and they seem to be right.
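To make the two lure styles concrete, here’s a toy classifier – my own heuristic with invented brand and shortener lists, not Richard’s actual methodology:

```python
# Sorts lure URLs into the two styles compared in the talk: those that
# embed a target service's name versus those hidden behind a shortener.
# My own illustrative heuristic; the lists below are examples only.
from urllib.parse import urlparse

SHORTENERS = {"bit.ly", "tinyurl.com", "goo.gl", "is.gd"}
TARGET_NAMES = {"paypal", "barclays", "hsbc", "facebook", "msn"}

def lure_style(url):
    host = urlparse(url).hostname or ""
    if host in SHORTENERS:
        return "shortener"
    if any(name in url.lower() for name in TARGET_NAMES):
        return "embeds target name"
    return "other"

for u in ["http://bit.ly/x7Qz",
          "http://paypal.secure-login.example.com/update",
          "http://203.0.113.9/~kit/login.htm"]:
    print(lure_style(u), "<-", u)
```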
Jean Camp is studying computer security for older people. Retirees tend to favour safety over privacy; they are more articulate and less shy, so they give better feedback. However they care enough about privacy that home monitoring systems which ignore it may fail in the market. She surveyed attitudes to a number of assistive systems by perceived usefulness, activity sensitivity, data granularity and data recipient. The data recipient had the strongest effect; seniors were least prepared to share data with vendors – yet most corporate models assume that such sharing will be coerced. This leads to the question of how we might support technologies for peer production and community support.
David Modic’s topic is whether any personality traits are common to victims of Internet scams. They use the five-factor model of personality, ignoring self-control as some say it’s not a trait but a resource that can be depleted. They surveyed 506 people (students and arstechnica readers), of whom 67 turned out to be scam victims and 7 actually lost money. 28% of the variance was explained by three factors: people high on premeditation or extraversion were less likely to be victims, but agreeableness increased vulnerability. But the effects are fairly weak; he needs more victims!
Finally, Stuart Schechter talked on “The security practitioner’s Nuremberg defence”. AV software is marketed as “part of a defence in depth” despite its many shortcomings. You can say the same about user account control, and many other products. But do partial defences really work, given the bystander effect, responsibility diffusion and risk homeostasis? Wimberly and Liebrock’s Oakland paper “Using fingerprint authentication to reduce system security” shows they may not; an irrelevant protection mechanism can cause people to slack off on more important defences. So we should tell people that a password is bad, but not that it’s good; and avoid any promises that could make users less vigilant.
Questions touched on methodology: why elders distrust vendors, where it appears they are leery of profit-maximising as opposed to community/NGO providers; in David Modic’s sample, the arstechnica members were less gullible than students; whether people targeted in 419 scams are particularly gullible, and whether there’s an analogy between these scams and charity balls (the current scams are more about helping distribute millions to charity than about African dictators); on how we might incentivise better defence in depth by reducing externalities between the principals responsible for different layers – including the user; whether people are vulnerable to scams because they are dishonest, or whether this is “blame the victim”; whether we should aim in general to make people more or less vigilant; and what we can realistically teach people – we can teach them what a lottery scam is, or a 419, but not technical stuff like how to parse URLs as the bad guys just adapt.
David Livingstone Smith has written a book on how politicians and others try to depersonalise their enemies in preparation for conflict by representing them as predators, prey or vermin. Representing enemies as subhumans is universal but hardly studied – maybe three dozen papers in social psychology and that’s it. The idea of a “great chain of being” from god through man to animals to inanimate matter is ancient and pervasive; we used to be just below the angels but with the death of God are now at the top. The ranks are like Mill’s “natural kinds” in a folk metaphysics, and there is much evidence that humans are natural born essentialisers. Essence is not appearance, so a member of a target group can have a nonhuman essence or a monstrous fusion of human and subhuman, thus diminishing our moral obligation to them – even to the point that they deserve killing.
John Mueller will publish a second book on risks, costs and terrorism with Mark Stewart later this year; he estimates additional costs of about one trillion dollars since 9/11 (and that’s excluding Iraq, Afghanistan, and even people who kill themselves by driving rather than flying). An NRC committee found that the DHS has good risk analysis for earthquakes, hurricanes and the like but not for terrorism. The right question to ask is how much we should pay to reduce a risk that’s already extremely low. Probabilities are neglected; sometimes added rather than multiplied. Even the World Trade Center was not a key resource as its destruction did not do grave damage to the economy; so why is the Statue of Liberty considered to be one? The $75bn spent each year directly would have to stop 4 Times Square attacks a day to make economic sense.
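The arithmetic behind that last figure is simple. Assuming each foiled Times Square-style attack averts damage on the order of $50m (my assumption, to reconstruct the calculation):

\[ \frac{\$75 \times 10^9 \text{ per year}}{\$50 \times 10^6 \text{ per attack}} = 1500 \text{ attacks per year} \approx 4 \text{ per day}. \]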
Stephen LeBlanc mostly studies prehistoric pottery, which led to an interest in prehistoric warfare, first in the American Southwest and then generally. Prestate warfare had very high death rates; where the data are good, 15-25% of the males in most past societies, versus about 5% of women. He can find no evidence of a society being peaceful for more than a century or so. When resource stress is removed (for example by new tools, or by epidemics), warfare declines hugely. There is no empirical basis for the myth that primitive societies were peaceful. His latest research is about pushing this back in time from early farming societies to foraging societies, when we evolved – and he finds no difference. In fact some foraging societies organise armies of thousands despite living in family groups of dozens; they fight over women and revenge, which seem to be hard-wired human universals. Indeed the social organisation into chiefdoms is how Afghanistan works. Why don’t we study it more?
Cory Doctorow’s subject was the “security syllogism” – something must be done, this is something, therefore it must be done. For example, should every device capable of being a software-defined radio (including a PC) have to run only FCC-signed code? The EFF stopped that one. Another example is the copyright war, where dongles failed completely and DVDs aren’t much better. Many mechanisms fail but create unpleasant externalities. Now: national firewalls, website blocking, lawsuits against YouTube, device lockdown, and even a camera that won’t film in the presence of an infrared signal that would be broadcast at live events. An infrastructure for preventing the public from understanding what’s happening on their devices can have huge negative social consequences. What about 3D printers? Will they be controlled, and how? We can’t let security by design become the basis for the information society.
Baruch Fischhoff discussed how we might import what we know of behavioural science into intelligence work. Many intelligence techniques have face validity but need evaluation; intuitions cannot be trusted. Sound methods appear only slowly through peer review. See “Intelligence analysis for tomorrow” at http://www.nap.edu/catalog.php?record_id=13040 for the detail. This is particularly the case in the USA, where two thirds of the analysts have been hired since 9/11; there was a hiring dearth during the peace-dividend years. Agencies retain too many of the people who work the system and not enough of those who work the problem. Baruch learned from Alan Baddeley that science progresses by a combination of applied basic research and basic applied research. Evaluation can be hard, but is usually cheap once you figure it out, and is essential for continuous learning.
Discussion topics included the gulf between public attitudes to risk and the reality; entrenched public attitudes to terrorism risk; the correlation of ancient warfare with climate change (ice ages cause wars, while the medieval warm period brought peace); the origin of gender differences in conflict-based selection; whether depersonalisation plays as big a role in tribal warfare (offence and defence are different – attackers self-select while all contribute to defence); whether we’re making progress (we are – wars have become much rarer, especially in the last 20 years); whether dehumanisation is ever a moral imperative (e.g. in the von Trier case); whether gang warfare is similar to forager warfare; the mechanics of leadership; whether we need an international convention on the recording of wartime casualties; why we don’t blame politicians for wasting money on security; the role of religion, training and the buddy system in preparing people to kill; whether the rules of war mitigate casualties (not clear); and the different levels of success enjoyed by the variety of people who try to sell fear.
Cormac Herley started the second day’s proceedings by talking about scale. There are now over 12 billion password-protected accounts worldwide: three times the total when Bill Gates told us “the password is dead” in 2003. Facebook has more accounts than the Internet did at the Netscape IPO in 1995. There are many bullshit statistics, often from firms selling non-password solutions. Certainly many mass-deployed systems are carefully optimised, but passwords aren’t – at least for the user! Hand-wringing about the hassle, and identity theft, leads to initiatives like NSTIC. But hang on: Facebook grew to 1 million users on angel funds of $200K; any per-user authentication cost north of zero would have hosed them. Forget about the goal of getting rid of passwords.
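That Facebook point is just division:

\[ \$200{,}000 \div 10^6 \text{ users} = \$0.20 \text{ per user}, \]

so even a twenty-cent per-user authentication cost would have consumed the entire angel round.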
Angela Sasse now has a PhD programme across computer science, crime science and psychology with a joint taught first year and joint supervision. Her topic was security compliance in a high-risk environment and what happens when we have ever more embedded tech. We basically know how to design systems to deal with various kinds of error – read Jim Reason – but engineers don’t try when their incentives are wrong.
Rick Wash has previously documented people’s folk models of security, and is now interested in influencing them. Folk models are the basis of much human reasoning. Some people think of malware as buggy software, so they just don’t download stuff; others as mischievous software, caught by visiting shady parts of the Internet or opening shady emails; others as criminal software. Only this last model sees anti-virus software as essential, and although it too is incomplete, it leads to better decisions. Models are usually based on narratives. We need to find better stories – and don’t forget the importance of social norms and group behaviour. Stories from “people like me” have much more power than advice from experts. What would a PatientsLikeMe site look like for security?
Rachel Greenstadt has been working on adversarial stylometry (the attribution of documents to authors based on linguistic style). An early example was the disputed authorship of the Federalist Papers; modern stylometry systems use statistical machine learning and have about 90% accuracy in the non-adversarial case. But what happens when people try to fool them? She’s found that when people consciously change their writing style, this is devastating to accuracy, which drops to random. This in turn raises the questions of whether we can build tools to automate or assist writing-style adjustment (using translation software obscures most features, but not those based on synonym classification), and whether we can build analytics to detect deceptive writing (we can: people write more simply, and their text shows Hancock’s lying indicators, except that personal pronouns are used more rather than less). There may eventually be an interactive tool to help people learn to write anonymously.
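For the non-adversarial baseline, here is a minimal sketch of the standard technique – character n-grams feeding a linear classifier – using scikit-learn on a toy Federalist-flavoured corpus. This is my own illustration, not Rachel’s system, which uses far richer feature sets; figures like 90% accuracy assume substantial training text per author.

```python
# Minimal stylometric attribution baseline: character n-grams capture
# style (function words, punctuation habits) rather than topic. This is
# exactly the kind of feature set that conscious style change defeats.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus of (document, author) pairs; real studies need much more.
train = [
    ("The powers delegated by the proposed constitution are few.", "madison"),
    ("It will be of little avail to the people that laws are made.", "madison"),
    ("The subject speaks its own importance to the union.", "hamilton"),
    ("After an unequivocal experience of the inefficacy of government.", "hamilton"),
]
texts, authors = zip(*train)

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, authors)
print(model.predict(["To the people of the state of New York."]))
```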
Lorrie Cranor was the last speaker in the session. Usability at CMU got hosed by NIST SP 800-63 when the university joined a consortium that had adopted it. She recruited 5000 participants on mturk to study password conditions. An interesting discovery was that passwords should be longer rather than more complex: a 16-character all-letter password beat an 8-character mixed-type one (more entropy, and less likely to be written down). She’s also been studying what people do on Facebook that they subsequently regret: in descending order, personal information, sex, relationships, profanity, alcohol/drug use, jokes, lies. Indicators are bad mood, being excited, carelessness, being under the influence, not meaning to post it, and an unintended audience. Consequences are mostly not tangible: guilt and embarrassment (with a few cases of hurt relationships or getting into trouble). The worst was a lady accidentally posting a video of herself and her husband having sex; she’d planned to post an innocuous video but uploaded the raunchy one too, and only learned of it when she got feedback from friends.
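The entropy comparison is a quick back-of-envelope check, assuming characters drawn uniformly at random (which real users don’t do, so both numbers are upper bounds):

\[ 16 \log_2 26 \approx 75 \text{ bits} \qquad\text{versus}\qquad 8 \log_2 94 \approx 52 \text{ bits}, \]

so the longer all-letter password wins even if the shorter one draws on all 94 printable ASCII characters.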
Discussion topics included password entropy estimates; password reuse; plagiarism detection; whether regrets offline are hugely different from regrets online (the offline regret literature is about life-scale regrets like doing the wrong college major or marrying the wrong person); the change brought about by Facebook, in that comments previously shared with three friends in a bar are now world-readable and permanent; whether excluding accidental postings would have a significant effect on the pattern of regrets (maybe not; they’re a small subset); and whether stylometry could be used to out astroturfers.
Andrew Odlyzko continued from his talk at SHB last year, arguing that gullibility is essential for our economic progress, with insecurity being an unavoidable side-effect. Gullibility creates trust and social capital; beautiful illusions beat cold reality; in modern language, the endowment effect and optimism bias drive investment. Classical economists like Smith opposed joint-stock companies as a trick to swindle investors, and if alive today might well say “I told you so!” The railway manias of the 1830s and 1840s were the big change; the first enriched investors while the second ruined many. Andrew discussed the debates on investment regulation and property rights; the boom created real assets, prevented the outflow of capital, may have prevented a revolution in 1848, and led to mature attitudes towards investment. After Schumpeter’s entrepreneurs, Keynes’ animal spirits, and Akerlof/Shiller’s stories, we also need to study the “Pied Pipers” or snake-oil salesmen who induce investors to go in.
Simon Shiu studies the security decision-making process by means of decision support case studies with firms and government departments about things like USB policy, identity policy and enterprise DRM policy. They may have found some confirmation bias, which could be countered by security framing; they think they need to get stakeholders thinking about security problems holistically rather than treating outcomes and probabilities separately. The emerging problem is that firms don’t control the software lifecycle any more with the move to the cloud; more empirical studies are probably needed here. They have another project called “cloud stewardship economics” which studies changes in how corporations procure IT.
Rahul Telang has studied why many people infringe copyright. If music companies reduced prices, would they sell a lot more? Or is it down to a lack of legal options, e.g. when movies aren’t available outside the USA for months? A natural experiment happened when NBC removed its content from the iTunes store on December 1 2007; its content was pirated 12% more after that, but there was no change in physical media sales. iTunes sales had been 16 per episode; instead there were 33 extra downloads per episode. He argues that piracy has a fixed cost and a variable cost; once you overcome moral qualms (or learn to use BitTorrent) you download more easily. In fact, there was a spillover to additional downloads from other suppliers. So behavioural models matter, and policy interventions should pay attention to the self-regulatory system.
Henry Willis works helping organisations manage risk and is interested in whether terrorism is perceived differently. For example, how does one trade it against climate risk? The probability of a storm surge in New Orleans is likely to increase over time; what combination of regulation and risk communication is ideal? He’s done a literature survey of how people think about natural and man-made hazards from the public health aspects to dread and uncertainty. FEMA is being required to do regional and national risk assessments and they may adopt his methods.
Last speaker of the morning was Bashar Nuseibeh, who’s working on location privacy. When A follows B, the vulnerable party might be either of them. They instrumented two families with a buddy-tracker application for an ethnographic study and found that it made people feel conflicted: they were tempted to see where family members were, but felt guilty. They were then given tracking tasks, whereupon they became disinhibited. People found being tracked even more uncomfortable, and didn’t want to know if it was happening; they didn’t want to use privacy mechanisms, as it would signal they had something to hide. There’s a dialectic between technology affordances and family contracts: people get uncomfortable even when they have nothing to hide. In conclusion, close-knit tracking is difficult: the closer the relationships, the less scope there is to develop privacy technology.
Discussion topics included how to deal with benign location obfuscation; the inevitability of treating terrorism risks in different programs from flood risk; whether it’s sensible to treat bioterror quite separately from epidemics, and to treat cyber only as terrorism thus excluding the possibility of accident (as Morris said his worm was, and as the Brazilian power outage turned out to be); where the money goes in manias (diffused in the infrastructure, which can’t meet expectations anyway, and in advisers, who provide reassurance); how the content industry adapts poorly to technology with risk-averse executives who are insufficiently disciplined by shareholders; and the extent to which people actually want (or are prepared to pay) to be honest.
Adam Joinson started off the afternoon talking about the tension between family and Facebook; excessive social contact is a bad thing, like crowding (or the old saying about guests and fish going off after three days). People who live in crowded quarters tend to be less sociable and more reserved. We use boundary regulation as a signal when developing relationships, and the inability to manage boundaries dynamically is disruptive! We need to be able to use gaze aversion or other intuitive signals, not have to go to a privacy preferences page. He’s been using computational linguistics to identify 82 LIWC categories for privacy: see http://www.privacydictionary.info for more, and http://www.interactionslab.com/sensitweet for an application to Twitter that will measure the sensitivity of your tweets. Finally, he’s editing a special issue on privacy methodology for the International Journal on Human Computer Interaction.
Ashkan Soltani is interested in online tracking. He looked at the top 100 sites for third-party beacons and cookies. Google beacons were found on 92 of the top 100, for example, and Google sees 88% of online browsing activity. Incentives undermine privacy: Safari is the only browser that blocks third-party cookies by default, and Apple is the only browser vendor without substantial ad income. In addition, 54 of the top 100 use flash cookies. Smartphone apps are also bad; over half send device IDs home, and just over half phone home to more than one location. Although we “buy” our phones and our browsers are “user agents”, they report to others. Is one approach to help users visualise the data, or make it salient? When people made nice graphics of Apple location data, people suddenly cared.
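The measurement behind such numbers is mechanical: load a page, log every request, and flag hosts that don’t belong to the site you visited. A minimal sketch of the idea – my own, not Ashkan’s actual instrumentation:

```python
# Toy third-party counter: given a first-party URL and the hosts its
# page pulled resources from, tally third-party domains. My own sketch
# of the measurement idea, not the study's actual tooling.
from urllib.parse import urlparse
from collections import Counter

def registered_domain(host):
    """Crude eTLD+1 approximation: the last two labels. A real crawl
    needs the public-suffix list to handle co.uk and friends."""
    return ".".join(host.split(".")[-2:])

def third_parties(first_party_url, request_urls):
    site = registered_domain(urlparse(first_party_url).hostname)
    return Counter(
        d for d in (registered_domain(urlparse(u).hostname)
                    for u in request_urls)
        if d != site
    )

requests = ["http://www.example.com/style.css",
            "http://www.google-analytics.com/ga.js",
            "http://ad.doubleclick.net/pixel.gif"]
print(third_parties("http://www.example.com/", requests))
```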
Andrew Adams is interested in privacy by default. Why should developers bother? Apple sells hardware, Microsoft sells software, and Google sells you. Eric Schmidt once remarked that customers should determine privacy settings; indeed, his customers are advertisers. Sometimes things go wrong, as with Facebook Beacon and Google Buzz. The best we can do is perhaps to build decent tools that enable users to lock stuff down.
Paul Syverson introduced onion routing and Tor, the anonymous communication system he created. DoD personnel are told not to wear “US Navy” baseball caps overseas, lest they become a target; similarly, a Navy man logging on from a hotel overseas gives himself away if the local ISP notices him having an encrypted session with navy.mil. There are many other uses, from open-source intelligence to limiting visibility of traffic data on classified networks; in business, customers may want to resist price discrimination. He’s now doing some trust modelling of node compromise, based on game theory and aimed at stopping correlation attacks. That’s OK for the Navy, but what can we do for the little guy, such as a Tunisian activist?
Most of Alessandro Acquisti’s work is behavioural and aimed at privacy, but he also does data-mining work. Augmented reality as seen in “Minority Report” could be closer than we think, because of improved face-recognition algorithms and huge accessible online databases, notably at Facebook. He did experiments to see whether off-the-shelf software could be used for large-scale recognition, correlating open systems (Facebook) with private ones (a dating site, and people walking round campus). It could indeed. He used the subjects themselves to verify photo matches in the real-time live work, and mturk in the offline studies. Match rates were 30-42%. What will privacy mean in an age of augmented reality? Wordlens already translates signs; a future app will recognise people in the street and tell you everything from their credit score to their last tweet. What will the effect be on social interactions?
In questions, topics included the inevitability of real-name policies; cultural differences, such as the default of using pseudonyms on the dominant Japanese social network; whether privacy disclosure should be regulated in the way that credit card terms are disclosed; whether privacy is a property or a right (I can’t sell my freedom, or my children); whether we should teach kids to be anonymous instead of saying “don’t be naughty as you’ll be seen”; privacy market failures, including the fact that free and paid apps collect the same data; whether data poisoning might give any privacy, or whether false data we give will just be swamped; the effects of convenient single sign-on via Facebook or other identity providers; whether there might be some way of getting firms to delete data after some period of time, perhaps by attaching liability or other real costs to retention for excessive periods (this is an issue in the EU); the value of imposing reciprocity, as in David Brin’s “Transparent Society”, versus the fact that abuse is more a function of power than of access; whether single sign-on is a serious threat to privacy by tying threads together and collapsing multiple vantage points; the risks of being tagged as the wrong person by accident, and malicious impersonation; and the risks of trusting in technology to assess people rather than in our instincts that evolved over millions of years.
The first speaker was Frank Stajano, describing Pico, a password replacement device with a camera, display and a couple of buttons that can be incorporated in a watch or keyfob; it can capture challenges or certificates from a screen and communicate with a terminal device by radio. The novel aspect of the Pico is how it authenticates the user. It ensures it’s not been lost or stolen by communicating with “siblings” – other RF devices carried by the user, which can include other consumer electronic devices and even jewelry. The goal is replacing passwords altogether, not just moving from password to password plus token.
Matt Blaze was next with a talk he asked me not to liveblog; it will appear later as a paper “Why Special Agent Johnny Still Can’t Encrypt”.
Peter Robinson works on measuring human behaviour and the use of affect in human-computer interaction. We can interpret basic emotions from a still picture, such as fear, anger and surprise; more complex emotions require several second of evidence. The dynamics and amplitude of facial expressions are somewhat different when acting from in the real world, which can pose methodological difficulties. Many features can be extracted from voice; inferring mental states is complex but doable. Body posture is a new frontier; the Kinect does for $100 what used to require fancy equipment costing $50,000. Peter’s team is particularly interested in studying car drivers (and Grand Theft Auto at $100 is much better than commercial simulators costing $10,000). One problem was getting subjects engaged: some people were unbothered about crashing cars. The fix was to get them to fly model helicopters round the lab instead. The lesson is that experiments on human behaviour may require an innovative and indirect approach.
Pam Briggs is interested in applied mistrust and social embarrassment, and working with computer scientists at Newcastle. Interests include in how authentication methods become less usable as you age; user comfort with authentication; and socially-mediated authentication. Asking users to shield a pin pad makes them signal social distrust. They have been exploring options for multi-touch screens, such as forcing people to shield a PIN pad with their left hand before entering a PIN with the right; obfuscating which screen object you’re selecting by moving rings that select multiple objects; and using combinations of finger pressure from both hands to select digits from a keypad. The goal is to enable one person to authenticate at a multi-touch shared tabletop without other people at the tabletop being able to shoulder-surf and without creating social discomfort.
Markus Jakobsson is concerned with app spoofing, where authentication events in games (for example, to buy a magic sword) can be spoofed by an attacker. The test is whether naive users get logged in at good sites but not at bad sites; the method is to use an interrupt mechanism (such as shaking a mobile phone or pressing a home button) to cause the device to check a site certificate. His work is at http://www.spoofkiller.com and the trick is to train the user to perform the interrupt sequence as a conditioned reflex. He’s also interested in password entry speeds and error rates on mobile versus desktop devices (see http://www.fastword.me ).
In the discussion people wondered how you’d use Pico in bed or other places when your clothes and other devices aren’t nearby; why we haven’t learned the lessons from “Why Johnny Can’t Encrypt”, and in particular why people who build things seem to be falling ever further behind the research community; how people could use modern mechanisms like Pico in environments such as the DoD where people aren’t allowed to use cameras; on how facial expressions and voice features generalise well across persons and cultures while body gestures don’t; on age- and sex-related differences; on the feasibility of trying to train users versus exploiting natural behaviour (which is much easier); on whether we use folk taxonomies of psychological states or something more refined (Peter Robinson uses categories from natural language, as being those that people grasp culturally, and allows for multiple overlapping emotions).
The second session, on foundations, was kicked off by Shari Pfleeger. In her own multidisciplinary work she’s learned to explore and document assumptions, as different disciplines have different starting points. It’s also important to think in series of studies rather than one-off projects, to think of effective training, and to understand scale: for example New York has over a thousand times as many first responders as North Dakota.
Terry Taylor has a group working on Darwinian security. Nature doesn’t plan: biological systems don’t waste resources trying to predict future states of a complex and changing environment but rather invest in the adaptability. But how do we operationalise this in cybersecurity, emergency response or the resilience of commercial organisations? Responding is what you do immediately and adapting is medium-term. The best solutions tend to be recursive; according to Geerat Vermij, a modular structure of semi-autonomous parts under weak central control is often best. The DHS and FEMA are exactly the opposite, and their response to Hurricane Katrina was not impressive. Managing uncertainty also matters; living things try to decrease uncertainty for themselves for increase it for their predators, prey or rivals, while many of our anti-terrorism measures simply increase our own uncertainty. Symbiosis, or cascading adaptation, is also worth study.
Michelle Baddeley is a behavioural economist interested in the role of learning and emotions in online systems. Drivers for online behaviour include risk attitudes, time inconsistency, peer pressure, and visceral factors such as fear, greed and impulsivity. Explanations of time inconsistency are perhaps the most developed contribution from behavioural economics; the procrastination literature should help anyone developing systems for backup. Michelle’s work is largely in peer pressure and herding, and interaction with emotions. The literature on addiction, gambling and speculation is probably worth mining, as is the link with evolutionary biology: fear is a proximate mechanism for surviving predators, and leads to social learning in markets. Models of belief learning and reinforcement learning may help explain online behaviour.
Dylan Evans disagrees with Terry Taylor’s view that nature doesn’t make predictions: we are all good at making (probabilistic) predictions. He’s interested in identifying good forecasters and has devised calibration tests of people’s risk intelligence (at http://www.projectionpoint.com ) taken by over 40,000 people last year. Risk intelligence appears to be highly domain specific; the high-RQ people such as expert weather forecasters or horse handicappers build up their knowledge of a domain slowly and often unconsciously. Curiously, US weather forecasters are better than British ones; in America, forecasters are required to give numerical estimates of outcome probabilities while their counterparts in Britain are allowed to waffle. Some bridge players are also expert at estimating the probability of making a contract. He has an article on this at http://bit.ly/jIhWzN .
The last speaker of the morning was Milind Tambe who’s ben working on game theory for security, and in particular on using Bayesian Stackelberg games to model interaction between adversaries and defenders under uncertainty. Solutions are in general NP-hard; he’s designed a system that’s now used to schedule deployment at LAX (called ARMOR) and another used to allocate air marshals to flights (IRIS). Systems under development are for the allocation of guards at airports (GUARDS) and for deployment of coastguard patrols (PROTECT). These are big systems that chew a lot of data, provided by analysts who try to figure out how much damage could be done where by different types of attack in the case of airports, and tackling huge numbers of possible schedules for air marshals.
There was discussion of the value for deterrence of decoy protection measures such as fake CCTV; on the stupidity of attackers; on whether we can learn anything useful about countering terrorism (where there are too few attacks to do statistics) from data on types of attack that are common (such as fare dodging and poaching); on the effect of risk thermostats and attack displacement; the role of principal-agent theory, which has been applied to information security by Frank Pallas (see http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1471801 ); that people with high risk intelligence seem to be exactly those who have practised for a long time, with detailed numerical feedback on their performance, regardless of their IQ, and that thinking of reasons why you might be wrong is a way to improve risk intelligence quickly; and the relevance of developmental psychology for understanding attitudes to risk.
The afternoon session started with Eric Johnson talking on what we might do about spear phishing. Even senior people at tech firms can be bizarrely naive about the technical aspects of infosec; Anup Ghosh reckons educating users is a myth. He’s been spear phishing people at Dartmouth, taking them to one of five experimental pages on phishing education; then following up with a second attack five weeks later from a different pretext source. The treatment group saw over 70% hits on the first round decline to 52% on the second round. It’s still work in progress, but it seems education does have some effect.
Richard Clayton then discussed the various URL formats used by phishermen to confuse users who try to understand them. Why do phishermen still embed bank names? Well, most of them use kits, so Richard studied instant messaging worms. These use either adaptations of target service names, or URL shorteners, thus providing us with a natural experiment. It appears that URL shorteners get fewer people to click than even fairly crummy URLs. In conclusion, criminals appear to believe that having bank names in URLs helps their click-through rate, and they seem to be right.
Jean Camp is studying computer security for older people. Retirees tend to favour safety over privacy; they are more articulate and less shy, so they give better feedback. However they care enough about privacy that home monitoring systems which ignore it may fail in the market. She surveyed attitudes to a number of assistive systems by perceived usefulness, activity sensitivity, data granularity and data recipient. The data recipient had the strongest effect; seniors were least prepared to share data with vendors – yet most corporate models assume that such sharing will be coerced. This leads to the question of how we might support technologies for peer production and community support.
David Modic’s topic is whether any personality traits are common to victims of Internet scams. They use the five-factor model of personality, ignoring self-control as some say it’s not a trait but a resource than can be depleted. They surveyed 506 people (students and from arstechnica) of whom 67 turned out to be scam victims and 7 actually lost money. 28% of the variance was explained by three factors: people high on premeditation or extraversion were less likely to be victims but agreeableness increased vulnerability. But the effects are fairly weak; he needs more victims!
Finally, Stuart Schechter talked on “The security practitioner’s Nuremberg defence”. AV software is marketed as “part of a defence in depth” despite its many shortcomings. You can say the same about user account control, and many other products. But do partial defences really work, given the bystander effect, responsibility diffusion and risk homeostasis? Wimberly and Liebrock’s Oakland paper “Using fingerprint authentication to reduce system security” shows they may not; an irrelevant protection mechanism can cause people to slack off on more important defences. So we should tell people that a password is bad, but not that it’s good; and avoid any promises that could make users less vigilant.
Questions touched on methodology: why elders distrust vendors, where it appears they are leery of profit-maximising as opposed to community/NGO providers; in David Modic’s sample, the arstechnica members were less gullible than students; whether people targeted in 419 scams are particularly gullible, and whether there’s an analogy between these scams and charity balls (the current scams are more about helping distribute millions to charity than about African dictators); on how we might incentivise better defence in depth by reducing externalities between the principals responsible for different layers – including the user; whether people are vulnerable to scams because they are dishonest, or whether this is “blame the victim”; whether we should aim in general to make people more or less vigilant; and what we can realistically teach people – we can teach them what a lottery scam is, or a 419, but not technical stuff like how to parse URLs as the bad guys just adapt.
David Livingstone Smith has written a book on how politicians and others try to depersonalise their enemies in preparation for conflict by representing them as predators, prey or vermin. Representing enemies as subhumans is universal but hardly studied – maybe three dozen papers in social psychology and that’s it. The idea of a “great chain of being” from god through man to animals to inanimate matter is ancient and pervasive; we used to be just below the angels but with the death of God are now at the top. The ranks are like Mill’s “natural kinds” in a folk metaphysics, and there is much evidence that humans are natural born essentialisers. Essence is not appearance, so a member of a target group can have a nonhuman essence or a monstrous fusion of human and subhuman, thus diminishing our moral obligation to them – even to the point that they deserve killing.
John Mueller will publish a second book on risks, costs and terrorism with Mark Stewart later this year; he estimates additional costs of about one trillion dollars since 9/11 (and that’s excluding Iraq, Afghanistan, and even people who kill themselves by driving rather than flying). An NRC committee found that the DHS has good risk analysis for earthquakes, hurricanes and the like but not for terrorism. The right question to ask is how much we should pay to reduce a risk that’s already extremely low. Probabilities are neglected; sometimes added rather than multiplied. Even the World Trade Center was not a key resource as its destruction did not do grave damage to the economy; so why is the Statue of Liberty considered to be one? The $75bn spent each year directly would have to stop 4 Times Square attacks a day to make economic sense.
Stephen LeBlanc mostly studies prehistoric pottery, which led to an interest in prehistoric warfare, first in the southwest and then generally. Prestate warfare had very high death rates; where the data are good, 15-25% of the males in most past societies, versus about 5% of women. He can find no evidence of a society being peaceful for more than a century or so. When resource stress is removed (tools, epidemics) warfare declines hugely. There is no empirical basis for the myth that primitive societies were peaceful. His latest research is about pushing this back in time from early farming societies to foraging societies, when we evolved – and he finds no difference. In fact some foraging societies organise armies of thousands despite living in family groups of dozens; they fight over women and revenge, which seem to be hard-wired human universals. Indeed the social orgaisation into chiefdoms is how Afghanistan works. Why don’t we study it more?
Cory Doctorow’s subject was the “security syllogism” – something must be done, this is something, therefore it must be done. For example, should every device capable of being a software defined radio (including a PC) have to run only FCC-signed code? The EFF stopped that one. Another example is the copyright war where dongles failed completely and DVDs aren’t much better. Many mechanisms fail but create unpleasant externalities. Now: national firewalls, website blocking, lawsuits against Youtube, device lockdown, and even a camera that won’t film in the presence of an infrared signal that would be broadcast at live events. An infrastructure for preventing the public from understanding what’s happening on their devices can have huge negative social consequences. What about 3-d printers? Will they be controlled, and how? We can’t let security by design become the basis for the information society.
Baruch Fischhoff discussed how we might import what we know of behavioural science into intelligence work. Many intelligence techniques have face validity but need evaluation; intuitions cannot be trusted. Sound methods appear only slowly through peer review. See “Intelligence analysis for tomorrow” at http://www.nap.edu/catalog.php?record_id=13040 for the detail. This is particularly the case in the USA where two third of the analysts have been hired since 9/11; there was a hiring dearth during the peace dividend years. Agencies retain too many of the people who work the system and not enough of those who work the problem. Baruch learned from Alan Baddeley that science progresses by a combination of applied basic research, and basic applied research. Evaluation can be hard but is usually cheap once you figure it out and is essential for continuous learning.
Discussion topics included the reality of the gulf between public attitudes to risk and the reality; entrenched public attitudes to terrorism risk; the correlation of ancient warfare with climate change (ice ages cause wars, while the medieval warm period brought peace); the origin of gender differences in conflict-based selection; whether depersonalistion plays as big a role in tribal warfare (offence and defence are different – attackers self-select while all contribute to defence); whether we’re making progress (we are – wars have become much rarer, especially in the last 20 years); whether dehumanisation is ever a moral imperative (e.g. in the von Trier case); whether gang warfare is similar to forager warfare; the mechanics of leadership; whether we need an international convention on the recording of wartime casualties; why we don’t blame politicians for wasting money on security; the role of religion, training and the buddy system in preparing people to kill; whether the rules of war mitigate casualties (not clear); and the different levels of success enjoyed by the variety of people who try to sell fear.
Cormac Herley started the second day’s proceedings by talking about scale. There are now over 12 billion password-protected accounts worldwide: three times the total when Bill Gates told us “the password is dead” in 2003. Facebook has more accounts than the Internet did at the Netscape IPO in 1995. There are many bullshit statistics, often from firms selling non-password solutions. Certainly many mass-deployed systems are carefully optimised, but passwords aren’t – at least for the user! Hand-wringing about the hassle, and identity theft, leads to initiatives like NSTIC. But hang on: Facebook grew to 1 million users on angel funds of $200K; any per-user authentication cost north of zero would have hosed them. Forget about the goal of getting rid of passwords.
Angela Sasse now has a PhD programme across computer science, crime science and psychology with a joint taught first year and joint supervision. Her topic was security compliance in a high-risk environment and what happens when we have ever more embedded tech. We basically know how to design systems to deal with various kinds of error – read Jim Reason – but engineers don’t try when their incentives are wrong.
Rick Wash has previously documented people’s folk models of security, and is now interested in influencing them. Folk models are the basis of much human reasoning. Some people think of malware as buggy software, so they just don’t download stuff; others as mischievous software, caught by visiting shady parts of the Internet or opening shady emails; others as criminal software. Only this last type sees anti-virus software as essential, and although it’s also incomplete, leads to better decisions. Models are usually based on narratives. We need to find better stories – and don’t forget the importance of social norms / group behaviour. Stories from people like me have much more power than advice from experts. What would a PatientsLikeMe site look like for security?
Rachel Greenstadt has been working on adversarial stylometry (the attribution of documents to authors based on linguistic style). An early example was the disputed authorship of the Federalist Papers; modern stylometry systems use statistical machine learning and have about 90% accuracy in the non-adversarial case. But what happens when people try to fool them? She’s found that when people consciously change their writing style, this is devastating to accuracy, which drops to random. This in turn asks whether we can build tools to automate or help writing style adjustments (using translation software obscures most features but not those based on synonym classification), and whether we can build analytics to detect deceptive writing (we can: people write more simply, and their text has Hancock’s lying indicators except that personal pronouns are used more rather than less). There may eventually be an interactive tool to help people learn to write anonymously.
Lorrie Cranor was the last speaker in the session. Usability at CMU got hosed by NIST SP 800-63 when the university joined a consortium that had adopted it. She recruited 5000 participants on mturk to study password conditions. An interesting discovery was that passwords should be longer rather than more complex: 16-letter alpha was better than 8-digit mixed type (more entropy, less likely to write down). She’s also been studying what people do on Facebook that they subsequently regret: in descending order, personal information, sex, relationships, profanity, alcohol/drug use, jokes, lies. Indicators are bad mood, being excited, careless, under the influence, didn’t mean to post it, unintended audience. Consequences are mostly not tangible: guilt / embarrassment (a few cases of hurt relationships or getting into trouble). The worst was a lady accidentally posting a video of herself and husband having sex; she’d planned to post an innocuous video but uploaded the raunchy one too, and only learned of it when she got feedback from friends.
Discussion topics included password entropy estimates; password reuse; plagiarism detection; whether regrets offline are hugely different from regrets online (the offline regret literature is about life-scale regrets like doing the wrong college major or marrying the wrong person); the change brought about by Facebook, in that comments previously shared with three friends in a bar are now world-readable and permanent; whether excluding accidental postings would have a significant effect on the pattern of regrets (maybe not; they’re a small subset); and whether stylometry could be used to out astroturfers.
Andrew Odlyzko continued from his talk at SHB last year, arguing that gullibility is essential to economic progress, with insecurity an unavoidable side-effect. Gullibility creates trust and social capital; beautiful illusions beat cold reality; in modern language, the endowment effect and optimism bias drive investment. Classical economists such as Adam Smith opposed joint-stock companies as a trick for swindling investors, and if alive today might well say "I told you so!" The railway manias of the 1830s and 1840s were the big change: the first enriched investors while the second ruined many. Andrew discussed the debates on investment regulation and property rights; the boom created real assets, prevented the outflow of capital, may have prevented a revolution in 1848, and led to mature attitudes towards investment. After Schumpeter's entrepreneurs, Keynes' animal spirits, and Akerlof and Shiller's stories, we also need to study the "Pied Pipers" – the snake-oil salesmen who induce investors to pile in.
Simon Shiu studies the security decision-making process by means of decision support case studies with firms and government departments about things like USB policy, identity policy and enterprise DRM policy. They may have found some confirmation bias, which could be countered by security framing; they think they need to get stakeholders thinking about security problems holistically rather than treating outcomes and probabilities separately. The emerging problem is that firms don’t control the software lifecycle any more with the move to the cloud; more empirical studies are probably needed here. They have another project called “cloud stewardship economics” which studies changes in how corporations procure IT.
Rahul Telang has studied why many people infringe copyright. If music companies cut prices, would they sell a lot more? Or is it down to a lack of legal options, for example when movies aren't available outside the USA for months? A natural experiment happened when NBC removed its content from the iTunes store on Dec 1 2007: its content was pirated 12% more afterwards, yet there was no change in physical media sales. iTunes sales had been about 16 per episode; instead there were 33 extra pirated downloads per episode, roughly two for every lost sale. He argues that piracy has a fixed cost and a variable cost: once you overcome your moral qualms (or learn to use BitTorrent), downloading gets easier, and indeed there was a spillover to additional downloads from other suppliers. So behavioural models matter, and policy interventions should pay attention to people's self-regulatory systems.
Henry Willis works on helping organisations manage risk, and is interested in whether terrorism is perceived differently from other hazards. For example, how does one trade it off against climate risk? The probability of a storm surge in New Orleans is likely to increase over time; what combination of regulation and risk communication is ideal? He's done a literature survey of how people think about natural and man-made hazards, from the public-health aspects to dread and uncertainty. FEMA is being required to do regional and national risk assessments, and may adopt his methods.
The last speaker of the morning was Bashar Nuseibeh, who's working on location privacy. When A follows B, the vulnerable party might be either of them. They instrumented two families with a buddy-tracker application for an ethnographic study, and found it made people feel conflicted: they were tempted to see where family members were, but felt guilty. When they were given tracking tasks, they became disinhibited. People found being tracked even more uncomfortable, and didn't want to know if it was happening; nor did they want to use privacy mechanisms, as that would signal they had something to hide. There's a dialectic between technology affordances and family contracts: people get uncomfortable even when they have nothing to hide. In conclusion, tracking in close-knit groups is difficult: the closer the relationship, the less scope there is to deploy privacy technology.
Discussion topics included how to deal with benign location obfuscation; the inevitability of treating terrorism risks in different programs from flood risk; whether it's sensible to treat bioterror quite separately from epidemics, and to treat cyber only as terrorism, thus excluding the possibility of accident (as Morris said his worm was, and as the Brazilian power outage turned out to be); where the money goes in manias (it's diffused into the infrastructure, which can't meet expectations anyway, and into advisers, who provide reassurance); how the content industry adapts poorly to technology, with risk-averse executives who are insufficiently disciplined by shareholders; and the extent to which people actually want (or are prepared to pay) to be honest.
Adam Joinson started off the afternoon talking about the tension between family and Facebook. Excessive social contact is a bad thing, like crowding (or the old saying that guests, like fish, go off after three days); people who live in crowded quarters tend to be less sociable and more reserved. We use boundary regulation as a signal when developing relationships, and the inability to manage boundaries dynamically is disruptive! We need to be able to use gaze aversion or other intuitive signals, not have to go to a privacy-preferences page. He's been using computational linguistics to identify 82 LIWC categories for privacy: see http://www.privacydictionary.info for more, and http://www.interactionslab.com/sensitweet for an application to Twitter that will measure the sensitivity of your tweets. Finally, he's editing a special issue on privacy methodology for the International Journal of Human-Computer Interaction.
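(As an aside, dictionary-based scoring in the LIWC tradition reduces to counting category hits per token. The toy scorer below illustrates the mechanics; its two categories and handful of words are invented by me, not the actual privacydictionary.info word list.)

```python
# A toy LIWC-style sensitivity scorer: per-category hit rates over tokens.
# The categories and word lists are invented for illustration only.
import re

PRIVACY_DICT = {
    "health": {"doctor", "diagnosis", "medication", "therapy"},
    "location": {"home", "address", "street", "nearby"},
}

def sensitivity(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid dividing by zero on empty input
    return {cat: sum(w in vocab for w in words) / total
            for cat, vocab in PRIVACY_DICT.items()}

print(sensitivity("Picked up my medication from the doctor near home"))
```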
Ashkan Soltani is interested in online tracking. He looked at the top 100 sites for third-party beacons and cookies: Google beacons were found on 92 of the top 100, for example, meaning Google sees some 88% of online browsing activity. Incentives undermine privacy: Safari is the only browser that blocks third-party cookies by default, and Apple is the only browser vendor without lots of ad income. In addition, 54 of the top 100 sites use Flash cookies. Smartphone apps are also bad: over half send device IDs home, and just over half phone home to more than one location. Although we "buy" our phones, and our browsers are "user agents", they report to others. Is one approach to help users visualise the data, or make it salient? When people made nice graphics of Apple's location logging, people suddenly cared.
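(One crude way to replicate a slice of this measurement yourself is to enumerate the third-party hosts a page pulls scripts and images from. The stdlib-only sketch below inspects static HTML; the actual studies instrumented real browsers, since much tracking is injected by JavaScript after load.)

```python
# A crude census of third-party resource hosts on a single page. Real
# tracking studies drive a full browser, since much tracking is injected
# by JavaScript after load and observed via the cookies it sets.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class ResourceHosts(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe"):
            src = dict(attrs).get("src") or ""
            host = urlparse(src).netloc
            if host:
                self.hosts.add(host)

page = "http://example.com/"  # substitute any page you want to audit
parser = ResourceHosts()
parser.feed(urlopen(page).read().decode("utf-8", errors="replace"))

for host in sorted(parser.hosts - {urlparse(page).netloc}):
    print("third-party resource host:", host)
```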
Andrew Adams is interested in privacy by default. Why should developers bother? Apple sells hardware, Microsoft sells software, and Google sells you. Eric Schmidt once remarked that customers should determine privacy settings – and indeed, his customers are advertisers. Sometimes things go wrong, as with Facebook Beacon and Google Buzz. The best thing we can do is perhaps to build decent tools that let users lock stuff down.
Paul Syverson introduced onion routing and Tor, the anonymous communication system he helped create. DoD personnel are told not to wear "US Navy" baseball caps overseas, lest they become targets; similarly, a Navy man logging on from a hotel overseas gives himself away if the local ISP notices him having an encrypted session with navy.mil. There are many other uses, from open-source intelligence to limiting the visibility of traffic data on classified networks; in business, customers may want to resist price discrimination. He's now doing trust modelling of node compromise, based on game theory and aimed at stopping correlation attacks. That's OK for the Navy, but what can we do for the little guy, such as a Tunisian activist?
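(For those new to onion routing, the core idea is layered encryption: the client wraps a message in one layer per relay, and each relay peels exactly one layer, so no single relay sees both source and destination. The sketch below shows just that layering, with pre-shared keys for brevity; real Tor negotiates keys per circuit and carries traffic in fixed-size cells, so this is a cartoon, not the protocol.)

```python
# Layered ("onion") encryption in miniature, using the third-party
# 'cryptography' package. Keys are pre-shared here for brevity; real Tor
# negotiates them per circuit and carries traffic in fixed-size cells.
from cryptography.fernet import Fernet

relays = [("guard", Fernet.generate_key()),
          ("middle", Fernet.generate_key()),
          ("exit", Fernet.generate_key())]

def wrap(message: bytes) -> bytes:
    # Encrypt in reverse path order so the guard's layer ends up outermost.
    for _, key in reversed(relays):
        message = Fernet(key).encrypt(message)
    return message

onion = wrap(b"GET https://navy.mil/ HTTP/1.1")

# Each relay peels exactly one layer; none of them sees both who is
# talking and to whom.
for name, key in relays:
    onion = Fernet(key).decrypt(onion)
    print(name, "forwards", len(onion), "bytes")

print(onion)  # only the exit relay recovers the plaintext request
```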
Most of Alessandro Acquisti's work is behavioural and aimed at privacy, but he also does data-mining work. Augmented reality as seen in "Minority Report" could be closer than we think, thanks to improved face-recognition algorithms and huge accessible online databases, notably at Facebook. He ran experiments to see whether off-the-shelf software could be used for large-scale recognition, matching faces from open systems (Facebook) against private ones (a dating site, and people walking round campus). It could indeed: he used the subjects themselves to verify photo matches in the real-time live work, and mturk workers in the offline studies, with match rates of 30-42%. What will privacy mean in an age of augmented reality? Word Lens already translates signs; a future app will recognise people in the street and tell you everything from their credit score to their last tweet. What will the effect be on social interactions?
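(The matching step in such studies boils down to comparing face embeddings and declaring a match above some similarity threshold. In the sketch below the embeddings are random vectors standing in for the output of an off-the-shelf recogniser, and the threshold is illustrative – neither comes from Alessandro's actual experiments.)

```python
# The matching step, in miniature: rank gallery faces by cosine similarity
# to a probe photo. The random vectors stand in for the output of a face
# recogniser; the threshold below is illustrative, not from the study.
import numpy as np

rng = np.random.default_rng(0)
facebook_gallery = {f"profile_{i}": rng.normal(size=128) for i in range(1000)}
probe = rng.normal(size=128)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = sorted(((cosine(probe, v), name)
                 for name, v in facebook_gallery.items()), reverse=True)
best_score, best_name = scores[0]

THRESHOLD = 0.5  # trades match rate against false identifications
print(best_name if best_score > THRESHOLD else "no confident match")
```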
In questions, topics included the inevitability of real-name policies; cultural differences, such as the default of using pseudonyms on the dominant Japanese social network; whether privacy disclosure should be regulated in the way that credit-card terms are disclosed; whether privacy is a property or a right (I can't sell my freedom, or my children); whether we should teach kids to be anonymous instead of saying "don't be naughty as you'll be seen"; privacy market failures, including the fact that free and paid apps collect the same data; whether data poisoning might give any privacy, or whether the false data we give will just be swamped; the effects of convenient single sign-on via Facebook or other identity providers, and whether it's a serious threat to privacy by tying threads together and collapsing multiple vantage points; whether there might be some way of getting firms to delete data after some period of time, perhaps by attaching liability or other real costs to retention for excessive periods (this is an issue in the EU); the value of imposing reciprocity, as in David Brin's "Transparent Society", versus the fact that abuse is more a function of power than of access; the risks of being tagged as the wrong person by accident, and of malicious impersonation; and the risks of trusting technology to assess people rather than the instincts we evolved over millions of years.
What an absorbing session! Thank you so much for these notes — lots of food for thought, and a tremendous service for those of us who could not be there.
Thanks for all these notes – most interesting.