I’ll be liveblogging the workshop on security and human behaviour, which is online this year. My liveblogs will appear as followups to this post. This year my program co-chair is Alice Hutchings and we have invited a number of eminent criminologists to join us. Edited to add: here are the videos of the sessions.
Luca Allodi started off SHB 2020 with a discussion of the complexity of the modern cybercrime ecosystem: there’s a spectrum of marketing from strategic to deeply technical. Campaigns can be single-shot or complex and multi-stage. A simple marketing analysis based on Cialdini’s ideas doesn’t come to grips with this. What’s needed if we want to model sophisticated social-engineering campaigns?
Richard Clayton was next, talking about booters: these are DDoS-for-hire services, of which the biggest is selling a million attacks a week. They cost $10 for as many attacks as you want in a month, and are mostly bought by online gamers. Richard analysed the attack volumes over time and measured the impact of various law-enforcement interventions. There was a blip at Easter, when more people were at home playing games, and the lockdown is having a similar effect. There are about three times as many attacks now as there were in January. All graphs on the Internet go up and to the right, so how do we measure? Different countries locked down at different times. Are they all about games, or kids DDoSing their maths homework, or civil rights activists attacking the government of Minnesota? That’s what we’re looking at now.
Peter Grabosky is interested in overreach of the criminal law and studies common factors contributing to abuses, ranging from stigmatisation through task orientation and professional tunnel vision to performance measurement such as arrest quotas. He’s now looking at abuses in undercover investigations ranging from police impersonating children online to their seizing of criminal websites. To what extent can we carry over the controls developed for in-person investigations to the online world?
Anita Lavorgna’s topic is cyber-organised crime. She believes that crime is less organised online than some cybersecurity companies and politicians suggest. Many studies use very low thresholds to classify crime as organised; Anita tries to disentangle this from an empirical, legal and theoretical point of view. It goes both ways, in that some organised cybercrime doesn’t meet real-world criteria of seriousness per offence to get classified as such. And traditional organised crime gangs may be marginalised in cyberspace as they’re not efficient enough. In short, don’t see organised crime as just a hodgepodge of different serious offences. We need more careful definitions.
Cassandra Cross works on the definition of fraud; in Australia this is gaining a financial or other advantage by means of deception. Reported fraud is half a billion a year in Australia and it’s mostly online. Her work’s focus is victims and their support: prevention and response are important but millions of people become victims regardless and we must not ignore them. This involves working with banks, government agencies and others.
Alice Hutchings has been looking at the evolution of cybercrime in response to Covid through the lens of postings scraped from underground forums, and in particular Hackforums, which has compelled users to transact via its marketplace system since 2019. This facilitates trade between anonymous parties by having third-party arbitrators. There has been a huge increase in membership since February 2020, and new market entrants seem to overcome the cold start problem by doing small currency trades to establish a reputation. The structure remains the same, with a “business-to-consumer” model of a few power users transacting with large numbers of smaller users. If anything there has been a slight concentration. Marketplace comments indicate boredom, people being furloughed, and people becoming unemployed and needing to make money.
The next speaker, Sergio Pastrana Portillo, was the guy who started scraping Hackforums while at Cambridge. Now at Madrid he’s working on eWhoring, an offence whose existence we only discovered from forum data. It consists of hackers pretending to be girls and selling “packs” of explicit videos and photos that were obtained by deception or on underground markets. Three-quarters of the images were stolen from adult porn sites, social networks and so on; three dozen child sex abuse images were identified, reported to the Internet Watch Foundation, and taken down. Payment is usually via Amazon or PayPal. One participant revealed that to earn real money you need to do blackmail as well.
In discussion, Cassandra observed that everyone is vulnerable to fraud, regardless of their level of education; it just takes a different pitch. How do we intervene before offenders become highly skilled? We’d need to look at their sociocultural context; look at the NCA work with young people. How are marketplace rules enforced? No official enforcement has been seen, but it’s convenient to use the PM facility, and if you don’t, and then ask for arbitration, you’ll lose and be banned. How is the online/offline distinction evolving with lockdown? We’re seeing a lot of fraud around PPE and covid storylines used to drive classic cybercrimes; in Australia the frauds around bushfires segue into covid scams, and the most common involve an appeal to authority (they pretend to be from a government or a bank), plus urgency. As governments scare people with urgent health messages, perhaps little else could have been expected. Is the move to fraud online causing it to concentrate, with market “power users” following the rich-get-richer dynamics so common in the legitimate information industries? Might the move from email to Teams / Zoom / Webex affect this? From the viewpoint of frauds against consumers and attacks on home workers, it’s not clear; distractions from kids, pets etc may also be a factor. Measurement might be hard. In the UK the police try hard to arrest people doing PPE scams, and the NCSC has been targeting “fake shops” (with some false positives), so this will bias the figures. How can we get data? SMEs have about the same figures as private individuals, according to a forthcoming paper by Sophie van der Zee. Rick Wash pointed out that the social norms around email, Slack and Teams are different, in particular in terms of whether an immediate response is expected; Zinaida has done some work on this by comparing responses on Facebook and email.
Zinaida Benenson started the second session. She’s interested, among other things, in the trade-off between confidentiality and availability, and reported experimental work on how experience with ransomware affects people’s willingness to pay for backup and security services. Subjects saw a ransom note, did a learning quiz, then saw a second ransom note, and were then quizzed on their willingness to pay. Between a third and a half would pay, and willingness was correlated with being young or old, trusting the criminals, being well-off, being frightened and having no actual experience with ransomware.
Maria Brincker is a philosopher interested in knowledge opacity. If you move your arm, you know you did it, but you don’t know how. When we use digital tools, we are therefore comfortable with their mechanisms being completely opaque. We just expect that results will be predictable, and our cognitive processes developed to deal with exceptions, however we sense or define them. But when breaches of trust by platforms are out of sight, and not particular but diffuse, and don’t impact immediate usability, we’re sort-of helpless. That’s a case for FDA-style regulation of these kinds of harms.
Judith Donath also works on trust and trustworthiness. What do we gain or lose when we found trust on enforcement mechanisms? In the old days a store owner would extend credit to people he knew; now it’s about credit cards and rating agencies. Forty years ago people hitchhiked; now we take an Uber. Is technology moving us from a world of trust, which is essentially positive, to a world of constraints, which is fundamentally based on fear? Judith believes that we should start seeing trust as an end in itself. Our technical ability to make protein pills for astronauts has not led us to abandon old-fashioned food with all its problems and side-effects.
Leigh-Anne Galloway works on payments. Everyone has to use payment systems but few understand what goes on under the hood. She wants to make security more accessible. She is also an artist, interested in exploring different ways of representing meaning: a chair, a picture of a chair, and so on. She wants technologists to be better at explaining their ideas by presenting them in different ways. This is particularly important for security people, as the concepts we deal with are subtle and counterintuitive. See the recording; much of the effect was in her slides.
Eliot Lear described how his cousin’s oven suddenly decided one morning that it needed to be cleaned, and this had the effect of a service-denial attack. Can this be scaled up? Sure. The chain of trust involved in controlling an oven from a smartphone has a lot of known weak points. So what does it mean to protect devices in the Internet of Things? This is not at all obvious. You’d be stunned to know how much IoT stuff is being built into new hotels; if your son or daughter goes to a drugs party in a hotel room, the hotel manager will have evidence of it. He has a lot of data on what these things talk to.
Simon Parkin continued the theme. What happens when two users in a shared space disagree? For example, one might want the loudspeaker on full while the other wants mute: do you just set it at half volume? What if one user wants notifications and the other doesn’t? Usability suddenly becomes a tussle space. Do you have a primary user, and a secondary one? One might be a survivor, or perpetrator, of abuse. Is human memory enough to recall all the settings across all the devices in a home, and all the rules for which features are compatible? Quite apart from the technical protection aspects, what should count as correct usability design in the first place? Then we need a lot more work on the unintended harms of risk controls, as controls designed to exclude outsiders can dump risk on legitimate users or classes of users.
Discussion started on issues of agency. Should a smart device know that someone approaching it is “logged in” or otherwise allowed to use it in some context? And what does assurance mean in this context? A good start might be some documentation from the manufacturers about what they intended it to do. You can’t just put drugs on the market; they need to be tested, and there must be some transparency. But how can you ensure that regulation is more competent and less captured than the FDA? It takes community pressure, which happens from time to time (when there’s a crisis). Many of the issues are around scaling: the crypto definition of trust, in that a trusted component is one whose failure can harm you, can scale (sort-of), while Judith’s affective version of trust is interpersonal. Quite simply, constraints scale better. You can scale affective trust via religion, but religions use social surveillance, and some of the most successful religions made God into a kind of CCTV camera inside your head, which is a bit like a tech constraint mechanism (though religions have ideals and role models too). Usability does matter in tech: see how Zoom has brushed aside dozens of competitors in the videoconference market by investing real engineering time and effort in getting the user experience right. As a result people are prepared to cut Zoom a lot of slack when it screws some things up. It should not be rocket science to do better: in real life, two people can have a private chat by moving away slightly from a group, and you can cut out someone who’s being obnoxious by moving away. (Mozilla Hubs does this with sound levels of avatars but is limited to a couple of dozen people in a room.) Bringing a number of these threads together, how do we encourage good respiratory hygiene in the pandemic? Do we go the tech route and entangle people with sensors and trackers, or do we build manners and courtesy? In any case, technological constraints can be fragile unless backed by law; look at how Silicon Valley has disrupted lots of old constraints, such as the presumption that states could wiretap all private communications. To explore this you need specifics: many constraints are perspectival, in that they constrain Alice but empower Bob. Context matters: the early Internet was well-behaved in part because people got online via work-based or college-based accounts, so misbehaviour could have costs.
Sadia Afroz started the third session discussing externalities between users: if someone at your IP address runs a spam campaign, you can see CAPTCHAs everywhere or just get blocked outright. She set out to measure the effect for users and for network operators. She started with 151 public blocklists containing 2.2m addresses (including DShield, Spamhaus etc) and surveyed 65 network operators; 80% use blocklists, and 59% use them for IP addresses. How do these lists work with dynamically allocated addressing and NAT? She used a BitTorrent crawler to crawl 48m BitTorrent IPs, found 2m that were NATted, and 45k of those were on a blocklist; over 70 users could be blocked at a single IP address.
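As a rough illustration of the kind of cross-referencing involved (this is not Sadia’s actual pipeline, and the file format and peer-identification method are assumptions), one can count distinct peers seen behind each external IP and then check which of those IPs appear on a blocklist:

```python
# Minimal sketch: how many distinct users sit behind each blocklisted IP?
import ipaddress
from collections import defaultdict

def load_blocklist(path):
    """Load one IP or CIDR range per line into a list of networks."""
    nets = []
    with open(path) as f:
        for line in f:
            line = line.split("#")[0].strip()   # drop comments and blanks
            if line:
                nets.append(ipaddress.ip_network(line, strict=False))
    return nets

def is_blocklisted(ip, nets):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in nets)

def users_behind_blocklisted_ips(peer_records, nets):
    """peer_records: iterable of (external_ip, peer_id) pairs from a crawl.
    Returns {ip: number_of_distinct_peers} for NAT-looking, blocklisted IPs."""
    peers_per_ip = defaultdict(set)
    for ip, peer_id in peer_records:
        peers_per_ip[ip].add(peer_id)
    return {ip: len(peers) for ip, peers in peers_per_ip.items()
            if len(peers) > 1 and is_blocklisted(ip, nets)}
```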
Laura Brandimarte has been testing the hypothesis that emotional arousal might moderate privacy behaviour. As arousal decreases the effort we put into cognitive processing, it might make phishing easier. She recruited subjects who used mobile banking apps, showed them either a scary message on their phones saying the phone had been hacked or a neutral one, and used software to assess emotion from facial expressions. Companies assume that trust in technology will mute privacy concerns and increase willingness to adopt tech. Her work suggests this won’t work as well on people who have had a scary experience. The effects on willingness to use were not significant.
Sunny Consolvo has been working with colleagues at Google on US political campaigns. Her talk was off the record. We’re sorry but we lost the recordings of the two previous talks when we switched off. For a workshop partly about usability…
We know that people’s willingness to share information is context dependent, and Alissa Frik has been studying this in detail. She’s building a richer model which includes the sharer, type of data, recipient, purposes, system, environment and risks at the first level, and a number of other factors at the second level (so the risk to a bank account depends on how much money you have in it). Every study of context invents its own framework, so it would be good to have one to share. The full paper will appear at WEIS later this year.
Frank Krueger is a neuroscientist trying to build a coherent model of trust that makes sense all the way up and down the stack in the brain. In the brain, trust is quite complex: there’s an emotional state associated with the risk of betrayal and typically arising from experience of reciprocity. He plays trust games with subjects in fMRI machines and tracks trust evolution through repeated interaction. There’s a balance between treachery and reward that can be mediated by economic (cognitive) or social thinking. Trust in someone is learned in a process that turns it from something conscious to something knowledge-based until it becomes a habit, at which point it’s identity based (when I see you, I trust you, before you ask me for anything). The path from thinking to knowing to feeling is familiar from the acquisition of other skills.
Alan Mislove has been studying ad delivery algorithms. In work reported at SHB 2019, he showed that ad algorithms took content into account, so that jobs for lumberjacks are shown mostly to white men and jobs for janitors mostly to black women. His most recent work has been on political advertising. He ran Trump and Bernie ads during the primaries and found that the platforms sent them to Republicans and Democrats respectively. It’s more expensive and slower to run an ad to your opponents’ supporters, and this is surely a side effect of relevance algorithms. Is it due to complaints? No: he tried serving neutral “Go out and vote” ads whose linked URL showed neutral content to everyone except Facebook (which saw partisan material at that URL), and found that getting your opponents’ supporters out to vote is still more expensive. He concludes we need more transparency around ad targeting.
Rick Wash has been looking at email, as it’s the most open platform and thus the most open to phishing. Last year at SHB he reported a survey of how experts used email and detected phishing; now he’s been looking at the general population, to try to identify good options for training. The survey question that stood out was “What stood out about the email?” and the top answer was the action the email requested. Content was down, and spelling mistakes were way down. It therefore seems that everyone parses email by trying to understand whether it requests action; in this respect normal users are no different from experts. Yet our technical phishing detection systems don’t look at this, and neither do our anti-phishing training programs.
Discussion started on ad economics: Trump paid slightly less for ads, perhaps because he has more supporters, or because they share more, or because he has learned to game the platform better so it rates his ads as more relevant. The algorithms recognise pictures, so an attack ad could be cheaper if it has the opponent’s picture. The next topic was the underlying neural model of trust-building experiences, such as teenagers sharing risky behaviour; hormones such as oxytocin can modulate behaviour but the details at the neurological level are unknown. Then there’s which systems people feel are secure; that may be a matter of culture and marketing as well as the actual workflow: secure and insecure messaging systems work the same (and some that are thought secure are actually less so). Business models and workflows are more entrenched than technical details, and this works for bad software as well as good. Work context can provide a useful alarm: apparently personal messages coming from apparently corporate senders are often phish (or marketing). Some CISOs set much store by systems that distinguish internal and external traffic clearly. Criminals get this wrong; they don’t realise that chats in Discord or Telegram channels can be harvested back to the year dot by anyone who gets access later. Some corporates delete old emails, but that can impose a severe cost in terms of corporate memory; you end up hunting for old emails for all sorts of purposes. But people get used to annoyances: in Bangladesh everyone has to solve CAPTCHAs all the time, as the whole country’s behind NAT and everyone is on some blocklists. Yet people get used to that too.
Yi Ting Chua started off the second day by discussing the importance of considering multiple perspectives of risk. She has worked on the unintended harms caused by security measures, especially to vulnerable populations such as the elderly, and done separate work on the risk perceptions of criminals using underground crime forums: how do drug dealers spot cues of law enforcement, and of unreliable sellers? A third interest is gender; she’s analysed forums where men talk about how to control female partners.
Ben Collier was next, explaining how cybercrime is often boring. He’s been studying cybercrime as a service through underground forums. Cybercrime has become deskilled and industrialised over time, from the lone artisan hacker, to gangs using tools made by others in a hacker community, to service economies using shared infrastructure. This has created a lot of really boring support jobs in customer service and system administration. Evading law enforcement takedowns involves a lot of tedious setup work; so does policing bad services, such as cracking down on rippers and CSA material. These deviant office jobs have low hacker cred; workers get bored and burn out. Sociologists talked of anomie driving people to more exciting deviant cultures; the reality is now the other way round.
Maryam Mehrnezhad has been doing a cross-platform analysis of the presentation of privacy policies and options across a range of apps and of websites, seen through different browsers. This was done as a GDPR reality check. Most services start tracking the user before interaction with the consent mechanism, and most popular designs are not the most effective ones from the viewpoint of user engagement! There are lots of inconsistencies, though most services nudge people to accept cookies (and cookies are explicitly mentioned less frequently on apps). Future work will tackle IoT platforms, and user emotions.
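One simple way to see the “tracking before consent” phenomenon for yourself (this is my own illustration, not the study’s methodology, and it only catches cookies visible to the page itself; the full study would also need a proxy to capture third-party traffic) is to load a site in an automated browser and list what has been set before anything is clicked:

```python
# Rough check: which cookies exist before the user touches the consent banner?
# Assumes Selenium with a local Firefox/geckodriver installation.
from selenium import webdriver

def cookies_before_consent(url):
    driver = webdriver.Firefox()
    try:
        driver.get(url)                 # load the page, click nothing
        cookies = driver.get_cookies()  # anything here was set without consent interaction
        return [(c["name"], c["domain"]) for c in cookies]
    finally:
        driver.quit()

print(cookies_before_consent("https://example.com"))
```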
Daniel Thomas measures security and cybercrime and studies the ethics of this, for example when data fall off the back of a lorry. Sergio’s talk yesterday described measuring eWhoring; in such work one doesn’t want to see the images, particularly any indecent images of children. The processing pipeline involved passing all images through PhotoDNA to filter, report and delete known images of child abuse; then Yahoo’s NSFW image classifier which fed a different category classifier, while in parallel there was an OCR system to pull out any interesting text for analysis. This sort of processing is harder than it looks!
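The pipeline’s structure is worth sketching, because the ordering matters: known abuse images must be filtered, reported and deleted before any researcher or classifier sees them. The sketch below is only a structural outline; the real work used PhotoDNA, Yahoo’s open-nsfw model and an OCR engine, whose APIs aren’t shown in the talk, so those components are passed in as callables rather than invented:

```python
# Structural sketch of a safe image-processing pipeline (not the authors' code).
def process_pack(image_paths, is_known_csam, report_and_delete,
                 nsfw_score, categorise, ocr_text):
    """Filter, report and delete known abuse images first, so nobody sees them;
    only then classify and OCR what remains."""
    results = []
    for path in image_paths:
        if is_known_csam(path):         # PhotoDNA-style hash match (injected)
            report_and_delete(path)     # report to the IWF, then delete
            continue
        results.append({
            "path": path,
            "category": categorise(path, nsfw_score(path)),  # open-nsfw-style score feeds a category classifier
            "text": ocr_text(path),      # OCR ran in parallel in the real pipeline
        })
    return results
```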
Sophie van der Zee has built a personal model of Trumpery. Trump often makes factually incorrect statements, some 18,000 since coming to office according to the Washington Post. But is he ignorant or lying? In the latter case we’d expect different language use. Sophie ran a linguistic analysis and found that almost half the word categories differed – perhaps the largest difference she’s ever seen. She built a logit model based on the training data and found she could classify the truthfulness of 72% of out-of-sample tweets correctly. This is the first ever personalised deception detection model, and it outperforms all existing general models on Trump, albeit only by a few percent.
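For readers unfamiliar with this style of modelling, here is a toy version of the approach as I understand it: represent each statement by word-category frequencies (LIWC-style) and fit a logistic regression to predict the fact-checkers’ true/false labels. The features and data below are synthetic, purely to make the example run; the real study’s features, labels and accuracy are of course different:

```python
# Toy personalised deception-detection model: logistic regression on word-category rates.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
# Hypothetical per-tweet category rates, e.g. negative emotion, money terms, pronouns.
X = rng.random((n, 3))
# Synthetic labels loosely tied to the first feature, just for demonstration.
y = (X[:, 0] + 0.3 * rng.standard_normal(n) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("out-of-sample accuracy:", model.score(X_test, y_test))
```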
Lydia Wilson went to New Zealand last year to interview widely about responses to the Christchurch shooting. The country is small and relatively homogeneous; the media are closer to the people and mostly locally owned. Early decisions were made on the hoof; very quickly they decided not to amplify the shooter’s hate speech. His name was not mentioned, a decision quickly followed by the Prime Minister, Jacinda Ardern; the media got together before the trial to work out a protocol for reporting it. Some said this compromised media independence; they unanimously rejected that and said the main thing was not to be played. The responses of social media were far less clear. The livestream was pushed by Facebook’s algorithms, even to Ardern’s own stream; it went viral and Facebook claimed they had to pull it over a million times in the first 24 hours. The regulator called them “morally bankrupt pathological liars”. Ardern went to Paris and signed Macron up to the Christchurch Call, which other countries (except the USA) signed too, as did the major social media firms. The devil is of course in the details. How will it play outside western democracies?
Discussion started around methodological difficulties: of assessing whether people are bored, and indeed of measuring anything dependably through the NLP mining of hundreds of thousands of posts. In Trump’s case, perhaps his staffers made a number of the tweets? There are some tells, such as the type of phone and his staff’s time schedules, but these aren’t infallible and one has to treat the data as noisy. Cleaner data would surely give a stronger signal. One good thing about this research is that the fact checking was done independently, by a newspaper. Other findings are that Trump is more likely to speak the truth about religion, and to lie about money. (There’s actually quite a literature on deception detection via linguistic analysis, going back to work on detecting depression. Previous work shows liars use negative terms more, which is certainly the case with Trump.) On censorship, were the kiwis naive in expecting online firms to take the video down? Maybe, but they managed to get the Daily Mail to take it down. And what might be the impact of specialised platforms on radicalisation? Lydia thinks it will greatly help group bonding, due to the shared space and shared messaging. The growth of the far right is extremely complex, very disturbing, and understudied as governments have preferred to focus on Islamism. The social media companies are putting a lot of money and effort and smart people into counterterrorism and the big companies are sharing the tech with the smaller ones, but it’s not top of their list. Livestreaming is too important for weddings and the like for a two-second delay to be acceptable. How does recruitment work for cybercrime as opposed to terrorism? You even see help wanted ads! The lockdown boom is great for business and the big firms get bigger thanks to network effects. And what about enforcement? For example, is the GDPR survey work leading anywhere? May it have any effect on corona services? Time will tell…
Yasemin Acar has been investigating who users think should be responsible for security and privacy, particularly of IoT devices which can violate privacy in creepy ways. Users are indeed concerned but fatigued and resigned. They do try to do stuff, such as fiddling with volume and home network settings; some won’t have private conversations in a living room with a smart TV in it. Many assume that they signed away their rights when they bought the thing; digging deeper, they assigned ultimate responsibility to manufacturers, to governments and standards bodies – other industries have standards bodies, so why not smart homes?
Steven Murdoch is interested in turning data into evidence, which often involves the legal system. How well does it actually work? An important case is the Horizon appeal in the UK. Horizon was an accounting system at the UK Post Office that was riddled with bugs, and it resulted in some 900 Post Office staff being wrongly prosecuted for fraud. This may be the worst miscarriage of justice ever in Britain; the court has finally found that the system was entirely unreliable and the Post Office has agreed to pay some compensation. Not enough attention has been paid to evidence-critical systems, and there isn’t much of an incentive to build them well if they might find against their operators.
Katharina Pfeffer has been looking at the usability of security warnings, which is not much good. The underlying protocols are complex and normal users can’t be expected to understand them. She has analysed people’s mental models of https and found that normal users underestimate the security benefits, while they ignore and distrust security indicators and warnings. Even administrators don’t understand the link between authentication and encryption. Things get better with Let’s Encrypt and certbot.
Kami Vaniea has been looking at how sysadmins patch. The key is online communities of practice, such as patchmanagement.org, where groups build social bonds and swap knowhow. Groups release additional information on the back of Patch Tuesday: what are the CVEs? Which patches are safe? Some sysadmins work directly with Microsoft and pay money to get patches early; others have massive test rigs. Some reach out into weird corners of the Internet and pull in evidence of obscure stuff failing. These communities have a massive impact on patch rates worldwide, and engage in collective bargaining to escalate issues to Microsoft. Yet they are basically unstudied.
Marie Vasek investigates cryptocurrency crime. In pump and dump, scammers buy an obscure cryptocurrency and then get people to pile in and buy it. When starting a community you have to bootstrap it somehow, perhaps with your own money, and add news manipulation to persuade people you have some kind of technical or inside knowledge. Pump and dump basically only works on coins with thin markets. She has tried to collect data over time, but got kicked out of one group after another because she didn’t invest in any of the schemes. The volume of fraud seems to have gone down over time as the victim pool, which was limited to techies with bitcoin, got exhausted.
Discussion started with why people buy products with which they’re uneasy; the point at which people could realistically opt out has passed. With one technology after another, this point is reached. Is it worthwhile to force these things to be safe to use? And is it worthwhile for governments to force computers used in evidence to be fit for that purpose? Should we change the presumption that computers tell the truth? The presumption saves a lot of money, which is why it was introduced. Nonetheless there are things we could do. For example, Horizon had a list of all keystrokes entered into the system, so you could check (in theory) where a transaction came from. In that case the keystroke log was not useful because it had been reconstructed, but if done properly it might help. As for communities of practice, this is of more general interest; does it not lead to a risk of monoculture? Kami talks of group A (patch everything), group B (patch important stuff) and group W (only patch if otherwise the world will end). There are plenty of reports of bugs in bug fixes (don’t use patch X if you have two monitors) and secondary strategies such as patching one month late for reliability. It all depends on where you are on the spectrum between stability and security; the groups also split depending on whether they take all the patches or the security-only ones. On the question of evidence, making computers better at producing dependable evidence has both technical and legal aspects, with the latter often being about procedural law, such as how much evidence is proportionate and necessary, and who has to pay the legal costs. As for who’s responsible for the underlying security, opinions are all over the place, whether you talk to legislators or random citizens, while most vendors behave as badly as they can get away with. Historically the threat of reputation loss has been insufficient; there need to be baselines set by law. Academics should evaluate IT standards to mitigate the risk of a race to the bottom; otherwise we can just expect firms to blame users for the consequences of their bad design. (This is much wider, as victims of crimes such as fraud and even rape often blame themselves.)
Alessandro Acquisti has been writing a policy paper on the behavioural economics of privacy; usually he avoids policy. Before the Internet age, Altman talked of a selective process of access to the self, and pointed out that we engage in privacy behaviour all the time, by opening and closing doors, and we try to do the same online all the time at the meta level by deciding what goes on email, on zoom, on signal and so on. It’s only when you look cross-platform that you get the big picture. Then you have to work through the supply and demand sides to track the market failures.
I was next, talking about how I had just discovered Josh Lerner and Jean Tirole’s 2006 paper on forum shopping and found that it can teach us a lot about how security certification and standards fail.
Bob Axelrod has been thinking about insider negligence, the biggest cause of problems. Yet when a user is negligent, most of the time their neighbours don’t intervene. How do you encourage peer reporting? An organisation can clarify the obligation to report, stress the moral dimension, make reporting easy, protect whistleblowers and assure people that the punishment meted out to wrongdoers will be reasonable.
Serge Egelman has been examining the privacy behaviours of mobile apps at scale. How can I ensure my kids’ apps are privacy protective? Disassemble them and do deep packet inspection. Or read the privacy policy? TikTok’s is 6,000 words and at more than an intermediate reading level. Policies hand-wave about “business partners” without naming these third parties, whose own policies you therefore can’t find. Serge has his own build of instrumented Android and a pipeline for analysing apps in a clickfarm. His firm AppCensus helps developers and regulators. For example, he finds abuses by running apps that have been given no location permission and observing them sharing location data anyway. This triggers disassembly and inspection, leading to the discovery of hacks in third-party SDKs.
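A much simplified illustration of how such a leak might be flagged (my own toy check, not AppCensus code): if an app with no location permission sends numbers matching the test device’s known coordinates, something upstream, often a bundled SDK, obtained the location another way and the app goes into the queue for disassembly:

```python
# Toy check: does captured request traffic contain the test device's coordinates?
import re

COORD = re.compile(r"-?\d{1,3}\.\d{3,}")

def flags_location_leak(request_body, device_lat, device_lon, tol=0.01):
    """Return True if the body contains numbers close to the device's lat/lon."""
    nums = [float(x) for x in COORD.findall(request_body)]
    return any(abs(a - device_lat) < tol for a in nums) and \
           any(abs(b - device_lon) < tol for b in nums)

# Hypothetical captured request from a third-party SDK endpoint:
body = '{"sdk":"example-ads","lat":37.8716,"lon":-122.2727,"id":"abc123"}'
print(flags_location_leak(body, 37.8715, -122.2730))  # True -> inspect the app further
```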
David Livingstone Smith is a philosopher, interested in issues such as how ideologies persist and become reactivated by changing social and political circumstances. He sees ideologies as systems of belief that advantage one group by some oppressive means, and which may become dormant when they no longer work. An example is anti-Semitism in Europe, which was dormant from St Augustine until 1095 when Pope Urban called the first crusade. During the 12th and 13th centuries Jews were gradually racialised; in 1347 the bubonic plague pandemic was portrayed as a demonic plot by Jews in league with Muslims to wipe out Christians and torture Christ himself by torturing the host. Things calmed down until the late 19th century, when extreme rightists started talking about Jews again, but got no traction. The traction came with WW1, the flu pandemic and the Bavarian revolution, which together revived all the old medieval themes and made them causally efficacious again.
Tyler Moore has been working on the role of insurance in cybersecurity. Bruce Schneier predicted in 2001 that insurers would eventually determine which firewalls, network monitoring and operating systems firms use; why did that not work, when insurers had such a huge influence on fire safety, stipulating everything from how many fire hydrants a city has to who runs the fire department? Experience so far suggests that insurers don’t have much clue about which firms are at risk or even about what works. The market dynamics seem to be that smaller insurers are happy to share data but the big players won’t. Competitive pressures are currently driving a race to the bottom.
Elissa Redmiles has been applying her privacy usability skills to assess the current crop of covid19 apps. As the benefit scales quadratically with adoption, the big question is whether people would actually use them. Of a sample of 1,000 Americans, 82% said they would adopt, but if there’s any hint of a privacy issue this falls to about half. The same effect is seen when false negatives or false positives are made salient, and people care more about false negatives. Other factors matter less. She’s working with colleagues at Microsoft on a descriptive ethics for covid apps.
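Why quadratic? A back-of-the-envelope model (mine, not from Elissa’s talk): a contact is only recorded if both people involved have the app, which under random mixing happens with probability p × p at adoption rate p:

```python
# Why contact-tracing benefit scales roughly quadratically with adoption.
for p in (0.2, 0.4, 0.6, 0.8):
    print(f"adoption {p:.0%}: fraction of contacts covered ~ {p * p:.0%}")
```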
Damon McCoy has an online advertising transparency project. After the 2016 election, Facebook promised to disclose more information about political ads: creatives, impressions, geographical spread and so on. He extracted over 3m ads on which over $600m were spent, finding in the process that the ad library isn’t a static archive because of retrospective policy changes that remove ads. There are all sorts of ways the library could be attacked, such as sybil ads pointing to burner pages, or just failing to include a disclosure string. There are already inauthentic communities, clickbait and all sorts of other abuses that only get removed if you press. It took some pressure, for example, to get Facebook to agree to exclude Chinese state advertisers.
Bruce Schneier was the last speaker of SHB 2020. The tax code has bugs (loopholes) and attackers (tax attorneys) who exploit them at scale. So what’s a hack? A perversion of the system goals that’s unintended and unanticipated? We get hacks against our systems of democracy and indeed our cognitive systems. We think of hackers as loners, but usually they’re rich and powerful. Hacking is ubiquitous; parasitical, and thus not aimed at destroying the underlying systems; often patched but sometimes accepted into the system (e.g. rule innovations in sport); and often done by the wealthy, who can buy the expertise and are more likely to ensure their hack becomes normal; so context matters. Hacking is a way of exerting power; it’s how systems evolve. The common law is how they’re adjudicated. There are hierarchies: you can hack the tax system, or go up a level and hack the legislature, or up again and hack the media ecosystem; or you can go down and hack TurboTax. Hacking will become more pervasive and serious as we automate more stuff and as AIs get turned loose to find more vulnerabilities. Will AIs be trained to hack legislators, and to hack us for private gain? This is going to turn into a book; Bruce is interested in examples.
Discussion started with whistleblowing. Bob had been thinking about internal complaints, but if that doesn’t work the whistleblower might go outside the organisation, which is likely to turn against them. Moving to covid apps, there’s quite a literature on communicating health risks, which can be used to ground discussion of the various types of errors. You might have thought that people would be afraid of false positives, but in fact they’re afraid of false negatives; as background, about a third of people say they’d ignore any advice from an app to self-isolate. However, since the work was done in May, quite a lot of employers are now thinking of running their own apps and the risk calculation might be very different. Did people understand that the app helps other people, not the phone owner? Only about a third will install it “to be part of the fight”, another third will install it if there’s an incentive (such as a month of free healthcare), and a third won’t. Much of the discussion ignores the abuse risk; one case for centralisation is cheater detection. The engineering is also not encouraging, as using an app is likely to push up both the false positive and false negative error rates. Moving to hacking versus cheating, hacking is where you abide by the rules but exploit them in ways that the rule-maker did not anticipate. An example is backstroke: people realised you could swim underwater most of the time and go faster. That seemed so wrong that the rules were changed to ban it. In fact the entire history of Formula 1, and the entire history of Judaism, reflect this. Another aspect is whether the hacker is exploiting the system or subverting it: Socrates said at his trial that he didn’t want to subvert the system and so would accept his death penalty. Another aspect is Dan Ariely’s tests of bankers and others about whether it was OK to cheat on a test to make money; the rich and powerful behave differently, you can expect to find differences between professional communities, and also between countries – in some, the reaction to tax cheating is “well done”!
Many thanks sir for these updates. I am learning quite a lot from this site. Where can I get the video/audio recording of the workshop?
Finally, in August, I got round to adding the videos of the sessions which I’ve also linked from the main post above.