The Workshop on Security and Human Behaviour is happening right now at Carnegie Mellon University and I’ll be liveblogging it in followups to this post. The participants’ papers are here, while the liveblogs and papers from previous workshops are here.
Lorrie Cranor kicked off SHB23 with a talk on how, years ago, she argued that we should present privacy as a kind of nutrition label; Android and Apple now do this, but the labels are unhelpful. Why? Her latest research suggests that neither developers nor users understand what’s meant, and it’s made worse by Android and iOS unfolding the matrix in different directions. After the White House called for cybersecurity labels on IoT devices, she got involved in various working groups with industry people, who are trying to get away with as little as possible. There’s a similar story around the California regulations; choosing labels is hard, and ultimately the lawgivers have to crack the whip.
Laura Brandimarte has been investigating what makes parents trust mechanisms for protecting teens from online predators. People trust humans more than automation; what underlies this “algorithm aversion”? Are the worries about correctness, privacy, or agency? She has found algorithm aversion to be less extreme than previous experimenters reported, so experimental design may be an issue. Given an either-or choice, parents will go for the most accurate option, and there is sometimes even evidence of aversion to the human. Decision support also appeals, as it preserves agency.
Tesary Lin was next, talking about how firms use choice architecture; some optimise consent banners to maximise data volume, while others are starting to realise that quality matters too. Does a volume-maximising frame exacerbate sample bias? Her hypothesis is that the consumers who share are those whose valuation is less than the price offered – or crudely, that you miss more of the rich consumers. She tried different choice architectures at different valuations, and found that poor young people value privacy less but are more responsive. In conclusion, a volume-maximising frame does lead to more bias.
The last speaker was Geoffrey Tomaino, who studies the data market as a barter market; how does this make it different from a traditional cash market? He compared cash and barter valuations of three hours of a subject’s location data, finding that people wanted £50 of products rather than £40 in cash – but had intransitive preferences.
The discussion started off with whether algorithm aversion might be entangled with privacy; Laura found it didn’t matter, except for concerns over the location of the cloud service. What about implicit cost assumptions? Laura didn’t control for this, while Geoffrey found that the products sought were unlinked to the data collected. He did not see the intransitivity for other choices such as the value of labour; a further experiment might be pricing location data that was not copied to the cloud. Europe is different, allowing an EU label or a national one, which pushes some US firms to want a US label, despite the fact that the US label will be for cybersecurity rather than privacy. The overall effect of all this is that consumers are nudged into systematically undervaluing their data; Geoffrey reckons that cash valuation is more accurate. But there are many other ways that firms get consumers to sell stuff cheaply, e.g. lotteries, and assessing consumer welfare is hard – unless you get the prices from the data markets. But then many data markets are monopolies or oligopsonies; can we say anything relevant to the new antitrust movement? A more competitive market might drive up prices, but in any case when I sell data to a broker it doesn’t feel like I’m selling it any more. GDPR may sometimes get in the way: if you offer options, they may have to be opt-in, destroying the data flow. Cookie banners are mostly served by OneTrust, which does minimal EU compliance if you check the “EU” box; NOYB badgers firms over illegality and advises them which OneTrust boxes to check. What about IoT labels for patchability, post-Mirai? They don’t seem to affect consumer behaviour, and labels generally are a weak policy tool. There’s a lot of work in other contexts on whether labels are supposed to change user behaviour or industry behaviour, and in the former case whether they’re supposed to have systemic effects. The meaning of ‘privacy’ is also rather undefined across technologies and even between researchers; it probably changes over time for consumers. Consumers also suffer increasingly from privacy resignation, and it matters whether they think their data is already out there anyway. The uncertainty about what data gets shared means that people don’t know what they’re trading, and industry naturally works hard to maximise revenue by obfuscating this even further.
Nicolas Christin has been studying how investment information has shifted from regulated prospectus channels to social media influencers, and in particular how many of them act in good faith. He got access to performance data for 5m investors, of whom 75k are on twitter, and measured who’s long and who’s short on bitcoin versus what they’re saying. After the peak many went short, but 30-40% kept telling others to go long. Cautious people, who advocated moving to cash, were much more consistent. Social trading shows similarly questionable behaviour. In short, maybe the dynamics of social media can exacerbate bubbles.
Serge Egelman has been studying failures of 2fa apps with his student Conor Gilsenan. The backup mechanisms of 22 Android apps had many issues, from reliance on SMS to serious crypto flaws that made backups too easy to recover. Four even sent the key along with the ciphertext to the developers’ servers. When they disclosed the flaws, most developers didn’t seem to care.
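For context, here is a minimal sketch of the kind of design such apps could have used instead: derive the backup key from a passphrase the user knows, so that the server only ever sees ciphertext. The function names and parameters are illustrative, not taken from any of the audited apps.

```python
# Hypothetical passphrase-protected 2FA backup; parameter choices are
# illustrative, not drawn from any of the audited apps.
import os, json, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def make_backup(totp_seeds: dict, passphrase: str) -> dict:
    salt = os.urandom(16)
    nonce = os.urandom(12)
    # Derive the key from something only the user knows; the server never sees it.
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 600_000)
    ciphertext = AESGCM(key).encrypt(nonce, json.dumps(totp_seeds).encode(), None)
    # Only salt, nonce and ciphertext are uploaded -- never the key itself,
    # which is the mistake the four worst apps made.
    return {"salt": salt.hex(), "nonce": nonce.hex(), "ct": ciphertext.hex()}

def restore_backup(blob: dict, passphrase: str) -> dict:
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(),
                              bytes.fromhex(blob["salt"]), 600_000)
    plaintext = AESGCM(key).decrypt(bytes.fromhex(blob["nonce"]),
                                    bytes.fromhex(blob["ct"]), None)
    return json.loads(plaintext)
```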
Tony Vance studies security expertise on boards of directors. Boards come under pressure from the SEC and designate directors as the security person; there’s a class run by CMU to teach some of the basics in a day or so. One alumnus was then described as having “substantial background and deep expertise with cybersecurity issues” and meets the CIO/CISO every two months – so the firm is spending only a few hours a year keeping this person current. Various excuses are made for not appointing a director with existing expertise; many directors believe that their existing general expertise carries over.
Elissa Redmiles has been studying how workers perceive harms and protect themselves when joining a new firm or starting gig work with a new client. Workers are more concerned with physical safety, for example against sexual harassment, than with data privacy. Curiously, dating-site users were less concerned with safety than gig workers, although it’s not always obvious how you ask just the right questions. It is entirely natural to value bodily safety more than privacy; but perhaps online safety could be better linked to the physical variety?
Rainer Boehme has been studying breaches at security vendors with Svetlana Abramova. These can help attackers pick valuable targets, such as hardware crypto-wallet users. This raises ethical and legal questions around sampling from leaked data and using it to approach respondents. In advance of consent, you need to use legitimate interest to justify the approach, and that comes with a high bar. They got ethical approval to talk to victims of the leaked database of Ledger customers.
The discussion started on the practicality and ethics of doing breach studies in partnership with the leaking company; that is rare, and would undermine confidence in the results, as the company will not want to know some of the answers. The priority given to data privacy will depend on where in the Maslow hierarchy a subject is operating at the time of the decision. The qualification for a cybersecurity director is as vague as for financial experts; companies have to justify an expertise claim, but a CEO can be seen as a finance expert if accountants work for him. Being a CISO doesn’t help you get on the boards of other companies, unlike other C-suite officers. You often don’t have bargaining power, as it’s not as easy to measure security posture as it is to measure financial reserves. Boards may have different cultures, such as mostly sales, mostly engineering or mostly finance, and expert directors may be easier to sell to. There’s a more general point about much of the research we do here falling on deaf ears, because it’s not in the company’s interest to hear what we discover; and another about the companies often being hard to find or even offshore (a particular worry with some wild-west application areas such as sex work). There are also differences between balance-of-power harms (as in gig work) and emotional harms (as in dating), though it’s complicated.
David Livingstone Smith discussed the increasing racialisation of security in rural America since 2016, and its back history in terms of lynchings a century ago, as a lead-in to the philosophy of race: biological realism is accepted by very few; social realism by more; David and his wife are anti-realists who dismiss race as a folk theory whose function is to legitimise violence.
Angus Bancroft investigates illicit markets, particularly drug markets. Markets rationalise, commodify, and change the way people perceive transactions. Systems for ratings and escrow empower and drive change, but this involves culture. In lockdown an economist would anticipate higher prices and lower quality; yet in practice dealers tried hard to limit prices in a “moral ecology” or imagined community. This affects attempts to introduce harm-reduction mechanisms. Similarly, legal markets involve a morality of exchange. Cultural norms are an important but overlooked aspect of how markets work.
Damon McCoy asked for his talk not to be blogged.
Li Jiang has been investigating the costs and benefits of being privacy-conscious. She found that such people are seen as higher-status because of perceived human capital and autonomy, though there can also be a perception that they have something to hide. The dominant mechanism appears to be autonomy, though this may need to be coupled with other cues. Her next research topic will be learned helplessness.
Discussion started on the point that drug sellers are just operating like businessmen, who know that it’s easier to sell to repeat customers than to new ones. A business approach might also illuminate people’s willingness to share information; to buy things like insurance, you need to. As for realism and anti-realism, don’t confuse this with social constructionism; constructionists see race as perfectly real. Academic anti-realists take the view that talking about race is playing the oppressor’s card; David’s Jamaican wife, for example, did not feel she became Black until she came to America. (In Jamaica the issue is class.) She is pressured to adopt African-American culture, but that’s different from Jamaican or Kikuyu culture, and she wrote a paper “On being race queer”. David believes that race cannot be separated from racism; race carries entrenched ideas of hierarchy. EDI efforts often reinforce problematic categories rather than destabilising them. How could research programmes in this field be conceived? Some managers in grant-giving bodies are so scared of FOIAs that they’d rather pay for pure computer science and not talk about humans at all. Academics might be better defended if there were fairer copyright warranties; but then, university admins often try to dump liability on academics. The perverse incentives have been seen in other applications such as vulnerability disclosure, where there’s a need for legal sanctuaries if we’re to have transparency.
Coty Gonzalez is working on shared mental models for human and AI defenders; one application is training human defenders. Cognitive agents provide a better learning experience than more mechanistic optimal attack agents. She uses CybORG, which uses instance-based learning, to construct agents for the attacker and the defender, and sets up an interactive defense game.
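For readers unfamiliar with the technique, here is a toy sketch of instance-based learning, the memory-plus-recency mechanism such cognitive agents rely on: choose the action whose remembered outcomes, weighted by how recently they occurred, look best. The decay and temperature parameters are illustrative defaults; nothing here is taken from the actual CybORG agent code.

```python
# Toy instance-based learning agent: pick the action with the highest
# "blended" value of remembered outcomes, weighted by recency.
import math
from collections import defaultdict

class IBLAgent:
    def __init__(self, actions, decay=0.5, tau=0.25, default=10.0):
        self.actions = actions
        self.decay, self.tau, self.default = decay, tau, default
        self.memory = defaultdict(list)   # action -> [(timestep, outcome), ...]
        self.t = 0

    def _activation(self, times):
        # Power-law recency: recent experiences are easier to retrieve.
        return math.log(sum((self.t - ti) ** -self.decay for ti in times))

    def _blended_value(self, action):
        episodes = self.memory[action]
        if not episodes:
            return self.default           # optimistic prior encourages exploration
        by_outcome = defaultdict(list)
        for ti, outcome in episodes:
            by_outcome[outcome].append(ti)
        acts = {o: self._activation(ts) for o, ts in by_outcome.items()}
        z = sum(math.exp(a / self.tau) for a in acts.values())
        return sum(o * math.exp(a / self.tau) / z for o, a in acts.items())

    def choose(self):
        self.t += 1
        return max(self.actions, key=self._blended_value)

    def observe(self, action, outcome):
        self.memory[action].append((self.t, outcome))

# e.g. agent = IBLAgent(["patch", "monitor", "ignore"])
#      a = agent.choose(); agent.observe(a, reward)
```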
Rick Wash has done a lot of work studying how experts and non-experts spot phishing emails. Non-experts would often feel an email was weird but not formulate the hypothesis of phishing; and they didn’t spot as many of the weird things either. In short, security experts have a sceptical stance towards email – so how much does stance matter? He has been working on the initial step of making sense of an email: how do you know it’s actually from your boss? It turned out that their sceptical stance made it harder for security experts to accept an email as legitimate, as well as to flip between views of something as genuine or bogus. So the sceptical stance imposes some real costs.
Julie Downs has been working on scepticism and unravelling. Although people are notoriously inattentive to missing information, sceptical buyers assume that missing information is bad, while unravelling among sellers can lead to all but the worst disclosing. This is important to universities going test-optional: students will submit good scores, and missing scores will be punished regardless of the reason.
Bart Knijnenburg studies how people make privacy decisions; careful and snap judgments are different. Defaults make decisions less nuanced, can induce reactive judgments, and can get embedded in autocompletion tools. So how can we “design for elaboration”? He compared two efficiency-increasing designs with three tools that allowed users to add or remove records on a disclosure form. He found that people reported higher self-efficacy, and thought more carefully about privacy.
Idris Adjerid talked on “Killing the bees to stop the roaches”. Even if defaults, which are fairly impervious to experience, drive people to say no more often now, over time things might converge to active choice. He presented his experiment as a signup to be a mobile app tester, asked whether subjects would share the data with a third party, manipulated the permissions, and added a social-norm nudge.
The discussion highlighted the importance of whether missing information is made salient. A useful data source might be the history of phishing, where we have a quarter century of case studies, and where every innovative new way of sending plausible emails at scale reminds us that the experts are not as expert as they think. The emails sent in competent phishing campaigns can have extremely high-quality text, even if the URLs are a giveaway to a suspicious expert. Some feel that security must be more usable if people are to do it; others that it must be somewhat obtrusive to perform its social function. Yet we need to trust some things, and the boundaries can be obscure. Decisions that have high stakes, or that are at least perceived to, invoke attention and can bring in further considerations, including sludge and other dark patterns, and the fact that making privacy salient is often counterproductive. Underlying many of these phenomena may be either social effects (people like me) or economic effects (saving towels benefits the hotel, not me). Publicity about nudges, which has been more widespread in the UK than the USA, may also be having an effect, as people may notice nudges and move out of reactive mode. In short, nudges and framing are super complex, and you really have to test stuff before you try it at scale. And what happens once people start to use LLMs for sludge, and for ordeals to rate-limit service demand? You can ask ChatGPT to be empathetic, and you can surely also ask it to be manipulative. Over time, we’ll learn what such AI can and cannot do, and we may want better metrics of human-likeness than the Turing test.
Norman Sadeh started Thursday’s proceedings with a talk on how usability is now privacy’s greatest challenge. Increasing regulation makes privacy policies longer and more complex; people don’t read them anyway. Mobile app controls have gone from inadequate to perplexing. This has led him to work on privacy assistants – automation to help people deal with things like cookie popups. Most recently he’s working on software to parse privacy policies and highlight the salient stuff, and on collecting detailed data on privacy preferences and communication failures.
Sophie van der Zee has been investigating whether shareholders care about corporate fraud. Which comes first, news coverage in the media or changes in stock prices? She looked at 28 scandals from 2000–03, before reporting was improved and before social media. Event studies with both 6-day and 31-day windows show abnormally low returns after the first media mention. Conspiracy was the best type of fraud to be engaged in, and tech was affected most.
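For those unfamiliar with the method, here is a minimal sketch of an event study of the kind described: fit a market model over a clean estimation window, then cumulate the abnormal returns over the event window around the first media mention. The window lengths and data below are placeholders, not Sophie’s.

```python
# Minimal event-study sketch: abnormal return = actual return minus the
# return predicted by a market model estimated before the scandal broke.
import numpy as np

def cumulative_abnormal_return(stock, market, event_idx, est_len=120, window=31):
    """stock, market: aligned arrays of daily returns; event_idx: index of the
    first media mention; window: event-window length (e.g. 6 or 31 days)."""
    est = slice(event_idx - est_len, event_idx)
    beta, alpha = np.polyfit(market[est], stock[est], 1)   # market-model fit
    ev = slice(event_idx, event_idx + window)
    abnormal = stock[ev] - (alpha + beta * market[ev])
    return abnormal.sum()    # negative values = abnormally low returns
```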
Kevin Roundy of Norton does security warnings for consumers. He’s been studying the populations who reject security advice to understand the justifications they use; for example, some people don’t use password managers as it’s “putting all your eggs in one basket” while others don’t realise they can sync between their phone and their laptop. He’s working on better ways of warning people of scam web pages by being more explicit about reasons.
Florian Schaub found that 73% of people had breaches associated with their email address, with over 5 breaches on average, and most were unaware. So he surveyed 400 of them. 53% said it would have no or little effect on their behaviour; the minority would change a password or enable 2fa, while very few would do a credit freeze or file a complaint. He went back later and asked them what they actually did. Only about half of those who said they were “very likely to act” did anything.
Christobal Cheyre studies how the online advertising ecosystem affects user welfare in the round. For example, what are the welfare consequences of ad blocking? He sees how many users will uninstall their ad blocker for a month, and then follows up with a repeat once they’ve reported their experiences and their well-being. It cost on average $20 to get a user to uninstall, and $25 for a non-user to install. Users are slightly happier than non-users; the latter are more likely to welcome relevant ads but also to admit that they feel less satisfied with their goods.
Discussion probed the likely variation across different types of data breach. It then turned to the fact that ad-blocker users are more knowledgeable about the benefits, which may explain why they wanted more money to uninstall than non-users wanted to install, despite the fact that installing software is normally more risky. Next was the poor usability of credit freezes, and the way that the credit reporting system is designed to dump costs and risks on users. User exposure to fraud and abuse has shifted from the old to the young, as the young spend more time online and have unjustified self-confidence, but it’s hard to measure this precisely because of poor reporting. As for password management, the commercial password managers are at best holding their own, but many people use the browser as it’s low-friction and it makes phishing harder. But then banks make an effort to break password managers, which appears to be counterproductive. Usability failures are often a matter of incentives; when password managers are sold to corporates, the convenience of the end user is less of a concern. Lock-in also leads to less usable passwords, as does the wish of banks to blame customers for fraud by forcing them to write down passwords in breach of contract terms. Complexity is continually moved from vendors and service providers to users in ways that disempower them. Finally, there are lots of inaccurate claims by firms, such as “we use differential privacy to protect your privacy” with no mention of epsilon.
Avi Collis measures consumer welfare in the digital economy. The information industry has stayed at about 4–5% of the economy for the past 40 years; what else is there? Free digital goods end up substituting for stuff we used to pay for. Avi showed a catalogue from Radio Shack in 1992; every single thing in it is now an app on your smartphone, and between them they’d have cost about a million dollars thirty years ago. He does choice experiments to see what people would need to be paid to give up a service like maps, and calculates demand curves. There’s a lot of heterogeneity; the lowest 20% value Facebook at zero while the highest value it at over $100 a month. The median American’s valuation has fallen from $46 in 2016 to $18 now. The most valuable good is Google search, which people prefer to meeting friends in real life (at least, pre-ChatGPT); then YouTube, Google Maps and so on. In Germany and Mexico, WhatsApp is top; in Korea, it’s YouTube.
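A crude sketch of how such choice experiments turn into a demand curve: offer people different amounts to give up the service for a month, record who accepts at each price, and read off the price at which half would accept as the median valuation. The offers and acceptance rates below are invented for illustration, not Avi’s data.

```python
# Toy demand-curve construction from a give-it-up choice experiment.
# Offers and acceptance rates are invented for illustration.
import numpy as np

offers = np.array([0, 5, 10, 20, 50, 100])            # $ offered to forgo the service
accept_rate = np.array([0.05, 0.2, 0.4, 0.55, 0.8, 0.95])

# Median valuation: the offer at which 50% of users would give the service up.
median_wta = np.interp(0.5, accept_rate, offers)
print(f"median monthly valuation ~ ${median_wta:.0f}")

# The complementary curve 1 - accept_rate, plotted against the offer,
# is the demand curve for the free service.
```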
Ahana Datta used to be a CISO. CISOs usually complain of being unable to move the board; is this a function of organisational reporting lines, or complexity, or what? She’s been investigating how it works across firms. Some small firms have virtual CISOs – people who work on contract for five or six different firms. One firm even forgot they had a CISO until, after a breach, they needed comms written. At larger firms, some CISOs have decent relations with the board and can get stuff done. Some have a BISO (a business information security officer) reporting to the CEO and a CISO reporting to the CIO, which guarantees stasis. Boards mostly don’t watch security as there’s no growth metric. Ahana has become depressed about all this and doubts that there’s a solution.
Ananya Sen looks at how algorithmic performance varies with the amount of personal data used for training. Specifically, can a news recommender keep your attention for longer? The answer is not much; a 2.7% increase over a six-month experimental period. A human editor does better, especially with fast-paced breaking news, which some algorithmic approaches don’t deal with too well. Future work will look at external, third-party and public data.
Yixin Zhou studies the problems of older adults with tech, and particularly their vulnerability to scams. Seniors don’t accept that there is much difference, except possibly with cybercrime, where they may be in a more exploitable state when phish arrive. They tend to have crystallised knowledge, in the form of common sense around transactions, that can be used to identify scams. Other themes include trust in service providers and the willingness to use self-protection strategies such as unsubscribing. But there’s a lot of heterogeneity, and framing seniors as vulnerable may be counterproductive.
Tyler Moore has started a “cyber hygiene improvement lab” whose goal is to measure and improve cybersecurity in organisations. Subgoals include measuring incidents, building on his 2022 WEIS paper; and measuring culture, which can be done with situational judgment tests, which ask for example whether you’d help a coworker circumvent controls to get their work done.
In discussion, there’s substitutability between social-media apps and messaging apps, which makes valuation hard if you test them separately. On corporate priorities, some companies are less afraid of a fire in their factory than of being fined by the fire department for failing an inspection: can compliance be used to improve security posture? Well, as seen earlier, boards can just pretend to comply. It’s hard for CISOs to be trusted when their job is distrust; and they’re just the latest iteration in a series (external auditor, internal auditor, …) all of which broke down because they all play to their own incentives and pay little attention to the local context of the client company. Where there’s an operational risk that follows from information risks, such as a correspondent in the Middle East who might be compromised along with a source by bulk intercept or bulk traffic-data collection, these traditional corporate mechanisms are almost irrelevant. And there are real limits to a CISO’s ability to influence a corporate culture; might a better approach be to measure and understand it before attempting interventions? The problem there is how you measure it, particularly across large orgs such as government ministries. When it comes to the costs of privacy, it’s not enough to just look at the measurable effects on direct participants; we have to consider the 10% or more of adults who completely refuse to shop online. Other experiments suggest that you can’t rely on panel providers to reach these demographics. And with seniors there are significant privacy and financial risks in care homes; when declining people ask staff who’ve had no vetting at all to help with their banking, things can go wrong. Nobody takes responsibility, as seniors have to sign away their rights on entering care, while banks will blame anyone who shares credentials. And once chatbots are widely deployed to keep lonely old people company, we can expect them to sell them a lot of bitcoins.
Christof Paar works with trusted hardware, which is now so complex that nobody understands it. The big projects to reshore silicon from Taiwan may make things even more complex and opaque. The scope is not just specific chips but everything all the way up to motherboards. Christof has a project for making all this explainable.
Richard Clayton presented some joint work with the FBI – taking down DDoS-for-hire websites, or booters. There have been law-enforcement actions to take these down for years, such as the takedown of webstresser.org in 2018; most recently, in December 2022, the FBI seized 49 domain names, of 108 active in the world, and arrested six people. Richard has been measuring the effects of these takedowns. There’s also work in progress on demand suppression, via warning ads to users and police-operated spoof booters, as well as technical work making spoofing harder.
Lujo Bauer has been working on detecting iPhone compromise in simulated stalking. Can people detect digital compromise in the context of intimate partner violence? Some apps are sold for interpersonal surveillance while others, like maps, can be repurposed for it. He asked participants how they’d help a coworker detect location tracking; most had enough clue to look at the iPhone location settings, but nobody made the leap to check the settings in Google Maps. Jumping between different app interfaces was just too hard; and as harms can be done via individual apps, it’s not clear that a platform-level solution exists.
Sagar Samtani has been working in cyber threat intelligence. Hacker forums are a rich source of data; how do their users understand what exploits might be useful for what purpose? He’s been trying to sort, classify and index them, and understand how they might relate to the scanning tools used by enterprises, such as Qualys or Burp Suite. Can the two be linked up to increase the relevance of threat assessment? Sagar is trying to develop a deep structured semantic model with an appropriate attention layer. He’s doing case studies with SCADA systems and hospitals, with a view to ranking vulnerability/exploit linkages.
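Here is a toy illustration (not Sagar’s deep structured semantic model) of the underlying linking idea: put exploit-post text and vulnerability-scanner findings into a common vector space and rank pairs by similarity. TF-IDF stands in for the learned embeddings, and the posts and findings are invented.

```python
# Toy version of linking hacker-forum exploit posts to scanner findings:
# TF-IDF vectors plus cosine similarity stand in for learned DSSM embeddings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

exploit_posts = [
    "RCE PoC for outdated Apache Struts OGNL injection",
    "Modbus write-coil script for unauthenticated SCADA PLCs",
]
scanner_findings = [
    "Host 10.0.0.5: Apache Struts remote code execution (CVE-2017-5638)",
    "Host 10.0.0.9: Modbus service allows unauthenticated writes",
]

vec = TfidfVectorizer().fit(exploit_posts + scanner_findings)
sims = cosine_similarity(vec.transform(exploit_posts), vec.transform(scanner_findings))

# For each post, report the best-matching finding and its similarity score.
for i, post in enumerate(exploit_posts):
    j = sims[i].argmax()
    print(f"{post!r} -> {scanner_findings[j]!r} (score {sims[i, j]:.2f})")
```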
Dan Cosley believes we are not good at pitching our findings at the right level of abstraction or describing them at the right level of detail. Whenever a new tech comes along, it’s as if nobody had thought about surveillance or power or cameras before, and people do empirical point studies from ground zero. Right now it’s all AI, and everyone is reinventing the wheel in security. We need to be better at working out how these things add up.
The discussion started off with who’s the customer of hardware explainability – the policymakers who pay for it, or the engineers who will try to build stuff with it? Then there’s the pricing of botnet services, which seems to settle at about $20 as that’s what high-school kids can afford. Underlying incentive problems include how we might make platforms care a bit more, and get developers to think a bit harder about how their cool toys can be abused. Thinking about what can be done by an attacker with temporary access needs to become a reflex. It would also be useful if we reviewed safety features every few years and retired those that are no longer relevant, or that are now being exploited to do harm. It would be helpful to break our addiction to novelty – but that’s what gets NSF panels excited enough to back projects. And rather than trying to fix social problems with tech, maybe we should prosecute more stalkers. However you can’t just embed online safety in the real-world equivalent; the assumption that parents should protect kids breaks down when the kids are LGBTQ. And people don’t understand what laws protect them, whether online or off; they believe in consumer protections that don’t actually exist, out of optimism about societal fairness, while at the same time they are resigned to being relatively powerless about online stuff.
I talked about ExtremeBB and how its data let us analyse the Kiwi Farms takedown.
Susan Landau wondered whether the lack of diversity in the security research community might have biased our view of covid app uptake in Black communities. People are aware that the authorities use metadata to enforce rules like the prohibition on former felons staying overnight in public housing, and this experience has made many people distrustful of government apps. We have to think of our research in the frame of “the world is going to hell” and pick problems where we might make a difference. For her part, Susan is now working on phone telemetry data; even if you turn off the GPS, the inertial nav data can tell someone whether you’re in the oncologist’s office or the abortion clinic.
Matt Blaze is working to make elections safer. There are many technical vulnerabilities, but no evidence that any of them has ever been exploited; which of them you emphasise will depend on whether your candidate won. It’s great that we’re making elections more secure technically, but we have to build public confidence too, in the face of increasing scepticism. Have we misdirected our attention? Rather than eliminating all the defects in counting mechanisms, we might go for software independence (from Ron Rivest) and risk-limiting audits (from Philip Stark).
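For the curious, here is a toy sketch of a ballot-polling risk-limiting audit in the BRAVO style: draw random paper ballots and update a likelihood ratio until it crosses 1/α, at which point the reported outcome is confirmed at that risk limit. The reported margin and the ballot sample here are invented, and real audits handle multiple candidates and escalation rules.

```python
# Toy ballot-polling risk-limiting audit (BRAVO-style, two candidates).
# Stop and confirm the reported outcome once the likelihood ratio reaches 1/alpha.
def rla_confirms(reported_winner_share, sampled_ballots, alpha=0.05):
    """reported_winner_share: reported vote share of the winner (> 0.5);
    sampled_ballots: sequence of 'W' (winner), 'L' (loser) or other marks
    from randomly drawn paper ballots."""
    s = reported_winner_share
    ratio = 1.0
    for ballot in sampled_ballots:
        if ballot == "W":
            ratio *= s / 0.5
        elif ballot == "L":
            ratio *= (1 - s) / 0.5
        if ratio >= 1 / alpha:
            return True       # risk limit met: outcome confirmed
    return False              # sample exhausted: escalate (e.g. full hand count)

# e.g. rla_confirms(0.55, ["W", "W", "L", "W", "W", "L", "W"])
```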
Sara Kiesler is interested in research on information integrity, broadly defined. In the real world, people want more than just information; they want to be entertained, to validate their values and communities, and to reaffirm what they already believe. Integrity has a subtle relationship with context; an accurate report of a failure in one vaccine trial was spread as propaganda against vaccines in general. Truthful information on how to commit suicide can also be treated as misinformation. The NSF has six priorities, spanning the whole information ecosystem, including TV, radio and gossip in bars as well as regular social media and extremist groups. We don’t understand the flows, or how narratives spread globally. We also don’t understand the available treatments, though Finland seems to have a good programme.
Bruce Schneier argues that the real problem isn’t the misinformation so much as the misalignment of incentives in our governance structures. We accepted the costs of conflict between local and national because of the problems of central planning, but conflict as a problem-solving tool only works for simple problems, not the complex wicked problems we have now. Powerful players hack the rules which are always incomplete. Add the risks arising from new technologies, where our reactive governance structures are too slow and the precautionary principle doesn’t work. Democracy, like any other form of government, is an information processing system that collects data, processes it, and issues rule updates. But it’s informationally poor. An election is a very low-bandwidth way of eliciting preferences, designed in the eighteenth century. The regulatory agency is a nineteenth-century patch. How can we align individual and group preferences better? The optimal might be something we’ve never seen, with technology as a key component. Can AI uncover preferences? Can I have an AI in my pocket that votes on my behalf thousands of times a day? I’m happy with my thermostat; what more? Outcomes matter, but so do mechanisms. Our social preferences are created through the process of democracy and we must remain in control.
The discussion started off with whether we want AI to make elections even more messy, or whether the priority is human comprehensibility of the mechanism. How can we make academics and others more aware of how their work might be weaponised? As in ancient Rome and the modern Nordic countries, we need to teach kids rhetoric as well as logic, and embed the lessons in civics, all the way through from junior school to grad school. We need to be cautious about blaming tech for everything; everyone talked about filter bubbles for a few years until research showed that the idea was somewhat overblown. The problems far transcend both our discipline and higher education. Civics isn’t the only way of teaching rhetoric; you can teach kids about marketing, and how to see through it. Elections must do more than pick the winner; they must also convince the loser. There may be an underlying problem if some group in society loses out, as skilled working-class people did before the Brexit/Trump disruption; that may till the soil for conspiracy theories. In modern schools, youngsters aren’t being taught to think and to develop critical sensibilities; they’re being taught obedience. Middle schoolers think that being good at math means being quick at math. And scales are all off, now that Facebook has more users than Christianity.
As a bonus, here is a blog post about SHB written by David Livingstone Smith, and a fascinating paper on race and EDI by him and his wife Subrena, which we discussed over lunch.
And here’s a blog post by Bruce Schneier.