I’ll be trying to liveblog the seventeenth Workshop on the Economics of Information Security (WEIS), which is being held online today and tomorrow (December 14/15) and streamed live on the CEPS channel on YouTube. The event was introduced by the general chair, Lorenzo Pupillo of CEPS, and the program chair Nicolas Christin of CMU. My summaries of the sessions will appear as followups to this post, and videos will be linked here in a few days.
The first paper was given by Jonathan Spring of CMU, talking on Stakeholder-Specific Vulnerability Categorization. He’s unhappy with the CVSS framework as it’s one-size-fits-all, aimed at worst cases, and forces people to think up numbers that give false precision. His system tries to be logical rather than algebraic, and it comes down to a decision tree which in the paper has 224 cases but in his current version 2 has about 70. There are decision points for both suppliers and deployers.
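To make the idea concrete, here’s a minimal sketch of what such a stakeholder-specific decision tree might look like in code. The decision points and outcome labels below are illustrative stand-ins, not the ones in the paper.

```python
# A toy decision tree in the spirit of stakeholder-specific vulnerability
# categorisation: a handful of qualitative decision points map to an action,
# rather than to a score. The points and labels here are hypothetical.

def deployer_priority(exploitation: str, exposure: str, mission_impact: str) -> str:
    """Return a patching priority for a deployer, given three decision points."""
    if exploitation == "active":                     # exploitation observed in the wild
        return "immediate" if mission_impact == "high" else "out-of-cycle"
    if exposure == "open" and mission_impact == "high":
        return "out-of-cycle"                        # reachable and mission-critical
    return "scheduled"                               # fold into the regular patch cycle

print(deployer_priority("active", "controlled", "high"))  # immediate
print(deployer_priority("none", "open", "low"))           # scheduled
```

The point is that each path through the tree is an auditable argument, rather than a number whose precision nobody can defend.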
Next up was Dennis Malliouras talking about the Underlying and Consequential Costs of Cyber Security Breaches. Our normal ways of totting up the costs of breaches capture only the tip of the iceberg; the reputational damage can increase a firm’s cost of capital, and ultimately the best metric, he argues, is equity – the effect on the firm’s market capitalisation. Trawling through 8,921 breach events, he focused on 655 suitable firms that suffered 202 serious breach events and applied the capital asset pricing model to measure upside and downside beta, over a period of sixty days before and after the breach. There is a coherent downside beta trend but no significant effect on upside beta. This should translate into a greater weighted average cost of capital. The average value of an earnings announcement is +0.04, while a merger is typically -0.058; breaches cost -0.022.
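For readers unfamiliar with the technique: upside and downside betas are usually estimated by conditioning the CAPM relationship on whether the market return is above or below its mean. I don’t know the paper’s exact estimator, but roughly:

```latex
% Conditional (downside/upside) betas; notation mine, not necessarily the paper's.
\beta_i^{-} = \frac{\operatorname{Cov}(r_i,\, r_m \mid r_m < \mu_m)}
                   {\operatorname{Var}(r_m \mid r_m < \mu_m)},
\qquad
\beta_i^{+} = \frac{\operatorname{Cov}(r_i,\, r_m \mid r_m > \mu_m)}
                   {\operatorname{Var}(r_m \mid r_m > \mu_m)}.
```

A rise in the downside beta with no change in the upside beta means the stock has become more sensitive to market downturns, which is exactly the asymmetry he reports.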
The third speaker of the first session was Tianjian Zhang, talking on Cybersecurity Investments and the Cost of Capital. Cybersecurity expenditures have more than doubled over the last decade but the amount of disclosure is approximately static. He’s been studying whether disclosing cybersecurity investments might cut a firm’s cost of capital. He studied US public firms from 2000 to 2018 and measured the weighted average cost of capital against cybersecurity disclosures in SEC reports, with variables for whether firms were in industries where breach disclosure was compelled or expected by customers, and whether stocks were institutionally preferred. Disclosures do indeed cut the cost of capital, particularly if security investment is high and institutions take an interest. There are also tangible business benefits, and with credit rating agencies starting to take an interest, sensible firms may decide to disclose even more.
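For reference, the weighted average cost of capital that both of these papers use as their outcome variable is the standard textbook quantity:

```latex
% E, D: market values of equity and debt; r_E, r_D: required returns on each;
% tau: the corporate tax rate.
\mathrm{WACC} = \frac{E}{E+D}\, r_E + \frac{D}{E+D}\, r_D\,(1-\tau).
```

So anything that lowers the required return on equity, such as reassuring disclosure, feeds straight through into cheaper capital.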
Discussion touched on a number of topics from site ratings to asset management systems and sector-specific effects. Dennis found that firms listed in the US are more likely to suffer beta falls after breach disclosures than firms elsewhere; this may be because equity markets are larger and more liquid. As for whether markets can predict breaches, see the next session.
On the topic of investments, this could have been interesting:
Alessandro Fedele & Cristian Roner, 2020. “Dangerous Games: A Literature Review on Cybersecurity Investments,” BEMPS – Bozen Economics & Management Paper Series BEMPS75, Faculty of Economics and Management at the Free University of Bozen. https://ideas.repec.org/p/bzn/wpaper/bemps75.html
The second session started with Milena Dinkova discussing Cyber incidents, security measures and financial returns: Empirical evidence for Dutch firms. Do Dutch firms over-invest or under-invest in cybersecurity? She conducted two analyses, looking at the correlation of incidents with investment, and with financial results. Her data come from a survey of IT use by 14,000 firms, and administrative data such as tax records. She analysed profitability and the probability that an incident was reported as a function of security investment. She found no change in profitability by security level; the probability of reporting an incident first increases with security level, and then decreases, presumably as there are fewer incidents.
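The inverted-U relationship she describes is the sort of thing one would test with a quadratic term in a logit; a minimal sketch, with hypothetical variable and file names:

```python
# Hedged sketch: regress incident reporting on security level and its square.
# A positive linear and a negative quadratic coefficient would reproduce the
# rise-then-fall pattern described in the talk. Column names are made up.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("firm_survey.csv")  # columns: incident (0/1), security (level)
model = smf.logit("incident ~ security + I(security ** 2)", data=df).fit()
print(model.summary())
```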
Taha Havakhor was next, asking Can Insider Trades Reliably Predict Cybersecurity Hazards in Public Firms? US law lets executives trade stock on hunches, so long as there are no hard figures being withheld from outside shareholders. Also, laws don’t require disclosure of data that itself would compromise cybersecurity, and it’s known that insider trades signal heightened risk in general. There are also illegal insider trades, e.g. by Equifax’s CIO after the breach but before disclosure. He used a Cox proportional hazard model. Abnormal trading does predict breach reports in the next quarter, and more so when accompanied by share sales or tweaks to relevant wording. Investors and analysts alike should look at combinations of factors.
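A Cox proportional hazard model relates covariates to the instantaneous risk (the “hazard”) of an event, here a breach report. A minimal sketch using the lifelines library, with hypothetical column names standing in for the signals described in the talk:

```python
# Hedged sketch of a Cox proportional-hazards fit; the covariate names below
# are invented stand-ins, not the paper's actual variables.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("firm_quarters.csv")
# duration: quarters until a breach report (or censoring); breached: 0/1 event flag
cph = CoxPHFitter()
cph.fit(df[["duration", "breached", "abnormal_insider_trading", "share_sales"]],
        duration_col="duration", event_col="breached")
cph.print_summary()  # hazard ratios above 1 mean the covariate raises breach risk
```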
Monday’s last refereed talk was by Hooman Hidaji on Provider Output and Downstream Firms’ Service Tier Choice. Would a firm choose a higher quality of security service if the provider offered a greater range, in the same way that airlines which offer first-class seating may thereby tempt some passengers to upgrade from cattle to business? This is a theory paper with a monopolist provider and two downstream firms, between which there may be positive or negative externalities. With weak externalities the monopolist has insufficient incentive to improve its output.
Discussion included the uncanny valley in security as an explanation for how disclosures appear to go up and then down in Milena’s work. There’s also uncertainty, when security responsibility is split between a principal firm and a cloud service provider, about assigning responsibility for failures; Milena didn’t take this into account, but it could be interesting future work.
Our panel brought together officials from both US and EU institutions, and its topic was “Dependencies, Vulnerabilities, and Their Impact on Cybersecurity: EU and US Perspectives”.
The first speaker was Ruth Yodaiken of the Federal Trade Commission, who explained its consumer-protection mission. This is founded on a general law against unfair business practices, and is not sector-specific. They’ve brought over 70 privacy cases, 80 security cases, and over 100 spyware and spam cases; they’re looking at everything from lamps to covid, and base their cases on false claims made by vendors. In addition to vulnerabilities they look at repair restrictions and other consumer harms.
Allan Friedman works for the National Telecommunications and Information Administration in the US Department of Commerce, whose mission is to help businesses do things; the NTIA does telecom policy, where Allan’s focus is market failure. He mainly has convening power and asks firms: how do we make it easier and cheaper to do the right things? For example, he convened meetings of firms and hackers to work on responsible disclosure, starting with multi-party and safety-critical sectors. This went on to consider software updates, and is now engaging with issues around software bills of materials. Attempts to compel this some years ago just died, as people didn’t want to disclose that they were violating the GPL; before you regulate stuff, you have to work out a practical path to adoption. Software BoM may come in via the medical device makers, once they agree how to do it – and they’re starting to get keen. Once it’s a compliance issue, the managers will get the resources to implement stuff.
Aristotelis Tzafalias works for DG Connect at the European Commission, and is interested in translating the WEIS body of knowledge into law. The NIS directive will be upgraded, and as the announcement will be on Wednesday, the details are under embargo; but at least three of today’s papers are relevant, and his team actually read the Dutch paper when preparing the new regulatory update. The law can’t specify vulnerability management in detail as it operates at a higher level, but today’s paper on prioritisation may nonetheless be relevant. Further down the line, specific product regulations will interact with lifecycle management. He welcomes Allan’s work, so that by the time the regulators get to work, the pragmatic ways forward have been identified. The European General Product Safety Directive talks of “reasonably foreseeable conditions of use”.
Christian D’Cunha works with Aris on data protection and privacy. There’s a lot of compliance activity and enforcement now with GDPR; there have now been 160,000 breaches reported, including hundreds of cross-border cases that have resulted in decisions, including 19 at the level of the EDPB. The first big decision on a tech company, Twitter, is expected tomorrow. Opinions on GDPR differ; some say it hasn’t changed business models as it hasn’t generated any eye-watering fines, but it’s fundamentally about harmonisation across a patchwork of legal cultures and rules. We now have an entire class of data protection officers round the world working in the same way. The e-privacy directive complements this by securing devices from unlawful interference, and the courts struck down the Privacy Shield in the Schrems case. This has led to discussions between the US and the EU. At the global level, 142 countries had data protection laws by the end of 2019, and California is about to get stringent rules. Even China has adopted rules, although they don’t apply to the communist party. The EU’s mission is to situate data flows within a framework of democracy and the rule of law, and compared with China the USA is pretty close to Europe. The open question is how the two can converge on data governance.
Discussion started with the FireEye hack. We’ve spent 20 years developing responsible disclosure for general use; should security companies have to disclose the source code of attack tools that they build and lose, so that defenders have a level playing field with the attackers? Allan pointed out the complexity: who’s the inner circle who are to be trusted with such things? There’s a very thick tail. Ruth noted that the FTC would focus more on the effects on consumers.

Rayna then asked: “In addition to public-private cooperation on Coordinated Vulnerability Disclosure, are there any anticipated actions on EU level harmonised Vulnerability Equities Policy for the EU Member States?” Aris said we should wait and see Wednesday’s announcement on coordinated vulnerability disclosure to understand the Commission’s view on FireEye; the equities policy is mixed, as intelligence agencies are outside the EU’s remit. However more and more EU member states are giving hacking powers and tools to their police forces, so equity issues will inevitably come within the tent over time. Lorenzo pointed us to a CEPS paper on the subject.

Jean Camp asked: “We have proposed a risk-and-harm approach similar to toxic wastes for security as economically efficient. So if you knowingly keep highly toxic vuln you risk real liability. Is there a possibility for this?” Aris replied that we may need to revisit product liability. I asked whether the product liability directive would incorporate services, as we suggested. Aris believes this may well happen, as the nature of products is changing, but doesn’t know how it will materialise in legislative terms. Allan noted that when trying to effect change, regulators have to offer a safe harbour to motivate firms, but need to think through very carefully whether it’s safe enough to do real good, or just work as a privacy shield. The FTC looks at a calculus of security practice and harm when assessing incidents.

The next question was from Jeff Hall: ‘My concern is the speed with which we pump out “updates” or “enhancements” that are not fully tested when pushed. However, security is attempting to be automated and is not always as fully vetted as it should be’. I acknowledged that patching was an issue whose importance the EU clearly acknowledged, for example via the Sales of Goods Directive 2019/771… but what was the US view? Ruth’s take was that automated updates are better for consumers. The FTC asks, “Is the consumer being put at risk with no ability to do anything about it?” Businesses may see things differently; Allan noted that patching really interferes with availability, as people end up not seeing their kids that night while updates get tested and implemented. To what extent can a regulator mandate updates being shipped, or applied? There’s no good algorithm, so you need to look at the actual risk case by case. Aris reminded us that the SolarWinds hack was an exploit of an auto-update!

In conclusion, the question of how we get the corporate behaviour in the online world that people have come to expect in meatspace is one that will run and run, even if we’re just limiting our attention to safety, security and consumer protection.
I’m interested in why you are mentioning service liability. That’s an interesting concept, but I would have thought it was handled differently, e.g. as malpractice. I’ve been arguing that software is a product, even if when executed on a computer it performs a service, so standard product law should apply (the EU law could apparently fairly easily clarify that this is so, though I haven’t heard that this is in the DSA or DMA).
After the WEIS workshop, the FTC issued a call for papers about privacy and security for PrivacyCon July 27, 2021: https://www.ftc.gov/node/1584654
The first speaker on Tuesday was Ben Collier, explaining why Cybercrime is (often) boring – or, Das Kapital for system administrators. Over time, hacking has evolved from a hobby to a subculture to an elite crime to volume crime. Scaling it up involves a lot of routine support work; so how does this square with the view of hackers as an elite subculture? Support workers need to support customers despite constant interruptions from cops, intermediaries and general incompetence; it’s remote from the creative breaking work of the hacker aesthetic. As well as low social capital, the pay is lousy too. The lived experience of this work is deeply tedious. Ben concludes that as the economies of illicit subcultures reach an advanced stage, they produce the same kind of anomie and labour alienation as much of the regular economy. People simply burn out. As Kurt Vonnegut said in Hocus Pocus, “Another flaw in the human character is that everybody wants to build and nobody wants to do maintenance.”
Next was Qasim Lone, on SAVing the Internet: Explaining the Adoption of Source Address Validation by Internet Service Providers. Why do operators not prevent IP spoofing using ingress filtering? It’s not just the cost and the know-how, but also that the benefits mostly go to others, and there’s an information asymmetry: most customers can’t see which networks filter, so there’s no reputation gain. Of 334 ISPs in 61 countries, he found that 250 didn’t appear to filter, and analysed them for explanatory factors. Large ISPs were more likely to filter; knowledge of how to do filtering makes a difference, and reputation may help. Governments might also insist that all their network service providers do source address validation, while national CERTs could also exert pressure.
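The filtering itself is conceptually trivial, which is rather the point: the barrier is incentives, not technology. A toy illustration of the ingress check (real deployments do this in router configuration, along the lines of BCP 38 ingress filtering, not in Python):

```python
# Toy source address validation: only forward a customer packet if its source
# address lies within a prefix delegated to that customer. The prefixes are
# from the documentation ranges, purely for illustration.
import ipaddress

customer_prefixes = [ipaddress.ip_network("203.0.113.0/24")]

def accept_at_edge(source_ip: str) -> bool:
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in customer_prefixes)

print(accept_at_edge("203.0.113.7"))   # True  - legitimate customer source
print(accept_at_edge("198.51.100.9"))  # False - spoofed source, dropped
```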
The third speaker was Vincent Lefrere, who’s been studying The Impact of the GDPR on Content Providers. In the run-up to GDPR, industry lobbyists claimed that their income might fall by a quarter or more. Did this happen? He’s been comparing 5,000 EU and US domains before and after the GDPR came into force, visiting them from both EU and US IP addresses. Many US websites appear to have stopped or reduced their tracking of EU visitors; EU domains discriminate much less. Page views per visitor fell very slightly in the EU, but there appears to have been no effect on the amount of content that EU websites publish, their reach, or the degree of content engagement.
Discussion ranged over whether defenders also have a conflict between self-image and reality, and burn out too, and the different incentives facing transit, multi-homed and single-homed stub networks.
Tuesday’s third session was started by Ben Harsha, talking about An Economic Model for Quantum Key-Recovery Attacks. He’s interested in using Grover’s algorithm to recover symmetric keys, and assumes that recovering a 128-bit AES key takes 2^64 queries. Might this be feasible? He concludes that it won’t be, even under rather optimistic assumptions for quantum attackers. One key insight is that Grover’s algorithm cannot be partitioned into parallel subproblems without increasing the total work factor.
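The parallelisation point is worth spelling out, as it does the heavy lifting in the economics. Grover’s algorithm searches a keyspace of size N = 2^128 in about √N = 2^64 iterations; splitting the space across P machines makes each machine finish sooner but increases the aggregate work:

```latex
% Each of P machines searches N/P keys in about sqrt(N/P) iterations, so:
T_{\text{serial}} \approx \sqrt{N}, \qquad
T_{\text{per machine}} \approx \sqrt{N/P}, \qquad
\text{total work} \approx P\sqrt{N/P} = \sqrt{NP}.
```

So buying P machines cuts the wall-clock time only by a factor of √P while multiplying the total work by √P, which is what wrecks the attacker’s economics.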
Dann Arce was next, talking about Security-Induced Lock-In in the Cloud. In his model, two cloud service providers compete in a multi-round game with an option of two-period pricing, where CSPs cannot enforceably commit to second-period prices because of feature creep. Users have switching costs, but also learning benefits if they don’t switch.
Daniel Woods has been studying The Commodification of Consent. Adtech vendors have no direct way of obtaining consent from users for monetising their data, so firms may enter into coalitions to minimise the consent deficit, namely the proportion of users who haven’t given consent to at least one member of the coalition. This can be a new form of network effect, with winner-take-all dynamics. He’s been tracking the Global Vendor List over time; it has about 500 vendors, each of which pays €1,200, and membership has been stable since mid-2018. Adtech vendors believe this mitigates legal risk, though we don’t know whether this will stand up in court. Meanwhile thousands of websites collect consent for them. A legal mechanism designed to provide privacy for users has ended up a cartelised business asset.
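Reading the consent deficit as the share of users covered by no coalition member, the winner-take-all dynamic is easy to see: adding a vendor to a coalition can only shrink the deficit. A toy calculation with made-up data:

```python
# Hedged sketch: the consent deficit of a coalition is the fraction of users
# who have consented to none of its members. All names and data are invented.
users_consents = {
    "u1": {"vendorA"},
    "u2": {"vendorA", "vendorB"},
    "u3": set(),
    "u4": {"vendorC"},
}

def consent_deficit(coalition: set) -> float:
    uncovered = [u for u, consents in users_consents.items()
                 if not (consents & coalition)]
    return len(uncovered) / len(users_consents)

print(consent_deficit({"vendorA"}))             # 0.5
print(consent_deficit({"vendorA", "vendorC"}))  # 0.25 - bigger coalition, smaller deficit
```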
The session’s discussion ranged from the assumptions underlying quantum keysearch, through ways in which cloud lock-in can provide a defence against outside threats, to other ways in which consent can be seen to lead to market concentration. More generally, consent is becoming a thicket of conflicting stakeholders (from OEMs to the IAB) rather than taking a more consensual multi-stakeholder approach.
Bart Roets was the first speaker of the last session and a chief engineer at Belgium’s railway company. His talk was on The Use of Autonomous Decision Systems, and the setting is railway traffic control centres. Belgium’s network is one of the densest and most complex in the world, and has an automatic system to assist controllers. He and Robin Dillon-Merrill measured when controllers decided to delegate some control authority to it. They did so when other controllers in their unit used it; when traffic volumes went up; and when they were tired. They used it less if they were more experienced, if the task was becoming more complex, or if errors had been made recently.
Tobias Fiebig has been working on Understanding the Knowledge Gap: How Security Awareness Influences the Adoption of Industrial IoT. In consumer IoT we’ve become used to devices being recruited to botnets; what about the industrial side? He recruited firms via a Dutch industry association, and learned a number of worrying things – including that most C-level people who responded were unsure of who was responsible for software updates for their critical equipment. A cluster analysis split companies into willing and less willing adopters of IoT; the former were less aware of security, and the latter more so. However the willing adopters were also willing trainers, so the problem may be fixable.
The last speaker of WEIS 2020 was Alisa Frik, presenting A Qualitative Model of Older Adults’ Contextual Decision-Making About Information Sharing. Elder care involves a lot of communication between family, carers and other service workers; how are privacy risks managed? Older adults may overshare in some circumstances and refuse to share in others, and this behaviour isn’t well modelled by current privacy theories. She interviewed 46 older adults in care facilities in the San Francisco area and found that opinions are indeed context-dependent, often relying on paradigm examples – of both sensitive and less sensitive data, and of good and bad potential recipients, but with multiple ifs and buts. The paper has the details of some of the complexities.
Discussion started with the nature of rail controllers’ trust in autonomous systems and what this might teach us about when a driver might be expected to take over an autonomous car. The controllers receive no special instruction on this, just as with car drivers; however the controllers can delegate parts of their task, so they focus on an area that needs it and let the automation deal with the other areas. Raw error rates are deceptive, as almost all reported errors are typos and the automation doesn’t do those. Mistyping a train number is less important than not knowing where a train is, and there’s a safety system standing behind the control system, so it’s complex: but an analysis of safety-critical errors such as near misses is underway. The second theme was who older adults actually trust; they often don’t want to share with families for reasons other than trust. For example, they may not want to share ominous medical information because they just don’t want to worry their relatives.