I’m liveblogging WEIS 2013, as I did in 2012, 2011, 2010 and 2009. This is the twelfth workshop on the economics of information security, and the sessions are being held today and tomorrow at Georgetown University. The panels and refereed paper sessions will be blogged in comments below this post (and there’s another liveblog by Vaibhav Garg).
The conference started with a panel of cybersecurity people from the US government. Bob Kolasky of the DHS described its cybersecurity role as protecting government systems, protecting critical infrastructure and growing the cybersecurity economy. As Congress couldn’t get its act together, Obama acted by executive order: EO 13636 covers critical infrastructure. A cybersecurity framework for businesses will be announced next February. The buzzword is incentives, and they’re studying ways of nudging the market. Recommendations from Treasury and Commerce should come out in the next few weeks.
Ari Schwartz of the Department of Commerce described NIST’s contribution: its work on standards has been extended to critical infrastructure and the new cybersecurity framework. It has received many conflicting comments on insurance and reinsurance: should the Terrorism Risk Insurance Act cover cyber incidents? Large firms and trade associations are lobbying for tort and antitrust liability protection, as well as tax incentives.
Lee Williams used to be chief risk officer of a securities firm, where customers’ demands came first, the firm’s risks second, regulation third and government incentives last. Now that he’s at the Treasury Department, he wants market forces to do most of the work, with nudges where helpful. The two big market failures are information asymmetries, such as barriers to sharing, and externalities, particularly network effects. Efforts will range from quicker clearances through R&D support to standards and certification programs; there will be less interference in markets (whether for crypto or insurance).
Carol Hawk of the Department of Energy has a research program involving seven national labs, academia and industry, aligned with an energy sector roadmap (see http://www.controlsystemsroadmap.net). Her colleague Chris Irwin described how the Recovery Act provided $9.5bn investment for existing systems; in spending it they just insisted that recipients have a cybersecurity plan, which they vetted using national labs and outside experts. This led to a lot of innovation, particularly by small rural cooperatives. Protection must be risk-appropriate and scale-appropriate; 99% of the problem is state or local (and he does not believe all the hype around smart meters). We should manage security the way we manage safety.
Tony Cheesebrough of the DHS is studying the effectiveness of adoption incentives; he compared proposed security incentives with those for green energy, and concluded, after a workshop in April, that cost-sharing incentives might work best. Estimates of cyber-incident losses from industry sources can be up to fifty times higher than estimates derived from government figures (e.g. the FTC’s): how can we get a better view of likelihood, consequences and behavioural aspects such as interpretation?
In questions, panelists acknowledged that homeland security has a strong international element, because of foreign ownership of key assets and vendors and the global nature of financial systems. As a result, the most desirable standards are international ones, such as the IEEE / IEC series. There are also mechanisms for companies to share vulnerability information with the government and yet have it protected from their competitors. The tension between “voluntary” computer-science standards and the highly risk-averse power industry can make risk-based compliance problematic; and the definition of critical infrastructure can be questioned when it ends up encompassing a popcorn factory.
Julian Williams gave the first refereed paper. He defines an information steward as a principal who works to ensure the resilience and sustainability of a system. He models a number of identical targets with given discount rates and marginal effectiveness of attack and defence. A sustainability steward might have a lower discount rate, leading to magnified externalities (especially if the attackers also have low discount rates). There are interesting parallels with the social discount rate in other fields, from central banks’ base rates to the discount rates used in environmental economics; in each case the risk appetites of firms and society diverge.
Russell Thomas models the impact and severity of security breaches via the anticipated costs of recovery and restoration by all affected stakeholders. The model is a stochastic branching process, covering the various possible technical, legal and business responses, while its empirical base is behavioural evidence. Precursor events and near-misses often signal trouble; such a model can factor them into decision making. Russ is doing pilot testing, and developing case studies with large public breaches.
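To make the branching-process idea concrete, here’s a minimal Monte Carlo sketch; the event types, branching probabilities and costs are invented for illustration and are not Russ’s actual model parameters:

```python
import random

# Invented follow-on events, branching probabilities and costs ($k) --
# purely illustrative, not the paper's actual parameters.
EVENTS = {
    "detection":        [("forensics", 0.9), ("regulator_notice", 0.5)],
    "forensics":        [("system_rebuild", 0.6)],
    "regulator_notice": [("lawsuit", 0.2), ("fine", 0.3)],
    "system_rebuild":   [],
    "lawsuit":          [],
    "fine":             [],
}
COSTS = {"detection": 10, "forensics": 50, "regulator_notice": 5,
         "system_rebuild": 200, "lawsuit": 500, "fine": 100}

def breach_cost():
    """One Monte Carlo draw: walk the tree of responses a breach triggers,
    summing the costs borne by all stakeholders."""
    total, frontier = 0, ["detection"]
    while frontier:
        event = frontier.pop()
        total += COSTS[event]
        for child, prob in EVENTS[event]:
            if random.random() < prob:
                frontier.append(child)
    return total

draws = [breach_cost() for _ in range(10_000)]
print(f"expected cost ~ ${sum(draws) / len(draws):,.0f}k")
```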
The third speaker was Terrence August, who’s been studying the effects of offering cloud versions of common software products. The experience of attacks like those on Salesforce and LinkedIn suggests that threats become more directed, and that firms may face more liability. Cloud adoption also reduces the number of firms running unpatched on-premises systems and thus contributing to the general undirected risk. When patching costs are high, the vendor should target its cloud offering at the middle market; but if patching is cheap it should aim it at the lower end.
The lunch talk was by Eric Zitzewitz on forensic economics. Eric has written a survey of this field, whose focus is on using data to ferret out behaviour that people would rather keep hidden. Do judges and prosecutors abuse discretion? Do fund managers churn customer accounts? Is the independence of the media respected? Is development being undermined by corruption? One study was of parking in New York City: do diplomats take advantage of immunity? It turns out there’s a correlation between a country’s ranking on Transparency International’s corruption index and its diplomats’ unpaid parking tickets. In another study, they looked at whether road contractors in Indonesia used the materials billed, and found that government audits were more effective than community monitoring. In finance, unadvised investors chose products with lower fees, and got better returns. Advisors prefer to sell whole-life rather than term-life insurance policies because of the higher commissions. The repeal of the Glass-Steagall Act created opportunities for all sorts of abuses such as front-running.
Doctors are harder to analyse, but the Dartmouth Atlas found a number of medical centres overtreating (including one clinic in California using open-heart surgery at several times the normal rate). Paul Klemperer has pointed out that the best auction in the absence of collusion (the English auction) is the most vulnerable to collusion. A variety of tests can uncover biases, such as the implicit association test for racial, gender and other discrimination. Yet test results can also be doctored, as Steve Levitt found with Chicago teachers. Many people have incentives to do all sorts of bad things and yet don’t, which is why it’s not enough to have a theory of conspiracy; we need empirics.
His survey article describes five different ways of getting data. Sometimes you just get lucky. You may find two measures of the same thing, one of which contains hidden information (e.g. Hong Kong’s exports to China versus China’s imports from Hong Kong, as a signal of goods being corruptly misclassified to avoid tariffs). You can look for correlations between economic behaviour and incentives for hidden behaviour (such as a bank’s analysts blessing client companies’ stocks), which become particularly evident after sudden changes in enforcement (such as the introduction of hygiene grade cards in LA, which cut hospital admissions for food poisoning). You can look for deviations from an honest-behaviour model (Jacob and Levitt detecting school test cheating from correlated wrong answers; violations of Benford’s law in Iranian elections; late trading detected via correlation with post-close price movements; backdated stock options detected from suspiciously favourable choices of grant day, and confirmed when Sarbanes-Oxley almost stopped the practice). Finally, you can exploit correlations between small and large misdeeds, as in the parking-ticket case above. The good news is that small interventions can have big effects, as with the hygiene grade cards, and in a case where teachers were paid a small bonus to prove school attendance with a photo, and test scores went up sharply.
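As a flavour of what such honest-behaviour tests look like in practice, here’s a minimal Benford’s-law first-digit check of the sort applied to election returns; the chi-square-style statistic and any threshold for “suspicious” are illustrative choices, not Zitzewitz’s:

```python
from collections import Counter
from math import log10

def first_digit_freqs(numbers):
    """Empirical first-significant-digit frequencies."""
    digits = [int(str(abs(n)).lstrip("0.")[0]) for n in numbers if n]
    counts = Counter(digits)
    total = sum(counts.values())
    return {d: counts.get(d, 0) / total for d in range(1, 10)}

def benford_deviation(numbers):
    """Chi-square-style distance from Benford's law, which predicts
    first digit d with probability log10(1 + 1/d)."""
    observed = first_digit_freqs(numbers)
    return sum((observed[d] - log10(1 + 1 / d)) ** 2 / log10(1 + 1 / d)
               for d in range(1, 10))

# vote_counts = [...]            # e.g. per-precinct election returns
# benford_deviation(vote_counts)  # unusually large values invite scrutiny
```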
Daegon Cho has been studying how anonymity affects online commenting behaviour, and in particular the use of offensive words. 88% of newsrooms have commenting systems, and a quarter of US Internet users have used them; previous researchers documented both the disinhibition of anonymity and the prosocial behaviour associated with a social user image. How do these tie together? Daegon gave users the choice of real-name or anonymous SNS commenting, or a non-SNS logon, so as to manipulate identifiability and self-image independently. Both effects show up in the data; but there’s also a strong correlation between offensive words and “likes”, which suggests that commenting systems might be designed more carefully.
Irina Suleymanova studies demand for data on consumer transportation costs via a two-dimensional Hotelling model of price discrimination under demand-side asymmetry, where consumers have different transport-cost parameters and profits may increase with additional data. There can be a rent-extraction effect and a competition effect; the balance depends on whether consumers are flexible (and, if they aren’t, on whether firms know this).
Huseyin Cavusoglu noted that social networks are somewhat behind in revenue: Facebook gets $15 per user per year in advertising while Google gets $88. Yet Facebook has much more insight into social interaction, and has made openness a specific goal since 2006. Successive changes to privacy controls included an attempt, in December 2009, to get people to share more by giving them the perception of more control. Previous studies had looked at the effect on static profile data; this one looked at dynamic user-generated content, and in particular the proportion of private messages to public wall posts. Users did indeed go on to share more publicly, and their privacy sensitivities also became less extreme.
Soeren Preibusch has been investigating the value of privacy in web search. Individuals can be identified from their search logs, which can be highly sensitive; yet technical privacy approaches have largely failed to get traction. He did an experiment with N=191 mostly student subjects with good computer literacy; they had 20 search tasks of differing sensitivity and nine configurable search options, some of which cost money in the “paying for privacy” condition (such as not recording search history) while others earned money instead (such as tweeting searches). 58% of people will suppress search logs if it’s free, but this drops to 15% if it costs 0.4c per click. In questions, it was asked whether lab work is sufficiently high-stakes.
Milton Mueller and Andreas Kuehn have been studying intrusion detection and prevention using DPI in the US government, and how this affects organisational arrangements. Cybersecurity can be a private good for government as well as a public good for the nation. The incentives can lead to functions being insourced, outsourced, or projected on to the private sector. Starting in 2007, the US government consolidated its Internet access points under the Einstein programme, under which the IPS/IDS migrated from government to commercial control and developed into a broader programme over time. This led to a complex power struggle between the military and civilian arms of government. Questions raised the issue of whether real-time defence could scale to large commercial ISPs.
Joshua Kroll has been investigating the economics of bitcoin mining; double spending is controlled by a public ledger maintained by many mutually mistrustful miners. If two miners add blocks to the history at about the same time, the rule is that the longest valid branch wins; until then, forks can cause uncertainty about which transactions have been committed. Thus bitcoin is not just a crypto protocol but a social one: we need consensus about game state, which can fail for various reasons. The aggregate mining reward is now $500K per day, and the mining effort exceeds the power of the top 500 supercomputers combined. Yet a “51% attack”, in which a cartel takes over most of the mining power, would let it do anything, leading to a “bitcoin death spiral”. What if Goldfinger’s goal is the death spiral itself? Their paper models such an attack; for example, if Goldfinger bluffs, he can scare away miners, making the attack easier. Social attacks can happen, as when versions 0.7 and 0.8 diverged (on the day of the WEIS deadline) and the community persuaded people to downgrade from 0.8 to 0.7, despite its not being the longest chain. He concludes that the rules of bitcoin are now open to regulation; indeed the main bitcoin exchange, Mt. Gox, is having a hard time for not registering under the anti-money-laundering regulations.
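For readers unfamiliar with the mechanics, the fork-resolution rule he described fits in a few lines; this is a deliberately simplified sketch (real nodes also validate transactions, proofs of work and more):

```python
def resolve_fork(branches, block_is_valid):
    """Bitcoin's fork-resolution rule in miniature. Each branch is a list of
    blocks extending from the genesis block; among the branches whose blocks
    all validate, the longest wins. Until one branch pulls ahead, nodes can
    disagree about which transactions are committed."""
    valid = [branch for branch in branches
             if all(block_is_valid(block) for block in branch)]
    return max(valid, key=len)  # ties resolve to whichever a node saw first
```

The 0.7/0.8 episode shows why the rule is ultimately social rather than purely algorithmic: the community coordinated to abandon what this function would have returned.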
Huw Fryer’s work is on whether we can use tort law to make owners liable for compromised machines. His example is the recent DDoS attacks on Spamhaus: open recursive DNS resolvers, plus networks that allow source-address spoofing, are basically at fault. Tort law should provide redress, but there are assorted problems and drawbacks with using it in practice. In questions: negligence depends on industry practice, so tort law can nudge ISPs towards filtering out forged packets once some of them start doing it, but it’s less good at promoting innovation. In a vote, the audience supported liability for ISPs (e.g. over open DNS resolvers) and software vendors (for obvious vulnerabilities like buffer overruns), but not for private users.
The last talk on Tuesday was from Qian Tang, on whether social information, and in particular peer esteem, can improve the supply of public goods, and in particular security. She collected data on spam volumes from the CBL and PSBL blacklists, plus mapping data from CBL and Team Cymru; she then created the website http://www.spamrankings.net and watched from March 2011 to January 2012 to see whether the experiment had any effect on spam volumes. She found a significant effect, peaking about four months after the start of the experiment; the average spam reduction in the four treated countries was about 16%. Questions raised doubts about the effect size given the website’s traffic; whether the effects should be measured by AS rather than by country; whether peer-pressure experiments will work where the affected firms are not breaking local social norms; and whether the effect might be due to the more targeted efforts of firms like Spamhaus, who use overlapping data.
The second day started with a panel on “Is the Market for Security Working?” First off was Jeff Brueggeman of AT&T, which sees the security market growing two to three times over the next few years, to maybe $40bn. So what are the challenges? First, the growing complexity of corporate IT, with staff accessing systems from home and from mobiles; second, different vendors and service providers ought to cooperate in serving a company’s needs, yet they compete; third, growing threats, from large-scale DDoS to insiders; and finally, it’s all global now.
Shane Tews of 463 Communications lost her office network for a week thanks to a virus brought in by a colleague, emphasising both the risks of BYOD and the reality of externalities. Yet there’s a lack of incentives for firms to adopt best practice, let alone actually work together. We have not yet caught up societally with the implications.
Nadya Bartol of the Utilities Telecom Council reports that utilities pretty much agree on what their challenges are. There is tremendous diversity in architecture, which compounds the workforce crisis: it’s hard enough for a utility to retain people who understand how its own systems work, let alone how to protect them. Also, formerly isolated systems are now hooked up to the Internet. She’ll be interested to see how the new executive order will be implemented.
In discussion: geeks collaborate OK, but once the lawyers get involved, everything stops. For example, networks don’t want to become liable for users’ copyright-infringing content. A more serious example: the IETF took ten years to do DNSSEC, and now ISPs want to break it; it was designed on the principle that the engineers all knew and trusted each other, but the world is now bigger, and firms want to redirect traffic for various business reasons. You might hope that regulation will help, but it’s mostly stick rather than carrot, and can only set a floor; it can’t incentivise people to meet the next threat. As for information asymmetries, might the CDC model provide some insight? AT&T thought about setting up a private-sector information fusion centre where data on cybercrime could be shared outside the government, which would enable things to be more flexible. But there’s an issue of who you tell, and what they do: if you know someone has the plague then the customs folks can stop them getting on a plane, but it’s hard to get users to do stuff.
Cormac Herley started the morning refereed-paper session by discussing collisions among attackers. Cybercrime has seemed for years to be easy money: maybe only one person in 100,000 falls for a simple lure, but if you send a million a day you can earn a living. However, the supply of victims is not endless, so what happens when a scam market saturates? Cormac models how the vulnerable population evolves over time, assuming that victims become immune after being attacked once. This leads to a Lagrange optimisation in which an attacker seeks to minimise the number of attacks per victim. With multiple attackers, a greedy approach is still best to begin with, but other strategies become better once the victim pool starts to thin out; eventually a random strategy is best. This is essentially the coupon collector’s problem. Competing attackers extract less value than a solo attacker, as they can’t coordinate so well. Questioners pointed out that in the real world some people are phished multiple times and some machines are infected with multiple pieces of malware.
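The coupon-collector arithmetic makes the saturation point concrete: hitting the last few victims by random attack costs far more than hitting the first half. A back-of-envelope calculation, not Cormac’s actual model:

```python
def expected_attacks(N, fraction):
    """Expected number of uniformly random attacks needed to hit a given
    fraction of an N-victim pool, when each victim becomes immune after
    being attacked once (the coupon collector's problem)."""
    return sum(N / (N - k) for k in range(int(fraction * N)))

N = 100_000
for f in (0.5, 0.9, 0.99):
    print(f"{f:.0%} of the pool: ~{expected_attacks(N, f):,.0f} attacks")
# The tail is brutally expensive: going from 98% to 99% coverage costs
# about as many attacks (N*ln2) as reaching the first 50% did.
```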
Alan Nochenson’s subject today is timing in security decisions. In the FlipIt game, an attacker and a defender can each silently take control of a resource at any time, but at some fixed cost; the defender doesn’t know if or when she was compromised until she next takes control. She wants to move right after her opponent, but not too often. Alan ran experiments to see whether people could work out an optimal strategy: 300 MTurkers played six quick rounds against a periodic opponent under six treatments. Three-quarters figured it out and earned more than random play would have, but it was a lot harder than expected. People don’t adapt as well as predicted, though those with a higher need for cognition did better. More information is not always better, and there are interesting effects in subsets of people (such as by risk propensity). We need to model learning behaviour during conflict more carefully.
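For the curious, here’s a toy simulation of a periodic-versus-periodic FlipIt setup, with unit resource value and a per-move cost; the parameters are illustrative, not those used in Alan’s experiments:

```python
import random

def flipit_payoff(defender_period, attacker_period, move_cost,
                  horizon=10_000.0, trials=200):
    """Average defender payoff rate: fraction of time in control, minus
    move_cost per flip per unit time. Both players flip periodically with
    a random phase; the defender starts in control."""
    total = 0.0
    for _ in range(trials):
        d_next = random.uniform(0, defender_period)
        a_next = random.uniform(0, attacker_period)
        owner, t, control, flips = "D", 0.0, 0.0, 0
        while True:
            t_next = min(d_next, a_next)
            if owner == "D":
                control += min(t_next, horizon) - t
            if t_next >= horizon:
                break
            t = t_next
            if d_next <= a_next:   # defender flips (costs even if wasted)
                owner, flips = "D", flips + 1
                d_next += defender_period
            else:                  # attacker silently takes over
                owner = "A"
                a_next += attacker_period
        total += control / horizon - move_cost * flips / horizon
    return total / trials

# Sweep the defender's period to approximate her best response:
for period in (1.0, 2.0, 4.0, 8.0):
    print(period, round(flipit_payoff(period, attacker_period=3.0,
                                      move_cost=0.5), 3))
```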
Stuart Schechter reminded us how primitive the world was in 1978, when Bob Morris and Ken Thompson published their seminal paper on weak passwords. Can composition rules make passwords “very safe indeed”, as they claimed? Morris and Thompson also made it taboo to look at users’ passwords, by popularising password hashing. Big data came to the rescue when the RockYou hack published 30m passwords. We find that “P@ssw0rd” accounts for 0.88% of entries, despite following the Morris-Thompson rules of using upper, lower, numeric and special characters; in entropy terms, such rules were off by the square root. Most password strength meters have obvious vulnerabilities, leading us to train users to do dumb stuff. He showed a “reactive proscriptive” approach that tries to guide user selection by predicting the next character, based on compromised datasets like RockYou, keyboard patterns and natural-language models. To improve such systems, he wants to collect zillions of passwords from volunteers, and argues that the technology now exists to do this safely. He discussed possible choice architectures, such as letting people opt out of password collection but limiting the benefits to those who don’t.
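A minimal sketch of the next-character-prediction idea behind such a meter, using a bigram model with crude add-one smoothing; the corpus file name is a placeholder, and a real meter would use much richer models:

```python
from collections import defaultdict
from math import log2

class BigramMeter:
    """Score a password by how predictable each character is given the one
    before it -- a crude proxy for guessability, unlike the naive
    character-class 'entropy' that composition rules assume."""
    def __init__(self, corpus):
        self.counts = defaultdict(lambda: defaultdict(int))
        for pw in corpus:
            for prev, ch in zip("^" + pw, pw):  # "^" marks start-of-password
                self.counts[prev][ch] += 1

    def surprisal_bits(self, password):
        bits = 0.0
        for prev, ch in zip("^" + password, password):
            total = sum(self.counts[prev].values()) + 1
            bits += -log2((self.counts[prev][ch] + 1) / total)  # add-one smoothing
        return bits

# corpus = open("rockyou.txt", encoding="latin-1").read().split()  # placeholder
# meter = BigramMeter(corpus)
# meter.surprisal_bits("P@ssw0rd")  # scores low despite passing composition rules
```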
Hadi Asghari has been studying the incentives in the CA value chain, and wondering what can be done about them. He used the EFF SSL observatory data from 2010: 1.5m certificates from 140 organisations (which had issued at least 500 seen certificates) in 54 jurisdictions. The market is 51% DV certs, 46% OV and 3% EV. Symantec owns four brands with 40% of the market, while GoDaddy has 22% and Comodo 10%. Symantec is among the most expensive at $150; few firms use the many low-cost vendors, and the average price is $81. With OV certs, Symantec/Verisign is the most expensive and yet has the biggest market share; it also has 60% of EV certs. Yet digital certificates are nearly perfect substitutes, buyers cannot tell which certs are better, and there’s the weakest-link property whereby any bad CA can wrongly issue a cert for any firm’s site. How do the vendors do it? They bundle security services like malware scans; they provide enterprise services such as multicurrency billing; and Verisign’s brand reputation is a liability shield, as unlike Diginotar they won’t get removed from the browser. He is pessimistic about the proposed EU e-id regulation because of these factors and because of the weakest-link problem. Technical fixes like pinning look more promising, as they give benefits to early adopters regardless of CA cooperation; and that’s just as well, as the market leaders benefit from the existing market concentration.
Michael Wellman is interested in when non-malicious participants in a protocol will execute it faithfully, which he interprets as a predicate on network behaviour and explores with empirical game-theoretic analysis (EGTA). As an example, introduction-based routing (IBR, Frazier 2011) effects global introductions based on local reputations. He looks for role-symmetric Nash equilibria: he identifies maximally complete subgames, does an equilibrium search, and prunes those subgames refuted in the full game. He ran 750K simulations of a 5K-node network as a 6-player game and found that servers always complied, while other principals complied often enough that the servers were protected; with an 8-player game he found clients and servers both complied. This provides a new tool for exploring complex incentive issues in network protocols.
Arman Khouzani talked on incentive analysis of bidirectional filtering. Is it effective to ask for more egress filtering, or will more filtering lead to more free-riding? He models ISPs that randomly re-evaluate whether to purchase, enable or disable ingress and egress filtering, based on the rate of intrusion attempts and the contingent expected utility; success probabilities depend on filtering at both the attacker’s and the target’s ISPs. It turns out that as adoption grows, so does the incentive to free-ride; there’s a unique and stable equilibrium level of adoption, but it is always less than the socially optimal level (where everyone is better off). In fact, egress filtering lowers the adoption level! He suggests regulation to require that firewalls do both ingress and egress filtering, to minimise the social price of shortsightedness. Other results include that ISP penalties can improve social welfare if they are transferable, and that regional (as opposed to global) regulation can undermine social welfare.
Blase Ur compared 3,422 financial institutions’ privacy practices. In 2009 eight federal agencies devised a model privacy form for FIs to report compliance with Gramm-Leach-Bliley and enable consumers to compare policies; the form has defects, and is optional, but it’s a start. He googled the FDIC’s directory of 7,072 FIs to find the forms. There are some consistent patterns: most banks use personal data for their own marketing (and with affiliates) but don’t share with non-affiliates for marketing. Most won’t share with affiliates for credit reporting, as the US Fair Credit Reporting Act requires them to offer an opt-out if they do. Beyond that, there was surprisingly great diversity in practice between different types of FI. The largest banks share very much more; other significant factors included geographic location (midwestern banks share less than those on the west coast or in the northeast) and the number of bank branches. There’s more detail in the paper.
Stephane Grumbach had been unwell and didn’t make it to WEIS, so his talk was given by Allan Friedman. The USA dominates the web with 72 of the top 100 websites; China has 16 and Russia 6. But what about the “invisible web”, the surveillance infrastructure of bugs, tags and beacons that drives analytics? Stephane used Ghostery and Adblock to survey the web from proxies in 37 countries, and found that 87% of trackers are US-based. America controls the invisible web even more than the visible one! In fact the only country without majority-American tracking is Russia.
Alessandro Acquisti also couldn’t make it to WEIS, but gave his talk by video link from Edinburgh. He reported a two-year experiment on hiring discrimination via online social networks. There are certain types of information you’re not supposed to ask for in interviews, or use in hiring decisions, such as religion; various federal and state laws block information about family status, religion, sexual orientation and so on. Yet this information is often available on Facebook. Surveys show firms search on candidates; but are they breaking the law and using what they find to discriminate? Following Bertrand and Mullainathan’s famous resume study, “Are Emily and Greg more employable than Lakisha and Jamal?”, he did two field studies: a pilot on MTurk and a resume study of 5,000 firms. He created 10 unique names, each with an associated resume and social media presence; a lot of work went into devising realistic profiles and verifying that the MTurkers could draw the desired inferences while pretending to be HR people. About one in four employers searches on candidates. Significant discrimination in the raw data arises only for a subset of the manipulations; with some more processing (controlling for state and conditioning on searching) he can find more evidence. His conclusion is that public disclosures of legally protected information raise significant issues of privacy in practice.
The rump session chair, Stuart Schechter, spoke briefly on “Enforcing time quotas on discrete chains of loquacious speakers” as a mechanism design problem: speaker i+1 may interrupt speaker i the moment his time expires, and if he fails to do so, speaker i+2 may interrupt five seconds later, thereby also booting speaker i+1 to the end of the queue. The enforcement mechanism was demonstrated, when Stuart ran over, by the second speaker:
Tyler Moore announced the APWG eCrime Research Summit on Sep 17-18 2013 in San Francisco, which he’s chairing, and invited submissions by July 12. See http://ecrimeresearch.org/events/eCrime2013/. He is also recruiting research students for Fall 2013, and offers a distance-enrolment security economics course the same semester.
Jeremy Epstein of the NSF gave a talk titled “Take my cash please (but be sure to call it, please, research)”. SaTC is the NSF’s flagship research program in cybersecurity, and they expect to put out a solicitation for $70m covering trustworthy computing; social, behavioral and economic aspects; and cybersecurity education.
Vaibhav Garg’s title was “Cars, condoms and information security”. People with seatbelts drive faster, people with ABS drive closer to the car in front, and people with condoms engage in riskier sex. What worked to cut car fatalities was pushing down on drunk driving, while cutting STDs needs education as well as condoms. So what sort of risk compensation happens in infosec, and what should we be doing about it?
Eliot Lear announced a workshop being planned by the IAB at Cambridge University in December on “Technology Diffusion and Adoption”. With the 7000th RFC about to come out, what should the IAB and IETF be doing to ensure that protocols succeed? There is particular interest in http2 and in the transition to IPv6. The official call for papers should be next week.
Robin Dillon-Merrill talked on near-misses. For example, if you lose a laptop and all your laptops are encrypted, that’s not a near miss; but if the one you lost happened to be the only one that was encrypted, it is. It’s rather context-sensitive.
Soeren Preibusch talked on “Fairly truthful personal data disclosure”. 2360 mTurkers filled in a financial questionnaire which contained some unexpectedly sensitive questions and asked them later whether they’d been truthful; subjects were more truthful with questions they thought were fair (in the sense of being relevant to credit scoring).
His second rump talk was “Sign-up or give-up”. People opted in to having their browsing behaviour recorded; two billion sessions were recorded and he observed whether people jump a website sign-up hurdle. It turns out to depend on what people are looking for: they will go through the hurdle a lot more for storage services than for support or help pages.
Nicolas Christin asked whether everyone had received Viagra spam (yes) and whether anyone had ever bought any (no); so how do online pharmacies survive? He has a paper on pharma economics with Tyler Moore coming out next week, titled “Pick your poison”. There is actually quite a lot of customer demand, which you can measure from inventories. They compared 265 unlicensed pharmacies with 265 blacklisted ones; the former are cheaper and have different inventories. There is concentration in suppliers.
Marie Vasek looked at WordPress and Joomla to see whether content management systems matter; they are heavily targeted by criminals for both phishing and search-redirection attacks. She found that doubling the number of servers running a platform multiplies the odds of its being hacked by 1.09. Curiously, old versions of WordPress are hacked rather less than one might expect.
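To unpack what an odds ratio of 1.09 per doubling means in probability terms, here’s a quick sketch assuming a simple logistic model on log2 of the server count; the baseline probability is invented for illustration, not a figure from the paper:

```python
from math import log2

def hack_probability(servers, base_prob=0.01, or_per_doubling=1.09):
    """Logistic reading of the result: each doubling of the server count
    multiplies the odds of compromise by 1.09. base_prob is an invented
    baseline for a single-server platform."""
    odds = (base_prob / (1 - base_prob)) * or_per_doubling ** log2(servers)
    return odds / (1 + odds)

for servers in (1, 1_000, 1_000_000):
    print(f"{servers:>9,} servers -> P(hacked) ~ {hack_probability(servers):.4f}")
# Across six orders of magnitude the probability rises only about fivefold --
# popularity alone is a surprisingly weak predictor of compromise.
```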
Brian Smith-Sweeney is a security practitioner, fed up with companies’ home-grown risk assessment methodologies. They typically multiply “low, medium, high” by “frequent, infrequent” and get 6. It’s qualitative assessment with a quantitative veneer. How can we test such things? Could we present realistic information to a number of such systems and compare them?
Richard Clayton asked not to be blogged.
Andrei Robachevsky works for the Internet Society and wants help on routing security. The routing commons consists of everyone else’s network, and can be polluted by anyone else. The classical options of privatisation and government control are unattractive; can Ostrom’s model of cooperative management work?
Finally, Hadi Asghari is in search of an identity. He asked how we describe our work in an interdisciplinary field like ours. If he just says “I’m a cybersecurity researcher”, the instant question is “Are you a hacker?” If he says “economics of information security”, they ask “Are you an accountant?” If he says in Farsi that he works in security, people assume he’s a secret policeman or a spy. He got us to email in our self-descriptions, which scrolled up on screen in real time, such as “I help keep the Internet safe at night”.
After the rump session, the program chair announced that Jens Grossklags will be program chair for 2014.