I’m at the sixteenth workshop on the economics of information security at UCSD. I’ll be liveblogging the sessions in followups to this post.
The sixteenth workshop on the economics of information security was opened by Sadegh Farhang, whose topic was time-based security games. As an example, when an Ethiopian plane was hijacked to Geneva, the Swiss air force could not intercept it as they only fly in business hours. He builds a model of attack and defence timing, based on empirical evidence of incident and response timing from 5,856 incidents in Verizon’s data breach investigations report, including 439 malware cases and 1,655 hacking cases; of these, 325 have good enough timing data for analysis, with a mean of 198 days and a median of 60. Reaction time had a mean of just over 10 days and a mode of 2 days. These data inspire a two-player temporal game model to give insights into when the defender should prioritise protection, detection and reaction. It turns out that there are four different cases of time priority ordering depending on the parameter ranges; the detection and reaction times are typically the most important. For example, if the defender is slow, the protection time dominates performance. The Nash equilibria of this game are explored. In questions, people raised issues around ops versus intel, asymmetric information, and noisy data in heterogeneous networks.
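By way of illustration, here's a toy Monte Carlo of that timing trade-off; the distributions and parameters are mine, not the paper's, and real incident data are much messier.

```python
# A toy Monte Carlo of the timing trade-off: illustrative parameters, not the
# paper's model. Attack and detection times are exponential; the attacker
# wins if the attack completes before detection plus reaction.
import random

def attack_success_prob(mean_attack_days, mean_detect_days, react_days, trials=100_000):
    wins = 0
    for _ in range(trials):
        t_attack = random.expovariate(1 / mean_attack_days)
        t_defend = random.expovariate(1 / mean_detect_days) + react_days
        wins += t_attack < t_defend
    return wins / trials

random.seed(0)
for react in (2, 10, 30):
    p = attack_success_prob(mean_attack_days=30, mean_detect_days=60, react_days=react)
    print(f"reaction time {react:>2} days -> attacker success ~ {p:.2f}")
```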
The second talk was given by Armen Noroozian, over Skype because of the new US visa policy towards certain nationalities. His topic was how we can evaluate security, building on concentration metrics (see Clayton et al, WEIS 2015) to deal with noisy data in heterogeneous networks. He starts from a standard attack model and uses “item response theory”, a statistical technique for assessing underlying attributes such as student ability from noisy data such as exam results. He uses WHOIS and passive DNS data to look at hosting provider performance, building on work such as this; his model fitted the empirical data with Bayesian parameter estimation, on the assumption that abuse counts were Poisson distributed. There was a small but very bad tail of providers with lots of abuse, and a larger number of providers with little abuse but large confidence intervals on the measurement: confidence that a provider is careless (or wicked) increases with the number of incidents. Other models are built for other aspects of the abuse data. Providers’ security performance turns out to have explanatory power; and the available feeds give more information about malware than about phishing. The lessons are that it’s worth studying the market as a whole, and using appropriate statistical tools; and that it can be easier to spot bad operators than good ones.
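The statistical idea can be sketched with a simple conjugate Gamma–Poisson model; this is a stand-in for the paper's item-response approach, and the provider names and counts below are hypothetical. Providers with many observed incidents get tight abuse-rate estimates, while those with few get wide credible intervals.

```python
# A toy Gamma-Poisson sketch of the estimation idea: hypothetical providers,
# not the paper's IRT model. Providers with few incidents get wide intervals.
from scipy.stats import gamma

providers = {                 # (incidents observed, thousands of domains hosted)
    "host_a": (400, 50),
    "host_b": (4, 50),
    "host_c": (4, 2),
}
a0, b0 = 1.0, 1.0             # weak Gamma prior on incidents per 1k domains

for name, (incidents, exposure) in providers.items():
    posterior = gamma(a0 + incidents, scale=1 / (b0 + exposure))
    lo, hi = posterior.ppf(0.05), posterior.ppf(0.95)
    print(f"{name}: abuse rate 90% credible interval [{lo:.2f}, {hi:.2f}] per 1k domains")
```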
The third speaker was Arrah-Marie Jo. She’s been studying the web browser market and worrying about whether market concentration is bad for end-user security. She models market concentration as a series of tournaments; where firms compete on security, market concentration can have a positive effect on the provided security level. But does this hold in practice? She uses patching time as a proxy for quality and analyses patch data going back to 2005, finding that concentration does indeed accelerate patching, although the effect becomes weaker as a firm becomes more dominant. Her explanation is that the browser vendors don’t monetize browsing directly but through a tied market, namely advertising. In questions, it was pointed out that Google, and by proxy Mozilla, care more about advertising than Microsoft or Apple do; and indeed it turns out they do patch more quickly, although the difference is not that large.
The final speaker of the first session was Sam Ransbotham, who’s been studying how security management affects events. He’s found that how actively firms manage open ports has a real effect on vulnerabilities; the former is a better proxy, he argues, for actual (as opposed to stated) security preferences. He has little time for the kind of survey where you put some students in a room and say “Pretend you’re an attacker” or “Pretend you’re a Fortune 500 CEO”. His data come from 133k daily observations of 33m events over 480 F500 firms; his observables are botnet activity, malware, potential exploits and unexpected communication. The more open ports, the more botnets, exploits and unexpected communications we see; ports have no effect, however, on malware. A lot of this appears to be driven by firm-specific effects. He tries a hidden Markov model of transitions between low-security and high-security states, and finds the transitions are sticky; firms tend to continue being secure or insecure. However, the more open ports a firm has, the more likely it is to fall from grace. Two big breaches during the period – the JP Morgan and Home Depot incidents – made firms in the same industry more likely to clean up by shutting down open ports. The paper’s contributions include modelling management and outcomes across many firms, modelling security as hidden state, and examining strategic responses.
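A toy simulation of the “sticky states” finding, with made-up transition probabilities rather than Sam's fitted model, shows how a higher chance of falling from grace translates into more firm-years spent insecure:

```python
# A toy "sticky states" simulation with illustrative transition probabilities,
# not Sam's fitted HMM: more open ports raise the chance of falling from grace.
import random

def simulate(years, p_fall, p_recover=0.10):
    state, history = "high", []
    for _ in range(years):
        if state == "high" and random.random() < p_fall:
            state = "low"                      # firm falls from grace
        elif state == "low" and random.random() < p_recover:
            state = "high"                     # firm cleans up
        history.append(state)
    return history

random.seed(1)
for label, p_fall in [("few open ports", 0.05), ("many open ports", 0.25)]:
    runs = [simulate(10, p_fall) for _ in range(1000)]
    share_low = sum(run.count("low") for run in runs) / (10 * 1000)
    print(f"{label}: ~{share_low:.0%} of firm-years spent in the low-security state")
```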
Olga Livingston is from the office of the chief economist at the DHS and talked on government perspectives: she wants defensible estimates of cybersecurity costs and benefits based on empirical analysis, and preferably open data. As part of the US CERT, she does have incident data, and can use and defend bottom-up Monte Carlo, even though there may be more elegant mathematical models. Federal figures don’t always help, as the cost of an incident is set to annual budget divided by number of incidents. Policy proposals may be attacked viciously, so they must be robust. Insurance data are of limited help; they always underestimate risk, as they are limited to certain risk types and bounded by claim limits. Also, one needs to map tactics, techniques and procedures (TTPs) sensibly. She welcomes research community input into the framework, which classifies attack types, losses, countermeasures, cleanup and recovery costs, etc. Some government agencies (such as NASA) have goodwill to factor in; others are already viewed negatively. One starting point was the Mitre Att&ck framework; ROI is estimated as potential losses avoided divided by defence investment. She is prepared to share more details with researchers who’re prepared to review it; contact her at Olga.Livingston@hq.dhs.gov.
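Her bottom-up Monte Carlo and ROI framing can be sketched in a few lines; the incident frequencies and loss distributions below are hypothetical placeholders, not DHS figures.

```python
# A bottom-up Monte Carlo sketch of the ROI framing, with hypothetical
# frequencies and loss distributions rather than DHS figures.
import numpy as np

rng = np.random.default_rng(0)

def expected_annual_loss(freq, loss_mu, loss_sigma, trials=50_000):
    counts = rng.poisson(freq, trials)                         # incidents per simulated year
    losses = rng.lognormal(loss_mu, loss_sigma, counts.sum())  # a severity for each incident
    return losses.sum() / trials

baseline = expected_annual_loss(4.0, 11.0, 1.0)       # ~4 incidents/year, median loss ~$60k
with_controls = expected_annual_loss(1.5, 11.0, 1.0)  # controls cut incident frequency
investment = 250_000
print(f"ROI ~ {(baseline - with_controls) / investment:.1f}x")
```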
Andrew Stivers is the deputy for consumer protection at the FTC’s bureau of economics. The FTC’s mission is to protect consumers by protecting markets; Section 5 has led to about 500 cases over the last decade, where firms made untrue claims that harmed customers or where unfair practices harmed consumers who had no choice. They have 23 PhD economists and established work streams in ads, other marketing practices, privacy, and injury analysis. His position paper takes a standard information-economics approach, as the FTC Act doesn’t let them take a rights-based one. They’re interested in both outcomes (who gets what data, price and product offers, crime and other harms) and process (privacy policies, data breaches etc) to enable consumers to make choices that stick. The biggest change with tech is the persistent follow-on effects that come from data, and then the external effects on other parties. Information asymmetry can invite entry by low-quality firms and a race to the bottom. The immediate cost to consumers is complicated by the fact that the firms themselves have inadequate data: they under-invest in protection and reporting. Outcomes can vary by consumer group, e.g. the old, the poor and children, making policy still harder. Policy tools range from education up through enforcing truth-telling on privacy policies (which the FTC does a lot), monitoring data practices (which the FTC can’t do in practice), mandating privacy policy disclosure (which the FTC does for children and financial institutions) and setting standards directly. The FTC is organising a privacy economics conference next year, for which Andrew solicits papers; he may be contacted on astivers@ftc.gov.
Erin Kenneally is program manager at DHS’s cybersecurity division where she directs R&D on cyber-risk economics (CyRiE). She tries to let the pain points and capability gaps of her stakeholders drive the research agenda. This includes not just understanding ways to deal with externalities, the value of liability, targeted versus collateral damage, regulation versus experience sharing, but also tech transfer – helping security innovators cross the valley of death to deployment. The high-level goal is to improve decision making, taking both rational-actor and behavioural approaches. The themes are how investments are made, what impact they have, the value of cyber and business risk, and the incentives needed to optimise risk management. The history includes the 2010 NITRD report, the 2013 cybersecurity incentives study and the 2016 cybersecurity R&D plan. Operationalising the vision includes funding products, modelling the value of stolen information, understanding cybercrime, and pulling it together into a concept of operations as in the CyRiE green paper, studies of how regulation affects outcomes, and similar studies of insurance, liability and organisational behaviour particularly across diverse supply chains. Finally she collects data at ImpactCyberTrust and makes it available to researchers. She can be contacted at Erin.Kenneally@hq.dhs.gov.
Monday afternoon’s sessions started with Platon Kotzias presenting an Analysis of Pay-per-Install Economics. Commercial pay-per-install (PPI) services ship a lot of potentially unwanted programs (PUP), ranging from aggressive marketing apps to downright crimeware. He’s been studying the PUP ecosystem; players churn companies to get new code-signing certificates after old ones are revoked, and PUP publishers are often in bed with PPI services and download portals. He’s looked closely at 3 clusters of companies in Spain that are in the top 15 worldwide, building the graph of people and companies so he can work out who’s earning how much. Most PUP companies share addresses, have no employees, revenue or web presence, and are created in batches; they get code-signing certs, and don’t seem to do much else. One person can run 20–30. The operating companies can have revenue in the tens of millions and income in the single millions; the main revenue source is PPI, and most of it comes from outside the EU. The companies all claim that 90% or so of their revenue goes on “other expenses”, and all have been suffering revenue falls since June 2014, when Symantec, Microsoft and Google all started flagging or blocking such operations. (Some companies had declared such action by big firms in their risk register.) At present the firms still have revenue of Eur 202.5m and income of Eur 23m. In questions, Platon admitted he had no idea why the firms declared such large expenses.
Next was Ryan Brunt, analysing a payment intervention in a DDoS-for-hire service. Booter services account for a sizeable chunk of global DDoS traffic and offer DDoS for a few dollars; they cause quite a lot of nuisance. PayPal therefore started cracking down on booter accounts, leading them to shift to bitcoin. Ryan has leaked data: a backend database for the vDOS website showing registered users (75,000, with 10,000 paying for attacks); $600,000 of revenue over 2 years; and 270,000 victims of 900,000 attacks. In the middle of this two-year period, PayPal became unavailable. PayPal dominated revenue in July 2015 but had almost completely disappeared by September; up till then the service was on a growth trend, and afterwards it was in a steady decline. The $30k of PayPal and $100k of bitcoin before became $29k of bitcoin plus a hodgepodge of other, mostly card-based, channels that evaporated over the rest of the year. Only 300 users actually switched from PayPal to bitcoin. Attacks follow revenue with about a one-month lag; they fell about 20% (though of course the customers may just have gone to other booters). One possible conclusion is that as bitcoin becomes more prevalent and easier to use, it may become harder to move against online crime.
James Hamrick has been exploring price manipulation in the bitcoin ecosystem. Two suspicious periods in the exchange rate can be traced to a single suspicious actor on Mt Gox in 2013 and 2014. His contributions are ways of identifying suspicious activity and analysing its effect on bitcoin. Mt Gox dominated bitcoin trading until 2013, when smaller exchanges took off; eventually, in early 2014, it declared bankruptcy. Its trading records from April 2011 to November 2013 were leaked in 2014; as most exchange trading isn’t recorded on the blockchain, this gives a unique insight. The Willy report identified a suspicious trader, Markus, who bought $76m worth of BTC in 2013 without paying transaction fees, paying seemingly random amounts per BTC (from under $1 to over $100,000). He wasn’t actually paying for them but selling them back to Mt Gox for double or more. The second suspicious trader, Willy, had 49 accounts with abnormally large userids that bought $112m. This created a fiat deficit, as he didn’t balance the database; it also created an artificial BTC surplus, in effect turning the exchange into a Ponzi scheme. Was it pump and dump, or was the Ponzi created by hacker exploitation of bad coding? Was the Mt Gox operator Mark Karpelès trying to cover losses? When active, Markus and Willy were often buying 20% of daily trades. Willy’s trades helped drive up the bitcoin bubble, while Markus’s didn’t. Within three months of the fraud, Mt Gox collapsed and the price of bitcoin fell by half. He argues that exchange operators should share trading data with regulators, and their opsec should also be audited. In questions, he confirmed that the total amount of BTC in these trades was very close to the amount Mt Gox claimed to be missing. However, James doesn’t have the Mt Gox order book, so can’t trace it in detail.
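For flavour, here's the kind of filter one might run over such leaked logs to surface Willy/Markus-style anomalies; the column names and thresholds are hypothetical, not taken from the leaked database.

```python
# The sort of filter one might run over leaked exchange logs to surface
# Willy/Markus-style anomalies; column names and thresholds are hypothetical.
import pandas as pd

trades = pd.DataFrame({
    "user_id": [101, 102, 999999, 999999, 103],
    "btc":     [0.5, 2.0, 300.0, 250.0, 1.0],
    "usd":     [60.0, 240.0, 9.0, 2_500_000.0, 120.0],
    "fee_usd": [0.3, 1.2, 0.0, 0.0, 0.6],
})

unit_price = trades["usd"] / trades["btc"]
suspicious = trades[
    (trades["fee_usd"] == 0)                      # never pays transaction fees
    | (unit_price < 1) | (unit_price > 100_000)   # absurd price per BTC
    | (trades["user_id"] > 500_000)               # abnormally large user ids
]
print(suspicious)
```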
Sriram Somanchi’s paper reports a field study on the Impact of Security Events and Fraudulent Transactions on Customer Loyalty. Many regulations force firms to compensate the customer for some losses, but never for all. However, customers don’t know everything and may have limited choice; so is there any way to measure the effect that breaches have on customer behaviour? He has assembled a dataset of 500,000 bank customers over 5 years with 20,000 unauthorised transactions, whose victims turn out to be 3% more likely than others to move their business elsewhere. Previous work has looked at the impact of breaches, whether on the firm’s stock price or on the users, but not on customer loyalty (apart from the survey by Hoffman and Birnbrich). He’s run a proportional hazard model and found, for example, that customers are less likely to switch out if they are in an area where the bank is dominant. The quit rate comes back to normal after 9-12 months; and in cases where the customer withdraws a complaint there’s no effect (presumably because they found out that the transaction was by a family member). However, unauthorised transactions that are refunded will also make customers more likely to terminate, especially if they have used the bank for a long time. Future research might investigate when customers blame the merchant as well, or instead, and try to design better interventions.
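The proportional-hazards analysis can be sketched on synthetic data; this is a minimal illustration using the lifelines package, not the bank dataset or Sriram's actual specification.

```python
# A proportional-hazards sketch on synthetic data (not the bank's), using the
# lifelines package: does being a fraud victim raise the hazard of leaving?
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5000
fraud = rng.binomial(1, 0.04, n)              # 4% suffer an unauthorised transaction
hazard = 0.02 * np.exp(0.3 * fraud)           # victims churn somewhat faster
time_to_exit = rng.exponential(1 / hazard)
df = pd.DataFrame({
    "months": np.minimum(time_to_exit, 60),   # 60-month observation window
    "left_bank": (time_to_exit < 60).astype(int),
    "fraud_victim": fraud,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="left_bank")
cph.print_summary()   # the fraud_victim coefficient should come out close to 0.3
```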
Eric Jardine’s theme was Sometimes Three Rights Really Do Make a Wrong – or how aggregation bias can lead us to base policy on the wrong measures. Cybersecurity trends appear negative, but this is largely driven by big outliers; vendor data have their own problems, from missing data through biases in collection to a failure to normalise for the growth in Internet traffic. As a cautionary tale, the Reagan-era “Nation at Risk” report on education warned of rising mediocrity, yet disaggregated trends showed improvements in every quintile; the “problem” was that the bottom quintile was starting to take the SAT. Might cybersecurity trends be positive if we disaggregated them in the right way? A first step is to consider savvy users, naive users and IoT devices separately; the first have peaked at 1bn, the second have grown to 2.5bn over ten years and the third to 25bn over about six. He built a model of how this might work, and warns against all the lurking confounders that might be tripping up cybersecurity researchers. Basically, we need more fine-grained data if we’re to avoid these traps.
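The aggregation trap is easy to demonstrate numerically; in this toy example (counts and rates are illustrative, loosely inspired by the figures above) every group's infection rate falls while the aggregate rate rises, purely because the population mix shifts.

```python
# A toy illustration of the aggregation trap: every group's infection rate
# falls, yet the aggregate rate rises because the mix shifts towards naive
# users and IoT devices. Counts and rates are illustrative only.
groups_then = {"savvy": (1.0e9, 0.02), "naive": (0.5e9, 0.10), "iot": (0.1e9, 0.20)}
groups_now  = {"savvy": (1.0e9, 0.01), "naive": (2.5e9, 0.08), "iot": (2.5e9, 0.15)}

def aggregate_rate(groups):
    total = sum(n for n, _ in groups.values())
    infected = sum(n * r for n, r in groups.values())
    return infected / total

for label, groups in (("then", groups_then), ("now", groups_now)):
    rates = {name: rate for name, (_, rate) in groups.items()}
    print(f"{label}: per-group rates {rates}, aggregate rate {aggregate_rate(groups):.3f}")
```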
Fabio Massacci was next with The Work-Averse Cyber Attacker Model. We’ve had attack models for 34 years since Dolev-Yao; we’ve had random oracles, honest-but-curious adversaries and untrusted clouds on the CS side, and strategic models on the game theory side. Fabio is sceptical; if all possibilities were exploited with equal probability we’d find attacks at maximum entropy, but we don’t. The data show that attackers switch from one vulnerability to another only when they have to! His key idea is that attackers are work-averse because crafting attacks is an engineering process that costs money. He builds on Stokey’s logic of inaction and Symantec’s WINE database. The optimal time to refresh an attack can be cast as a dynamic optimisation problem; a stochastic programming solution can be found under some assumptions (such as negligible malware maintenance costs) that may or may not be reasonable. Symantec’s dataset has telemetry on 130m machines; it was quite complex to extract a de-identified database for analysis that still has usable repeat-victimisation characteristics. The data analysis confirms attacker laziness in a number of interesting ways; bulk attackers who aim to compromise many users rather than one targeted user are particularly likely to be work-averse.
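The work-averse intuition can be cast as a toy renewal problem; this is my simplification with made-up numbers, not Fabio's stochastic programme.

```python
# The work-averse intuition as a toy renewal problem (my simplification, not
# Fabio's stochastic programme): an exploit's daily yield decays as machines
# get patched, a replacement costs money, and the attacker refreshes at the
# interval that maximises average profit per day.
import math

def avg_profit_per_day(T, v0=1000.0, decay=0.01, craft_cost=20_000.0):
    cumulative_yield = v0 * (1 - math.exp(-decay * T)) / decay   # earnings over T days
    return (cumulative_yield - craft_cost) / T

best_T = max(range(30, 2000, 10), key=avg_profit_per_day)
print(f"optimal refresh interval ~ {best_T} days, "
      f"average profit {avg_profit_per_day(best_T):.0f}/day")
```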
Orçun Çetin wants to Make Notifications Great Again. How can a security researcher get hold of resource owners at scale? The abuse@ address is supposed to work under RFC 2142, as is the registrant email field in WHOIS. But the domain owners have the stronger incentive to clean stuff up. And can you get people’s attention by sending a link to a demonstration rather than just a plain email? He built a website to enable people to check whether their nameserver is vulnerable to domain poisoning, where it allows non-secure dynamic updates. He ran sequential campaigns to contact nameserver operators, then domain owners, then network operators. In the first campaign about 70% of emails bounced; in the second, about 40%; only network operators were really reachable, but they were furthest away from the resource. The demo didn’t improve things significantly; about 10% visited the website (but those who did cleaned stuff up better). What might work: non-email notifications, legal or blacklisting threats, trusted advice sites? We need new ideas. Questioners suggested notifying end users and automated remediation, and discussed whether we can find any options that work at all for naive website operators.
The last talk of the day was mine, on Standardisation and Certification of the Internet of Things. The paper reports a project for the European Commission into what happens to safety regulation once we’ve got software in everything; the project taught us that the maintainability of software is set to become an even worse problem once people expect that durable goods such as cars and medical devices will be patched regularly, just as phones and laptops are now. I’ve already blogged the paper here.
Min-Seok Pang started Tuesday’s sessions with a talk on Security Breaches in the U.S. Federal Government. The IRS has a 56-year-old system (the Individual Master File), while the DoD’s Strategic Automated Command and Control system is 53. To come across COBOL systems you can go to a computer history museum, or to Washington DC! The OPM system that leaked 22m clearance files was in fact too antiquated to encrypt social security numbers. Are these systems more secure (as young hackers don’t understand old languages) or less (from lack of modern protection mechanisms, and accumulating cruft)? In short, does “security by antiquity” work? Min-Seok analysed the FISMA reports to Congress, finding 96 relevant incidents over four years; it turned out that systems rated as less secure by the inspector general had more security incidents where the maintenance cost was higher. Privacy Rights Clearinghouse data suggested that maintenance spending had been more effective in recent years, perhaps because of cloud migrations undertaken for cost-saving purposes (more cloud spending meant fewer incidents).
Sung Choi was next, discussing whether hospital data breaches reduce the quality of patient care. In 2011-5, 264 hospitals reported data breaches, and a breach triggers remediation expenses, litigation and regulatory inquiries; so do breaches harm patient outcomes? He compared readmission rates and 30-day mortality rates for heart attack patients (as their admissions are unscheduled) for 2,619 acute-care US hospitals, using the DHHS breach database. Over the period, mortality declined steadily by 0.34% to 0.45%, but the decline was arrested in breached hospitals, costing them about a year’s progress. In numeric terms, mortality went up about 0.5%, recovering after about four years. One possible causative factor is that average wait times before treatment increase by about 0.2 days. Sung concludes that breaches have a material effect on care, and breached hospitals should carefully consider the implications of any new procedures for patient care.
Joseph Buckman was next, analysing repeat data breaches within firms. He looked at 2,488 Privacy Rights Clearinghouse breach records from 2010-6 (as data breach laws were stable by 2010) and applied a hazard model. Notification laws cut the interval between breaches, as do insider compromises and criminal charges brought against the firm. Regulated industries such as government, education, medical and financial were more prone to breaches, though education was more vulnerable to system hacks, government to insiders and medical to inadvertent disclosure. Repeated breaches were correlated: firms suffering system hacks, insider breaches or inadvertent disclosures tended to suffer the same thing again.
Fabio Bisogni has been estimating the size of the iceberg from its tip. Given the security breaches that are notified, how can we estimate those that aren’t? Some are notified but not reported; others detected but not notified; and others not detected at all. He used the ITRC list, plus 430 notification letters made available by attorneys general in four states, and set out to model the breach rate by state and sector. Reported breaches rise by a third where the AG publishes notification letters or where credit agencies must be notified. Government, finance, medical and education report 2, 4, 9 and 12 times more than retail; it appears that commercial firms are more concerned with reputational damage than with any financial penalties that may apply. Exposure times before notification vary, with the financial sector being best; this seems correlated with competence. Total notifications could be increased by 46% if all firms had to inform credit agencies and the state AG, and by 17% more if the risk-of-harm provision were eliminated. A deeper question is whether breach notifications will eventually become background noise, or whether they will continue to affect corporate behaviour.
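The iceberg logic amounts to dividing observed notifications by sector-level reporting propensities; the propensities below are entirely hypothetical placeholders, whereas the paper infers them from cross-state variation.

```python
# A back-of-envelope version of the iceberg estimate. The reporting
# propensities below are entirely hypothetical placeholders; the paper infers
# them from cross-state and cross-sector variation.
observed = {"retail": 100, "government": 80, "finance": 120, "medical": 300, "education": 90}
reporting_rate = {"retail": 0.05, "government": 0.10, "finance": 0.20,
                  "medical": 0.45, "education": 0.60}

estimated = {sector: observed[sector] / reporting_rate[sector] for sector in observed}
hidden_share = 1 - sum(observed.values()) / sum(estimated.values())
print({sector: round(total) for sector, total in estimated.items()})
print(f"estimated hidden share of breaches: {hidden_share:.0%}")
```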
Ying Lei Toh started the afternoon sessions with a talk on privacy and quality. Concerns that firms like Facebook and Google are abusing their dominant position have led to proposals for restrictions on data use; to what extent might these damage firms? In her model, a monopolist can invest in its quality level (which is also a function of shared data) and derives revenue from selling a share of the data it collects. At equilibrium, the firm underinvests in quality. The planner can cap the disclosure level; the firm can then invest in quality to induce consumers to share more information, giving it more data to sell. The outcome depends on the elasticity of user demand for the service; thus we can expect a trade-off between privacy and quality when an unregulated market is partially covered. A cap is socially desirable when a marginal reduction in disclosure increases welfare; various other cases are considered too. Overall, quality can either increase or decrease under a disclosure cap; whether there is a trade-off between quality and privacy depends on the detail.
Hasan Cavusoglu is interested in privacy uncertainty, and specifically in whether it can be a new market friction affecting physical goods with which apps are linked, in view of the substantial literature on uncertainty related to asymmetric information. He created a gaming app and set up an experiment manipulating the privacy notice at information collection, use and protection; he found that privacy uncertainty makes a significant contribution to willingness to buy. It turns out that post-purchase information asymmetry has more effect, and hidden information has less of an effect than hidden action or hidden effort. Privacy uncertainty may be particularly important in markets with fragmented sellers, where trust and reputation are harder to establish.
Alessandro Acquisti is researching online distractions. He’s noted that most modern privacy research is about informational self-determination; but what does the “right to be left alone” mean in a world of constant distractions? There’s a growing industry of self-help mechanisms, from RescueTime to the pomodoro technique, while many firms block access to distracting sites. Do any of them really work? One of the effects of blocking primary sites is that people may go to secondary sites to get their fix. So he ran a four-week experiment in which mTurkers installed Freedom on their phones and laptops. There were three groups: the control group’s installation blocked random websites; the first treatment group had Facebook and YouTube blocked for 6 hours per day; the second could use Freedom as they wished. Performance measures were earnings and completed HITs, as well as a standard proof-reading task at weeks 2 and 4. Treatment group 1 saw earnings rise from $80 to $100; there were no changes for the control group or treatment group 2. The effect was concentrated among medium users of social media. He’s planning longer-term experiments next.
Sasha Romanosky kicked off the last session with an analysis of the content of cyber insurance policies. He collected 69 cyber insurance contracts from state insurance commissioners, and analysed questionnaire terms, exclusions and premiums. Currently 500 firms offer policies, with annual premiums totalling $1-2bn; this is less than 1% of corporate insurance, with average premiums in the tens of thousands and limits in the tens of millions. Companies wanting nine-figure coverage get a tower of smaller policies. Most policies cover legal and PR costs; only a minority cover ransomware; none cover state action or terrorism. Common exclusions are crimes committed by the insured, seizure by the government and “intentional disregard for computer security”. Security questionnaires ask about 98 different topics, from the technical through the organisational (such as budgets for prevention, detection and response) to compliance. As for how they price risk, most insurers have little clue. Premiums for small companies seemed to be flat rate, perhaps 0.2% of the sum assured; for medium firms it might be 0.2% of turnover for $1m of coverage, perhaps with an extra 20% weighting for regulated industries. The insurers’ behaviour seems fairly reasonable in the circumstances, but anyone who thinks they have a magic insight into vulnerability is mistaken. In questions, it was pointed out that filings are often rather old; Sasha remarked that they dated from 2001-17.
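Those pricing heuristics can be written out as a worked example; this is my paraphrase of the rules of thumb Sasha described, not any insurer's actual rating model.

```python
# A worked example of the rough pricing heuristics described above; this is
# a paraphrase of the rules of thumb, not any insurer's actual rating model.
def rough_premium(sum_assured, turnover=None, regulated=False):
    if turnover is None:
        premium = 0.002 * sum_assured                           # small firm: flat 0.2% of the limit
    else:
        premium = 0.002 * turnover * (sum_assured / 1_000_000)  # 0.2% of turnover per $1m of cover
    return premium * (1.2 if regulated else 1.0)                # loading for regulated industries

print(rough_premium(1_000_000))                                       # small firm: ~$2,000
print(rough_premium(1_000_000, turnover=50_000_000, regulated=True))  # medium, regulated: ~$120,000
```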
Rui Zhang has been thinking about attack-aware insurance of interdependent networks. Cyber risks are not created by nature but by malicious and often targeted action over networks. Rui models the interactions between user, attacker and insurer as a game characterised by various saddle-point equilibria. He also considers how network effects and principal-agent effects can moderate risk levels.
The last speaker at WEIS was Julian Williams, who’s been working on self-protection and insurance with endogenous adversaries. Does it make sense to delegate some of the public-policy work on cyber-risk to insurers, as many have called for? He studies four games: no insurance, a regulator mandating minimum investment, actuarially fair insurance, and a monopolist insurer mandating minimum investment. Julian assumes that attackers can’t target, that utility is Von Neumann-Morgenstern, and that all targets start from a low level of security investment. There’s an insurance trap for the monopolist insurer which is basically a prisoners’ dilemma: once targets enter the trap, the insurer can price-gouge.
Dong-Hyeon Kim kicked off the rump session, talking about his work on North Korea’s cyber operations. Is it being rational, or is it an outlier? He suspects the former, working from case studies like Sony Pictures and the Bangladesh Bank heist.
Eric Jardine is starting to write a paper making the case against individual use of commercial anti-virus software. A survey he ran indicates that AV use is correlated with being hacked in the past year; might this be a moral hazard effect? He checked that 90% of the participants started using it before being hacked, and where they learned they’d been hacked. Neither of these was big enough to account for the effect. Audience members suggested other possible confounding effects.
Susan Landau discussed the FBI’s tussle with Apple. In her testimony she noted that Apple had deliberately made the iPhone secure to compete with Blackberry, in which it succeeded. She has expanded her testimony into a book for the general public, “Listening”, which will discuss investigations in the age of encryption.
Daniel Arce has been working on pricing anonymity, looking at coin-join transactions on bitcoin through the lens of cooperative game theory. All the players in such a game get some anonymity; he’s come up with a closed-form Shapley value solution, which should be empirically testable.
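For the curious, Shapley values for a small coin-join game are easy to compute by brute force; the characteristic function below is a made-up stand-in rather than Daniel's closed-form solution.

```python
# A brute-force Shapley computation for a small coin-join coalition game. The
# characteristic function (anonymity grows superlinearly with coalition size,
# minus a fixed mixing cost) is a made-up stand-in, not Daniel's closed form.
from itertools import permutations

players = ["A", "B", "C", "D"]

def value(coalition):
    k = len(coalition)
    return max(0, k * k - 2) if k >= 2 else 0    # a lone player gets no anonymity

def shapley(players, value):
    shares = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        so_far = []
        for p in order:
            shares[p] += value(so_far + [p]) - value(so_far)
            so_far.append(p)
    return {p: s / len(orders) for p, s in shares.items()}

print(shapley(players, value))   # symmetric players split the total anonymity gain equally
```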
Michel van Eeten is about to advertise a new tenure-track assistant professor position at Delft in cybersecurity measurement and analytics; contact him if interested.
Tyler Moore edits the open-access interdisciplinary Journal of Cybersecurity and invites submissions, including revised versions of papers that appeared at this conference.
Sanchari Das has been doing a usability study of Yubico security keys, working from either the Yubico site or the Google one. People had all sorts of issues: finding the right instructions, knowing when they’d finished registering because of the lack of confirmation, understanding that the dimple wasn’t a fingerprint reader, and so on. A third of the subjects couldn’t complete the demo, even with a researcher’s help. To figure out whether they were done, some people realised that they could go to incognito mode and try to log in. People were also worried about what would happen if they lost their keys. Overall, the process needs to be better engineered. Next, she proposes to repeat the study with older citizens rather than students, and to do it in situ in their homes.
Bahareh Sadat Arab is working on provenance and re-enactment: how can you not just explain how a particular payment or other transaction output was generated, but also redo it dependably? A lot of follow-on work is needed, such as figuring out which subsets of transactions to replay.
Luca Allodi infiltrates live dark markets, including Russian-language ones, unlike most research, which analyses English-language and dead markets only. The numbers of users and exploits rose steadily from 2011-6, with several genuinely new exploits every year, but it seems to take 6 months to 2 years to get them out after disclosure. Exploit prices are mostly in the $100-1,000 range, with some above that; a neat Microsoft Edge exploit was sold repeatedly for $8,000, while the bug bounty for reporting it to Redmond was only $15,000, so the markets are competitive after a fashion.
Sasha Romanosky is on loan from Rand to the DoD, where he is cyber-policy adviser to the Secretary of Defense. He’s responsible for the equities process and the vulnerability disclosure program. He invites everyone to find and submit vulnerabilities on DoD websites, but warns “there are restrictions on the extent to which you can validate vulnerabilities”; in particular, immunity won’t extend to people who go on to exploit and compromise systems. You can submit findings through the HackerOne platform.
Julian Williams is looking at high-frequency trading, and wondered whether people trade differently against robots, and whether nudges are effective in such environments.
Fer O’Neil is looking at the effectiveness of data breach notifications, and exploring the spectrum between technical and user-centred mechanisms. He’s trying to analyse the different genres of notification letter to make sense of them.
Stephen Cobb has been thinking about the huge differences in cyber-risk perception, and wondering whether the cultural theory of risk might help – identity-protective cognition, cultural cognition and so on. It’s known that white males perceive less risk across a wide range of hazards; where does cyber-risk fall?
Richard Clayton explained how the Cambridge Cybercrime Centre now has a lot of data available for cybercrime researchers – spam, phish, malware, botnet traffic and lots of specialised stuff. The goal is to see lots more cybercrime papers based on real data.
Dmitry Zhdanov’s research involves looking at dark markets and building cobweb models of price responses to supply and demand fluctuations as attack resources become exhausted with the emergence of countermeasures; these model the high price of zero-days and the variations in defender effort. Exploits can also be modelled as goods that go stale. He’s found that prices of exploits follow market cycles; exploit prices crashed in 2009. Finally, Dmitry announced that Georgia State will be hiring a number of cybersecurity faculty real soon now.
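A minimal cobweb iteration, with illustrative demand and supply parameters rather than Dmitry's fitted model, shows the kind of price oscillation such models produce:

```python
# A minimal cobweb iteration with illustrative parameters (not Dmitry's
# fitted model): sellers respond to last period's price, so the exploit price
# oscillates towards equilibrium when supply is less elastic than demand.
def cobweb(periods=12, a=100.0, b=1.0, c=10.0, d=0.8, p0=70.0):
    """Demand: q_d = a - b*p.  Lagged supply: q_s = c + d*p_prev."""
    p, prices = p0, []
    for _ in range(periods):
        supply = c + d * p          # supply decided on last period's price
        p = (a - supply) / b        # price that clears this period's demand
        prices.append(round(p, 1))
    return prices

print(cobweb())   # damped oscillations: 34.0, 62.8, 39.8, 58.2, ...
```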
Thank you Ross for consistent and concise abstracts of each talk. Hope to make it next year.