I’m at the 24th Security Protocols Workshop in Brno (no, not Borneo, as a friend misheard it, but in the Czech Republic; a two-hour flight rather than a twenty-hour one). We ended up being bumped to an old chapel in the Mendel Museum, a former monastery where the monk Gregor Mendel figured out genetics from the study of peas, for the prosaic reason that the Canadian ambassador pre-empted our meeting room. As a result we had no wifi and I have had to liveblog from the pub, where we are having lunch. The session liveblogs will be in followups to this post, in the usual style.
Giampaolo Bella was the first speaker, wondering how we could make security invisible. The obvious way is by integrating it with a useful system function; his first example was letting a waiter log on at any till in a restaurant, and his second was using the same password to log on to your laptop and to decrypt the hard disk. He then asked why we need boarding passes. The attendant matches (face, ID) for authentication, then (ID, boarding card) for authorisation, and finally scans the boarding card to see if you’re on the flight. So what is the boarding card for? What’s wrong with just relying on the passenger’s electronic passport, with its built-in biometric? My view was that it’s about reassuring the passenger, and perhaps making it easier for the steward at the top of the steps to check you’re on the right plane; Frank Stajano’s view was that it’s about stopping people swapping or reselling return tickets. In general, will making security invisible make it stronger?
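As a minimal illustration (mine, not Giampaolo’s, with hypothetical record types), here is the gate procedure as code; it makes visible that the boarding card adds no check that an e-passport plus the airline’s manifest couldn’t provide:

```python
from dataclasses import dataclass

@dataclass
class IdDoc:
    name: str
    photo: str          # stand-in for a facial template

@dataclass
class BoardingCard:
    name: str
    code: str

def gate_check_with_boarding_card(face: str, id_doc: IdDoc,
                                  card: BoardingCard, manifest: set) -> bool:
    return (face == id_doc.photo             # authentication: face vs ID photo
            and card.name == id_doc.name     # authorisation: ID vs boarding card
            and card.code in manifest)       # scan: is this pass on the flight?

def gate_check_with_epassport(face: str, passport_no: str,
                              biometric: str, manifest: set) -> bool:
    # the same three guarantees, if the airline keys its manifest on passport numbers
    return face == biometric and passport_no in manifest

assert gate_check_with_epassport("alice-face", "P123", "alice-face", {"P123"})
```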
Next was Hugo Jonker from the Open Universiteit of the Netherlands, who is interested in MITM attacks. Moxie Marlinspike’s SSLsniff of 2002 and SSLStrip of 2009 show the threat is serious, and contactless EMV is vulnerable; there’s analytic tool support such as mCRL, Scyther and Tamarin, and there are tricks like certificate pinning that work in some applications. More complex attacks like Poodle, Freak and Logjam combine MITM with downgrade; Drown is more intricate still. That’s four major practical attacks in 18 months. Also, the cops use Stingray devices to tap cellphones, and there are others (Gossamer, Triggerfish, Hailstorm). See the ACSAC 14 paper on detecting IMSI-catchers using non-security properties of the protocol, such as looking for neighbouring cellphone towers; also an arXiv paper on distance bounding being flaky. It’s also significant that many of the exploited flaws were not covered by the protocols’ security claims; so they are not “attacks” so much as poorly specified security requirements. One property worth more attention is whether the two parties agree about the observed context. My remark was that in my thesis in 1994, and at the protocols workshop in 94 or 95, I argued that robust security was about explicitness: putting all the possibly relevant context on the face of the protocol. Hugo argued that careful verification of the context is also important; why don’t cellphones check signal strength and raise an alarm if it’s suspicious? After all, Gmail will alert you if it thinks the local government is trying to MITM your logon. This led to a discussion about ambiguity; computing equipment tends to present the world as completely safe (as that’s in the supplier’s interest) while we have evolved to use multiple cues of threat and risk, such as the tone of someone’s voice and the tone of the neighbourhood. Should our systems convey to us a sense of vague menace when this might be appropriate?
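A toy version (my illustration, not from Hugo’s talk, with invented thresholds) of the kind of context check a phone could run: a tower that advertises no neighbours, or whose signal is implausibly strong for its claimed distance, is suspicious.

```python
def suspicious_cell(neighbour_count: int, signal_dbm: float,
                    claimed_tower_km: float) -> bool:
    # real towers advertise neighbour cells for handover; IMSI-catchers often don't
    if neighbour_count == 0:
        return True
    # a signal this strong from a supposedly distant tower suggests a fake nearby
    if claimed_tower_km > 1.0 and signal_dbm > -50:
        return True
    return False

# e.g. an IMSI-catcher parked nearby, masquerading as a distant tower:
assert suspicious_cell(neighbour_count=0, signal_dbm=-40, claimed_tower_km=5.0)
```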
This led naturally into the morning’s last talk by Bruce Christianson, whose message was that we’re getting better and better at solving the authentication problems of the 1970s while ignoring the real-world problems: systems that may be insecure, non-binary belief in propositions, and the incremental growth of trust. In the real world, how authenticated you need to be depends on what you’re trying to do; anyone in a hospital can override medical privacy to save a life, but you’ll need lots of paperwork to buy a house. A better starting point might be insurance, weighing the risk of “allow” against the opportunity cost of “deny”. People buy insurance, and also lottery tickets; in both cases they are happy to accept a suboptimal tradeoff because of risk preference, liquidity or whatever. To what extent could people spend cash or pledge reputation to get exceptional access? Civilians already have to spend money to get exceptional access via the courts, but governments legislate zero-marginal-cost access for themselves; how can this failure be prevented? Perhaps we can point out that throttling mechanisms would have limited the data that Manning or Snowden could leak. Also, monetising access control, so that you can pay for access, or pay less for sandboxed access, or whatever else the market comes up with, means you might have a more intuitive interface, track risk better and spot compromise faster. In fact, Caspar Bowden once recommended a tax on personal information; if every record of an identifiable individual attracted a sales tax of 1p whenever it was sold, most of the abuses would end. Privacy violators would become tax evaders and go to jail.
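A toy sketch (my numbers, not Bruce’s) of the insurance framing: grant access when the expected loss from “allow” is below the opportunity cost of “deny”, possibly after the requester pays a premium that prices in the residual risk.

```python
def decide(p_abuse: float, loss_if_abused: float, value_of_access: float,
           premium_paid: float = 0.0) -> str:
    expected_loss = p_abuse * loss_if_abused - premium_paid
    opportunity_cost = value_of_access       # what a "deny" decision forgoes
    return "allow" if expected_loss < opportunity_cost else "deny"

print(decide(p_abuse=0.001, loss_if_abused=100_000, value_of_access=500))  # allow
print(decide(p_abuse=0.05,  loss_if_abused=100_000, value_of_access=500))  # deny
print(decide(p_abuse=0.05,  loss_if_abused=100_000, value_of_access=500,
             premium_paid=4_600))            # allow: the risk has been paid for
```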
After lunch, Virgil Gligor turned to DDoS attacks, of which we see maybe 20,000 per day costing over $1m, according to Arbor Networks’ ATLAS. Traditional attacks were on endpoint servers; non-traditional versions include link-flooding attacks such as Coremelt, where bots talk to each other through an AS core. The first big one in the wild was on Spamhaus in 2013, where the adversary adapted to a move to Cloudflare by attacking its IXPs; then there was one on ProtonMail in 2015 where a dozen links feeding into ProtonMail were attacked. Recovery took a week and required collaboration with multiple ISPs; the adversary adapted, conducting in effect the first “moving target attack”. By 2014, 46% of ISPs had experienced link flooding of one type or another. With power grids, cellular services and emergency services depending ever more on the Internet, a ProtonMail-style attack is a real worry. Virgil studied Crossfire, a single-link version of Coremelt, in a 2013 paper; it can be used to cut off a small set of target servers by looking for routing bottlenecks. For example, you could cut connectivity to a typical US state by half just by flooding 15 or 20 links. The min-cut might be 1,000 links, but maybe ten of them carry 70% of the traffic; there’s a Zipf-Mandelbrot distribution. The defender’s dilemma is that you can choose any two of security, cost-performance and compatibility; he collected statistics and analysed them. Attack is an order of magnitude cheaper than defence here. He has a technical proposal for a defence that can use the temporary bandwidth extensions possible in some flavours of SDN. This will force the attacker either to pay a lot more, or to compromise their anonymity.
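A quick numeric sketch (parameters mine, chosen for illustration) of the Zipf-Mandelbrot skew Virgil described: with a plausible exponent, a handful of a 1,000-link min-cut carries most of the traffic, which is why flooding 10 to 20 links is enough.

```python
def zipf_mandelbrot_weights(n: int, q: float, s: float):
    # weight of rank k is proportional to 1 / (k + q)^s
    w = [1.0 / (k + q) ** s for k in range(1, n + 1)]
    total = sum(w)
    return [x / total for x in w]

weights = zipf_mandelbrot_weights(n=1000, q=2.0, s=1.8)
print(f"top 10 of 1000 links carry {sum(weights[:10]):.0%} of the traffic")
```

With these made-up parameters the top ten links carry roughly 70% of the load, matching the shape of the distribution he reported.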
Olgierd Pieczul noted that while we’re beginning to see more and more papers doing statistics on vulnerabilities, qualitative analysis is also needed to understand what’s actually going on. He’s been studying Apache Struts, which didn’t handle cookie data in a secure way from 2004 through 2015; a number of vulnerabilities allowed remote code execution. He looked at which code changes introduced which issue. Each successive vulnerability was a bit harder to exploit, but no fix was perfect; each left an ever-diminishing gap for later abuse. And some of the fixes were horrible (such as removing whitespace to stop people using constructors, and incomprehensible regular expressions that could not be properly maintained). This sort of research can teach us about developers’ blind spots. He also found that severity metrics such as CVSS are often wrongly assigned; vulnerabilities tend to get worse over time for a particular program as the code and its potential become better understood.
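An illustrative anti-pattern (my own simplification, not the actual Struts code): filtering hostile parameter names with an ever-growing denylist regex. Each fix blocks the last exploit while leaving a smaller gap; an allowlist of expected names is the robust alternative.

```python
import re

DENYLIST = re.compile(r"(^|\W)(class|constructor)(\W|$)", re.IGNORECASE)  # the "fix"
ALLOWLIST = re.compile(r"^[A-Za-z0-9_.]{1,64}$")                          # the robust fix

def unsafe_accept(name: str) -> bool:
    return not DENYLIST.search(name)          # blocks only what the author thought of

def safe_accept(name: str) -> bool:
    return bool(ALLOWLIST.fullmatch(name))    # accepts only names we expect

print(unsafe_accept("Class['x']"))    # False - the known attack is blocked
print(unsafe_accept("getClass().x"))  # True - 'class' inside a longer word slips through
print(safe_accept("getClass().x"))    # False - the allowlist rejects anything unexpected
```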
Jeff Yan started Thursday’s last session with a talk on camera fingerprints. CCD or CMOS camera sensors suffer from photoresponse nonuniformity (PRNU) and have systematic artefacts. Various tricks can be used to extract noise fingerprints for source camera identification, device linkage and image forgery detection; yet no-one knew which of the algorithms were good, as the experimental configurations and comparison metrics were different. This led to a controversial paper at IH&MMSec15. Now: as cameras are deployed everywhere, can we start building this into security protocols? What sort of leakage might occur as people post photos online? Perhaps you only allow people to enrol new cameras and have system mechanisms to confound PRNU for published photos. This could give good security and usability, but at the cost of perhaps needing software modifications; on the other hand, there’s the risk of Internet-scale privacy compromise from mass photo harvesting and fingerprint extraction (but note that for privacy reasons you need to scrub the metadata anyway before posting a photo online). In any case, photo fingerprinting can be used to track careless anonymous posters, as evidence in revenge porn cases, and more generally to complement stylometry in forensic investigations.
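The standard PRNU pipeline in miniature (a simplified sketch of the technique Jeff surveyed, not his code; real systems use wavelet denoising rather than the Gaussian blur I substitute here): estimate a camera’s fingerprint as the average noise residual of its photos, then match a query image by correlation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(img: np.ndarray) -> np.ndarray:
    # noise residual = image minus a denoised copy
    return img - gaussian_filter(img, sigma=1.0)

def fingerprint(images) -> np.ndarray:
    # PRNU estimate: average residual over many photos from the same camera
    return np.mean([residual(i) for i in images], axis=0)

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    # normalised cross-correlation; high value suggests the query came from this camera
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# synthetic demo: a fixed sensor pattern buried in per-photo scene noise
rng = np.random.default_rng(0)
cam_noise = rng.normal(0, 1, (64, 64))
photos = [rng.normal(0, 5, (64, 64)) + cam_noise for _ in range(50)]
query = rng.normal(0, 5, (64, 64)) + cam_noise
print(ncc(residual(query), fingerprint(photos)))   # noticeably above zero
```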
Brian Kidney was Thursday’s last speaker, and has been working on a novel “covert” channel, namely predictive text. Can a leaker be given away by the personalisation of the autocorrect function in her mobile phone or PC? The newer systems learn from their users and get to know a lot about them. Brian started off with a chosen-message attack inspired by IND-CPA game mechanics, measuring how many text snippets had to be entered into two predictors before you could tell them apart. He trained a recogniser on Twain and Austen from Project Gutenberg, to represent American vs British English, and “predicted” authors such as Kipling, Dumas, Doyle and Fitzgerald. This is just a proof-of-concept experiment, and it turns out that once you have over about 1MB of text to train the prediction engine, you can get 80% accuracy with 256KB of sample text. So it’s hard work, but there is some potentially exploitable information there, and the amount of information varies with the text (Twain was more recognisable than Austen). Also, Brian used simple HMMs, and perhaps deep learning might do significantly better.
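A toy version of the distinguishing game (mine; Brian used HMMs, not bigrams): train two word-bigram predictors on different authors, then see which better predicts a held-out snippet.

```python
from collections import Counter, defaultdict

def train(text: str):
    words = text.lower().split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1                 # count word-pair transitions
    return model

def score(model, snippet: str) -> int:
    words = snippet.lower().split()
    return sum(model[a][b] for a, b in zip(words, words[1:]))

def guess_author(model_a, model_b, snippet: str) -> str:
    return "A" if score(model_a, snippet) >= score(model_b, snippet) else "B"

a = train("the raft drifted down the big river past the town")
b = train("it is a truth universally acknowledged that a single man")
print(guess_author(a, b, "down the river past the town"))   # 'A'
```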
Khaled Baqer and I opened the batting on Friday with a talk on short message authentication protocols. We are building a prototype system, DigiTally, to extend the mobile-phone payment systems common in less developed countries to areas where the network is flaky or non-existent. As this has to work with the simplest phones, it’s done by copying short authentication codes from one phone to another. Our initial protocol adds a six-digit challenge and a seven-digit response to the transaction flow familiar from systems such as M-Pesa and bKash. The first version was verified using the BAN logic but suffered a birthday attack that the logic didn’t pick up, as it doesn’t model entropy; this demonstrates the need for new verification techniques for short message protocols. Another design constraint is that universal shared secret keys can only be used for low value transactions; how can two users share a key for higher-value payments? It turns out that the shared-key Needham-Schroeder protocol is just what’s needed, and the “bug” in the protocol (which allows an arbitrary delay between messages 2 and 3) now becomes a feature. There are further interesting problems around how one can verify protocols for delay-tolerant authentication, as you can’t use freshness to separate all the different runs of the protocol.
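A back-of-envelope check (mine) of why short codes invite birthday attacks that the BAN logic misses: with six-digit codes there are only 10^6 values, so a collision among observed runs becomes more likely than not after about 1,200 runs.

```python
import math

def collision_prob(n_runs: int, space: int = 10**6) -> float:
    # P(at least one repeated code among n_runs uniform draws from `space` values)
    return 1 - math.exp(-n_runs * (n_runs - 1) / (2 * space))

for n in (100, 1000, 1200, 5000):
    print(n, f"{collision_prob(n):.2f}")   # 0.00, 0.39, 0.51, 1.00
```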
Next was David Llewellyn-Jones, discussing how to share passwords. Many services such as Netflix allow people to share credentials with family members, and most of the audience admitted to password sharing. It is useful, intuitive and widespread; teenagers share passwords when in a relationship as an acknowledgment of trust. David has been working out how to adapt the Pico password management system to cope with delegation, and to understand the space of possible delegation systems. The constituent features of delegation have been discussed by Bauer et al; David is working on a user study and meanwhile has a framework for analysing usability, expressiveness, security, trust, accountability, revocability, and the principals’ involvement in creation and operation. Such systems could be built in future using configurable cookies. But would people actually use them? Or would formal delegation be a mark of distrust?
Frank Stajano talked on usable security for lost tokens. When he reported a lost bank token he was put through robot call-centre hell, and then found the token; he had to wait ten days for the replacement. How can we do better? Rather than giving people a big red panic button that fixes the loss but causes pain, he suggests also giving them a yellow “worry” button that simply freezes the account, and which can be undone cheaply. The “buttons” could be printed nonces that you SMS to the bank, but some thought may be needed to find ways of making the yellow button accessible and easy to press in realistic abuse cases. He’s working on ways of doing this for Pico (his pocket-based single sign-on token), but this means the yellow-button server talking to all the relevant services, which potentially tells the NSA which services I use. The original loss-protection idea was a remote Picosibling network share; another option is an append-only encrypted public log. In any case, splitting loss reporting from revocation is worth thinking about.
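A minimal sketch of the idea as I understood it (my code, not Frank’s): the bank pre-issues printed nonces; texting the yellow one freezes the account reversibly, the red one revokes for good.

```python
import secrets

class Account:
    def __init__(self):
        self.yellow = secrets.token_hex(8)   # printed nonce kept at home
        self.red = secrets.token_hex(8)
        self.state = "active"

    def on_sms(self, nonce: str):
        if nonce == self.yellow:
            # reversible: undoing a freeze is cheap, so users can press this early
            self.state = "frozen" if self.state == "active" else "active"
        elif nonce == self.red:
            self.state = "revoked"           # irreversible: credentials reissued

acct = Account()
acct.on_sms(acct.yellow); print(acct.state)  # frozen
acct.on_sms(acct.yellow); print(acct.state)  # active again - no harm done
```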
Bill Roscoe started with a photo of someone in the dungeon of Brno castle, just up the hill. His theme was detecting failed attacks; he’s interested (like Khaled and me) in low-bandwidth channels for human users, which he models as b-bit strings; the intruder wins if the b bits from one run are the same as those from another. Optimal protocols limit his success probability to close to 2^-b, and classic constructions for this use variants on hash commitment before knowledge; see for example Nguyen and Roscoe’s Symmetric HCBK protocol. Such protocols let the intruder attack silently and abort if the strings won’t match, which is suboptimal for the defender. Can we make attacks evident? Bill’s proposal is to make the protocol auditable, so that after an aborted run Alice and Bob can determine whether or not there was a man in the middle. One implementation is to replace all hashes with a fixed-delay function; another is to add an extra layer to the protocol. The existing literature on delay functions envisages long delays, of years; it would be nice to get dependable delays of seconds to minutes. He’s also working on auditable password authenticated key exchange protocols.
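A simplified sketch of commitment-before-knowledge (the general construction, not Bill’s actual protocol): each party commits to a random contribution before anything is revealed, so a man-in-the-middle must fix his messages before he can see what short authentication string he needs to hit, limiting his success to about 2^-b.

```python
import hashlib, secrets

B = 16   # bits of SAS the humans compare

def commit(value: bytes) -> bytes:
    return hashlib.sha256(value).digest()

# Alice and Bob each pick a random contribution and exchange commitments first
ka, kb = secrets.token_bytes(16), secrets.token_bytes(16)
ca, cb = commit(ka), commit(kb)
# ... only after both commitments arrive do they reveal ka, kb and verify them ...
assert commit(ka) == ca and commit(kb) == cb
sas = int.from_bytes(hashlib.sha256(ka + kb).digest(), "big") % (1 << B)
print(f"compare over the human channel: {sas:04x}")
```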
The last session was started by Andreas Happe, who’s been worrying about malicious clients in secret-sharing networks that use practical Byzantine fault tolerance (PBFT). The PBFT protocol can deal with flooding; verifiable secret sharing deals with invalid shares; semantically corrupt data can be handled with server-side versioning; and practical implementations might also consider client validation and auditing.
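A minimal Shamir secret-sharing sketch (the standard construction, not Andreas’s system) showing where invalid shares bite: reconstruction silently yields garbage if a malicious client hands out a bad share, which is exactly what verifiable secret sharing is there to catch.

```python
import secrets

P = 2**127 - 1   # a Mersenne prime as the field modulus

def split(secret: int, k: int, n: int):
    # random degree-(k-1) polynomial with the secret as constant term
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = split(42, k=3, n=5)
assert reconstruct(shares[:3]) == 42
shares[0] = (shares[0][0], 999)          # a corrupted share...
assert reconstruct(shares[:3]) != 42     # ...reconstructs to garbage, undetected
```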
The last speaker at Protocols 2016 was Radim Ostadal, who has been studying attacks on sensor networks. The secrecy-amplification techniques that Adrian Perrig, Haowen Chan and I developed led to a number of others, some node-oriented and others group-oriented; Radim compared these protocols and found that the best can reduce the proportion of compromised links from 40% to 3%. His group have moved from simple simulators to more complex ones, and now to real TinyOS hardware, to ensure that the results are robust. The outstanding question is whether a real attacker would compromise nodes at random, as the models assume, or along an actual path, which would concentrate attacks more locally. Now he’s working on optimal strategies for real attackers, both for initial compromise and for retaining access in the face of repair efforts. A simple strategy is to concentrate attack efforts in the centre of the network, but the details depend on the number of attackers, their receiving range, and the attacker’s initial position and speed. Curiously, attackers that move around a lot are less effective.
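A toy simulation (mine, on an unrealistically dense network) of node-oriented secrecy amplification: a compromised link is repaired whenever a fresh key can be relayed via an intermediate node whose links to both endpoints are clean.

```python
import random

def amplify(n_nodes=200, p_compromised=0.4, rounds=3, seed=1):
    rng = random.Random(seed)
    # each link (i, j) with i < j is independently compromised with probability p
    links = {(i, j): rng.random() < p_compromised
             for i in range(n_nodes) for j in range(i + 1, n_nodes)}
    for _ in range(rounds):
        for (i, j), bad in list(links.items()):
            if bad:
                # try a random intermediate: both hops must be uncompromised
                k = rng.randrange(n_nodes)
                if k not in (i, j) and not links[tuple(sorted((i, k)))] \
                        and not links[tuple(sorted((j, k)))]:
                    links[(i, j)] = False    # fresh key exchanged via k
    return sum(links.values()) / len(links)

print(f"compromised links after amplification: {amplify():.1%}")
```

Even this crude version shows the compromised fraction falling round by round; the real protocols, with multiple intermediates and group variants, get from 40% down to 3%.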