I’m at Financial Crypto 2018 and will try to liveblog some of the sessions in followups to this post.
The keynote talk was by Yasser Nawaz, executive director of cybersecurity at JP Morgan, and his topic was “Blockchain and Cryptography at JPMorgan Chase”.
He’s in charge of crypto strategy and has to satisfy 60 regulators worldwide, so the blockchain platforms and the apps that run on them have to be compliant. In addition to KYC, he has to scrutinise every transaction. In 2015 they settled on Ethereum as the starting point for trying to code up and speed up the crufty old business logic behind settlement and reconciliation (a day for a trade, three days for a cheque, two weeks for a syndicated loan). They also plumped for open source.
They helped set up the Enterprise Ethereum Alliance with Microsoft and others; their version is called Quorum. It’s permissioned, with governance (nodes and activity tied to real-world legal identities), confidentiality (of transactions) and security (in the sense of no trust assumed between nodes). Mostly they stay as close as they can to the public Ethereum codebase. ZSL provides on-chain private tokens, while Constellation handles off-chain smart contracts; instead of proof-of-work they use Raft for leader election and Istanbul for Byzantine fault tolerance. The goal is rapid settlement finality. Privacy is added via a Transaction Manager, which is conceptually equivalent to a key server plus MTAs plus PGP, and an Enclave that manages keys, optionally using trusted hardware (it’s a virtual or actual HSM).
Applications written for public Ethereum can be dropped on to Quorum and will just run. But you can also create a transaction that’s private to n nodes. Each transaction is split between a Quorum node and a Constellation node; private transactions are passed to the latter for encryption and for distribution to the transacting parties. The transaction happens off-chain but its hash is confirmed on the chain like any other. The upshot is that privacy is decentralised (not dependent on an external app or service) and the bank can meet regulatory requirements about transaction residency. The regulator is party to all transactions, public and private, in its domain; this was demonstrated to the regulator in Singapore. The crypto uses Dan Bernstein’s libsodium library.
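To make the split concrete, here’s a minimal Python sketch of that flow under my own assumptions (a dict stands in for Constellation and the Transaction Manager, and the libsodium encryption is omitted): the payload goes off-chain to the named counterparties, and only its hash lands on the shared ledger.

```python
import hashlib, json

# Toy illustration, not Quorum's actual code.
off_chain_store = {}     # stands in for Constellation / the Transaction Manager
chain = []               # stands in for the shared Quorum ledger

def send_private_tx(payload: dict, parties: list) -> str:
    blob = json.dumps(payload, sort_keys=True).encode()
    digest = hashlib.sha256(blob).hexdigest()
    # Payload is distributed only to the transacting parties (unencrypted here;
    # the real system would encrypt it for each recipient).
    off_chain_store[digest] = {"payload": blob, "parties": parties}
    # Every node sees only the hash, confirmed on-chain like any other transaction.
    chain.append({"type": "private", "payload_hash": digest})
    return digest

h = send_private_tx({"trade": "bond-123", "amount": 1_000_000}, ["bankA", "bankB"])
print(chain[-1])                       # globally visible part: just the hash
print(off_chain_store[h]["parties"])   # payload goes only to these parties
```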
Consensus based on Raft has high throughput and low latency, but where circumstances permit they prefer Byzantine fault tolerance as it’s secure in adversarial settings. Permissioning uses manually defined whitelists; they are thinking of building membership rules for various applications into smart contracts. A couple of clients use permissionless implementations over VPNs, which provide the access control. Gas can be free or have a price for rate limiting. ZSL – zk-SNARKs for the Enterprise – allows transfer of digital assets on a digital ledger without revealing the sender or receiver. It was done with the Zcash team. It’s complementary to Constellation; the main drawbacks are (1) that the proofs are really slow; (2) ZSL doesn’t let you keep your business logic private while Constellation does. There’s quite a lot of business logic from trading through clearing to settlement; their project Ubin lets you use Constellation for clearing and ZSL for settlement.
Lessons learned: most apps don’t need a blockchain but a forward-secure sealed log. Where a blockchain is good, they can’t use a public one; they want to decentralise but without harming usability. It’s got to be feasible for normal programmers to write smart contracts, and writing in Solidity is not easy! They have bundled Matt Suiche’s EVM decompiler so people can understand smart contracts offered to them, and they have integrated with Microsoft’s CoCo framework for SGX. Above all, blockchain apps must talk to legacy systems, and must be no more likely to create appsec mistakes or usability hazards. A complementary project is tackling provable (transparent) security of cloud services; see AWS Glacier’s tree hash, their Key Management Service by Shay Gueron and Matt Campagna, and their CloudHSM. Overall, it’s an exciting time to be a cryptographer in the financial industry. Yasser believes that blockchain will have real impact on the financial industry but not in ways that will be visible to the public.
Questions started off from the problems of opacity in financial markets and its role in the crisis ten years ago. Yasser replied that for better or worse the industry’s driven by regulators, and their short-term focus is getting rid of inefficiencies. Longer term could things change radically? He doesn’t believe that banks will cease to exist, but their roles may change; the network layer, the data layer and the advice layer might evolve in different ways.
The first regular paper was by Laura Roberts, who’s been studying Anomalous keys in Tor relays. In 2012, Lenstra and Heninger discovered that many TLS, SSH and PGP public keys shared prime factors, making them vulnerable. Now Tor has been archiving identity and onion keys for a decade, providing a rich dataset; there are also fingerprints of onion services. So Laura set out to do the same job for Tor keys, collecting 3.7m keys over 11 years and using Heninger and Halderman’s fastgcd tool on her keys and the 129m they had already screened. She found 3,557 (0.6%) of 588,945 relay keys with shared prime factors; all but two of these turned out to be relays run by a single research group, and the others were on a relay called DisasterBlaster. There were also ten relays with shared moduli but different nonstandard exponents; the operators seem to have been iterating over the exponent to get target places in the DHT ring (there is a tool called scallion that lets you do this), so as to get close to a target service. The Tor Project had already noticed and blocked them. She then looked for other relays with nonstandard exponents, finding 112 more. The onion services targeted by these attacks were identified; in the case of four services, both replicas were covered. One was Silk Road, targeted in 2013, but by a measurement experiment that was published.
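The underlying attack is simple enough to show in a few lines: if two RSA moduli share a prime, a single gcd recovers it and both private keys fall out. A toy Python illustration with tiny primes (fastgcd does the same job at scale using product and remainder trees; the numbers here are mine):

```python
from math import gcd

# Two "RSA moduli" that accidentally share a prime factor (toy-sized primes):
p, q1, q2 = 10007, 10009, 10037
n1, n2 = p * q1, p * q2

shared = gcd(n1, n2)      # recovers the common prime p
assert shared == p
# Knowing p, each modulus factors immediately, so both keys are broken:
print(n1 // shared, n2 // shared)   # -> q1 and q2
```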
Søren Dubois was next, discussing Compliance under the GDPR. Topics included the purpose-limitation principle, data minimisation, and consent. He argued that we need much better systems to support audits; the hard things will be defining purpose, and the requirement to delete things. It was hard enough with the cloud systems of a few years ago, but now that everyone’s putting stuff on immutable data structures such as blockchains it looks impossible. Perhaps consent would have to be more granular for automatic audit, but would that be usable? He suggests decomposing compliance documentation according to some formal model.
Nicholas Hopper’s been working on A More Efficient Private Presence Protocol. He wants to hide his social graph and his updates from a service provider while letting friends verify updates and supporting plausibly deniable friendship revocation, under strong threat assumptions. His proposal MP3 is an improvement on Borisov’s DP5, which uses private information retrieval and two databases: a short-term one for content and a longer-term one for keys. Nicholas uses a dynamic broadcast encryption scheme instead to get better scalability of revocation.
Hovav Shacham started the afternoon session talking about signatures with specific properties. Two that seem to be in tension are uniqueness, and a security reduction that is tight, so that we can select short parameters. Last year at Crypto, Guo et al presented a unique signature scheme with logarithmic security loss but extremely long signatures; Hovav has found a scheme where signatures consist of only two group elements. Assuming a 128-bit security level and an opponent who can do 2^40 signature queries and 2^80 hash queries, Hovav’s new signature takes 4000 bits rather than 200,000 for Guo’s. It’s the shortest unique signature based on RSA that’s known.
Marten van Dijk was next, on Weak-Unforgeable Tags for Secure Supply Chain Management. Is it possible to authenticate objects in a supply chain using cheap RFID tags, but without enabling supply-chain participants to work out competitor relationships? The RFID tags are verified not just by an online OEM database but also by local supplier systems for resilience. The suppliers store event information in the tag’s nonvolatile memory; for computational reasons HMACs are used rather than signatures; and counterfeits should get through with probability at most 2^-16. With ten firms in the supply chain, and a naive 80-bit HMAC, you need about 3kbit of NVRAM; but this is open to tag tracking and cloning attacks. The scheme he’s been trying to improve loads random one-time HMAC keys into NVM and advances a pointer each time the tag is interrogated; it uses about 1300 gates and the Photon hash function. It turns out that simpler, provably-secure hash functions may be able to do the job better.
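As a rough illustration of the general idea (my own sketch, not Marten’s construction), here is a Python version in which each supply-chain step MACs its event under a one-time key preloaded in the tag’s NVM and then advances a pointer so no key is ever reused; all names and sizes are illustrative.

```python
import hmac, hashlib, os

STEPS = 10
onetime_keys = [os.urandom(10) for _ in range(STEPS)]   # ~80-bit one-time keys
nvm = {"pointer": 0, "events": []}                       # tag's nonvolatile memory

def record_event(event: bytes):
    """Each interrogation MACs the event under the next unused key."""
    k = onetime_keys[nvm["pointer"]]
    tag = hmac.new(k, event, hashlib.sha256).digest()[:10]   # truncated MAC
    nvm["events"].append((event, tag))
    nvm["pointer"] += 1                                       # never reuse a key

def verify(oem_keys):
    """OEM (or a local supplier mirror) re-checks every stored event."""
    return all(hmac.compare_digest(
                   hmac.new(oem_keys[i], e, hashlib.sha256).digest()[:10], t)
               for i, (e, t) in enumerate(nvm["events"]))

record_event(b"supplier-3: shipped 2018-02-27")
print(verify(onetime_keys))   # True
```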
Ian Martiny presented the session’s third paper, on Proof-of-Censorship: Enabling centralized censorship-resistant content providers. He wants to make selective censorship, which courts might order content providers to impose, incapable of being hidden. The idea is to use PIR to force servers not just to respond to queries but to attest to the queries they have received. Servers have to respond to all queries or none. However, query size and reply time grow rather quickly.
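For intuition on why PIR forces that all-or-nothing behaviour, here’s a toy two-server XOR PIR in Python (my own illustration, not the paper’s scheme): because neither server learns which record a query touches, it cannot censor one record without refusing every query.

```python
import os

db = [b"rec0", b"rec1", b"rec2", b"rec3"]   # toy database of equal-length records

def xor(a, b): return bytes(x ^ y for x, y in zip(a, b))

def query(index, n):
    # Server A gets a random bit-vector; server B gets it with the target bit flipped.
    qa = [os.urandom(1)[0] & 1 for _ in range(n)]
    qb = qa.copy(); qb[index] ^= 1
    return qa, qb

def answer(q):
    # Each server XORs together the records its bit-vector selects.
    out = bytes(len(db[0]))
    for bit, rec in zip(q, db):
        if bit:
            out = xor(out, rec)
    return out

qa, qb = query(2, len(db))
print(xor(answer(qa), answer(qb)))   # b"rec2"; neither server learns the index
```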
Jens Grossklags presented An Economic Study of the Effect of Android Platform Fragmentation on Security Updates, as the lead author Sadegh Farhang could not get a visa. They’ve been building an economic model of the app fragmentation reported by Wu et al, analysing differentiation both spatially and by consumer type using a linear city model which assumes that Google is strategically neutral. Vendors choose their security level and consumers may be naive or sophisticated about security; a Nash equilibrium always exists, and vendors always invest zero for naive consumers. In a more sophisticated model, vendors invest only in those security features that are visible to consumers. A fine proportional to market share would, in a static case, be passed on to consumers by the firm that was fined, and other firms would raise prices. In a dynamic analysis, though, the fines have to be internalised and convergence towards a common security level causes prices to fall. In conclusion, fining vendors who ship phones with poor security can lead to higher security and lower prices too.
Aron Laszka’s been studying The Rules of Engagement for Bug Bounty Programs. Netscape created the first of these in 1995, and they’re now widespread, giving access to a wide range of testers and signaling a firm’s commitment to security. The problem is false positives; Facebook claims 95% of claims are invalid, while for Google it’s 87% invalid and 8% duplicate. Managing the noise can involve platforms such as HackerOne, Bugcrowd and Cobalt, which get the proportion of valid reports up from single figures to 20% or more; but there are fewer hackers and you have to compete for their attention. Zhao et al had already studied price elasticity of bug supply on such platforms, but what non-monetary factors might help? Aron studied 111 public programs on HackerOne, of which 77 had a full history, and analysed clients’ rules on scope, access, prohibitions, participation restrictions, public disclosure guidelines and reward conditions. Various things were noted, such as that offers with a lot of text got more bug reports, as did offers allowing public disclosure. Providing staging sites or test accounts for hackers to check out their ideas helped; providing source code didn’t help anything like as much. Questions started on how to suppress noise: running an invitation-only project on a platform is quite feasible, but more expensive. And such programs can only measure success, not effort.
Monday’s last paper had two presenters, Sanchari Das and Andrew Dingman, who’ve been exploring Why Johnny Doesn’t Use Two Factor. They’ve been looking at Yubico security keys, whose usability sounds pretty simple: put it in the USB port, press the button and you’re done. You don’t even need to charge it. However, user studies indicated that avoiding it gave people twice the utility; why? They did a two-phase study over a year of people recruited from a security class. 20 male and 7 female students, all experts, were randomised between instructions from Yubico or Google and went through a think-aloud protocol to register. They were then questioned on how they’d deal with lost keys, how they could remove a key from their account and so on (nobody had any clue). Participants were given the $20 keys but most discarded them. The experimenters analysed the data for halt points (a third couldn’t register unaided), confusion points (whether their key worked, whether they’d actually registered after using the tryout link, thinking the device had a fingerprint reader so could be left in the laptop, going to Chrome settings rather than Gmail settings) and value points. People don’t read a lot, and so instructions don’t help much! A second study followed a year later, after the university had introduced it (20m, 8f), and went better; perhaps more important was the fact that Google had hidden the demo. It’s also crucial to show people how they can use the system without making dangerous errors, such as being locked out of their email forever. On questions, Andrew explained that while Google has engineered recovery options, these are not at all obvious to users as they enrol.
Tuesday’s session on attacks started with Maxime Meyer describing attacks on next-generation SIM cards. SIM cards started in 1991, evolving from full-sized smartcards down to nanoSIMs by 2010. Since 2000, M2M SIMs have been embedded in devices and are hard to change, leading to a requirement for remote provisioning. Now, eUICCs (as they’re called) have a bootstrap key shared with a subscription manager (SM), and a profile to let it access the network. The provisioning profile is replaced by successive service profiles as the customer accepts service offers. Maxime found vulnerabilities in the spec for profile download and installation; in particular the creation of a profile container doesn’t have robust error handling. By dropping selected protocol messages, you can fill up the eUICC memory with empty profile containers, breaking the device.
Loïc Ferreira was next, on Rescuing LoRaWAN 1.0. The LoRa Alliance has standards for IoT networks for environmental monitoring, including alarm systems. MACs protect tags which tell devices which radio channels to use; master keys are used to derive MAC keys and confidentiality keys, where the former protect traffic to the network server and the latter all the way to the application server. Encryption is AES CCM with frame counters; a similar block is used in a CMAC calculation. Add some low-entropy parameters and other parameters selected by one end only, and we get attacks on end devices: disconnecting a device via session-key replay, and decrypting data by forcing keystream reuse through a more complex active protocol attack.
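The consequence of keystream reuse is easy to demonstrate. A small Python sketch (a random byte string stands in for the AES counter-mode keystream, and the frames are made up) shows that two ciphertexts under the same keystream leak the XOR of their plaintexts, and hence the second plaintext whenever the first is guessable:

```python
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(16)          # stands in for a reused AES keystream block
m1 = b"alarm: all clear"
m2 = b"alarm: intrusion"
c1, c2 = xor(m1, keystream), xor(m2, keystream)

# An eavesdropper who sees both ciphertexts learns the XOR of the plaintexts:
assert xor(c1, c2) == xor(m1, m2)
# If one frame is stereotyped and guessable, the other falls out directly:
print(xor(xor(c1, c2), m1))         # -> b"alarm: intrusion"
```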
The last speaker was Jordan Holland, on Not So Predictable Mining Pools. Predictable Solo Mining (PSM) doesn’t divide the reward among miners, but gives it all to the miner who contributed the most shares. This turns out to be not incentive compatible, and things are even worse where shares aren’t authenticated. He did experiments with a minority of malicious miners.
The anonymity session on Wednesday was kicked off by Daniel Arce, on Pricing Anonymity. Daniel’s an economist and notes that coinjoin transactions have one taker and many makers; it’s a problem of coalition formation where only one member is paying, technically a nontransferable utility (NTU) cooperative game whose payoff is a vector for every member in the anonymity set. There are existing institutions such as JoinMarket that let people price coinjoins with visible prices (see the 2016 analysis by Möser and Böhme). Daniel derives the game’s characteristic function, which can be solved in Shapley value; he argues that this is the general solution for an anonymity fee, although there might be different markets for players with different ethics.
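For readers who haven’t met it, the Shapley value is just each player’s marginal contribution averaged over all orders in which the coalition could form. Here is a brute-force Python sketch with a toy coinjoin-flavoured characteristic function of my own invention (not Daniel’s actual game):

```python
import math
from itertools import permutations

def shapley(players, v):
    """Average each player's marginal contribution over all join orders."""
    vals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            vals[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_fact = math.factorial(len(players))
    return {p: x / n_fact for p, x in vals.items()}

# Toy game: the taker 'T' gets anonymity worth 10 only once at least two
# makers have joined the coinjoin; makers earn nothing on their own.
def v(coalition):
    return 10.0 if 'T' in coalition and len(coalition) >= 3 else 0.0

print(shapley(['T', 'M1', 'M2', 'M3'], v))   # taker pays, makers split the fee
```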
Rei Safavi-Naini then took A New Look at Refund Mechanism in a Bitcoin Payment Protocol. She’s been studying the BIP 70 bitcoin extension which supports refunds; McCorry and others came up with refund attacks; for example, as a customer can specify a refund to a different address, he can get a refund sent to a drug trader to do an illegal purchase. Rei proposes to fix these using multisignature refund addresses, which both customer and merchant have to sign, or time locks.
Ali El Kaafarani has been working on Anonymous Reputation Systems from Lattices. He’s responding to a challenge by Zhai et al about what kinds of anonymous reputation systems are possible, and has been building group signatures: the idea is that a reviewer can prove she’s a group member, without disclosing which one. The full scheme is a group of group signatures plus a tag scheme.
The blockchain session started with Danny Huang presenting Measuring Profitability of Alternative Crypto-currencies. He looked at bitcoin forks. While conventional currencies have a volatility of a few percent, Bitcoin’s is 106% and Auroracoin’s is over 500%. Altcoins are perhaps best seen as a game between developers, miners and speculators. He models miners on the assumption that the opportunity cost of mining altcoins should be the same as the revenue from doing the same work mining bitcoin. This turns out to be well supported by market data; the price of altcoins closely tracks the opportunity cost of mining bitcoin. A profitability analysis suggests that miners who start early on new coins get the highest returns.
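The equilibrium condition is easy to state as arithmetic: a hash spent on an altcoin should earn what the same hash would have earned mining bitcoin. A back-of-envelope Python sketch with entirely made-up numbers:

```python
# Hypothetical figures, for illustration only.
btc_price, btc_reward, btc_hashrate = 10_000.0, 12.5, 2.5e19   # USD, BTC/block, H/s
alt_reward, alt_hashrate = 5_000.0, 1.0e15                     # ALT/block, H/s
block_time = 600                                               # seconds per block

# Expected revenue per hash on bitcoin:
btc_rev_per_hash = btc_price * btc_reward / (btc_hashrate * block_time)

# Equating per-hash revenue on both chains gives the implied altcoin price:
implied_alt_price = btc_rev_per_hash * (alt_hashrate * block_time) / alt_reward
print(f"implied altcoin price: ${implied_alt_price:.4f}")
```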
Roman Matzutt has been studying the impact of arbitrary blockchain content on bitcoin. Embedding things like photos bloats the blockchain; but what about illegal content? Child sex abuse material is a real risk; a single image can lead to prosecution in Germany under s 11, 184b/c of the criminal code. Roman set out to classify extra content; some mechanisms such as satoshi uploader were used just in a burst, but others are used steadily. So far he’s found 1600 files with 1407 text messages and 146 images. He did a manual classification of objectionable content, finding some doxing, cablegate leak material, attempts to seed material unacceptable to the governments of China and Turkey, objectionable religious texts, a borderline image of a girl of 14 and the hiddenwiki page with 240 child pornography links to hidden services. He concludes there’s nothing clearly illegal right now (though there are issues for full node operators in some countries) but this could easily change. His suggested countermeasure is mandatory minimum transaction fees, or a bitcoin fork to change the design.
The session’s last speaker was Soumya Basu, who has been trying to define and measure Decentralization in Bitcoin and Ethereum Networks. He connected to all publicly-accessible bitcoin and ethereum nodes, collected 4GB of data, operated a full-scale relay network, and tried to persuade miners to connect to him directly so he could measure what’s actually going on. Relay nodes do full block validation, like their peers; his system (sitting on Falcon) outran this by validating just the header, and had eighteen vantage points worldwide. He observed peak bandwidth per node of 100Mbit/s for 75% of nodes, while median bandwidth increased by 1.7 times from 2016 to 2017, when it reached 56Mbit/s. He estimated peer clustering and found Ethereum to be significantly more spread out than Bitcoin; maybe 28% of the former, and 56% of the latter, are at ASes with dedicated hosting services. Self-reported mining power distribution confirms that neither is very distributed; 90% of the hashpower is controlled by 16 entities in Bitcoin and 11 in Ethereum, while the magic 50% threshold is controlled by four miners in Bitcoin and three in Ethereum. How successful are they? A power utilisation study revealed that in Bitcoin it was usually over 99% while in Ethereum it was typically 94%. He concludes that Ethereum could benefit from a relay network. The variance is higher on Bitcoin though: 1% of the mining power gets you 1.44 blocks/day on Bitcoin but 72 on Ethereum. The overall takeaways are that neither Bitcoin nor Ethereum is really decentralised, and that decentralisation needs to be measured. A questioner asked why he didn’t use economic measures such as market share or the Herfindahl–Hirschman Index.
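If one did want an economic concentration measure, it is a one-liner; here is a quick Python sketch computing the Herfindahl–Hirschman Index and the smallest set of miners controlling half the hashpower, over hypothetical shares (not Soumya’s data):

```python
# Hypothetical mining-power shares summing to 1.0.
shares = [0.22, 0.15, 0.12, 0.11, 0.08, 0.07, 0.05] + [0.02] * 10

hhi = sum(s * s for s in shares)          # Herfindahl-Hirschman Index
cum, k = 0.0, 0
for s in sorted(shares, reverse=True):    # smallest set reaching the 50% threshold
    cum += s
    k += 1
    if cum >= 0.5:
        break

print(f"HHI = {hhi:.3f}, smallest set controlling 50%: {k} miners")
```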
Thursday’s first talk was given remotely by Huang Zhang on Anonymous Post-Quantum Cryptocash. If quantum computers ever work, can we just replace ECDSA with a post-quantum signature? Huang has been working on lattice-based signatures and investigating whether one can get ring signatures for Monero-type anonymous blockchains. He proposed a linkable ring signature from ideal lattices, based on the work of Groth and Kohlweiss, and discussed the mechanics of generating stealth addresses.
The first in-person speaker of the day was Sunoo Park, who wants to save the energy cost of bitcoin and presented SpaceMint: A Cryptocurrency Based on Proofs of Space. The idea is that the prover can show that they dedicated a certain amount of hard disk space to the protocol; practical problems include that proof of space has to be made non-interactive, and that making mining cheap means we need new mechanisms for leader election. Possible attacks include block grinding and mining multiple chains; Sunoo proposes penalty mechanisms to penalise miners who don’t commit to mining a single chain and to ensure that people don’t nurse long private chains in the hope of taking over later. Some attempts to deploy similar ideas include Burstcoin and Chia. In questions, I asked whether proof of space actually burns less carbon than proof-of-work, given the high embedded energy costs of semiconductor memory.
Bernardo David was next with Kaleidoscope: An Efficient Poker Protocol. Mental poker has been a research topic since the 1980s; since 2014 people have been using the blockchain and then smart contracts. The field is hand-wavey; Bernardo has been trying to tie down proper definitions of security and as a result has found issues with some previous proposals, such as an inability to recover from cheating. He then presented a new mental poker system that also uses thousands fewer exponentiations than previous ones.
The first talk in the last session was by Anastasia Mavridou talking on Designing Secure Ethereum Smart Contracts. She’s interested in reducing the large number of security vulnerabilities and other bugs in smart contracts. Some are hard to analyse, such as malicious callees who exploit re-entrancy and malicious sellers who create transaction ordering dependencies. In general, there’s a semantic gap between the assumptions developers make and the actual execution semantics. This requires a correctness-by-construction approach and she proposes a model-based design methodology. Her tool lets developers click on buttons to add standard security patterns such as locks, and throws extensive error messages with a model checker verifying safety, deadlock freedom and liveness.
The last talk of FC 2018 was by Stefano Lande, presenting A formal model of Bitcoin transactions. It turns out to be nontrivial to model bitcoin’s scripting language in such a way as to capture time locks, commitments and other protocols constructed out of its advanced features. People have invented oracles, lotteries, crowdfunding and other applications, all of which his model can cope with. He has a process calculus based on CCS and is working on a domain-specific language to help programmers write error-free scripts.