I’m at Financial Crypto 2019 and will try to liveblog some of the sessions in followups to this post.
As our keynote speaker’s plane was delayed, FC started with refereed talks. Vincent Haupert kicked off with an analysis of How to attack PSD2 Internet banking, inspired by a talk on the same topic by Adham et al at FC13. Many banks deal with man-in-the-browser attacks using 2FA, sending transaction details plus an auth code to the customer’s phone, and the new PSD2 regulation now makes this mandatory for EU banks; so what weak points remain? His threat model is a compromised PC, an uncompromised phone, and a customer who wants to make a wire transfer which the attacker wants to redirect. His paper analyses five attacks: clipboard hijacking (any app can read and write the clipboard, and the MetaMask Android malware already hijacks cryptocurrency addresses this way, so replacing an IBAN would be easy), SMS autofill (iOS 12 looks for 6-digit codes and offers to copy them), stealthy transaction manipulation via misleading instructions on screen, invoice manipulation, and transfer templates (as supported by many banks’ payment systems). Overall PSD2 is positive, but care is still needed in implementation and use.
Pascal Lafourcade was second, talking about how to play online Spades with cheaters. Spades is a trick-taking card game similar to bridge, often played online; the paper generalises to trick-taking games where each player plays one card in each round and one of them wins the trick. Do we have to trust the game server to enforce the rules and not leak any information beyond the cards played? Pascal has devised crypto protocols that enable the game to be played in a distributed way, with mechanisms for card shuffling, commitment and so on.
Jochen Rill was third with a talk titled Your money or your life. It’s a formal model for payment protocols such as EMV. Real-world protocols are often too complex to verify, with their backwards-compatibility features, but formal models can point out the hidden assumptions – such as EMV’s assumption that neither POS devices, nor their communication with the card, can be manipulated. His model allows engineers to reason about trusted interfaces by providing a way of simultaneously analysing confirmation channels and authenticated communication channels, and also one-of-two security, where you assume that at most one of your laptop and your phone is compromised.
The last talk before coffee was by Nadia Heninger and Joachim Breitner, on Biased Nonce Sense: Lattice Attacks against Weak ECDSA Signatures in Cryptocurrencies. Assume your bitcoin signing key is d and the public key is dG; a signature is (r,s) where r is the x-coordinate of kG and s = k^{-1}(h + dr) mod n. So if you re-use the nonce k you leak the signing key (and this has been done; people instantly steal money from wallets that do this). Less obvious is that if k is small, and you have enough signatures, you can use lattice basis reduction to find the key. Given shared prefixes or shared suffixes, some elementary arithmetic transforms the problem into the small-k case. The blockchain now contains billions of keys and signatures (though it’s a bit fiddly to parse). A billion signatures, 60m public keys and 50 CPU years later, they found 6000 signatures from 300 keys with a total at risk of BTC 0.008. They found various other weird things: when cleaning up dust, people use a nonce of (n-1)/2 to make the signatures shorter, and there are quite a few of those on the blockchain. The Bitpay multisig disaster happened when people used short nonces because of a bug, and some didn’t upgrade their software when the patch came out. And there was some memory-unsafe code that re-used half the secret key as a nonce. The paper has further examples of how people screwed up. The takeaway lesson is to generate nonces deterministically, as the major libraries now do.
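For concreteness, here is the textbook nonce-reuse calculation the talk alludes to (standard ECDSA algebra, not the paper’s lattice attack itself): given two signatures made with the same nonce, the key falls out by subtraction.

```latex
\begin{align*}
  s_i &\equiv k^{-1}(h_i + d\,r) \pmod{n}, \quad i = 1,2
      && \text{(two signatures sharing the nonce $k$)}\\
  s_1 - s_2 &\equiv k^{-1}(h_1 - h_2) \pmod{n}
      && \text{(subtracting eliminates the key $d$)}\\
  k &\equiv (h_1 - h_2)(s_1 - s_2)^{-1} \pmod{n}\\
  d &\equiv (s_1 k - h_1)\,r^{-1} \pmod{n}
      && \text{(recover the signing key from either signature)}
\end{align*}
```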
Elaine Shi started the second morning session with a talk on proof of stake. Bitcoin is too expensive in energy terms, and the cost of breaking a system should increase in line with its economic value; hence the interest in proof of stake. Snow White is the first such scheme with a formal security proof. Such systems need three things: a permissioned consensus protocol (Snow White’s “sleepy consensus” is like Nakamoto consensus but without the proof-of-work), a robust means of committee reconfiguration (currency holders register their interest, and a random seed is then generated to select a miner to mine the next block), and incentives. Her mechanism has been adopted and modified by others.
Giulia Fanti has been wondering what happens when people move to proof of stake while keeping the same block rewards as in proof-of-work systems. Quite simply, the rich get richer! She’s been modelling this in more detail, looking at how stake distributions evolve. Could you end up headed for a 51% attack? This depends on the block reward as a fraction of the total stake pool, so too small an initial stake pool might be hazardous. Given that, geometric rather than constant rewards may be optimal.
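As a toy illustration (my own sketch, not Giulia’s model): with stake-proportional block selection and a constant reward, stake shares follow Pólya-urn-like dynamics and drift away from their initial fractions; the smaller the initial pool relative to the reward, the more they concentrate.

```python
import random

def simulate(stakes, block_reward, rounds, seed=0):
    """Toy proof-of-stake dynamics: each round one holder is chosen with
    probability proportional to current stake and receives a constant block
    reward, which compounds into its future selection odds."""
    rng = random.Random(seed)
    stakes = list(stakes)
    for _ in range(rounds):
        winner = rng.choices(range(len(stakes)), weights=stakes)[0]
        stakes[winner] += block_reward
    return stakes

initial = [10.0] * 10                       # ten holders with equal stake
final = simulate(initial, block_reward=1.0, rounds=100_000)
shares = sorted(s / sum(final) for s in final)
print(["%.3f" % s for s in shares])         # shares drift well away from 0.10
```

Rerunning with a much larger initial stake pool relative to the block reward keeps the final shares close to uniform, which is the point about small initial pools being hazardous.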
Sanket Kanjalkar was the last speaker before lunch, talking on Resource Exhaustion Attacks on PoS. He’s been exploring attacks that exploit already-spent coins for stake, which are difficult to block when side-chain transactions are permitted. Over 25 currencies were affected by two specific vulnerabilities he discovered. The coordinated vulnerability disclosure process was tricky; in the end he informed only currencies with a market cap over $10m ($2.3bn worth of cryptocurrency was affected and $1.4bn worth is now patched). So what’s the fate of cryptocoins that no longer have developer teams?
Sarah Meiklejohn has been studying code diversity among cryptocurrencies. Last week there were 2,065 cryptocurrencies listed on coinmarketcap, a lot of which are completely fraudulent in one way or another. Last summer there were 1,664; she trawled them for links to source code, fetching it from 13,694 github repositories and ending up with 2,354 that appear to be functioning code bases. How can we identify genuine projects? By whether a project offers any distinct contribution. She tried various measures, including simply whether a coin’s code has more than 70% of its files identical to another’s; for example, Akuya coin (a bitcoin clone) has spawned 75 apparent derivatives, as it had a number of bitcoin files missing and so do they. BumbaCoin has dozens of derivatives that sort-of copy zerocoin, an academic coin created for benchmarking which had lots of warnings in the code (eventually the zerocoin folks took away their makefile). She then looked at ERC20 tokens, which were 52% of the listed currencies (886 in total) and operate as ethereum smart contracts; there was more diversity there, with 246 distinct types.
Till Neudecker has been studying blockchain forks. He’s been monitoring the peer-to-peer network since 2015 and has seen 231 forks since then. A block’s precise timing has an effect on its probability of main-chain inclusion and the interval between a forking block and the subsequent one. One miner was responsible for 37 forks, with 11 of them in under 100 seconds; there is only a 2% probability that this was caused by chance. This may indicate an attempt at selfish mining, though the evidence is mixed.
Michael Fröwis has been studying how to detect token systems by analysing bytecode and gas consumption. EVM code is particularly suitable for binary analysis with symbolic execution. He scraped the top 612 tokens for ground truth, and implemented signature-based detection that looks for the exported labels of the methods a standards-conformant ERC20 token must provide. He then implemented behaviour-based detection, looking for a sender balance check followed by a matching addition and subtraction. It turns out to be reasonably accurate, with explainable misses; but many tokens don’t implement ERC20 properly.
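A minimal sketch of the signature-based idea (my own illustration, not his tool): an ERC-20 dispatcher compares the call’s 4-byte function selector against the selectors of the standard methods, which typically show up as PUSH4 immediates in the deployed bytecode. The selectors below are the standard keccak-256-derived values; the threshold and the substring search are crude illustrative choices, and the paper’s behaviour-based detection is considerably more careful.

```python
# First four bytes of keccak256(signature) for the mandatory ERC-20 methods.
ERC20_SELECTORS = {
    "totalSupply()":                         "18160ddd",
    "balanceOf(address)":                    "70a08231",
    "transfer(address,uint256)":             "a9059cbb",
    "transferFrom(address,address,uint256)": "23b872dd",
    "approve(address,uint256)":              "095ea7b3",
    "allowance(address,address)":            "dd62ed3e",
}

def looks_like_erc20(runtime_bytecode_hex: str, threshold: int = 6) -> bool:
    """Heuristic: count how many standard selectors appear as PUSH4 (opcode
    0x63) immediates in the deployed bytecode."""
    code = runtime_bytecode_hex.lower().removeprefix("0x")
    hits = sum(1 for sel in ERC20_SELECTORS.values() if "63" + sel in code)
    return hits >= threshold
```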
Friedhelm Victor has been measuring ERC20 token networks. Many people have analysed transaction structure in bitcoin graphs using techniques such as address clustering, but so far there’s been little work on ethereum tokens, which are basically an overlay network. He extracted all relevant contract creations up to September 2018; ERC20 contracts have exploded since the middle of 2017 and now make up over half of all contract creations, with over 97m transfer events on 64,000 token contracts. The top 1,000 contracts account for 85% of transfers and the top 100 for almost 50%; EOS is the biggest with 3m edges. There doesn’t appear to be a power law at work, so most attachment isn’t organic; exchanges account for much of it. Users don’t seem to know each other, and the main function appears to be trading.
Zuoxia Yu has been studying traceability in cryptonote-style blockchains. Cryptonote is an open protocol that supports sender anonymity and is used by a number of coins; however, some inference attacks are possible by parties who have partial knowledge of cascades of linked transactions and carefully correlate inputs with outputs. Security against such “closed-set attacks” would mean that a brute-force attack is the best available; analysing them requires better clustering algorithms.
Abraham Hinteregger has been looking at Monero Cross-Chain Traceability. Known methods include zero mixin removal (ZMR), its generalisation intersection removal, the output merging heuristic and the guess-newest heuristic (GNH). Monero has been improved to make ZMR harder by fixing the ring size, and it also sets the mixin sampling distribution to frustrate GNH. One remaining problem is what happens with a fork, such as XMO versus XMV; attempts to spend old coins on each of the new chains leak information, but not enough to be a systemic threat.
Karl Wüst has been working on PRCash, a coin he hopes central banks might issue, providing payments that are private up to some limit and visible to the regulator above it. It is based on mimblewimble, a scheme for using coinjoin-type mechanisms in a bitcoin sidechain with blinding. In questions he admitted he had no support for interest rates.
Sinisa Matetic has been designing clients for Zcash using trusted execution. At present only 6% of Zcash transactions are shielded, the rest being transparent, and there are about 5,000 transactions per day; significant deployment issues are that a client has to process the whole blockchain to make a shielded transaction, and that there is no mobile client. He’d like to use a TEE to support something like bitcoin’s simplified payment verification (SPV). His design has full nodes using SGX and lightweight clients that interact with them. It’s designed to withstand a malicious full node that performs side-channel attacks on other enclaves.
Ioana Boureanu started Tuesday’s sessions with a talk on Making Contactless EMV Payments Robust Against Rogue Readers Colluding With Relay Attackers. Mastercard has added relay protection to its contactless payment standard by measuring the round-trip time; is this robust? It may fail if the two readers are colluding; only the readers can check timestamps, as the card has no clock. How can collusive relaying be mitigated? We need to get another party in EMV to check that the round-trip time (RTT) measurement was done right. She suggests using a TPM or other TEE in each terminal, and has three possible architectures.
The keynote speaker, Neha Narula, was next, and her talk was on preventing catastrophic cryptocurrency attacks. Cryptocurrency is a new kind of problem for the security community: unproven protocols and latent software bugs present all sorts of hazards; cryptocurrency maintainers have widely varying levels of skill; and attackers can easily exploit vulnerabilities for financial gain. She described three vulnerabilities: iota’s signature bug, the bitcoin cash chainsplit double-spend attack, and bitcoin’s inflation/DoS vulnerability. In each case developers deployed mitigations in time. Iota has an $800m market cap and has partnered with VW and Bosch; it has weird stuff like trits and trytes instead of bits and bytes. They also wrote their own hash function, Curl, which Ethan Heilman broke: Eve can get Alice to sign a payment to Bob whose hash also works for a payment to her. When the bug was disclosed, the developers claimed it was deliberate – a copy-protection backdoor that they could use against people who stole their (open-source) code – and claimed there would be a new generation of microprocessors based on ternary logic.

Neha has learned to expect wildly different responses from cryptocurrency developers when she discloses bugs; there may be odd reactions, and the developers may not speak your language. If you’re a researcher, either hire a good lawyer or disclose anonymously; when people lose millions of dollars they can get very angry, and disclosers can gain a lot financially by shorting the coin they’ve broken (though the publicity may even cause the coin to go up in price). So developers should be able to deal with anonymous disclosure, and should consider offering bounties commensurate with the value at risk. Everyone should read Cory Fields on this. Open questions include how we coordinate disclosure across multiple cryptocurrencies, given that we can’t tell everyone all at once, so the disclosure may be seen as unfair and drops a zero-day on a lot of copycat coins; and whether security matters that much, given the lack of correlation between market cap and vulnerabilities in a small, irrational market. She runs a digital currency initiative at MIT and has set up an ad-hoc cryptocurrency security working group to try to move things to a better place.

In questions, she remarked that on one occasion maintainers admitted one bugfix in order to cover up a more serious one. A speaker confirmed that it can take a month to chase up developers to their preferred Telegram channel or other means of communication, and one clearing house charges money and does nothing. Someone suggested using CERT mechanisms; I suggested disclosure via regulators, as we do when we find vulnerabilities in protocols like EMV; someone else wondered whether we should even be looking.
Mohammed Aamir Ali has an analysis of the 3D Secure 2.0 protocol. The original 3D Secure protocol was designed to authenticate card-not-present (CNP) transactions by allowing the user to register a static password; merchants hated it, and there were serious issues with registering cardholders. Yet CNP fraud is 70% of card fraud, and regulators demanded universal 2FA via PSD2, so the industry came up with 3D Secure 2.0, which supports SMS-based second-factor authentication for riskier transactions and issuer-specific ‘frictionless’ authentication for the rest. Mohammed set up a man-in-the-middle proxy to reverse engineer the protocol. It turns out that the authentication server uses javascript to profile the customer’s browser (OS, fonts, plugins, etc.) as well as setting a cookie, to establish that the customer is using a known machine. This was tested with UK cards in both the UK and Germany. Implementations varied by country and across card issuers, with some issuers obfuscating the device fingerprints and others relying on cookies and http headers only. However there’s no authentication of the fingerprint, so an attacker can impersonate a trusted device during a man-in-the-browser attack or a cross-site cookie-stealing attack.
Rafael Dowsley presented a framework for universally composable card games, in the tradition of previous work on mental poker. Blockchains have enabled more complex and realistic distributed games; we can now do reasonable poker, but adapting the protocols to other card games is not straightforward and security proofs are hard. He proposes a toolkit based on threshold signatures and zero knowledge, using mixnets for shuffling.
Vanessa Teague has been working on instant run-off voting. What’s the best way of getting a publicly verifiable result from inputs submitted non-interactively by lots of people to a small number of potentially unreliable parties who run an election? Fully homomorphic encryption works but is too slow; levelled homomorphic encryption is slightly better; and there are many secret-sharing protocols that are messy in practice (e.g. if a trustee goes on holiday the election stalls). Vanessa describes her scheme as “somewhat homomorphic”; it’s for instant-runoff voting, a preferential system whereby at each round the candidate with the fewest votes is eliminated and their ballots are “shifted left” so that their next preference is used in the next round.
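For readers unfamiliar with instant-runoff counting, here is the plaintext rule her somewhat-homomorphic scheme has to emulate under encryption (a minimal sketch with naive tie-breaking; the names and example are illustrative):

```python
from collections import Counter

def irv_winner(ballots):
    """Plain (unencrypted) instant-runoff count: each ballot is a preference
    list; each round the candidate with the fewest first preferences is
    eliminated and their ballots 'shift left' to the next surviving choice."""
    ballots = [list(b) for b in ballots if b]
    while True:
        tallies = Counter(b[0] for b in ballots if b)
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > sum(tallies.values()) or len(tallies) == 1:
            return leader
        loser = min(tallies, key=tallies.get)            # naive tie-breaking
        ballots = [[c for c in b if c != loser] for b in ballots]

# No candidate has a first-round majority; B is eliminated, its ballot
# shifts left to C, and C then wins with a majority.
print(irv_winner([["A", "B"], ["A", "C"], ["B", "C"], ["C", "B"], ["C", "B"]]))
```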
Kartik Nayak has been working on making Byzantine agreement more efficient for permissioned blockchains. There, you need some kind of PKI and synchronous communications to tolerate the corruption of more than a third of the parties. Kartik has a protocol with optimal resilience of up to n/2, taking 10-16 rounds depending on the adversary’s ability to corrupt the minority of her choice, and with O(n^2) communication (which is necessary).
Esha Ghosh has been working on the deduplication of encrypted data. If two clients and a server hold encrypted versions of the same data, then key management can be simplified by convergent encryption, where the key is derived from a hash of the file; but this is problematic for low-entropy files. Other systems use independent key servers. Esha discusses the possible threat models and proposes a new oblivious key-generation mechanism to mitigate them.
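A minimal sketch of plain convergent encryption, the baseline her oblivious key-generation scheme improves on (my own illustration using the `cryptography` package’s AES-GCM): because key and nonce are derived from the content, identical files produce identical ciphertexts, which is what allows deduplication and also what exposes low-entropy files to guessing.

```python
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Convergent encryption: key = H(plaintext); key and nonce are derived
    deterministically from the content, so equal files yield equal
    ciphertexts and the server can deduplicate them.  The weakness: for a
    low-entropy file, an attacker can enumerate candidate contents,
    re-derive the key, and confirm a guess against the stored ciphertext."""
    key = hashlib.sha256(plaintext).digest()          # content-derived key
    nonce = hashlib.sha256(key).digest()[:12]         # deterministic nonce
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return key, ciphertext

k1, c1 = convergent_encrypt(b"same file")
k2, c2 = convergent_encrypt(b"same file")
assert c1 == c2    # duplicates are detectable without sharing the key
```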
Brent Waters’ topic was constrained pseudorandom functions, an idea going back to 2013 whereby you can let someone calculate PRFs on inputs that satisfy some circuit C, as a primitive for building broadcast encryption or traitor tracing schemes. He extends previous constructions to work where C is arbitrary using attribute-based encryption, and his protocols can be implemented using either bilinear maps or learning with errors.
Rachid El Bansarkhani worries that all ciphertexts from now on will be stored to be broken one day once quantum computers exist; what systems will be at risk? LARA is his post-quantum crypto proposal; he discussed lattice-based encryption primitives and how they can be transformed into more complex primitives. It would be great if we could use some of the error bits for other purposes. He therefore presents a variant of GPV which injects a message into the error term in such a way that it can be recovered by looking at the relevant cosets; it’s called ring-ALWE.
Marcella Hastings wants a less wasteful way to mine crypto coins, and her proposal is to compute discrete logs instead, using the distinguished point method for Pollard’s rho. This has been used to set most of the recent records for elliptic discrete log. The problem is efficient verification, which might be fixed using a SNARK.
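A toy version of the distinguished-point idea, in a small prime-order subgroup of Z_p^* rather than on an elliptic curve (my own sketch; the parameters and the distinguished-point rarity are illustrative): many independent walks store only points whose low bits are zero, and a collision between two stored points yields the discrete log.

```python
import random

def pollard_rho_dlog(g, y, p, q, dist_bits=4, seed=1):
    """Toy Pollard rho with distinguished points in the order-q subgroup of
    Z_p^*: independent walks report only 'distinguished' points (low bits
    all zero); when two walks land on the same stored point, the two
    exponent representations g^a0 * y^b0 = g^a * y^b give the discrete log."""
    rng = random.Random(seed)
    table = {}                                    # distinguished point -> (a, b)
    while True:
        a, b = rng.randrange(q), rng.randrange(q)
        x = pow(g, a, p) * pow(y, b, p) % p
        for _ in range(100 * (1 << dist_bits)):   # cap each walk's length
            branch = x % 3                        # pseudo-random partition
            if branch == 0:
                x, a = x * g % p, (a + 1) % q
            elif branch == 1:
                x, b = x * y % p, (b + 1) % q
            else:
                x, a, b = x * x % p, 2 * a % q, 2 * b % q
            if x % (1 << dist_bits) == 0:         # distinguished point: report it
                if x in table and table[x][1] != b:
                    a0, b0 = table[x]
                    return (a0 - a) * pow(b - b0, -1, q) % q
                table[x] = (a, b)
                break                             # start a fresh walk

p, q, g = 1019, 509, 4                # p = 2q + 1, g generates the order-q subgroup
y = pow(g, 123, p)
print(pollard_rho_dlog(g, y, p, q))   # recovers 123
```

The useful-work question in the talk is then how a verifier can check cheaply that such stored points really came from honest walks, which is where a SNARK might help.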
Kosta Beznosov has been working with a large-scale service provider to forecast suspicious account activity. He trained a classifier to identify, from their behaviour patterns around login events, the users who are most likely to be vulnerable to large-scale social engineering attacks such as phishing. The hope is that the vulnerable subset is small enough to be worth paying special attention to. Their ground truth was accounts flagged as suspicious by the provider, and he claims a false positive rate of 0.5%.
The second talk, by Cheng Wang, was given by video, as Cheng is from the PRC and St Kitts recognises Taiwan. Entitled Thinking Like A Fraudster: Detecting Fraudulent Transactions Via Statistical Sequential Features, it’s based on real bank data. Almost all the frauds were B2C, were not made from frequently-used IP addresses (so stolen accounts dominate, rather than deception of the account holder), and present as sequences (so a second transaction from A to B is suspicious); short time differences and amount differences between consecutive transactions are an even stronger fraud signal. You can intercept a lot of fraud by blocking a second transaction within 30 seconds, or within $20 in value, of the first. Finally, fraudulent transaction values have weird peaks, unlike genuine ones, which follow a power-law distribution: fraudsters may try $1000 then $2000 then another $2000, thinking this is more likely to work than $5000. They built a fraud engine with a sliding window for performance, and found that XGBoost and random forests worked best.
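Just to make the thresholds concrete, here is a literal rendering of the simplest rule above (field names are illustrative; the deployed engine feeds many such sliding-window features into XGBoost / random-forest models rather than hard-coding one rule):

```python
def flag_repeat(prev, curr):
    """Flag a second transfer between the same payer/payee pair if it
    arrives within 30 seconds, or within $20 in value, of the first."""
    same_pair = prev["src"] == curr["src"] and prev["dst"] == curr["dst"]
    close_in_time = curr["time"] - prev["time"] <= 30             # seconds
    close_in_value = abs(curr["amount"] - prev["amount"]) <= 20   # dollars
    return same_pair and (close_in_time or close_in_value)

print(flag_repeat({"src": "A", "dst": "B", "time": 100, "amount": 2000},
                  {"src": "A", "dst": "B", "time": 115, "amount": 2000}))  # True
```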
Alex Sangers has been working on a Secure Multiparty PageRank algorithm for collaborative fraud detection. Some transaction-graph metrics are used in fraud detection, including shortest paths and strongly connected components, and PageRank is pretty good as a measure of centrality. But could it be computed collaboratively between banks that cannot share identifiable customer data with each other? Alex has worked out how to do this with homomorphic encryption, exploiting the local graph knowledge of each participating bank. He did a trial implementation and validated it against a real transaction graph to understand scalability; like vanilla PageRank it scales as O(n) and is highly parallelisable.
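For reference, this is the plaintext power-iteration update that the protocol evaluates under additively homomorphic encryption (a minimal sketch that ignores dangling nodes; in the real scheme each bank contributes only the edges it can see locally):

```python
def pagerank(out_edges, n, damping=0.85, iters=50):
    """Plaintext power-iteration PageRank over a transaction graph given as
    {payer_node: [payee_nodes]}; each node's rank is spread evenly over its
    outgoing edges each round."""
    rank = [1.0 / n] * n
    for _ in range(iters):
        incoming = [0.0] * n
        for u, dests in out_edges.items():
            if dests:
                share = rank[u] / len(dests)
                for v in dests:
                    incoming[v] += share
        rank = [(1 - damping) / n + damping * r for r in incoming]
    return rank

# tiny three-node example: 0 pays 1 and 2, 1 pays 2, 2 pays 0
print(pagerank({0: [1, 2], 1: [2], 2: [0]}, n=3))
```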
The last session of FC19 was kicked off by Ahmad Ibrahim, talking about Healing & attestation for low-end embedded devices. Given that behavioural malware detection on embedded devices is hard, remote attestation may sometimes be needed; but how do you react to attestation failures? He proposes an attestation protocol that enables the compromised region of memory to be identified and repaired.
Lianying Zhao wants to make one-time programs practical, to enforce various kinds of intellectual property rights. This has been proposed in the past with various cryptographic mechanisms, such as garbled circuits, but TEEs now enable a low-cost and general-purpose engineering approach, which Lianying has been investigating, both in a straightforward way and in combination with a garbled-circuit compiler.
Bingsheng Zhang’s talk was on statement voting. He’s interested in whether electronic voting enables us to do more than just re-implement what we do with paper ballots, and achieve more effective democracy. He proposes that people should express their views as statements rather than as ranked choices, and discusses technical mechanisms for dealing with large corpora of statements while making it hard to attribute them to identifiable voters. The ideas are inspired by Bryan Ford’s work on delegative democracy; the idea is to make Swiss-style direct democracy scale by enabling people to delegate decisions recursively to other people whose judgment on the issue they trust. Questions touched on the fact that in existing implementations, such as California’s law for ballot initiatives, it’s usually only the rich who can afford to collect enough signatures.
Muslum Ozgur Ozmen gave the last talk of FC19, on fast authentication from aggregate signatures. A number of fast signature schemes combine signatures precomputed for substrings of the hash output, without using additional randomisation; he presents an attack on an NTRU-based scheme invented by Bansarkhani and Buchmann and later made provably secure by Hoffstein et al. He notes that the attack may generalise to some other aggregate schemes, and proposes a fix.