Category Archives: Security engineering

Bad security, good security, case studies, lessons learned

Thinking of selling your old phone? Watch out!

Today we unveil two papers describing serious and widespread vulnerabilities in Android mobile phones. The first presents a Security Analysis of Factory Resets. Now that hundreds of millions of people buy and sell smartphones secondhand and use them for everything from banking to dating, it’s important to be able to sanitize your phone. You need to clean it when you buy it, so you don’t get caught by malware; and even more when you sell it, so you don’t give away your bank credentials or other personal information. So does the factory reset function actually work? We bought a couple of dozen second-hand Android phones and tested them to find out.

The news is not at all good. We were able to retrieve the Google master cookie from the great majority of phones, which means that we could have logged on to the previous owner’s gmail account. The reasons for failure are complex; new phones are generally better than old ones, and Google’s own brand phones are better than the OEM offerings. However the vendors need to do a fair bit of work, and users need to take a fair amount of care.

Attacks on a sold phone that could not be properly sanitized are one example of what we call a “user-not-present” attack. Another is when your phone is stolen. Many security software vendors offer a facility to lock or wipe your phone remotely when this happens, and it’s a standard feature with mobile antivirus products. Do these ‘solutions’ work?

You guessed it. Antivirus software that relies on a faulty factory reset can only go so far, and there’s only so much you can do with a user process. The AV vendors have struggled with a number of design tradeoffs, but the results are not that impressive. See Security Analysis of Consumer-Grade Anti-Theft Solutions Provided by Android Mobile Anti-Virus Apps for the gory details. These failings mean that staff at firms which handle lots of second-hand phones (whether lost, stolen, sold or given to charity) could launch some truly industrial-scale attacks. These papers appear today at the Mobile Security Technology workshop at IEEE Security and Privacy.

Meeting Snowden in Princeton

I’m at Princeton where Ed Snowden is due to speak by live video link in a few minutes, and have a discussion with Bart Gellmann.

Yesterday he spent four hours with a group of cryptographers from industry and academia, of which I was privileged to be one. The topic was the possible and likely countermeasures, both legal and technical, against state surveillance. Ed attended as the “Snobot”, a telepresence robot that let him speak to us, listen and move round the room, from a studio in Moscow. As well as over a dozen cryptographers there was at least one lawyer and at least one journalist familiar with the leaked documents. Yesterday’s meeting was under the Chatham House rule, so I may not say who said what; any new disclosures may have been made by Snowden, or by one of the journalists, or by one of the cryptographers who has assisted journalists with the material. Although most of what was discussed has probably appeared already in one place or another, as a matter of prudence I’m publishing these notes on the blog while I’m enjoying US first-amendment rights, and will sanitise them from my laptop before coming back through UK customs.

The problem of state surveillance is a global one rather than an NSA issue, and has been growing for years, along with public awareness of it. But we learned a lot from the leaks; for example, wiretaps on the communications between data centres were something nobody thought of; and it might do no harm to think a bit more about the backhaul in CDNs. (A website that runs TLS to a CDN and then bareback to the main server is actually worse than nothing, as we lose the ability to shame them.) Of course the agencies will go for the low-hanging fruit. Second, we also got some reassurance; for example, TLS works, unless the agencies have managed to steal or coerce the private keys, or hack the end systems. (This is a complex discussion given CDNs, problems with the CA ecology and bugs like Heartbleed.) And it’s a matter of record that Ed trusted his life to Tor, because he saw from the other side that it worked.

Third, the leaks give us a clear view of an intelligence analyst’s workflow. She will mainly look in Xkeyscore, which is the Google of 5eyes comint: a federated system hoovering up masses of stuff, not just from 5eyes’ own assets but from other countries where the NSA cooperates or pays for access. Data are “ingested” into a vast rolling buffer; an analyst can run a federated search, using a selector (such as an IP address) or a fingerprint (something that can be matched against the traffic). There are other such systems: “Dancing Oasis” is the Middle Eastern version. Some Xkeyscore assets are actually compromised third-party systems; there are multiple cases of rooted SMS servers that are queried in place and the results exfiltrated. Others involve vast infrastructure, like Tempora. If data in Xkeyscore are marked as of interest, they’re moved to Pinwale to be memorialised for 5+ years. This is one function of the MDRs (massive data repositories, now more tactfully renamed mission data repositories) like Utah. At present, storage lags behind ingestion; Xkeyscore buffer times just depend on volumes and what storage they managed to install, plus what they manage to filter out.

As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a “stolen cert”, presumably a private key obtained through hacking). Otherwise the analyst sends the ciphertext to CES and they either decrypt it or say they can’t. There’s no evidence of a “wow” cryptanalysis; it was key theft, or an implant, or a predictable RNG, or supply-chain interference. Cryptanalysis has been seen of RC4, but not of elliptic curve crypto, and there’s no sign of exploits against other commonly used algorithms. Of course, the vendors of some products have been coopted, notably Skype. Homegrown crypto is routinely problematic, but properly implemented crypto keeps the agency out; gpg ciphertexts with RSA 1024 were returned as fails.

With IKE, the NSA were interested in getting the original handshakes, harvesting them all systematically worldwide. These are databased and indexed. The quantum-type attacks were common against non-crypto traffic; it’s easy to spam a poisoned link. However, there is no evidence at all of active attacks on cryptographic protocols, or of any break-and-poison attack on crypto links. It is, however, possible that the hacking crew can use your cryptography to go after your end system rather than the content, if for example your crypto software has a buffer overflow.

What else might we learn from the disclosures when designing and implementing crypto? Well, read the disclosures and use your brain. Why did GCHQ bother stealing all the SIM card keys for Iceland from Gemalto, unless they have access to the local GSM radio links? Just look at the roof panels on US or UK embassies, which look like concrete but are actually transparent to RF. So when designing a protocol, ask yourself whether a local listener is a serious consideration.

In addition to the Gemalto case, Belgacom is another case of hacking X to get at Y. The kind of attack here is now completely routine: you look for the HR spreadsheet in corporate email traffic, use this to identify the sysadmins, then chain your way in. Companies need to have some clue if they’re to stop attacks like this succeeding almost trivially. By routinely hacking companies of interest, the agencies are comprehensively undermining the security of critical infrastructure, and claim it’s a “nobody but us” capability. However, that’s not going to last; other countries will catch up.

Would opportunistic encryption help, such as using unauthenticated Diffie-Hellman everywhere? Quite probably; but governments might then simply compel the big service firms to make the seeds predictable. At present, key theft is probably more common than key compulsion in US operations (though other countries may be different). If the US government ever does use compelled certs, it’s more likely to be the FBI than the NSA, because of the latter’s focus on foreign targets. The FBI will occasionally buy hacked servers to run in place as honeypots, but Stuxnet and Flame used stolen certs. Bear in mind that anyone outside the USA has zero rights under US law.

Is it sensible to use medium-security systems such as Skype to hide traffic, even though they will give law enforcement access? For example, an NGO contacting people in one of the Stans might not want to incriminate them by using cryptography. The problem with this is that systems like Skype will give access not just to the FBI but to all sorts of really unsavoury police forces.

FBI operations can be opaque because of the care they take with parallel construction; the Lavabit case was maybe an example. It could have been easy to steal the key, but then how would the intercepted content have been used in court? In practice, there are tons of convictions made on the basis of cargo manifests, travel plans, calendars and other such plaintext data about which a suitable story can be told. The FBI considers it to be good practice to just grab all traffic data and memorialise it forever.

The NSA is even more cautious than the FBI, and won’t use top exploits against clueful targets unless it really matters. Intelligence services are at least aware of the risk of losing a capability, unlike vanilla law enforcement, who once they have a tool will use it against absolutely everybody.

Using network intrusion detection against bad actors is very much like the attack / defence evolution seen in the anti-virus business. A system called Tutelage uses Xkeyscore infrastructure and matches network traffic against signatures, just like AV, but it has the same weaknesses. Script kiddies are easily identifiable from their script signatures via Xkeyscore, but the real bad actors know how to change network signatures, just as modern malware uses packers to become highly polymorphic.

Cooperation with companies on network intrusion detection is tied up with liability games. DDoS attacks from Iran spooked US banks, which invited the government in to snoop on their networks, but above all wanted liability protection.

Usability is critical. Lots of good crypto never got widely adopted because it was too hard to use; think of PGP. On the other hand, Tails is horrifically vulnerable to traditional endpoint attacks, but you can give it as a package to journalists to use so they won’t make so many mistakes. The source has to think “How can I protect myself?”, which makes it really hard, especially for a source without a crypto and security background. You just can’t trust random journalists to be clueful about everything from scripting to airgaps. Come to think of it, a naive source shouldn’t trust their life to SecureDrop; they should use gpg before sending stuff to it, but they won’t figure out that it’s a good idea to suppress key IDs. Engineers who design stuff for whistleblowers and journalists must be really thoughtful and careful if they want to ensure their users won’t die when they screw up. The goal should be that no single error is fatal, and so long as failures aren’t compounded the users will stay alive. Bear in mind that non-roman-language countries use numeric passwords, often just 8 digits. And being a target can really change the way you operate. For example, password managers are great, but not for someone like Ed, as they put too many eggs in one basket. If you’re a target, create a memory castle, or a token that can be destroyed on short notice. If you’re a target like Ed, you have to compartmentalise.

On the policy front, one of the eye-openers was the scale of intelligence sharing – it’s not just 5 eyes, but 15 or 35 or even 65 once you count all the countries sharing stuff with the NSA. So how does governance work? Quite simply, the NSA doesn’t care about policy. Their OGC has 100 lawyers whose job is to “enable the mission” – to figure out loopholes or new interpretations of the law that let stuff get done. How do you restrain this? Could you use courts in other countries that have stronger human-rights law? The precedents are not encouraging. New Zealand’s GCSB was sharing intel with Bangladeshi agencies while the NZ government was investigating them for human-rights abuses. Ramstein in Germany is involved in all the drone killings, as fibre is needed to keep latency low enough for remote vehicle pilots. The problem is that the intelligence agencies figure out ways to shield the authorities from culpability, and this should not happen.

Jurisdiction is a big soft spot. When will CDNs get tapped on the shoulder by local law enforcement in dodgy countries? Can you lock stuff out of particular jurisdictions, so your stuff doesn’t end up in Egypt just for load-balancing reasons? Can the NSA force data to be rehomed in a friendly jurisdiction, e.g. by a light DoS? Then they “request” stuff from a partner rather than “collecting” it.

The spooks’ lawyers play games, saying for example that they dumped content, but if you know the IP address and file size you often have it; and an IP address is a good enough pseudonym for most intel / LE use. They deny that they outsource to do legal arbitrage (e.g. NSA spies on Brits and GCHQ returns the favour by spying on Americans). Are they telling the truth? In theory there will be an MOU between the NSA and the partner agency stipulating respect for each other’s laws, but there can be caveats, such as a classified version which says “this is not a binding legal document”. The sad fact is that law and legislators are losing the capability to hold people in the intelligence world to account, and also losing the appetite for it.

The deepest problem is that the system architecture that has evolved in recent years holds masses of information on many people with no intelligence value, but with vast potential for political abuse.

Traditional law enforcement worked on individualised suspicion; end-system compromise is better than mass search. Ed is on the record as leaving to the journalists all decisions about what targeted attacks to talk about, as many of them are against real bad people, and as a matter of principle we don’t want to stop targeted attacks.

Interference with crypto in academia and industry is longstanding. People who intern with a clearance get a “lifetime obligation” when they go through indoctrination (yes, that’s what it’s called), and this includes pre-publication review of anything relevant they write. The prepublication review board (PRB) at the CIA is notoriously unresponsive and you have to litigate to write a book. There are also specific programmes to recruit cryptographers, with a view to having friendly insiders in companies that might use or deploy crypto.

The export control mechanisms are also used as an early warning mechanism, to tip off the agency that kit X will be shipped to country Y on date Z. Then the technicians can insert an implant without anyone at the exporting company knowing a thing. This is usually much better than getting stuff Trojanned by the vendor.

Western governments are foolish to think they can develop NOBUS (no-one but us) technology and press the stop button when things go wrong, as this might not be true for ever. Stuxnet was highly targeted and carefully delivered but it ended up in Indonesia too. Developing countries talk of our first-mover advantage in carbon industrialisation, and push back when we ask them to burn less coal. They will make the same security arguments as our governments and use the same techniques, but without the same standards of care. Bear in mind, on the equities issue, that attack is way way easier than defence. So is cyber-war plausible? Politically no, but at the expert level it might eventually be so. Eventually something scary will happen, and then infrastructure companies will care more, but it’s doubtful that anyone will do a sufficiently coordinated attack on enough diverse plant through different firewalls and so on to pose a major threat to life.

How can we push back on the poisoning of the crypto/security community? We have to accept that some people are pro-NSA while others are pro-humanity. Some researchers do responsible disclosure while others devise zero-days and sell them to the NSA or Vupen. We can push back a bit by blocking papers from conferences or otherwise denying academic credit where researchers prefer cash or patriotism to responsible disclosure, but that only goes so far. People who can pay for a new kitchen with their first exploit sale can get very patriotic; NSA contractors have a higher standard of living than academics. It’s best to develop a culture where people with and without clearances agree that crypto must be open and robust. The FREAK attack was based on export crypto of the 1990s.

We must also strengthen post-national norms in academia, while in the software world we need transparency, not just in the sense of open source but of business relationships too. Open source makes it harder for security companies to sell different versions of the product to people we like and people we hate. And the NSA may have thought dual-EC was OK because they were so close to RSA; a sceptical purchaser should have observed how many government speakers help them out at the RSA conference!

Secret laws are pure poison; government lawyers claim authority and act on it, and we don’t know about it. Transparency about what governments can and can’t do is vital.

On the technical front, we can’t replace the existing infrastructure, so it won’t be possible in the short term to give people mobile phones that can’t be tracked. However it is possible to layer new communications systems on top of what already exists, as with the new generation of messaging apps that support end-to-end crypto with no key escrow. As for whether such systems take off on a large enough scale to make a difference, ultimately it will all be about incentives.

Security Protocols 2015

I’m at the 23rd Security Protocols Workshop, whose theme this year is information security in fiction and in fact. Engineering is often inspired by fiction, and vice versa; what might we learn from this?

I will try to liveblog the talks in followups to this post.

There exists a classical model of the photon after all

Many people assume that quantum mechanics cannot emerge from classical phenomena, because no-one has so far been able to think of a classical model of light that is consistent with Maxwell’s equations and reproduces the Bell test results quantitatively.

Today Robert Brady and I unveil just such a model. It turns out that the solution was almost in plain sight, in James Clerk Maxwell’s 1861 paper On Physical Lines of Force in which he derived Maxwell’s equations, on the assumption that magnetic lines of force were vortices in a fluid. Updating this with modern knowledge of quantised magnetic flux, we show that if you model a flux tube as a phase vortex in an inviscid compressible fluid, then wavepackets sent down this vortex obey Maxwell’s equations to first order; that they can have linear or circular polarisation; and that the correlation measured between the polarisation of two cogenerated wavepackets is exactly the same as is predicted by quantum mechanics and measured in the Bell tests.
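
For readers who want the target numbers, the textbook quantum-mechanical prediction that any such classical model has to reproduce, for polarisation-entangled photon pairs measured at analyser angles a and b, is

```latex
E(a,b) = \cos 2(a - b), \qquad
S = \bigl|E(a,b) - E(a,b') + E(a',b) + E(a',b')\bigr| \le 2\sqrt{2},
```

where S is the CHSH statistic; its quantum maximum of 2√2 exceeds the local hidden-variable bound of 2, and it is this excess that the Bell tests measure.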

This follows work last year in which we explained Yves Couder’s beautiful bouncing-droplet experiments. There, a completely classical system is able to exhibit quantum-mechanical behaviour as the wavefunction ψ appears as a modulation on the driving oscillation, which provides coherence across the system. Similarly, in the phase vortex model, the magnetic field provides the long-range order and the photon is a modulation of it.

We presented this work yesterday at the 2015 Symposium of the Trinity Mathematical Society. Our talk slides are here and there is an audio recording here.

If our sums add up, the consequences could be profound. First, it will explain why quantum computers don’t work, and blow away the security ‘proofs’ for entanglement-based quantum cryptosystems (we already wrote about that here and here). Second, if the fundamental particles are just quasiparticles in a superfluid quantum vacuum, there is real hope that we can eventually work out where all the mysterious constants in the Standard Model come from. And third, there is no longer any reason to believe in multiple universes, or effects that propagate faster than light or backward in time – indeed the whole ‘spooky action at a distance’ to which Einstein took such exception. He believed that action in physics was local and causal, as most people do; our paper shows that the main empirical argument against classical models of reality is unsound.

Financial Cryptography 2015

I will be trying to liveblog Financial Cryptography 2015.

The opening keynote was by Gavin Andresen, chief scientist of the Bitcoin Foundation, and his title was “What Satoshi didn’t know.” The main unknown six years ago when bitcoin launched was whether it would bootstrap; Satoshi thought it might be used as a spam filter or a practical hashcash. In reality, the bootstrap was someone buying a couple of pizzas for 10,000 bitcoins. Another unknown when Gavin got involved in 2010 was whether it was legal; if you’d asked the SEC then, they might have classified it as a Ponzi scheme, but now their alerts are about bitcoin being used in Ponzi schemes. The third thing was how annoying people can be on the Internet; people will abuse your system for fun if it’s popular. An example was penny flooding, where you send coins back and forth between your sybils all day long. Gavin invented “proof of stake”; in its early form it meant prioritising payers who turn over coins less frequently. The idea was that scarcity plus utility equals value; in addition to the bitcoins themselves, another scarce resource emerges: the old, unspent transaction outputs (UTXOs). Perhaps these could be used for further DoS attack prevention or as a pseudonymous identity anchor.
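
To make the “prioritise payers who turn over coins less frequently” idea concrete, here is a minimal Python sketch of a coin-age priority metric, in the spirit of the one early Bitcoin clients used to rank transactions; the tuple layout and the division by transaction size are illustrative assumptions rather than Gavin’s code.

```python
def coin_age_priority(inputs, tx_size_bytes, current_height):
    """Rank a transaction by how long its input coins have sat unspent.

    inputs: list of (value_in_base_units, block_height_when_created) tuples.
    Older, larger inputs give higher priority, so coins that turn over
    less frequently get preferential treatment, a crude "proof of stake".
    """
    total_coin_age = sum(
        value * (current_height - created_height)
        for value, created_height in inputs
    )
    return total_coin_age / tx_size_bytes


# Example: one old, large input and one recent, small one.
inputs = [(10_000_000, 300_000), (50_000, 349_990)]
print(coin_age_priority(inputs, tx_size_bytes=250, current_height=350_000))
```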

It’s not even clear that Satoshi is or was a cryptographer; he used only ECC / ECDSA, hashes and SSL (naively), he didn’t bother compressing public keys, and comments suggest he wasn’t up on the latest crypto research. In addition, the rules for letting transactions into the chain are simple; there’s no subtlety about transaction meaning, which is mixed up with validation and transaction fees; a programming-languages guru would have done things differently. Bitcoin now allows hashes of redemption scripts, so that the script doesn’t have to be disclosed upfront. Another recent innovation is using invertible Bloom lookup tables (IBLTs) to transmit expected differences rather than transmitting all transactions over the network twice. Also, since 2009 we have gained FHE, NIZKPs and SNARKs from the crypto research folks; the things on which we still need more research include pseudonymous identity, practical privacy, mining scalability, probabilistic transaction checking, and whether we can use streaming algorithms. In questions, Gavin remarked that regulators rather like the idea that there is a public record of all transactions; they might be more negative if it were completely anonymous. In the future, only recent transactions will be universally available; if you want the old stuff you’ll have to store it. Upgrading is hard though; Gavin’s big task this year is to increase the block size, and getting everyone in the world to update their software at once is not trivial. People say: “Why do you have to fix the software? Isn’t bitcoin done?”
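
For readers who haven’t met IBLTs, here is a toy Python sketch of the data structure itself rather than of any specific Bitcoin proposal: each transaction ID is hashed into a few cells, the IDs the peer already knows are deleted, and the remaining difference is “peeled” out of the residue. The table size, the use of random.Random as a stand-in for hash functions, and the small integer txids are all illustrative choices.

```python
import hashlib
import random

class IBLT:
    """Toy invertible Bloom lookup table over integer keys (e.g. short txids).

    Each key lands in k cells; a cell keeps a count, an XOR of keys and an
    XOR of key checksums. Deleting the keys a peer already knows leaves a
    small residue from which the unknown keys can be peeled out.
    """
    def __init__(self, m=64, k=3):
        self.m, self.k = m, k
        self.count = [0] * m
        self.key_xor = [0] * m
        self.chk_xor = [0] * m

    def _cells(self, key):
        # stand-in for k independent hash functions: k distinct cells per key
        return random.Random(key).sample(range(self.m), self.k)

    def _checksum(self, key):
        return int(hashlib.sha256(b"chk" + str(key).encode()).hexdigest(), 16)

    def _update(self, key, delta):
        chk = self._checksum(key)
        for c in self._cells(key):
            self.count[c] += delta
            self.key_xor[c] ^= key
            self.chk_xor[c] ^= chk

    def insert(self, key): self._update(key, +1)
    def delete(self, key): self._update(key, -1)

    def peel(self):
        """List keys inserted but not deleted (succeeds w.h.p. if few remain)."""
        found, progress = [], True
        while progress:
            progress = False
            for c in range(self.m):
                if self.count[c] == 1 and \
                        self._checksum(self.key_xor[c]) == self.chk_xor[c]:
                    key = self.key_xor[c]
                    found.append(key)
                    self.delete(key)   # removes it from its other cells too
                    progress = True
        return found

t = IBLT()
for txid in (101, 202, 303, 404):   # transactions this node has
    t.insert(txid)
for txid in (101, 202):             # transactions its peer already knows
    t.delete(txid)
print(sorted(t.peel()))             # -> [303, 404]
```

The point is that the table only needs to be sized for the expected difference between the two nodes’ views, not for the total number of transactions.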

I’ll try to blog the refereed talks in comments to this post.

Launch of security economics MOOC

TU Delft has just launched a massive open online course on security economics to which three current group members (Sophie van der Zee, David Modoc and I) have contributed lectures, along with one alumnus (Tyler Moore). Michel van Eeten of Delft is running the course (Delft does MOOCs while Cambridge doesn’t yet), and there are also talks from Rainer Boehme. This was pre-announced here by Tyler in November.

The videos will be available for free in April; if you want to take the course now, I’m afraid it costs $250. The deal is that EdX paid for the production and will sell it as a professional course to security managers in industry and government; once that’s happened we’ll make it free to all. This is the same basic approach as with my book: rope in a commercial publisher to help produce first-class content that then becomes free to all. But if your employer is thinking of giving you some security education, you could do a lot worse than to support the project and enrol here.

Curfew tags – the gory details

In previous posts I told the story of how Britain’s curfew tagging system can fail. Some prisoners are released early provided they wear a tag to enforce a curfew, which typically means that they have to stay home from 7pm to 7am; some petty offenders get a curfew instead of a prison sentence; and some people accused of serious crimes are tagged while on bail. In dozens of cases, curfewees had been accused of tampering with their tags, but had denied doing so. In a series of these cases, colleagues and I were engaged as experts, but when we demanded tags for testing, the prosecution was withdrawn and the case collapsed. In the most famous case, three men accused of terrorist offences were released; although one has since absconded, the other two are now free in the UK.

This year, a case finally came to trial. Our client, to whom we must refer simply as “Special Z”, was accused of tag tampering, which he denied vigorously. I was instructed as an expert along with my colleague Dr James Dean of Materials Science. Here is my expert report, together with James’s report and addendum, as well as a video of a tag being removed using much less than the amount of force required by the system specification.

The judge was not ready to set a precedent that could have thrown the UK tagging system into chaos. However, I understand our client has now been released on other grounds. Although the court did order us to hand back all the tags, and fragments of broken tags, so as to protect G4S’s intellectual property, it did not make a secrecy order on our expert reports. We publish them here in the hope that they might provide useful guidance to defendants in similar cases in the future, and to policymakers when tagging contracts come up for renewal, whether in the UK or overseas.

Why password managers (sometimes) fail

We are asked to remember far too many passwords. This problem is most acute on the web. And thus, unsurprisingly, it is on the web that technical solutions have had most success in replacing users’ ad hoc coping strategies. One of the longest established and most widely adopted technical solutions is a password manager: software that remembers passwords and submits them on the user’s behalf. But this isn’t as straightforward as it sounds. In our recent work on bootstrapping adoption of the Pico system [1], we’ve come to appreciate just how hard life is for developers and maintainers of password managers.

In a paper we are about to present at the Passwords 2014 conference in Trondheim, we introduce our proposal for Password Manager Friendly (PMF) semantics [2]. PMF semantics are designed to give developers and maintainers of password managers a bit of a break and, more importantly, to improve the user experience.

Nikka – Digital Strongbox (Crypto as Service)

Imagine, somewhere in the internet that no-one trusts, there is a piece of hardware, a small computer, that works just for you. You can trust it. You can depend on it. Things may get rough but it will stay there to get you through. That is Nikka, it is the fixed point on which you can build your security and trust. [Now as a Kickstarter project]

You may remember our proof-of-concept implementation of password protection for servers – Hardware Scrambling (published here in March). The password scrambler was a small dongle that could be plugged into a Linux computer (we used a Raspberry Pi). Its only purpose was to provide a simple API for encrypting passwords (though it could handle credit cards or anything else up to 32 bytes in length). The beginning of something big?
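
To give a feel for the idea, here is a minimal Python sketch of the concept; the function names and the PBKDF2 pre-hash are illustrative assumptions rather than the dongle’s actual API. The point is simply that the HMAC key lives only in the device, so a stolen password database is useless on its own.

```python
import hashlib
import hmac
import os

# --- what runs on the dongle / secure hardware (hypothetical sketch) ---
DEVICE_SECRET = os.urandom(32)      # never leaves the device

def device_scramble(data: bytes) -> bytes:
    """Single API call: return HMAC-SHA256 of up to 32 bytes of input."""
    if len(data) > 32:
        raise ValueError("input limited to 32 bytes")
    return hmac.new(DEVICE_SECRET, data, hashlib.sha256).digest()

# --- what the web application does ---
def store_password(password: str, salt: bytes) -> bytes:
    # Hash locally first, then scramble with the key-holding device, so a
    # dump of the password database alone is useless to an attacker.
    local = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return device_scramble(local)

def check_password(password: str, salt: bytes, stored: bytes) -> bool:
    return hmac.compare_digest(store_password(password, salt), stored)

salt = os.urandom(16)
record = store_password("correct horse battery staple", salt)
print(check_password("correct horse battery staple", salt, record))  # True
```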

It received some attention (Ars Technica, Slashdot, LWN, …), certainly more than we expected at the time. The discussions that followed also taught us a couple of lessons about how people (mostly geeks, in this context) view security – particularly about the default distrust expressed by those who discussed articles describing our password scrambler.

We eventually decided to build a proper hardware cryptographic platform that could be used for cloud applications. Our requirements were simple. We wanted something fast, “secure” (CC EAL5+ or even FIPS 140-2 certified), scalable, easy to use (no complicated API, just one function call) and provided as a service, so that no-one has to pay the price of an HSM upfront if they just want to have a go at using proper cryptography for their new or old application. That was the beginning of Nikka.

This is our concept: Nikka comprises a set of powerful servers installed in secure data centres. These servers can form clusters delivering high availability and scalability to their clients. Secure hardware forms the backbone of each server and provides a simple interface for use. The second part of Nikka is the set of user applications, plugins and libraries for easy deployment and everyday “invisible” use. Operational procedures, processes, policies and audit logs then guarantee that what we say is actually being done.

We have been building it for a few months now and the scalable cryptographic core seems to work. We have managed to run long-term tests of 150 HMAC transactions per second (HMAC & RNG for password scrambling) on a small development platform while fully utilising the available secure hardware. The server is hosted at ideaSpace and we use it to run functional, configuration and load tests.

We have never before designed a system with so many independent processes – the core is completely asynchronous (starting with Netty for the TCP interface) and we have quickly come to appreciate the detailed trace logging we implemented from the very beginning. Each time we start digging we find something interesting. Real-time visualisation of the performance is quite nice as well.

Nikka is basically a general-purpose cryptographic engine with a middleware layer for easy integration. The password HMAC is used this time only as one of the test applications. Users can share or reserve processing units that have Common Criteria evaluations or even FIPS 140-2 certification – with the option of physical hardware separation between users.

If you like what you have read so far, you can keep reading, watching and supporting us on Kickstarter. It has been great fun so far and we want to turn it into something useful in 2015. If it sounds interesting and you would like to test it early next year, let us know! @DanCvrcek

Pico part IV – Somethings you have

A light-hearted look at the ideas presented a couple of weeks ago in a paper at the UPSIDE workshop in Seattle.

USS Enterprise going even more boldly than usual

One of the problems inherent in boldly going where no man has gone before is that, more often than you might imagine, you may be required to blow up the spaceship on which you’re travelling. James T Kirk was forced to do this at least once, and he and his successors came perilously close to it on several other occasions.

But who gets to make this decision? Given the likelihood that some members of the crew may be possessed by alien life-forms at the time, it seems unwise to leave such decisions to any single individual, even if he’s the captain. And you also can’t require any specific other staff-member to back him up, since they may have had to sacrifice themselves earlier for the good of the many. Auto-destruct sequences, therefore, can typically only be initiated when any three senior officers agree and give their secret passwords.

It is a wonderful sign of the times that, when the first Star Trek episode in which this happens was broadcast in 1969, the passwords required to detonate a spaceship with 400 passengers on board were noticeably weaker than those now required to log in to a typical Kickstarter project.

Secret sharing – background

Some of the most beautiful examples, I believe, of using a simple, readily-understandable idea to solve a complex problem can be found in the secret-sharing systems proposed almost simultaneously by Adi Shamir and George Blakley, way back in 1979. These are just the sort of thing you need to control self-destruct sequences, and are certainly better than the four-character passwords used by Kirk and Scotty.

Most readers of LBT will be familiar with this, but in case you aren’t, the underlying concept is to encode the secret you need to protect – for example, the auto-destruct code – as the coefficients of a particular quadratic equation. If you know any three points on the curve, you can deduce the underlying equation. So all you have to do is give each of Kirk, Scotty, Spock and Bones the coordinates of a point, and any three of these ’shares’ can then unlock the secret which will trigger the detonation. If you want to insist on four or more officers, you use a higher-order polynomial. This is Shamir’s algorithm; Blakley’s is similar, but whichever you use, there is, I think, a real elegance in solving a difficult technical problem using a concept that can be explained in a paragraph to anyone with high-school level maths.
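
For the curious, here is a minimal Python sketch of Shamir’s scheme; real implementations work over a finite field exactly as below rather than over the reals, and the toy prime, the 3-of-4 threshold and the crew roster are just for illustration.

```python
import random

P = 2**61 - 1                       # a prime large enough for this toy example
K = 3                               # officers required
OFFICERS = ["Kirk", "Scotty", "Spock", "Bones"]

def split(secret, k=K, holders=OFFICERS):
    """Hide the secret as the constant term of a random degree k-1 polynomial
    and hand each holder one point (x, f(x)) on the curve."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return {name: (x, f(x)) for x, name in enumerate(holders, start=1)}

def recover(points):
    """Lagrange interpolation at x=0 recovers the constant term: the secret."""
    secret = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(31337)
subset = [shares["Kirk"], shares["Spock"], shares["Bones"]]   # any three will do
print(recover(subset))              # -> 31337
```

Any three of the four points determine the quadratic and hence its constant term; two points on their own tell you nothing about it.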

A somewhat more down-to-earth example of its use can be found in the DNSSEC system used to secure the core servers of the DNS, on which so much of the internet depends. The master key needed to reset this, in the event of some global calamity, is divided up between seven keyholders based in different countries. One of them told me over a beer recently that it required any five of them to get together in the US to be able to reconstruct the system.

PICO

In the Pico project (see earlier posts), we’re exploring the same secret-sharing concepts but at a personal rather than a global or galactic level.

The idea is that your Pico device, which provides authentication on your behalf to a wide variety of systems, should only be able to do so when it is confident that you are present. There are several ways it might detect that – biometrics perhaps being the most obvious – but we wanted a system that would be completely automatic, continuous and non-intrusive: it wouldn’t require you to re-scan your retina every few minutes, for example.

So the Pico assumes that you are present only if it can detect, nearby, a sufficient number of other devices that you normally carry. These ‘Picosiblings’ may be special devices dedicated to this purpose – bluetooth-enabled cufflinks or earrings, for example – or the Picosibling functionality may be built into phones, smart watches, laptops and car key-fobs. But, together, a sufficient number of them constitute an ‘aura’ of safety in which the Pico feels comfortable about releasing your credentials when you use it to log in.

You could just program the Pico to make this decision, but a more secure approach is to use secret-sharing. Your Pico stores your credentials in encrypted form, and the Picosiblings actually enable it by each giving up a share of the secret used to do the encryption. Since this information is not cached, even the Pico cannot decide to expose the information when you are not around.

Some challenges

This basic Picosibling concept is not bad, but can we improve it? How well can we model real-world user behaviour using these ideas? Where might they fall down, and do we need to invent anything new to deal with the edge cases? And can it help us make better decisions about when to blow up a starship?

Let’s consider some scenarios.

1. My Precious

My car keys are with me about 30% of the time, and may occasionally be with someone else. My wedding ring, on the other hand, has been a 100% reliable indicator of my presence for the last 23 years. (It’s too bad my wife didn’t give me a Bluetooth-enabled one.) We all have different possessions which may be more or less reliable indicators that we are around, and we can represent this by giving them different numbers of shares. Most of my Picosiblings might have one share, but my wallet and phone might have two, and my wedding ring four. If my ring is absent, you’ll need significantly more confirmation from other sources, in the same way that you might need disproportionately more senior officers if the captain is absent.
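
Continuing the toy Shamir sketch from the secret-sharing section above, weighting devices needs no new machinery: you simply hand a device as many points on the polynomial as it has shares. The weights and the threshold of six below are made-up values for illustration.

```python
def split_weighted(secret, k, weights):
    """Give each device weights[name] distinct points on the polynomial.
    Any collection of devices whose weights sum to at least k can rebuild
    the secret; anything less learns nothing."""
    holders = [f"{name}#{i}" for name, w in weights.items() for i in range(w)]
    points = split(secret, k, holders)
    return {name: [points[f"{name}#{i}"] for i in range(w)]
            for name, w in weights.items()}

weights = {"wedding_ring": 4, "wallet": 2, "phone": 2, "car_keys": 1, "watch": 1}
shares = split_weighted(31337, 6, weights)

# Ring plus wallet reach the threshold of six; car keys and watch alone do not.
print(recover(shares["wedding_ring"] + shares["wallet"]))   # -> 31337
```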

2. Smarter Picosiblings

Is my watch a good indicator of my presence? Well, yes, when it’s with me, but not when it’s sitting on my bedside table. There may be occasions when Picosiblings are more or less confident that you are present. So we can give each sibling a certain number of shares, but it may not choose to release them to the Pico in all situations. A sibling that detects and recognises your heartbeat might give out differing numbers of its shares based on how confident it is about its recognition. Something you normally wear or carry might include an accelerometer, and give out fewer shares if it hasn’t moved in the last few minutes. The number of shares might decay over time, as the device gets further from the moment at which it was confident of your presence, so a fingerprint sensor might be sufficient to unlock your Pico on its own, if you’ve used it in the last ten seconds. Another metric might be proximity: if the Picosibling could detect the Pico was close by, it might be willing to hand over more shares than if it were on the other side of the room.

3. The trouble with Klingons

But suppose we need something more sophisticated than simply the raw number of shares to unlock the secret? If your starship has a large number of Klingon officers, who place a high value on dying honourably in battle, you might wish to insist that at least two races were involved in any major decision like this.

Fortunately we don’t need a new mechanism: we can just use our current system twice over. We split the core secret into several shares: one for humans, one for Vulcans, one for Klingons, one for Betazoids. Each of those shares can then be subdivided in the same way for the individuals concerned. Two or more Vulcans are needed to reconstruct the Vulcan share, and shares from two or more such species are needed to actually blow up the ship.
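
Again reusing the toy split and recover functions from the secret-sharing section, two-level sharing is just the same code applied twice: split the master secret across the species, then split each species’ y-value among its officers. The rosters and 2-of-n thresholds below are of course made up.

```python
master = 31337   # the auto-destruct code

# Level 1: one share per species; any two species can act together.
species = split(master, k=2, holders=["humans", "vulcans", "klingons", "betazoids"])

# Level 2: split each species' y-value among its own officers.
hx, hy = species["humans"]
humans = split(hy, k=2, holders=["Kirk", "Scotty", "Uhura"])
vx, vy = species["vulcans"]
vulcans = split(vy, k=2, holders=["Spock", "Tuvok", "Sarek"])

# Two humans rebuild the human share, two Vulcans the Vulcan share,
# and the two species-level points together rebuild the master secret.
human_point = (hx, recover([humans["Kirk"], humans["Scotty"]]))
vulcan_point = (vx, recover([vulcans["Spock"], vulcans["Tuvok"]]))
print(recover([human_point, vulcan_point]))   # -> 31337
```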

In the Pico world, suppose your shoes are all Picosiblings. If you have many pairs of shoes, someone entering your bedroom may find they have plenty of shares even if you aren’t around. We can use this method of creating ‘categories of shares’ to limit the effect that a particular class of device can have, to ensure that shoes alone can never be sufficient to do the unlocking.

4. Greater than the sum of its parts

Suppose my car, my car keys, and my house keys are all Picosiblings, and they each have one share. Anyone inside my car is likely to have my car keys, including a thief who has just found them on the pavement. But we could reasonably argue that someone who is inside my car and has my house keys is more likely to be me; that a stranger is less likely to have this particular combination of Picosiblings, and so their conjunction should be worth more than might be indicated by a simple addition of their values.

We can achieve this in various ways; a simple approach is just to cut a share in half and give one half to each sibling. These would be sent to the Pico along with the complete shares, and if it receives matching halves from two devices, it would be able to construct extra shares which correspond to the value of their relationship.

5. You can’t take that away from me

Here’s a challenge for readers. Can you think of a good way to implement negative shares? Let me explain…

Suppose you are in an airport check-in queue and a significant number of your worldly possessions are in the suitcase beside you. Someone who pinches your Pico from your pocket will have plenty of shares accessible either by standing behind you, or by getting close to your suitcase at some later point in the baggage-handling process. Travelling is one of those situations when you might feel a bit vulnerable and so require more confirmation of your presence than you would in your home country. If your suitcase could emit negative shares, then it would raise the threshold of affirmation needed by your Pico when it was around.

There are many challenges here, both technical and practical: for example, anyone who knows that there is a negative influence nearby may be able to remove it, e.g. by tipping the items out of the suitcase. But it would at least deter subtle unobserved attacks in the check-in queue. It would be good if observers, and ideally the Pico itself, could not distinguish positive from negative shares except with regards to their final effect. And so on. As I said: a challenge for the reader!

Sharing options

So we’ve introduced several ideas which can help with some real-life scenarios, yet they’re all based on the same simple secret-sharing concept:

  • Variable numbers of shares per device (based on how reliable each device is as an indicator)
  • Dynamically-changing number of shares (allowing devices to indicate their confidence in your or the Pico’s presence)
  • Categories of shares (allowing us to restrict the influence of a single class of devices)
  • Split shares (allowing combinations of devices to contribute more than simply the sum of their shares)
  • Negative shares (to allow some devices to indicate situations of vulnerability)

Extending the tree

Let me finish with one last question. To what extent are Picos and Picosiblings fundamentally different? Picos gain confidence to submit your credentials to a service based on the reassurance they get from Picosiblings. But we’ve also discussed the idea of Picosiblings getting confidence to submit their shares to a Pico based on other factors in the environment. Perhaps we could extend this hierarchy in the other direction, too.

A company might need k-of-n shares from the major investors’ Picos to appoint their representative director to the board. The company Pico might then need k-of-n of the directors’ Picos to authorise a major decision or transaction. And a commercial building might need k-of-n of the companies’ boards to agree to the resurfacing of the parking lot. This could be extended to political, national, even global decisions.

Perhaps I’m pushing the simple secret-sharing concept too far. But at the very least it’s an interesting thought experiment to consider how much we can represent real-world situations with this one idea.

After all, we need a simple, reliable and secure way to make these decisions in our own back yard, before we start interacting with the rest of the universe.