Category Archives: Security engineering

Bad security, good security, case studies, lessons learned

Boom! Headshot!

It’s about time I confessed to playing online tactical shooters. These are warfare-themed multiplayer games that pit two teams of 16 to 32 players against each other to capture territory and score points. These things can be great fun if you are so inclined, but what’s more, they’re big money these days. A big online shooter like Counter-Strike might sell some two million copies, and the tactical shooter Battlefield 2 has almost 500,000 accounts on the main roster. The modern blockbuster game grosses several times more than a blockbuster movie!

In this post I consider a vital problem for game designers, maintaining the quality of experience for the casual gamer, and it turns out this is to some extent a security problem. People gravitate towards these games because the thrill of competition against real humans (instead of AI) is tangible, but this is a double-edged sword. Put simply, the enemy of the casual gamer is the gross unfairness of online play. But why do these games feel so unfair? Why do you shake the hand of a squash player, or clap your footie opponent on the back after losing a good game, but walk away from a loss in a tactical shooter angry, frustrated and maybe even abusive? Is it just the nature of the clientele?

This post introduces a draft paper I’ve written called “Boom! Headshot! (Building Neo-Tactics on Network-Level Anomalies in Online Tactical First-Person Shooters)”, named after this movie clip, which may hold some of the answers to understanding why there is so much perceived unfairness. If the paper is a little dense, try the accompanying presentation instead.

Do you want to know more? If so, read on…

Random isn't always useful

It’s common to think of random numbers as an essential building block in security systems. Cryptographic session keys are chosen at random, then shared with the remote party. Security protocols use “nonces” for “freshness”. Randomness can also slow down information-gathering attacks, although there it is seldom a panacea. However, as George Danezis and I recently explained in “Route Fingerprinting in Anonymous Communications”, randomness can lead to uniqueness: exactly the property you don’t want in an anonymity system.
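
To see why random choices can act as a fingerprint, here is a toy illustration of my own (not the analysis in the paper): if each client learns only a random subset of the network’s nodes, the number of possible subsets is so vast that hardly any two clients will share the same one, so a client’s partial view of the network can itself identify the client.

```python
from math import comb

n_nodes, known = 1000, 40       # toy figures: total nodes, nodes a client happens to know about
views = comb(n_nodes, known)    # number of distinct partial views a client could hold
print(views > 10**70)           # True: far more possible views than clients, so each
                                # view is effectively unique and acts as a fingerprint
```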

Which services should remain offline?

Yesterday I gave a talk on confidentiality at the EMIS annual conference. I gained yet more insights into Britain’s disaster-prone health computerisation project. Why, for example, will this cost eleven figures, when EMIS writes the software used by 60% of England’s GPs to manage their practices with an annual development budget of only £25m?

On the consent front, it turns out that patients who exercise even the mildest form of opt-out from the national database (having their addresses stop-noted, which is the equivalent of going ex-directory — designed for celebs and people in witness protection) will not be able to use many of the swish new features we’re promised, such as automatic repeat prescriptions. There are concerns that providing a degraded health service to people who tick the privacy box might undermine the validity of consent to information sharing.

On the confidentiality front, people are starting to wrestle with the implications of allowing patients online access to their records. Vulnerable patients (for example, under-age girls who have had pregnancy terminations without telling their parents) could be at risk if they can access sensitive data online. They may be coerced into accessing it, or their passwords may become known to friends and family. So there’s talk of a two-tier online record, in effect introducing multilevel security into record access. Patients would be asked whether they wanted some, all, or none of their records to be available to them online. I don’t think the Department of Health understands the difficulties of multilevel security. I can’t help wondering whether online patient access is needed at all. Very few patients ever exercise their right to view and get a copy of their records; making all records available online seems more and more like a political gimmick to get people to accept the agenda of central data collection.

We don’t seem to have good ways of deciding what services should be kept offline. There’s been much debate about elections, and here’s an interesting case from healthcare. What else will come up, and are there any general principles we’re missing?

How many Security Officers? (reloaded)

Some years ago I wrote a subsection in my thesis (sec 8.4.3, p. 154), entitled “How Many Security Officers are Best?”, where I reviewed the various operating procedures I’d seen for Hardware Security Modules and pondered why some people chose to use two separate parties to oversee a critical action and some chose to use three. Occasionally a single person is even deliberately entrusted with great power and responsibility, because there can then be no question where to lay the blame if something goes wrong. So, “one, two, or three?”, I said to myself.

In the end I plumped for three… with some logic excerpted from my thesis below:

But three security officers does tighten security: a corrupt officer will be outnumbered, and deceiving two people in different locations simultaneously is next to impossible. The politics of negotiating a three-way collusion is also much harder: the two bad officers will have to agree on their perceptions of the third before approaching him. Forging agreement on character judgements when the stakes are high is very difficult. So while it may be unrealistic to have three people sitting in on a long-haul reconfiguration of the system, where the officers’ duties are short and clearly defined, three keyholders provides that extra protection.

Some time later, I mentioned the subject to Ross, and he berated me for my over-complicated logic. His general line of argument ran along these lines: “The real threat for Security Officers is not that they blackmail, bribe or coerce one another, it’s that they help! Here, Bob, you go home early mate; I know you’ve got to pack for your business trip, and I’ll finish off installing the software on the key loading PC. That sort of thing. Having three key custodians makes ‘helping’ and such friendly tactics much harder – the bent officer must co-ordinate on two fronts.”

But recently my new job has exposed me to a number of real dual-control and split-knowledge systems. I was looking over some source code for an HSM key-loading command, in fact, and I spotted code that took a byte array of key material and split it into three components, each with odd parity: it generated two fresh, totally random components with odd parity, and then XORed both onto the third. Hmmm, I thought, so the third component would carry the parity information of the original key; dangerous, a leakage of information preferentially to the third key-component holder! But wrong… because the parity of the original key is known anyway in the case of a DES key: it’s always odd.
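
For the curious, here is a minimal sketch in Python of that splitting scheme (my own illustration, not the HSM’s actual code): two fresh random components are forced to odd parity in every byte, and the third is the XOR of the key with both. Because a DES key already has odd parity in every byte, the third component comes out odd-parity too, so its holder learns nothing extra about the key.

```python
import os

def set_odd_parity(data: bytes) -> bytes:
    """Force each byte to odd parity by flipping its least significant bit if needed."""
    out = bytearray(data)
    for i, b in enumerate(out):
        if bin(b).count("1") % 2 == 0:   # even number of set bits: flip the parity bit
            out[i] ^= 0x01
    return bytes(out)

def split_key(des_key: bytes):
    """Split an odd-parity DES key into three odd-parity components (XOR secret sharing)."""
    c1 = set_odd_parity(os.urandom(len(des_key)))
    c2 = set_odd_parity(os.urandom(len(des_key)))
    c3 = bytes(k ^ a ^ b for k, a, b in zip(des_key, c1, c2))
    return c1, c2, c3

key = set_odd_parity(os.urandom(8))   # a toy odd-parity DES key
c1, c2, c3 = split_key(key)
assert all(bin(b).count("1") % 2 == 1 for b in c3)             # third component is odd parity too
assert bytes(a ^ b ^ c for a, b, c in zip(c1, c2, c3)) == key  # components recombine to the key
```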

I chatted to our chief technical bod about this, and he casually dropped a bombshell — that shed new light on why three is best, an argument so simple and elegant that it must be true, yet faintly depressing to now believe that no-one agonised over the human psychology of the security officer numbers issue as I did. When keys are exchanged a Key Check Value (KCV) is calculated for each component, by encrypting a string of binary zeroes with the component value. Old-fashioned DES implementations only accepted keys with odd parity, so to calculate KCVs on these components, each must have odd parity as well as the final key itself. For the final key to retain odd parity from odd parity components, there must be an odd number of components (the parity of keys could be adjusted, but this takes more lines of code, and is less elegant than just tweaking a counter in the ‘for’ loop). Now the smallest odd integer greater than one is three. This is why the most valuable keys are exchanged in three components, and not in two!
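
To make the parity argument concrete, here is a small sketch of a component KCV (my own illustration, assuming the pycryptodome library; the KCV length and test value are arbitrary). An old-fashioned implementation that insists on odd-parity keys can only compute this KCV over a component if the component itself has odd parity, and XORing an odd number of odd-parity components yields an odd-parity final key, hence three components rather than two.

```python
# pip install pycryptodome   (an assumption; any DES implementation would do)
from Crypto.Cipher import DES

def kcv(component: bytes, length: int = 3) -> str:
    """Key Check Value: encrypt eight binary-zero bytes under the component and
    keep the first few bytes of the ciphertext."""
    return DES.new(component, DES.MODE_ECB).encrypt(bytes(8))[:length].hex()

component = bytes.fromhex("0123456789abcdef")   # toy component: every byte has odd parity
print(kcv(component))    # the custodian reads this value back to confirm correct entry
```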

So, the moral of the story for me is to make sure I apply Occam’s Razor more thoroughly when I try to deduce the logic behind the status quo. But I still think there are some interesting questions raised about how we share responsibility for critical actions. There still seems to me to be a very marked and qualitative difference in the dynamics of how three people interact versus two, whatever the situation: be it security officers entering keys, pilots flying an aircraft, or even a ménage à trois! Just like the magnitude of the difference between 2D and 3D space.

If one, two and three are all magical numbers, qualitatively different, are there any other qualitative boundaries higher up among the cardinal numbers, and if so, what are they? In a security-critical process such as an election, can ten people adjudicate effectively in a way that thirty could not? Is there underlying logic, or just mysticism, behind the jury of twelve? Or, to take the jury example, and my own tendency to over-complicate, was it simply that in the first proper courtroom built a few hundred years ago there happened only to be space for twelve men on the benches on the right-hand side?

Hot or Not: Revealing Hidden Services by their Clock Skew

Next month I will be presenting my paper “Hot or Not: Revealing Hidden Services by their Clock Skew” at the 13th ACM Conference on Computer and Communications Security (CCS) held in Alexandria, Virginia.

It is well known that quartz crystals, as used for controlling system clocks of computers, change speed when their temperature is altered. The paper shows how to use this effect to attack anonymity systems. One such attack is to observe timestamps from a PC connected to the Internet and watch how the frequency of the system clock changes.

Absolute clock skew has previously been used to tell whether two apparently different machines are in fact running on the same hardware. My paper adds that because the skew depends on temperature, in principle a PC can be located by finding out when the day starts and how long it is, or just by observing that its pattern matches that of a computer in a known location.
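
As a rough illustration of how skew is measured (my own sketch, not the code used in the paper): collect pairs of local receive time and remote timestamp, compute the offset of each pair, and fit a line through offset against elapsed time. The slope is the skew, usually quoted in parts per million, and temperature changes show up as changes in that slope.

```python
def estimate_skew_ppm(samples):
    """Least-squares slope of (remote - local) clock offset against local time, in ppm.
    `samples` is a list of (local_time_s, remote_timestamp_s) pairs."""
    t0 = samples[0][0]
    xs = [local - t0 for local, _ in samples]
    ys = [remote - local for local, remote in samples]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    return slope * 1e6            # dimensionless skew, expressed in parts per million

# A remote clock running 50 ppm fast, sampled once a minute for an hour:
samples = [(t, t * (1 + 50e-6)) for t in range(0, 3600, 60)]
print(round(estimate_skew_ppm(samples)))   # ~50
```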

However, the paper is centered around hidden services. This is a feature of Tor which allows servers to be run without giving away the identity of the operator. These can be attacked by repeatedly connecting to the hidden service, causing its CPU load, and hence temperature, to increase and so change its clock skew. Then the attacker requests timestamps from all candidate servers and finds the one showing the expected clock-skew pattern. I tested this with a private Tor network and it works surprisingly well.

In the graph below, the temperature (orange circles) is modulated by either exercising the hidden service or not. This in turn alters the measured clock skew (blue triangles). The induced load pattern is clear in the clock skew and an attacker could use this to de-anonymise a hidden service. More details can be found in the paper (PDF 1.5M).

Clock skew graph

I happened upon this effect by lucky accident, while trying to improve on the results of the paper “Remote physical device fingerprinting”. A previous paper of mine, “Embedding Covert Channels into TCP/IP”, showed how to extract high-precision timestamps from the Linux TCP initial sequence number generator. When I tested whether these timestamps improved the accuracy of clock-skew measurement, they did indeed, to the extent that I noticed an unusual peak at about the time cron caused the hard disk on my test machine to spin up. Eventually I realised the potential of this effect and ran the further experiments needed to write the paper.

With a single bound it was free!

My book on Security Engineering is now available online for free download here.

I have two main reasons. First, I want to reach the widest possible audience, especially among poor students. Second, I am a pragmatic libertarian on free culture and free software issues; I believe many publishers (especially of music and software) are too defensive of copyright. I don’t expect to lose money by making this book available for free: more people will read it, and those of you who find it useful will hopefully buy a copy. After all, a proper book is half the size and weight of 300-odd sheets of laser-printed paper in a ring binder.

I’d been discussing this with my publishers for a while. They have been persuaded by the experience of authors like David MacKay, who found that putting his excellent book on coding theory online actually helped its sales. So book publishers are now learning that freedom and profit are not really in conflict; how long will it take the music industry?

Protocol design is hard — Flaws in ScatterChat

At the recent HOPE conference, the “secure instant messaging (IM) client” ScatterChat was released in a blaze of publicity. It was designed by J. Salvatore Testa II to allow human rights and democracy activists to communicate securely while under surveillance. It uses cryptography to protect confidentiality and authenticity, integrates Tor to provide anonymity, and is bundled with an easy-to-use interface. Sadly, not everything is as good as it sounds.

When I first started supervising undergraduates at Cambridge, Richard Clayton explained that the real purpose of the security course was to teach students not to invent the following (in increasing order of importance): protocols, hash functions, block ciphers and modes of operation. The academic literature is scattered with the bones of flawed proposals for all of these, many designed by very capable and experienced cryptographers. Instead, wherever possible, implementors should use peer-reviewed building blocks: normally there is already a solution that can do the job, has withstood more analysis, and so is more likely to be secure.

Unfortunately, ScatterChat uses both a custom protocol and a custom mode of operation, neither of which is as secure as hoped. While looking at the developer documentation I found a few problems and reported them to the author. As always, there is the question of whether such vulnerabilities should be disclosed. It is likely that these problems would be discovered eventually, so it is better for them to be caught early and for users to be able to take precautions, rather than for attackers who independently find the weaknesses to exploit them with impunity. I also hope this will serve as a cautionary tale, reminding software designers that cryptography and protocol design are fraught with difficulties and so are better handled through open peer review.

The most serious of the three vulnerabilities was published today in an advisory (technical version) from the ScatterChat author and assigned CVE-2006-4021, but I also found two lesser ones. The three vulnerabilities are described in the full post, in increasing order of severity.

Security Theater at the Grand Coulee Dam

“Security theater” is the term that Bruce Schneier uses to describe systems that look very exciting and dramatic (and make people feel better) but entirely miss the point of delivering any actual real-world security. The world is full of systems like this, and since 9/11 they’ve been multiplying.

Bruce also recently ran a competition for a “movie plot” security threat — the winner described an operation to fly planes laden with explosives into Grand Coulee Dam.

As it happens, I was recently actually at Grand Coulee Dam as a tourist — one of the many places I visited as I filled in the time between the SRUTI and CEAS academic conferences. Because this is a Federal site, provision was made from the beginning for visitors to see how their tax dollars were spent, and you can go on tours of the “3rd Power House” (an extra part of the dam, added between 1966 and 1974, and housing six of the largest hydroelectric generators ever made).

Until 9/11 you could park on top of the dam itself and wander around on a self-guided tour. Now, since the site is of such immense economic significance, you have to park outside the site and go on guided tours of limited capacity. You walk in for about 800 yards (a big deal for Americans, I understand) and must then go through an airport-style metal detector. You are not allowed to take in backpacks or pointy things; you can, however, keep your shoes on. The tour is very interesting and I recommend it. You get to appreciate the huge scale of the place (the tiny-looking blue generators are 33 feet across!), and you go up close to one of the generators as it spins in front of you, powering most of the Northwest and a fair bit of California as well.

The security measures make some sense; although doubtless the place the bad guys would really like to damage is the control center and that isn’t on the tour. However….

… on the other side of the valley, a quarter of a mile from the dam itself, is a “visitor arrival center”. This contains a number of displays about the history of the dam and its construction, and if you have the time, there are films to watch as well. On summer nights they project a massive laser light show from there (a little tacky in places, but they run white water over the dam to project onto, which is deeply impressive). You don’t have to go through any security screening to get into the center. However (and this is the security theater I promised), you cannot take in any camera bags, backpacks etc.!

No purses, backpacks, bags, fannypacks, camera cases or packages of any kind allowed in the visitor center.

What’s the threat here? I went to a dozen other visitor centers (in National Parks such as Yellowstone, Grand Teton, Glacier, Mt. Rainier and Crater Lake) that were generally far more busy than this one. Terrorists don’t usually blow up museums, and if, deity forbid, they blew up this one, it’s only the laser lights that would go out.

New card security problem?

Yesterday my wife received through the post a pre-approved, unsolicited gold MasterCard with a credit limit of over a thousand pounds. The issuer was Debenhams, and the rationale was that she has a store card anyway; if she doesn’t want to use the credit card she is invited to cut it in half and throw it away. (Although US banks do this all the time and UK banks aren’t supposed to, I’ll leave it to the lawyers whether their marketing tactics test the limits of banking regulation.)

My point is this: the average customer has no idea how to ‘cut up’ a card now that it’s got a chip in it. Bisecting the plastic using scissors leaves the chip functional, so someone who fishes it out of the trash might use a yescard to clone it, even if they don’t know the PIN. (Of course the PIN mailer might be in the same bin.)

Here at the Lab we do have access to the means to destroy chips (HNO3, HF) but you really don’t want that stuff at home. Putting 240V through it will stop it working – but as this melts the bonding wires, an able attacker might depackage and rebond the chip.

My own suggestion would be to bisect the whole chip package using a pair of tin snips. If you don’t have those in your toolbox a hacksaw should do. This isn’t foolproof as there exist labs that can retrieve data from chip fragments, but it’s probably good enough to keep out the hackers.

It does seem a bit off, though, that card issuers now put people to the trouble of devising a means of secure disposal of electronic waste, when consumers mostly have neither the knowledge nor the tools to do so properly.

Protecting software distribution with a cryptographic build process

At the rump session of PET 2006 I presented a simple idea on how to defend against targeted attacks on software distribution. There were some misunderstandings after my five-minute description, so I thought it would help to put the idea down in writing; I also hope to attract more discussion and a larger audience.

Consider a security-critical open source application; here I will use the example of Tor. The source code is signed with the developer’s private key, and users have the ability to verify the signature and build the application with a trustworthy compiler. I will also assume that if a backdoor is introduced in a deployed version, someone will notice, following Linus’s law: “given enough eyeballs, all bugs are shallow”. These assumptions are debatable: for example, the threat of compiler backdoors has been known for some time, and subtle security vulnerabilities are hard to find. However, a backdoor in the Linux kernel was discovered, and the anonymous reporter of a flaw in Tor’s Diffie-Hellman implementation probably found it through examining the source, so I think my assumptions are at least partially valid.

The developer’s signature protects against an attacker mounting a man-in-the-middle attack and modifying a particular user’s download. If the developer’s key (or the developer) is compromised then a backdoor could be inserted, but from the above assumptions, if this version is widely distributed, someone will discover the flaw and raise the alarm. However, there is no mechanism that protects against an attacker with access to the developer’s key singling out a user and adding a backdoor to only the version they download. Even if that user is diligent, the signature will check out fine. As the backdoor is only present in one version, the “many eyeballs” do not help. To defend against this attack, a user needs to find out if the version they download is the same as what other people receive and have the opportunity to verify.

My proposal is that the application build process should first calculate the hash of the source code, embed it in the binary and make it remotely accessible. Tor already has a mechanism for the last step, because each server publishes a directory descriptor, which could include this hash. Multiple directory servers collect these and allow them to be downloaded with a web browser. Then, when a user downloads the Tor source code, he can use the hashing utility provided by the operating system to check that the package he has matches a commonly deployed one.
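
Here is a minimal sketch of the user’s side of the check (my own illustration; the helper names and the choice of SHA-256 are assumptions, not an existing Tor format): hash the downloaded source package and test whether the digest appears among those published in server descriptors collected from several independent directories.

```python
import hashlib

def source_digest(tarball_path: str) -> str:
    """SHA-256 of the downloaded source package, as the build process would also
    compute, embed in the binary, and publish in the server's descriptor."""
    h = hashlib.sha256()
    with open(tarball_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def looks_widely_deployed(tarball_path: str, published_digests: set) -> bool:
    """Does my copy match what running servers say they were built from?"""
    return source_digest(tarball_path) in published_digests

# published_digests would be gathered from directory descriptors fetched from
# several independent directory servers (hypothetical collection step not shown).
```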

If a particular version claims to have been deployed for some time, but no server displays a matching hash, then the user knows that there is a problem. The verification must be performed manually for now, but an operating system provider could produce a trusted tool for automating this. Note that server operators need to perform no extra work (the build process is automated) and only users who believe they may be targeted need perform the extra verification steps.

This might seem similar to the remote-attestation feature of Trusted Computing. Here, computers are fitted with special hardware, a Trusted Platform Module (TPM), which can produce a hard-to-forge proof of the software currently running. Because it is implemented in tamper-resistant hardware, even the owner of the computer cannot produce a fake statement without breaking the TPM’s defences. This feature is needed for applications including DRM, but comes with added risks.

The use of TPMs to protect anonymity networks has been suggested, but the important difference between my proposal and TPM remote-attestation is that I assume most servers are honest, so will not lie about the software they are running. They have no incentive to do so, unless they want to harm the anonymity of the users, and if enough servers are malicious then there is no need to modify the client users are running, as the network is broken already. So there is no need for special hardware to implement my proposal, although if it is present, it could be used.

I hope this makes my scheme clearer and I am happy to receive comments and suggestions. I am particularly interested in whether there are any flaws in the design, whether the threat model is reasonable and if the effort in deployment and use is worth the increased resistance to attack.