Category Archives: Security engineering

Bad security, good security, case studies, lessons learned

Call for papers: Workshop on Adaptive Host and Network Security

Stu Wagner, Bob Laddaga, and I are pleased to announce the call for papers for a new Workshop on Adaptive Host and Network Security, to take place at the Sixth IEEE Conference on Self-Adaptive and Self-Organizing Systems in September 2012 in Lyon, France.

Over the past decade the threat of cyber attacks on critical commercial and government infrastructure has been growing at an alarming rate, to the point where it is now considered one of the major threats in the world. Current approaches to cyber security involve building fast-growing, multi-million-line systems that attempt to detect and remove attacking software. Meanwhile, cyber exploits continue to multiply in number, while their size remains a couple of hundred lines of code. This disparity of effort means that current defensive approaches to cyber security can at best fight a holding action. The workshop is intended to explore game-changing approaches to cyber security that focus on adaptation. There is a clear need to develop systems at both the host level and the network level that actively adapt to cyber attacks and provide greater protection for networked computation at all levels. Topics of interest include:

  • Protecting the host
  • New OS models for secure hosts
  • Combining proof, model checking and dynamic monitoring techniques for host security
  • Meta-level control and monitoring of networks
  • Use of feedback mechanisms in network operations
  • Self-monitoring and self-explaining network systems
  • Self-adaptive and autonomic networking
  • Centralized versus distributed network control
  • Measurement of network properties in support of self-evaluation
  • Programming language abstractions to support security
  • Computational models of network security
  • Self-healing networks
  • Learning in adaptive networks
  • Dynamically reprogrammable switches
  • The use of a Policy-based Network Management system to build self-adaptively secure networks

Continue reading Call for papers: Workshop on Adaptive Host and Network Security

Of contraseñas, סיסמאות, and 密码

Over a year ago, we blogged about a bug at Gawker which replaced all non-ASCII characters in passwords with ‘?’ prior to checking. Along with Rubin Xu and others, I’ve investigated issues surrounding passwords, languages, and character encoding throughout the past year. This should be easy: websites using UTF-8 can accept any password and hash it into a standard format regardless of the writing system being used. Instead, though, as we report in a new paper which I presented last week at the Web 2.0 Security and Privacy workshop in San Francisco, passwords still localise poorly, both because websites are buggy and because users have been trained to type ASCII-only passwords. This has broad implications for passwords’ role as a “universal” authentication mechanism. Continue reading Of contraseñas, סיסמאות, and 密码
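(As an aside, the fix is conceptually simple. Below is a minimal Python sketch, not taken from any site we studied, of accepting a password in any writing system by normalising and UTF-8-encoding it before hashing; the key-derivation function and parameters are illustrative placeholders, not a recommendation from the paper.)

```python
import hashlib
import os
import unicodedata

def hash_password(password: str, salt: bytes) -> bytes:
    # Normalise first, so that visually identical strings typed on
    # different keyboards or platforms hash to the same value.
    normalised = unicodedata.normalize("NFC", password)
    # UTF-8 turns any writing system into a well-defined byte string.
    encoded = normalised.encode("utf-8")
    # Illustrative key derivation; the parameters are placeholders.
    return hashlib.pbkdf2_hmac("sha256", encoded, salt, 100_000)

salt = os.urandom(16)
for pw in ["contraseñas", "סיסמאות", "密码"]:
    print(pw, hash_password(pw, salt).hex()[:16])
```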

The science of password guessing

I’ve written quite a few posts about passwords, mainly focusing on poor implementations, bugs and leaks from large websites. I’ve also written on the difficulty of guessing PINs, multi-word phrases and personal knowledge questions. How hard are passwords to guess? How does guessing difficulty compare between different groups of users? How does it compare to potential replacement technologies? I’ve been working on the answers to these questions for much of the past two years, culminating in my PhD dissertation on the subject and a new paper at this year’s IEEE Symposium on Security and Privacy (Oakland), which I presented yesterday. My approach is simple: don’t assume any semantic model for the distribution of passwords (Markov models and probabilistic context-free grammars have been proposed, amongst others), but instead learn the distribution of passwords from lots of data and use this to estimate the efficiency of a hypothetical guesser with perfect knowledge. It’s been a long effort requiring new mathematical techniques and the largest corpus of passwords ever collected for research. My results provide some new insight into the nature of password selection and a good framework for future research on authentication using human-chosen distributions of secrets. Continue reading The science of password guessing
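(To give a flavour of the approach, here is a greatly simplified sketch, not the actual analysis code from the dissertation: sort the observed passwords by frequency and read off the fraction of accounts a perfect-knowledge attacker would compromise within a given number of guesses. A real estimate needs millions of samples and the statistical corrections described in the paper, otherwise the tail of the distribution is badly underestimated.)

```python
from collections import Counter

def success_rate(passwords, guesses):
    """Fraction of accounts a perfect-knowledge attacker compromises within
    `guesses` attempts, trying passwords in decreasing order of observed
    frequency."""
    counts = Counter(passwords)
    total = sum(counts.values())
    ranked = sorted(counts.values(), reverse=True)
    return sum(ranked[:guesses]) / total

# Toy corpus for illustration only.
corpus = ["123456", "123456", "123456", "password", "letmein", "qwerty"]
for beta in (1, 2, 5):
    print(beta, success_rate(corpus, beta))
```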

The quest to replace passwords

As any computer user already knows, passwords are a usability disaster: you are basically told to “pick something you can’t remember, then don’t write it down”, which is worse than impossible if you must also use a different password for every account. Moreover, security-wise, passwords can be shoulder-surfed, keylogged, eavesdropped, brute-forced and phished. Notable industry insiders have long predicted their demise. Over the past couple of decades, dozens of alternative schemes have been proposed. Yet here we are in 2012, still using more and more password-protected accounts every year. Why? Can’t we do any better? Don’t the suggested replacements offer any improvements?

The paper I am about to present at the IEEE Symposium on Security and Privacy in San Francisco (Oakland 2012), which grew out of the "related work" section of my earlier Pico paper and was written with coauthors Joe Bonneau, Cormac Herley and Paul van Oorschot, offers a structured and well-researched answer that, according to peer review, “should have considerable influence on the research community”. It offers, as its subtitle says, a framework for comparative evaluation of password replacement schemes.

We build a large 2D matrix. Across the columns we define a broad spectrum of 25 benefits that a password replacement scheme might potentially offer, starting with USABILITY benefits, such as being easy to learn, or not requiring a memory effort from the user, and SECURITY benefits, such as resilience to shoulder-surfing or to phishing. These two broad categories, and the tension between them, are relatively well-understood: it’s easy to provide more usability by offering less security and vice versa. But we also introduce a third category, DEPLOYABILITY, that measures how easy it would be to deploy the scheme on a global scale, taking into account such benefits as cost per user, compatibility with deployed web infrastructure and accessibility to people with disabilities.

Next, in the rows, we identify 35 representative schemes covering 11 broad categories, from password managers through federated authentication to hardware tokens and biometric schemes. We then carefully rate each scheme individually, with various cross-checks to preserve accuracy and consistency, assessing for each benefit whether the given scheme offers, almost offers or does not offer the benefit. The resulting colourful matrix allows readers to compare features at a glance and to recognize general patterns that would otherwise be easily missed.
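For readers who would like to experiment with the framework, here is a toy sketch of how such a matrix can be represented programmatically; the benefit names follow the paper’s terminology, but the schemes and ratings shown are illustrative placeholders rather than the paper’s actual assessments.

```python
# Ratings: "yes" = offers the benefit, "almost" = almost offers it,
# "no" = does not offer it.
benefits = ["Memorywise-Effortless", "Resilient-to-Phishing",
            "Negligible-Cost-per-User"]

ratings = {
    "Web passwords":      {"Memorywise-Effortless": "no",
                           "Resilient-to-Phishing": "no",
                           "Negligible-Cost-per-User": "yes"},
    "Toy hardware token": {"Memorywise-Effortless": "yes",
                           "Resilient-to-Phishing": "almost",
                           "Negligible-Cost-per-User": "no"},
}

def at_least_as_good_as_passwords(scheme):
    """True if `scheme` matches or beats passwords on every listed benefit."""
    order = {"no": 0, "almost": 1, "yes": 2}
    return all(order[ratings[scheme][b]] >= order[ratings["Web passwords"][b]]
               for b in benefits)

for scheme in ratings:
    print(scheme, at_least_as_good_as_passwords(scheme))
```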

Contrary to the optimistic claims of scheme authors, who often completely ignore some evaluation criteria when asserting that their scheme is a definite improvement, none of the examined schemes does better than passwords on every one of the 25 benefits of this objective benchmark.

From the concise overview offered by the summary matrix we distil key high-level insights, such as why we are still using passwords in 2012 and are likely to continue to do so for quite a while.

How can we make progress? It has been observed that many people repeat the mistakes of history because they didn’t understand the history book. In the field of password replacements, it looked like a good history book still needed to be written! As pointed out during peer review, our work will be a foundational starting point for further research in the area and a useful sanity check for future password replacement proposals.

An extended version of the paper is available as a tech report.

I'm from the Government and I'm here to help

Two years ago, Hyoungshick Kim, Jun Ho Huh and I wrote a paper On the Security of Internet banking in South Korea in which we discussed an IT security policy that had gone horribly wrong. The Government of Korea had tried in 1998 to secure electronic commerce by getting all the banks to use an officially-approved ActiveX plugin, effectively locking most Koreans into IE. We argued in 2010 that this provided less security than it seemed, and imposed high usability and compatibility costs. Hyoungshick presented our paper at a special conference, and the government withdrew the ActiveX mandate.

It’s now apparent that the problem is still there. The bureaucracy created a procedure to approve alternative technologies, and (surprise) still hasn’t approved any. Korean web businesses remain trapped in the bubble, and fall farther and farther behind. This may well come to be seen as a warning to other governments to adopt true open standards, if they want to avoid a similar fate. The Cabinet Office should take note – and don’t forget to respond to their consultation!

Three paper Thursday: Shamir x3 at Eurocrypt

For the past 4 days Cambridge has been hosting Eurocrypt 2012.

There were many talks, most of them probably interesting, but I will only comment on three given by Adi Shamir: one during the official conference and two during the rump session.
Among the other sessions I will just mention that the best paper award went to this paper by Antoine Joux and Vanessa Vitse, for their enhancement of index calculus to attack elliptic curves.

Official Talk: Minimalism in cryptography, the Even-Mansour scheme revisited

In this work, Adi et al. presented an analysis of the Even-Mansour scheme:

E(P) = F(P ⊕ K1) ⊕ K2

Such a scheme, sometimes referred to as key whitening, is used in the DESX construction and in the AES-XTS mode of operation, to give just a couple of examples.

Adi et al. presented a new slide attack, called SLIDEX, which they used to prove a tight bound on the security of the Even-Mansour scheme.

Moreover, they showed that using K1 = K2 achieves the same security.
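To make the construction concrete, here is a toy Python sketch of Even-Mansour over 16-bit blocks, with a fixed random permutation standing in for the public permutation F; this is purely illustrative and obviously not a secure instantiation.

```python
import random

# A toy public permutation F on 16-bit blocks, fixed and known to everyone
# (in a real instantiation F would be a strong keyless permutation).
rng = random.Random(0)
F = list(range(1 << 16))
rng.shuffle(F)
F_inv = [0] * (1 << 16)
for x, y in enumerate(F):
    F_inv[y] = x

def em_encrypt(p, k1, k2):
    # Even-Mansour: E(P) = F(P xor K1) xor K2
    return F[p ^ k1] ^ k2

def em_decrypt(c, k1, k2):
    return F_inv[c ^ k2] ^ k1

k1, k2 = 0x1234, 0xBEEF
p = 0x0042
c = em_encrypt(p, k1, k2)
assert em_decrypt(c, k1, k2) == p
```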

Rump talk 1: security of multiple key encryption

Here Adi considered the case of encrypting data multiple times with multiple keys, as in 3DES:
data -> c1 = E_k1(data) -> c2 = E_k2(c1) -> c3 = E_k3(c2) -> c4 = E_k4(c3) -> … and so on.

The general approach to breaking a scheme where a key is used two or three times (e.g. 2DES, 3DES) is the meet-in-the-middle attack: you encrypt from one side and decrypt from the other, and by storing a table the size of the key space (2^{n} entries for n-bit keys) you can recover the keys using only a few plaintext/ciphertext pairs. For 2 keys such an attack requires about 2^{n} time; for 3 keys, 2^{2n}. Therefore some people might assume that increasing the number of keys by one (i.e. using 4 keys) would increase the security of the scheme. This is in fact not true.
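Here is a toy sketch of the meet-in-the-middle attack on double encryption, using a deliberately weak, made-up block cipher with 16-bit keys so that the whole key space fits in memory; a real attack on 2DES follows the same pattern with 56-bit keys.

```python
import random

# A fixed public permutation on 16-bit blocks and a toy block cipher
# E_k(x) = F[x xor k] built from it (hypothetical and weak, chosen only
# so that the whole 16-bit key space fits in memory).
rng = random.Random(0)
F = list(range(1 << 16))
rng.shuffle(F)
F_inv = [0] * (1 << 16)
for x, y in enumerate(F):
    F_inv[y] = x

def E(k, x):
    return F[x ^ k]

def D(k, y):
    return F_inv[y] ^ k

def double_encrypt(k1, k2, x):
    return E(k2, E(k1, x))

def meet_in_the_middle(pairs):
    """Recover (k1, k2) from a few known plaintext/ciphertext pairs using
    roughly 2 * 2^16 cipher calls instead of the naive 2^32."""
    p0, c0 = pairs[0]
    forward = {}
    for k1 in range(1 << 16):               # encrypt forwards under every k1
        forward.setdefault(E(k1, p0), []).append(k1)
    candidates = [(k1, k2)
                  for k2 in range(1 << 16)  # decrypt backwards under every k2
                  for k1 in forward.get(D(k2, c0), [])]
    # The remaining pairs weed out false matches that survive the first pair.
    return [(k1, k2) for k1, k2 in candidates
            if all(double_encrypt(k1, k2, p) == c for p, c in pairs[1:])]

true_k1, true_k2 = 0x1234, 0xBEEF
pairs = [(p, double_encrypt(true_k1, true_k2, p)) for p in (1, 2, 3)]
print(meet_in_the_middle(pairs))            # recovers (0x1234, 0xBEEF)
```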

Adi showed that once we go beyond 3 keys (e.g. 4, 5, 6, etc.) the security only increases once every few keys. If you think about it, with 4 keys you can simply apply the meet-in-the-middle attack in 2^{2n} time to the left 2 encryptions and in 2^{2n} time to the right 2 decryptions. After this, he showed how to use the meet-in-the-middle attack to solve the knapsack problem, and proposed using such an algorithm to solve other problems as well.

Rump talk 2: the cryptography of John Nash

Apparently John Nash, who was at MIT during the 1950s, wrote some letters to the NSA in 1955 explaining the implications of computational complexity for security (a connection that was not publicly understood at the time).

John Nash also sent a proposal for an encryption scheme similar to today’s stream ciphers. However, the NSA replied saying that the scheme didn’t meet the security requirements of the US.
Adi Shamir and Ron Rivest then analysed the scheme and found that, in the known-plaintext model, it would take something like 2^{sqrt(n)} time to break (which Nash considered not to be polynomial time, and therefore assumed would be secure).

The letters are now declassified. This blog also comments on the story.

Three Paper Thursday: full disk encryption

Information is often an important asset, and today’s information is commonly stored as digital data (bytes). We store this data on the local hard disks of our desktop computers and laptops. Many organisations wish to keep the data stored in their computers and laptops confidential. Therefore a natural desire is that a stolen disk or laptop should not be readable by an external person (an attacker, in general terms). For this reason we use encryption.

A hard disk is commonly organised logically into multiple sections, often referred to as partitions or volumes. These volumes can be used for various purposes, and they are often structured according to a file system format (e.g. NTFS, FAT, HFS, etc.). It is possible to have a single disk with 3 volumes, where the first volume is formatted with NTFS and contains a Windows operating system, the second is formatted with the ext3 file system and contains a Linux distribution, while the third is formatted with FAT and contains only data (no operating system).

Volume encryption is a mechanism used to encrypt the contents of an entire volume. This is sometimes referred to as “full disk encryption”, which is misleading, since a physical disk can actually contain multiple volumes, each encrypted independently. However, since the term has become very popular, I will continue to refer to this kind of encryption as “full disk encryption”, but the reader should keep the above distinction in mind.

There are several products that offer full disk encryption, e.g. PGP Whole Disk Encryption, TrueCrypt, Sophos SafeGuard, or Check Point FDE. BitLocker is the full disk encryption integrated with the Windows OS, and Apple has recently introduced FileVault 2 as the full disk encryption for Mac OS X 10.7.

There are several constraints that affect the encryption of an entire disk. These have to do with 3 important aspects, among others: a) encryption must be fast (a user should not notice any extra latency); b) the operating system is encrypted as well (so there must be some way of bootstrapping the decryption process when the computer boots); c) the encryption mechanism should not noticeably reduce the available storage space (that is, we cannot use an extra block of data for every few encrypted blocks).
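As a sketch of how constraints (a) and (c) are typically addressed in practice, the snippet below encrypts a single 512-byte sector independently under AES-XTS, using the sector number as the tweak, so the ciphertext occupies exactly as much space as the plaintext; this assumes the third-party pyca/cryptography package and illustrates the general technique, not the design of any of the products above.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR_SIZE = 512
key = os.urandom(64)   # AES-256-XTS: two 256-bit keys concatenated

def xts_sector(sector_number: int, data: bytes, encrypt: bool) -> bytes:
    # The tweak binds the ciphertext to its position on disk, so two sectors
    # holding identical plaintext encrypt to different ciphertexts.
    tweak = sector_number.to_bytes(16, "little")
    cipher = Cipher(algorithms.AES(key), modes.XTS(tweak))
    op = cipher.encryptor() if encrypt else cipher.decryptor()
    return op.update(data) + op.finalize()

plaintext = os.urandom(SECTOR_SIZE)
ciphertext = xts_sector(7, plaintext, encrypt=True)
assert len(ciphertext) == SECTOR_SIZE                 # no storage overhead
assert xts_sector(7, ciphertext, encrypt=False) == plaintext
```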

The following 3 papers explain these limitations in detail. Two of them relate to currently deployed full disk encryption systems.

Continue reading Three Paper Thursday: full disk encryption

Risk and privacy in payment systems

I’ve just given a talk on Risk and privacy implications of consumer payment innovation (slides) at the Federal Reserve Bank’s payments conference. There are many more attendees this year; who’d have believed that payment systems would ever become sexy? Yet there’s a lot of innovation, and regulators are starting to wonder. Payment systems now contain many non-bank players, from insiders like First Data, FICO and Experian to service firms like PayPal and Google. I describe a number of competitive developments and argue that although fraud may increase, so will welfare, so there’s no reason to panic. For now, bank supervisors should work on collecting better fraud statistics, so that if there ever is a crisis the response can be well-informed.

Three Paper Thursday: BGP and its security

BGP security was a hot topic a few years ago, but has received less attention in recent years. However, with technologies such as IPv6 and DNSSEC, BGP security is making a comeback, especially in industry. We academics also have much to contribute in this space. In today’s Three Paper Thursday, I will highlight three recent papers related to BGP security. They are also a good starting point for catching up on BGP security for those whose last memories of it involve proposals such as S-BGP and SoBGP.