Category Archives: Security engineering

Bad security, good security, case studies, lessons learned

Job ad: pre- and post-doctoral posts in processor, operating system, and compiler security

The CTSRD Project is advertising two posts in processor, operating system, and compiler security. The first is a research assistant position, suitable for candidates who may not have a research background, and the second is a post-doctoral research associate position suitable for candidates who have completed (or will shortly complete) a PhD in computer science or a related field.

The CTSRD Project is investigating fundamental improvements to CPU architecture, operating system (OS) design, and programming language structure in support of computer security. The project is a collaboration between the University of Cambridge and SRI International, and part of the DARPA CRASH research programme on clean-slate computer system design.

These positions will be integral parts of an international team of researchers spanning multiple institutions across academia and industry. Successful candidates will support the larger research effort by contributing to low-level hardware and system-software implementation and experimentation. Responsibilities will include extending Bluespec-based CHERI processor designs, modifying operating system kernels and compiler suites, administering test and development systems, and carrying out performance measurements. The posts will also involve supporting and engaging with early-adopter communities for our open-source research platform in the UK and abroad.

Candidates should have strong experience with at least one of Bluespec HDL, OS kernel development (FreeBSD preferred), or compiler internals (LLVM preferred); strong experience with the C programming language and use of revision control in large, collaborative projects is essential. Some experience with computer security and formal methods is also recommended.

Further details on the two posts may be found in job ads NR27772 and NR27782. E-mail queries may be sent directly to Dr Robert N. M. Watson.

Both posts are intended to start on 8 July 2013; applications must be received by 9 May 2013.

Fixing popping/clicking audio on Raspberry Pi

I’m working on a security-related project with the Raspberry Pi and have encountered an annoying problem with the on-board sound output. I’ve managed to work around this, so I thought it might be helpful to share my experience with others in the same situation.

The problem manifests itself as a loud pop or click just before sound output starts and just after it stops. This is because the sound is generated by a PWM output of the BCM2835 CPU rather than by a standard DAC; when the PWM function is activated, there’s a jump in output voltage, which results in the popping sound.

Until there’s a driver modification, the suggested work-around (other than using the HDMI sound output or an external USB sound card) is to run PulseAudio on top of ALSA and keep the driver active even when no sound is being output. This is achieved by disabling the module-suspend-on-idle PulseAudio module, then configuring applications to use PulseAudio rather than ALSA. Daniel Bader describes this work-around, and how to configure MPD, in a blog post. However, when I tried this approach, it didn’t work for me.
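For reference, the configuration changes this work-around involves look roughly like the following. This is only a sketch: it assumes a Debian/Raspbian-style layout, and the exact file locations (and the package providing the ALSA-to-PulseAudio plugin) may differ on other distributions.

# In /etc/pulse/default.pa: comment out the line that loads
# module-suspend-on-idle, so PulseAudio keeps the ALSA sink (and hence
# the PWM output) active even when nothing is playing.
#load-module module-suspend-on-idle

# In /etc/asound.conf (or ~/.asoundrc): route ALSA applications through
# PulseAudio by default, so all output goes via the always-on sink.
pcm.!default {
    type pulse
}
ctl.!default {
    type pulse
}

PulseAudio then needs to be restarted (or the Pi rebooted) for the changes to take effect.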

Continue reading Fixing popping/clicking audio on Raspberry Pi

How Certification Systems Fail: Lessons from the Ware Report

Research in the Security Group has uncovered various flaws in systems despite their having been certified as secure. Sometimes the certification criteria have been inadequate and sometimes the certification process has been subverted. Not only do these failures affect the owners of the systems, but when evidence of certification comes up in court, the impact can be much wider.

There’s a variety of approaches to certification, ranging from the extremely generic (such as Common Criteria) to the highly specific (such as EMV), but all are (at least partially) descendants of a report by Willis H. Ware – “Security Controls for Computer Systems”. There’s much that can be learned from this report, particularly the rationale for why certification systems are set up the way they are. The differences between how Ware envisaged certification and how certification is now performed are also informative, whether those differences are for good or for ill.

Along with Mike Bond and Ross Anderson, I have written an article for the “Lost Treasures” edition of IEEE Security & Privacy in which we discuss what the Ware report can teach us about how today’s certification systems work, and how they should work. In particular, we explore how the failure to follow the recommendations in the Ware report can explain why flaws in certified banking systems were not detected earlier. Our article, “How Certification Systems Fail: Lessons from the Ware Report”, is available open access in the version submitted to the IEEE. The edited version, as it appears in the print edition (IEEE Security & Privacy, volume 10, issue 6, pages 40–44, Nov–Dec 2012, DOI:10.1109/MSP.2012.89), is only available to IEEE subscribers.

"Security Engineering" now available free online

I’m delighted to announce that my book Security Engineering – A Guide to Building Dependable Distributed Systems is now available free online in its entirety. You may download any or all of the chapters from the book’s web page.

I’ve long been an advocate of open science and open publishing; all my scientific papers go online and I no longer even referee for publications that sit behind a paywall. But some people think books are different. I don’t agree.

The first edition of my book was also put online four years after publication by agreement with the publishers. That took some argument but we found that sales actually increased; for serious books, free online copies and paid-for paper copies can be complements, not substitutes. We are all grateful to authors like David MacKay for pioneering this. So when I wrote the second edition I agreed with Wiley that we’d treat it the same way, and here it is. Enjoy!

CACM: A decade of OS access-control extensibility

Operating-system access control technology has undergone a remarkable transformation over the last fifteen years as appliance, embedded, and mobile device vendors transitioned from dedicated “embedded operating systems” to general-purpose ones — often based on open-source UNIX and Linux variants. Device vendors look to upstream operating system authors to provide the critical low-level software foundations for their products: network stacks, UI frameworks, application frameworks, etc. Increasingly, those expectations include security functionality — initially, features to prevent device bricking, but also features to constrain potentially malicious code from third-party applications, drawing on mechanisms ranging from digital signatures to access control and sandboxing.

In a February 2013 Communications of the ACM article, A decade of OS access-control extensibility, I reflect on the central role of kernel access-control extensibility frameworks in supporting security localisation, the adaptation of operating-system security models to site-local or product-specific requirements. Similar in spirit to device driver stacks and the virtual file system (VFS), the goal is to allow third-party developers or integrators to extend the base operating-system security model without being exposed to unstable programming interfaces or the risks associated with less well-integrated techniques such as system-call interposition.

A case in point is the TrustedBSD MAC Framework, developed and deployed over the 2000s with support from DARPA and the US Navy, in collaboration with several industrial partners. In the article, I consider our original motivations, context, and design principles, but also track the transition process, which relied heavily on open-source methodology and community, to a number of widely used products, including the open-source FreeBSD operating system, Apple’s Mac OS X and iOS operating systems, Juniper’s Junos router operating system, and nCircle’s IP360 product. I draw conclusions on things we got right (common infrastructure spanning models; tight integration with the OS concurrency model) and wrong (omitting OS privilege model extension; not providing an application author identity model).

Throughout, the diversity of approaches and models suggests an argument for domain-specific policy models that respond to local tradeoffs between performance, functionality, complexity, and security, rather than a single policy model to rule them all. I also emphasise the importance of planning for long-term sustainability for research products — critical to adoption, especially via open source, but also frequently overlooked in academic research.
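For readers who haven’t seen the extension interface, the sketch below gives a flavour of what a MAC Framework policy module looks like: it fills in a table of entry points (“hooks”) and registers it with the kernel, which then consults the policy when the corresponding operations occur. This is illustrative only: hook names and signatures have varied across FreeBSD releases, and a real policy would implement many more entry points and manage labels, so treat the details as assumptions rather than a definitive implementation.

/*
 * Sketch of a TrustedBSD MAC Framework policy module, loosely modelled
 * on FreeBSD's example policies (mac_none, mac_stub).  Illustrative only.
 */
#include <sys/param.h>
#include <sys/errno.h>
#include <sys/kernel.h>
#include <sys/module.h>
#include <sys/ucred.h>
#include <sys/vnode.h>

#include <security/mac/mac_policy.h>

/* Toy check: only root may open files for writing. */
static int
example_vnode_check_open(struct ucred *cred, struct vnode *vp,
    struct label *vplabel, accmode_t accmode)
{
    if ((accmode & VWRITE) != 0 && cred->cr_uid != 0)
        return (EACCES);
    return (0);
}

/* The policy is simply a table of entry points the framework will call. */
static struct mac_policy_ops example_ops = {
    .mpo_vnode_check_open = example_vnode_check_open,
};

/* Register with the framework; the policy can be loaded and unloaded at runtime. */
MAC_POLICY_SET(&example_ops, mac_example, "Example MAC policy",
    MPC_LOADTIME_FLAG_UNLOADOK, NULL);

Because registration happens through a maintained framework interface rather than by patching system-call tables, a module like this can track kernel changes far more easily than an interposition-based approach.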

An open-access (and slightly extended) version of the article can be found on ACM Queue.

Yet more banking industry censorship

Yesterday, banking security vendor Thales sent this DMCA takedown request to John Young who runs the excellent Cryptome archive. Thales want him to remove an equipment manual that has been online since 2003 and which was valuable raw material in research we did on API security.

Banks use hardware security modules (HSMs) to manage the cryptographic keys and PINs used to authenticate bank card transactions. These used to be thought to be secure. But their application programming interfaces (APIs) had become unmanageably complex, and in the early 2000s Mike Bond, Jolyon Clulow and I found that by sending sequences of commands to the machine that its designers hadn’t anticipated, it was often possible to break the device spectacularly. This became a thriving field of security research.

But while API security has been a goldmine for security researchers, it’s been an embarrassment for the industry, in which Thales is one of two dominant players. Hence the attempt to close down our mine. As you’d expect, the smaller firms in the industry, such as Utimaco, would prefer HSM APIs to be open (indeed, Utimaco sent two senior people to a Dagstuhl workshop on APIs that we held a couple of months ago). Even more ironically, Thales’s HSM business used to be the Cambridge startup nCipher, which helped our research by giving us samples of their competitors’ products to break.

If this case ever comes to court, the judge might perhaps consider the Lexmark case, in which Lexmark sued Static Control Components (SCC) under the DMCA in order to curtail competition; the court found this abusive and threw out the case. I am not a lawyer, and John Young must clearly take advice. However, this particular case of internet censorship serves no public interest (as with previous attempts by the banking industry to censor security research).

Interviews on the clean-slate design argument

Over the past two years, Peter G. Neumann and I, along with a host of collaborators at SRI International and the University of Cambridge Computer Laboratory, have been pursuing CTSRD, a joint computer-security research project exploring fundamental revisions to CPU design, operating systems, and application program structure. Recently we’ve been talking about the social, economic, and technical context for that work in a series of media interviews, including one with ACM Queue on research into the hardware-software interface, posted here previously.

A key aspect of our argument is that the computer industry has been pursuing a strategy of hill climbing with respect to security; if we were willing to take a step back and revisit some of our more fundamental design choices, learning from longer-term security research over the last forty years, then we might be able to break aspects of the asymmetry driving the current arms race between attackers and defenders. This clean-slate argument doesn’t mean we need to throw everything away, but it does suggest that more radical change is required than is being widely considered, as we explore in two further interviews:

Authentication is machine learning

Last week, I gave a talk at the Center for Information Technology Policy at Princeton. My goal was to expand my usual research talk on passwords with broader predictions about where authentication is going. From the reaction and discussion afterwards, one point I made stood out: authenticating humans is becoming a machine learning problem.

Problems with passwords are well documented: they’re easy to guess, they can be sniffed in transit, stolen by malware, phished, or leaked. This has led to loads of academic research seeking to replace passwords with something, anything, that fixes these “obvious” problems. There’s also a smaller sub-field of papers attempting to explain why passwords have survived. We’ve made the point well that network economics heavily favor passwords as the incumbent, but we underestimated how effectively the risks of passwords can be managed in practice by good machine learning.

From my brief time at Google, my internship at Yahoo!, and conversations with other companies doing web authentication at scale, I’ve observed that as authentication systems develop they gradually merge with other abuse-fighting systems dealing with various forms of spam (email, account creation, link, etc.) and phishing. Authentication eventually loses its binary nature and becomes a fuzzy classification problem.

Continue reading Authentication is machine learning

CFP: Runtime Environments, Systems, Layering and Virtualized Environments (RESoLVE 2013)

This year, we presented two papers at RESoLVE 2012 relating to the structure of operating systems and hardware, one focused on CPU instruction set security features out of our CTSRD project, and another on efficient and reconfigurable communications in data centres out of our MRC2 project.

I’m pleased to announce the Call for Papers for RESoLVE 2013, a workshop (co-located with ASPLOS 2013) that brings together researchers in both the OS and language-level virtual machine communities to exchange ideas and experiences, and to discuss how these separate layers can take advantage of each others’ services. This is of particular interest to the security community, who want both to build, and to build on, security properties spanning hardware protection (e.g., VMs) and language-level protection.

Runtime Environments, Systems, Layering and Virtualized Environments
(RESoLVE 2013)

ASPLOS 2013 Workshop, Houston, Texas, USA
March 16, 2013

Introduction

Today’s applications typically target high-level runtime systems and frameworks. At the same time, the operating systems on which they run are themselves increasingly being deployed on top of (hardware) virtual machines. These trends are enabling applications to be written, tested, and deployed more quickly, while simplifying tasks such as checkpointing, providing fault-tolerance, enabling data and computation migration, and making better, more power-efficient use of hardware infrastructure.

However, much current work on virtualization still focuses on running unmodified legacy systems, and most higher-level runtime systems ignore the fact that they are deployed in virtual environments. The workshop on Runtime Environments, Systems, Layering, and Virtualized Environments (RESoLVE 2013) aims to bring together researchers in both the OS and language-level virtual machine communities to exchange ideas and experiences and to discuss how these separate layers can take advantage of each others’ services.

Continue reading CFP: Runtime Environments, Systems, Layering and Virtualized Environments (RESoLVE 2013)

Virgin Money sends email helping phishers

It’s not unusual for banks to send emails which are confusingly similar to phishing, but this recent one I received from Virgin Money is exceptionally bad. It tells customers that the bank (Northern Rock) is changing domain names from their usual one (northernrock.co.uk) to virginmoney.com, and that customers should use their usual security credentials to log in at the new domain name. Mail clients will often helpfully turn virginmoney.com into a clickable link.

This message is exactly what phishers would like customers to fall for. While this email was legitimate (albeit very unwise), a criminal could follow up with an email saying that savings customers should access their account at virginsavings.net (which is currently available for registration). Virgin Money have trained their customers to accept such emails as legitimate, which is a very dangerous lesson to teach.

It would have been safer not to do the rebranding at all, but if that’s considered essential for commercial reasons, then customers should have been told to continue accessing the site at its usual domain name, and the bank should have redirected them (over HTTPS) to the new site. That would mean keeping hold of the Northern Rock domain names for the foreseeable future, but that is almost certainly what Virgin Money are planning anyway.
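To make that concrete, the sketch below shows one way the old domain could have been handled: an Apache virtual host that serves the old name over HTTPS and permanently redirects to the new site. The host names, certificate paths and target URL here are placeholders for illustration, not a description of Virgin Money’s actual infrastructure.

# Hypothetical Apache configuration for the old Northern Rock domain.
# Customers keep typing the address they already know and trust, and the
# server (not a link in an email) takes them to the new site over HTTPS.
<VirtualHost *:443>
    ServerName www.northernrock.co.uk
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/northernrock.crt
    SSLCertificateKeyFile /etc/ssl/private/northernrock.key
    Redirect permanent / https://www.virginmoney.com/
</VirtualHost>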

