Category Archives: Security engineering

Bad security, good security, case studies, lessons learned

Euro S&P

I am at the IEEE Euro Security and Privacy Conference in London.

The keynote talk was by Sunny Consolvo, who runs Google’s security and privacy UX team, and her topic was user-facing threats to privacy and security. Her first theme was browser warnings, which try to stop users doing what they want to; a warning is an interruption, it’s technical, and there’s no obvious way forward other than clicking through it. In 2013 Google’s SSL warning had a clickthrough rate of 68%, while their more explicit and graphic malware warning had only 23%. Mozilla’s SSL warning had a much lower rate of 33%, with an icon of a policeman and more explicit text. After four years of experimenting with watching eyes, corporate styling / branding and extra steps – none of which worked very well – they tried a strategy of clear instruction, attractive preferred choice, and unattractive alternative. The text had less jargon, a low reading level, brevity, specifics, an illustration and colour. Her CHI15 paper shows that the new design did much better, cutting the clickthrough rate from 69% to 41%. It turns out that many factors are at play; a strong signal is site quality, but this leads many people to continue anyway to sites they have come to trust. The malware clickthrough rate is now down to 5%, and SSL to 21%. That cost five years of effort by a huge team, on the back end as well as the UX. It involved huge internal fights, such as with a product manager who wanted the warning to say “this site contains malware” rather than “the site you’re trying to get to contains malware”, as it was shorter. Her recent papers are here, here, and here.

A second thread of work is a longitudinal survey of public opinion on privacy, ranging from government surveillance to cyber-bullying, which has run since 2015 in sixteen countries. 84% of respondents thought that limiting access to their online data that is not already public is very or extremely important. 84% were concerned about hackers, versus 55% about governments and 53% about companies. 20% of Germans are very angry about government access to personal data, versus 10% of Brits. Most people believe national security justifies data access (except in South Korea), while in no country do people believe the government should have such access in order to police non-violent crime. Most people everywhere support targeted monitoring, but nowhere is there majority support for bulk surveillance. In Germany 53% believed everyone should have the right to send anonymous encrypted email, while in the UK it’s 39%. Germans were pessimistic about technology, with only 4% believing it possible to be completely anonymous online. Over 88% believe that freedom of expression is very or extremely important, and less than 1% that it’s unimportant; yet over 70% didn’t believe that cyberbullying should be allowed. Opinions are more varied on extremist religious content, with 10.9% agreeing it should be allowed and 21% saying “it depends”.

Her third thread was intimate partner abuse, which has been experienced by 27% of women and 11% of men. There are typically three phases: a physical control phase, where the abuser has access to the survivor’s device and may install malware or even destroy devices; an escape phase, which is high-risk as the survivor tries to find a new home, a job and so on; and a life-apart phase, when they may want to shield their location, email address and phone numbers to escape harassment, and may have lifelong concerns. Risks are greater for poorer people, who may not be able to just buy a new phone. Sunny gave some case histories of extreme mate-guarding and of survivors’ strategies, such as using a neighbour’s phone or a computer in a library or at work. It takes seven escape attempts on average to get to life apart. After escape, a survivor may have to restrict children’s online activities and sever mutual relationships; letting your child post anything can leak the school location and lead to the abuser turning up. She may have to change career, as it can be impossible to work as a self-employed professional if she can no longer advertise. The takeaway is that designers should focus on usability during times of high stress and high risk; they should allow users to have multiple accounts; they should design things so that someone reviewing your history cannot tell that you deleted anything; and they should push two-factor authentication, unusual-activity notifications, and incognito mode. They should also think about how a survivor can capture evidence for use in divorce and custody cases while minimising the trauma. Finally, she suggests serious research on abuse survivors in other age groups and in other countries. For more see her paper here.

I will try to liveblog the rest of the talks in followups to this post.

What you get is what you C

We have a new paper on compiler security appearing this morning at EuroS&P.

Up till now, writers of crypto and security software have not only had to fight the bad guys; we have also had to deal with compiler writers, who every so often dream up some new optimisation routine that spots the padding instructions we put in to make our crypto algorithms run in constant time, or the tricks we use to ensure that sensitive data will be zeroised when a function returns. All of a sudden some critical code is optimised away, your code is insecure, and you scramble to figure out how to outwit the compiler once more.
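To make the problem concrete, here is a minimal C sketch (my own illustration, not code from the paper): the first function shows a zeroisation that dead-store elimination can legally remove, while the second shows the sort of folk remedy software writers resort to today.

```c
#include <string.h>

/* The zeroisation problem in miniature. After the memset, `key` is
 * never read again, so the compiler may legally delete the call as
 * a dead store, leaving the secret in memory. */
void process_secret(void)
{
    unsigned char key[32];
    /* ... derive and use the key ... */
    memset(key, 0, sizeof key);   /* may be optimised away! */
}

/* One common folk remedy: write through a volatile pointer, so the
 * compiler must assume every store is observable. (C11's optional
 * memset_s gives a similar guarantee where it's available.) */
void secure_zero(void *p, size_t n)
{
    volatile unsigned char *vp = p;
    while (n--)
        *vp++ = 0;
}
```

The trouble with such workarounds is that they rest on folklore about what today’s compilers happen to do rather than on any guarantee, so the next compiler release can quietly break them.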

So while you’re fighting the enemy in front, the compiler writer is a subversive fifth column in your rear.

It’s time that our toolsmiths were our allies rather than our enemies. We have therefore worked out what’s needed for a software writer to tell a compiler that a loop really must be executed in constant time, or that a variable really must be set to zero when a function returns. Languages like C have no way of expressing programmer intent, so we do this by means of code annotations.
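Constant-time code poses the same problem. The standard idiom for comparing two secrets without leaking timing information avoids an early exit, but nothing in C records that this is deliberate – which is exactly the intent such an annotation lets the programmer express. A sketch of my own (not code from the paper):

```c
#include <stddef.h>

/* Constant-time comparison: the loop always runs over all n bytes,
 * so its timing doesn't reveal where the buffers first differ.
 * Nothing in C says this is deliberate, so an optimiser is free to
 * rewrite it with an early exit -- precisely the intent a
 * constant-time annotation would pin down. */
int ct_compare(const unsigned char *a, const unsigned char *b, size_t n)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];
    return diff != 0;   /* 0 if equal, 1 otherwise */
}
```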

Doing it properly turns out to be surprisingly tricky, but we now have a working proof of concept in the form of plugins for LLVM. For more details, and links to the code, see the web page of Laurent Simon, the lead author; the talk slides are here. This is the first technical contribution in our research programme on sustainable security.

Making security sustainable

Making security sustainable is a piece I wrote for Communications of the ACM; it has just appeared in the Privacy and security column of their March issue. Now that software is appearing in durable goods that can kill us, such as cars and medical devices, software engineering will have to come of age.

The notion that software engineers are not responsible for things that go wrong will be laid to rest for good, and we will have to work out how to develop and maintain code that will go on working dependably for decades in environments that change and evolve. And as security becomes ever more about safety rather than just privacy, we will have sharper policy debates about surveillance, competition, and consumer protection.

Perhaps the biggest challenge will be durability. At present we have a hard time patching a phone that’s three years old. Yet the average age of a UK car at scrappage is about 14 years, and rising all the time; cars used to last 100,000 miles in the 1980s but now keep going for nearer 200,000. As the embedded carbon cost of a car is about equal to that of the fuel it will burn over its lifetime, we just can’t afford to scrap cars after five years, as we do laptops.

For durable safety-critical goods that incorporate software, the long-term software maintenance cost may become the limiting factor. Two things follow. First, software sustainability will be a big research challenge for computer scientists. Second, it will also be a major business opportunity for firms who can cut the cost.

This paper follows on from our earlier work for the European Commission on what happens to safety regulation in the future Internet of Things.

Compartmentation is hard, but the Big Data playbook makes it harder still

A new study of Palantir’s systems and business methods makes sobering reading for people interested in what big data means for privacy.

Privacy scales badly. It’s OK for the twenty staff at a medical practice to have access to the records of the ten thousand patients registered there, but when you build a centralised system that lets every doctor and nurse in the country see every patient’s record, things go wrong. There are even sharper concerns in the world of intelligence, which agencies try to manage using compartmentation: really sensitive information is often put in a compartment that’s restricted to a handful of staff. But such systems are hard to build and maintain. Readers of my book chapter on the subject will recall that while US Naval Intelligence struggled to manage millions of compartments, the CIA let more of their staff see more stuff – whereupon Aldrich Ames betrayed their agents to the Russians.
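In mechanism terms, the check itself is simple. Here is a toy sketch of my own (one bit per compartment; not any agency’s or vendor’s actual design):

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy compartmentation check, one bit per compartment. A subject
 * may read an object only if it holds every compartment on the
 * object's label. The check is trivial; the hard engineering lies
 * in creating, distributing, auditing and revoking millions of
 * such labels. */
typedef uint64_t label_t;

#define COMP_INFORMANTS (1ULL << 0)   /* hypothetical compartments */
#define COMP_ORG_CRIME  (1ULL << 1)
#define COMP_TRAFFIC    (1ULL << 2)

bool may_read(label_t subject_clearance, label_t object_label)
{
    return (object_label & ~subject_clearance) == 0;
}
```

So an officer cleared only for COMP_TRAFFIC cannot read a record labelled COMP_INFORMANTS | COMP_ORG_CRIME; the hard part, as the Naval Intelligence story shows, is the label management around the check, not the check itself.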

After 9/11, the intelligence community moved towards the CIA model, in the hope that with fewer compartments they’d be better able to prevent future attacks. We predicted trouble, and Snowden duly came along. As for civilian agencies such as Britain’s NHS and police, no serious effort was made to protect personal privacy by compartmentation, with multiple consequences.

Palantir’s systems were developed to help the intelligence community link, fuse and visualise data from multiple sources, and are now sold to police forces too. It should surprise no-one to learn that they do not compartment information properly, whether within a single force or even between forces. The organised crime squad’s secret informants can thus become visible to traffic cops, and even to cops in other forces, with tragically predictable consequences. Fixing this is hard, as Palantir’s market advantage comes from network effects and the resulting scale. The more police forces they sign up the more data they have, and the larger they grow the more third-party databases they integrate, leaving private-sector competitors even further behind.

This much we could have predicted from first principles, but the details of how Palantir operates, and what police forces dislike about it, are worth studying.

What might be the appropriate public-policy response? Well, the best analysis of competition policy in the presence of network effects is probably Lina Khan’s, and her analysis would suggest in this case that police intelligence should be a regulated utility. We should develop those capabilities that are actually needed, and the right place for them is the Police National Database. The public sector is better placed to commit the engineering effort to do compartmentation properly, both there and in other applications where it’s needed, such as the NHS. Good engineering is expensive – but as the Los Angeles Police Department found, engaging Palantir can be more expensive still.

Testing the usability of offline mobile payments

Last September we spent some time in Nairobi figuring out whether we could make offline phone payments usable. Phone payments have greatly improved the lives of millions of poor people in countries like Kenya and Bangladesh, who previously didn’t have bank accounts at all but who can now send and receive money using their phones. That’s great for the 80% who have mobile phone coverage, but what about the others?

Last year I described how we designed and built a prototype system to support offline payments, with the help of a grant from the Bill and Melinda Gates Foundation, and took it to Africa to test it. Offline payments require both the sender and the receiver to enter some extra digits to ensure that the payer and the payee agree on who’s paying whom how much. We worked as hard as we could to minimise the number of digits and to integrate them into the familiar transaction flow, as the sketch below illustrates. Would this be good enough?
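To give a feel for the mechanism (a toy sketch of my own; DigiTally’s actual protocol and cryptography are different), think of the extra digits as a short code that both phones derive from the transaction details, so any disagreement over payer, payee or amount shows up at once:

```c
#include <stdint.h>
#include <string.h>

/* Toy illustration of a short confirmation code. FNV-1a here is a
 * stand-in for a proper keyed MAC such as HMAC-SHA-256; DigiTally's
 * real protocol differs. */
static uint64_t fnv1a(const void *data, size_t n, uint64_t h)
{
    const unsigned char *p = data;
    while (n--) {
        h ^= *p++;
        h *= 1099511628211ULL;   /* FNV prime */
    }
    return h;
}

/* Both phones compute this from their own view of the transaction;
 * if payer and payee disagree about who is paying whom how much,
 * the four digits won't match and the payment fails. */
unsigned confirmation_code(uint64_t shared_key, const char *payer,
                           const char *payee, uint32_t amount_cents)
{
    uint64_t h = shared_key;
    h = fnv1a(payer, strlen(payer), h);
    h = fnv1a(payee, strlen(payee), h);
    h = fnv1a(&amount_cents, sizeof amount_cents, h);
    return (unsigned)(h % 10000);   /* four digits to key in */
}
```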

Our paper setting out the results was accepted to the Symposium on Usable Privacy and Security (SOUPS), the leading security usability event. This has now started and the paper’s online; the lead author, Khaled Baqer, will be presenting it tomorrow. As we noted last year, the DigiTally pilot was a success. For the data and the detailed analysis, please see our paper:

DigiTally: Piloting Offline Payments for Phones, Khaled Baqer, Ross Anderson, Jeunese Adrienne Payne, Lorna Mutegi, Joseph Sevilla, 13th Symposium on Usable Privacy & Security (SOUPS 2017), pp 131–143

When safety and security become one

What happens when your car starts getting monthly upgrades like your phone and your laptop? It’s starting to happen, and the changes will be profound. We’ll be able to improve car safety as we learn from accidents, and fixing a flaw won’t mean spending billions on a recall. But if you’re writing navigation code today that will go in the 2020 Land Rover, how will you be able to ship safety and security patches in 2030? In 2040? In 2050? At present we struggle to keep software patched for three years; we have no idea how to do it for 30.

Our latest paper reports a project that Éireann Leverett, Richard Clayton and I undertook for the European Commission into what happens to safety in this brave new world. Europe is the world’s lead safety regulator for about a dozen industry sectors, of which we studied three: road transport, medical devices and the electricity industry.

Up till now, we’ve known how to make two kinds of fairly secure system. There’s the software in your phone or laptop which is complex and exposed to online attack, so has to be patched regularly as vulnerabilities are discovered. It’s typically abandoned after a few years as patching too many versions of software costs too much. The other kind is the software in safety-critical machinery which has tended to be stable, simple and thoroughly tested, and not exposed to the big bad Internet. As these two worlds collide, there will be some rather large waves.

Regulators who only thought in terms of safety will have to start thinking of security too. Safety engineers will have to learn adversarial thinking. Security engineers will have to think much more about ease of safe use. Educators will have to start teaching these subjects together. (I just expanded my introductory course on software engineering into one on software and security engineering.) And the policy debate will change too; people might vote for the FBI to have a golden master key to unlock your iPhone and read your private messages, but they might be less likely to vote them a master key to take over your car or your pacemaker.

Researchers and software developers will have to think seriously about how we can keep on patching the software in durable goods such as vehicles for thirty or forty years. It’s not acceptable to recycle cars after seven years, as greedy carmakers might hope; the embedded carbon cost of a car is about equal to its lifetime fuel burn, and reducing average mileage from 200,000 to 70,000 would mean building nearly three times as many cars, trebling the car industry’s CO2 emissions. So we’re going to have to learn how to make software sustainable. How do we do that?

Our paper is here; there’s a short video here and a longer video here. The full report is available from the EU here.

Video on Edge

John Brockman of Edge interviewed me in London in March. The video of the interview, and a transcript, are now available on the Edge website. Edge runs big interviews with several dozen scientists a year, with particular interest in people who do cross-disciplinary work. For me, the interaction of economics, psychology and engineering is one of the things that makes security so fascinating, as well as the creativity driven by adversarial behaviour.

The topics covered include the last thirty years of progress (or lack of it) in information security, from the early beginnings, through the crypto wars and crime moving online, to the economics of security. We talked about how cryptography can help less developed countries; about managing complexity in big projects; about how network effects lead firms to design insecure products; about whether big data can undermine democracy by empowering elites; and about how in a future world of intelligent things, security may become more about safety than anything else. Finally I talk about our current big project, the Cambridge Cybercrime Centre.

John runs a literary agency, and he’s worked on books by many of the scientists who feature on his site. This makes me wonder: on what topic should I write my next book?

Pico in the Wild: Replacing Passwords, One Site at a Time

The Pico team have just returned from Paris, where Kat Krol presented at both EuroS&P and the affiliated EuroUSEC workshop on usable security.

Pico is an ERC-funded project, led by Frank Stajano, to liberate humanity from passwords. It lets you log into devices and websites without having to remember any secrets. It relies on “something you have”: in the current prototype, that’s your smartphone, potentially coupled with other wearables, though high-security niche applications could use a dedicated token instead.

Our latest paper presents a new study performed in collaboration with the Gyazo.com website, where we invited users to test out the Pico authentication app for logging in to the site. A QR code was displayed on the Gyazo login page for the duration of the trial, allowing users to access their images simply by scanning the QR code and avoiding the need to enter a username or password.
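The underlying shape is a simple challenge-response, sketched below with a toy MAC (my own illustration; Pico’s real protocol is considerably more sophisticated, with per-site keys and mutual authentication). The site puts a fresh nonce in the QR code, and the app proves possession of a key by returning a MAC over it:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Toy challenge-response in the shape of a QR login; Pico's actual
 * protocol differs. The mixer below stands in for a real MAC. */
static uint64_t toy_mac(uint64_t key, uint64_t nonce)
{
    uint64_t h = key ^ nonce;
    for (int i = 0; i < 3; i++) {
        h ^= h >> 33;
        h *= 0xff51afd7ed558ccdULL;
    }
    return h;
}

int main(void)
{
    uint64_t shared_key = 0x1234abcd5678ef90ULL; /* set up when pairing */

    /* 1. The site mints a fresh nonce and displays it as a QR code. */
    uint64_t nonce = 0xdeadbeefcafef00dULL;      /* use a CSPRNG in reality */
    printf("QR payload: login:%016llx\n", (unsigned long long)nonce);

    /* 2. The phone scans the code and replies with a MAC over it. */
    uint64_t response = toy_mac(shared_key, nonce);

    /* 3. The site recomputes the MAC; a match starts the session,
     *    with no username or password ever typed. */
    bool ok = (response == toy_mac(shared_key, nonce));
    printf("login %s\n", ok ? "accepted" : "rejected");
    return 0;
}
```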

Participants used Pico for two weeks, during which time we collected feedback using telemetry data, questionnaires and phone interviews. Our aim was to conduct a trial with high ecological validity, avoiding the usual lab-based studies which can run the risk of collecting intentions rather than actual behaviour.

Some of the key results from the paper are that participants liked the idea of Pico and generally found it to be secure and less cognitively demanding than passwords. However, some disliked the need to scan QR codes and suggested replacing them with another modality of interaction. There was also a general consensus that participants wanted to see Pico extended for use with more sites. The pain of password entry on any particular site isn’t so great, but when you scale it up to the plurality of sites we all routinely have to deal with, it becomes a much more serious burden.

The study attracted participants from all over the world, including Brazil, Greece, Japan, Latvia, Spain and the United States. However, it also highlighted some of the challenges of performing experimental studies ‘in the wild’. From an initial pool of seven million potential participants – the number of active users of the Gyazo photo sharing site – we narrowed the field to those users who entered passwords more regularly on the site and who were willing to participate in the study, eventually recruiting twelve participants to test out Pico. Not as many as we’d hoped for.

In the paper we discuss some of the reasons for this, including the fact that popular websites attempt to minimise the annoyance of password entry through the use of mechanisms such as long-lived cookies and dedicated apps.

While the purpose of the paper is to explore usable security and end-user reactions, it also allowed us to test out the Pico nginx reverse-proxy lens. Using this we could deploy Pico to the Gyazo website as in-page JavaScript, demonstrating seamless deployment (zero changes to the back-end Gyazo code) and removing the need for the user to install a browser plugin. The tech worked like a charm throughout the trial.
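For flavour, this kind of script injection can be done at a stock nginx reverse proxy along the following lines. This is a hypothetical sketch using nginx’s standard sub_filter module, not the actual lens configuration, and the upstream host and script path are made up for illustration:

```nginx
# Hypothetical sketch, not the actual Pico lens configuration.
location / {
    proxy_pass https://backend.example.com;   # unmodified site
    proxy_set_header Accept-Encoding "";      # so sub_filter can rewrite the body
    # Inject the client-side script into every page served:
    sub_filter '</body>' '<script src="/pico/lens.js"></script></body>';
    sub_filter_once on;
}
```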

The paper is available from the Internet Society and the abstract for Kat’s short talk, covering future Pico evaluation studies, is available from the EuroS&P website.