Call for Papers: 12th Privacy Enhancing Technologies Symposium (PETS 2012)

Privacy and anonymity are increasingly important in the online world. Corporations, governments, and other organizations are realizing and exploiting their power to track users and their behavior. Approaches to protecting individuals and groups, but also companies and governments, from profiling and censorship include decentralization, encryption, distributed trust, and automated policy disclosure.

The 12th Privacy Enhancing Technologies Symposium addresses the design and realization of such privacy services for the Internet and other data systems and communication networks by bringing together anonymity and privacy experts from around the world to discuss recent advances and new perspectives.

The symposium seeks submissions from academia and industry presenting novel research on all theoretical and practical aspects of privacy technologies, as well as experimental studies of fielded systems. We encourage submissions with novel technical contributions from other communities (such as law, business, and data protection authorities) presenting their perspectives on technological issues.

Submissions are due 20 February 2012, 23:59 UTC. Further details can be found in the full Call for Papers.

Metrics for dynamic networks

There’s a huge literature on the properties of static or slowly-changing social networks, such as the pattern of friendships on Facebook, but almost nothing on networks that change rapidly. Yet many networks of real interest are highly dynamic. Think of the patterns of human contact that can spread infectious disease: you might be breathed on by a hundred people a day in meetings, on public transport and even in the street. If we were facing a flu pandemic, how could we measure whether the greatest spreading risk came from the hubs of the static network, or from highly mobile nodes? Should we close the schools, or the Tube?

Today we unveiled a paper which proposes new metrics for centrality in dynamic networks. We wondered how we might measure networks where mobility is of the essence, such as the spread of plague in a medieval society where most people stay in their villages and infection is carried between them by a small number of merchants. We found we can model the effects of mobility on interaction by embedding a dynamic network in a larger time-ordered graph to which we can apply standard graph theory tools. This leads to dynamic definitions of centrality that extend the static definitions in a natural way and yet give us a much better handle on things than aggregate statistics can. I spoke about this work today at a local workshop on social networking, and the paper’s been accepted for Physical Review E. It’s joint work with Hyoungshick Kim.
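For readers who want the flavour of the construction, here is a minimal sketch in Python (using the networkx library; this is illustrative code of mine, not the paper’s, and the closeness measure below is a simplified stand-in for the metrics the paper actually defines). Each node v is copied once per time step as (v, t); a “waiting” edge from (v, t) to (v, t+1) lets a node hold information, and a contact between u and v at time t becomes directed edges from (u, t) to (v, t+1) and from (v, t) to (u, t+1). Temporal reachability then reduces to ordinary shortest paths on a static directed graph.

```python
import networkx as nx

def time_ordered_graph(contacts, nodes, T):
    """Embed a dynamic network into a static time-ordered digraph.

    contacts: dict mapping time step t (0..T-1) to an iterable of (u, v) pairs.
    """
    G = nx.DiGraph()
    for t in range(T):
        for v in nodes:
            G.add_edge((v, t), (v, t + 1))     # waiting edge: a node holds information
        for u, v in contacts.get(t, ()):
            G.add_edge((u, t), (v, t + 1))     # contact at time t, both directions
            G.add_edge((v, t), (u, t + 1))
    return G

def temporal_closeness(G, nodes, T):
    """Illustrative temporal closeness: sum over start times of 1/delay to each
    other node, where the delay is the earliest time information can arrive."""
    score = {v: 0.0 for v in nodes}
    for s in nodes:
        for t0 in range(T):
            reachable = nx.single_source_shortest_path_length(G, (s, t0))
            for v in nodes:
                if v == s:
                    continue
                arrivals = [t for t in range(t0 + 1, T + 1) if (v, t) in reachable]
                if arrivals:
                    score[s] += 1.0 / (min(arrivals) - t0)
    return score

# Toy example: A meets B at t=0, then B meets C at t=1, so B is the crucial relay.
# Reverse the order of the two contacts and information can no longer flow from
# A to C at all -- a distinction that aggregate statistics miss entirely.
contacts = {0: [("A", "B")], 1: [("B", "C")]}
G = time_ordered_graph(contacts, nodes="ABC", T=3)
print(temporal_closeness(G, nodes="ABC", T=3))   # B scores highest
```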

Beware of cybercrime data memes

Last year, when I wrote a paper about mitigating malware, I needed some figures on the percentage of machines infected with malware. Estimates vary, with most falling below 10%, but one of the highest was 25%.

I looked into why this occurred and wrote it up in footnote #9 (yes, it’s a paper with a lot of footnotes!). My explanation was:

The 2008 OECD report on Malware [14] contained the sentence “Furthermore, it is estimated that 59 million users in the US have spyware or other types of malware on their computers.” News outlets picked this up; The Sydney Morning Herald [20], for example, divided the 59 million figure by the US population and concluded that around a quarter of US computers were infected (assuming that each person owned one computer). The OECD published a correction in the online copy of the report a few days later. They were actually quoting Pew Internet research on adware/spyware (a subtly different threat) from 2005, three years before the report. The sentence should have read “After hearing descriptions of ‘spyware’ and ‘adware’, 43% of internet users, or about 59 million American adults, say they have had one of these programs on their home computer.” Of such errors in understanding the meaning of data is misinformation made.

We may be about to have a similar thing happen with Facebook account compromises.

Call for Papers: USENIX Security 2012

The USENIX Security Symposium brings together researchers, practitioners, system administrators, system programmers, and others interested in the latest advances in the security of computer systems and networks. The 21st USENIX Security Symposium will be held August 8–10, 2012, in Bellevue, WA.

All researchers are encouraged to submit papers covering novel and scientifically significant practical works in computer security. Submissions are due on Thursday, 16 February 2012, 11:59 p.m. PST. The Symposium will span three days, with a technical program including refereed papers, invited talks, posters, panel discussions, and Birds-of-a-Feather sessions. Workshops will precede the symposium on August 6 and 7. Further details can be found in the full Call for Papers.

In common with other USENIX conferences, the proceedings of USENIX Security 2012 will be open access, and made available for free to everyone from the first day of the event.

Brute force password-guessing attempts on SSH

I recently set up a server, and predictably it started seeing brute-force password-guessing attempts on SSH. The host only permits public-key authentication, and I also used fail2ban to temporarily block repeat offenders and stop my logs filling up. However, I was curious what attackers were actually doing, so I patched OpenSSH to log the username and password for log-in attempts to invalid users (i.e. every account except my own).
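For anyone curious about doing something similar, much of the structure is already in the standard logs even without patching sshd. The sketch below is an assumption of mine rather than the setup described above: it reads the stock “Invalid user NAME from ADDR” lines from a Debian-style /var/log/auth.log, so it sees the attempted usernames and source addresses but not the passwords that the patch records.

```python
import re
from collections import Counter

# Matches the stock sshd line, e.g.
#   "Dec 18 03:14:07 host sshd[1234]: Invalid user oracle from 203.0.113.7"
LINE = re.compile(r"sshd\[\d+\]: Invalid user (\S*) from (\S+)")

users, sources = Counter(), Counter()
with open("/var/log/auth.log") as log:   # assumed Debian-style log location
    for line in log:
        m = LINE.search(line)
        if m:
            users[m.group(1)] += 1
            sources[m.group(2)] += 1

print("Most-tried usernames:", users.most_common(10))
print("Busiest sources:", sources.most_common(10))
```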

Some of the password attempts are predictable (e.g. username: “root”, password: “root”) but others are less easy to explain. For example, there were log-in attempts for the usernames “root” and “dark” with the password “ManualulIngineruluiMecanic”, which I think is Romanian for Handbook of Mechanical Engineering. Why would someone use this password, especially for the uncommon username “dark”? Is the book common in Romania, and likely to be on the desk of a sysadmin (or hacker) choosing a password? Has the attacker found the password in use on another compromised system, or is it the default password for anything?

Over the next few weeks I’ll be posting other odd log-in attempts on my Twitter feed. Follow me if you would like to see what I find. Feel free to comment here if you have any theories on why these log-in attempts are being seen.

Bankers’ Christmas present

Every Christmas we give our friends in the banking industry a wee present. Sometimes it’s the responsible disclosure of a vulnerability, which we publish the following February: 2007’s was PED certification, 2008’s was CAP, and in 2009 we told the banking industry about the No-PIN attack. This year, too, we have some goodies in the hamper: watch for our papers at Financial Crypto 2012.

In other years, we’ve had arguments with the bankers’ PR wallahs. In 2010, for example, their trade association tried to censor the thesis of one of our students. That saga also continues; Britain’s bankers tried once more to threaten us, so we told them once more to go away. We have other conversations in progress with bankers, most of them thankfully a bit more constructive.

This year’s Christmas present is different: it’s a tale with a happy ending. Eve Russell was a fraud victim whom Barclays initially blamed for her misfortune, as so often happens, and the Financial Ombudsman Service initially found for the bank as it routinely does. Yet this was clearly not right; after many lawyers’ letters, two hearings at the ombudsman, two articles in The Times and a TV appearance on Rip-off Britain, Eve won. This is the first complete case file since the ombudsman came under the Freedom of Information Act; by showing how the system works, it may be useful to fraud victims in the future.

(At Eve’s request, I removed the correspondence and case papers from my website on 5 Oct 2015. Eve was getting lots of calls and letters from other fraud victims and was finally getting weary. I have left just the article in the Times.)

Blood donation and privacy

The UK’s National Blood Service screens all donors for a variety of health and lifestyle risks prior to donation. Many of the questions are highly sensitive, particularly those about sexual history and drug use. So I found it disappointing that, after consulting with a nurse who took detailed notes about specific behaviours and when they occurred, I was expected to consent to this information being stored indefinitely. When I pressed as to why this data is retained, I was told it was necessary so that I could be contacted as soon as I became eligible to donate blood again, and to prevent me from donating before then.

The first reason seems weak, as contacting donors on an annual or semi-annual basis wouldn’t greatly decrease the level of donation (most risk-factor restrictions last at least 12 months or are indefinite). The second reason is a security fantasy, as it would only detect donors who lie at a second visit after being honest initially. I doubt donor dishonesty is a major problem, and all blood is tested anyway. The purpose of the lifestyle restrictions is to reduce the base rate of unsafe blood, because every test has false negatives. Storing detailed donor history doesn’t even save much time: the history has to be re-taken before each donation anyway, since lifestyle risks can change.
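To put the base-rate argument in concrete terms, here is a back-of-the-envelope sketch with purely hypothetical numbers (the miss rate and prevalence figures are illustrative assumptions of mine, not NBS statistics; the donation count is the rough annual figure mentioned below):

```python
# Illustration of the base-rate point: once the tests are as good as they will
# get, the only remaining lever is the prevalence of unsafe blood among donors.
donations = 2_000_000    # rough annual NBS donor figure mentioned below
miss_rate = 1e-3         # hypothetical false-negative rate of the blood tests

for prevalence in (1e-4, 5e-5, 1e-5):   # hypothetical rates of unsafe donations
    expected_missed = donations * prevalence * miss_rate
    print(f"prevalence {prevalence:.0e}: ~{expected_missed:.2f} unsafe units pass screening per year")
    # halving the base rate halves what slips through, whatever the tests do
```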

I certainly don’t think the NBS is trying to stockpile data for nefarious reasons. I expect instead that the ever-falling technical cost of storing data makes its very minor secondary uses look like sufficient justification, so long as one ignores the risk of a massive compromise (the NBS sees about 2 million donors per year). I wonder whether the inherent hazard of data collection was considered in the NBS’s cost/benefit analysis when this privacy policy was adopted. Security engineers and privacy advocates would do well to push for non-collection of sensitive data before reaching for fancier privacy-enhancing technology. The NBS provides a vital service, but it can’t do so without its donors, who are always in short supply. It would be a shame to discourage anybody from donating, or from being honest about their health history, by demanding to store their data forever.

Job ad: post-doctoral researcher in security, operating systems, computer architecture

We are pleased to announce a job opening at the University of Cambridge Computer Laboratory for a post-doctoral researcher working in the areas of security, operating systems, and computer architecture.

Research Associate
University of Cambridge – Faculty of Computer Science & Technology

Salary: £27,428 – £35,788 pa
The funds for this post are available for one year.

We are seeking a Post-doctoral Research Associate to join the CTSRD Project, which is investigating fundamental improvements to CPU architecture, operating system (OS), and programming language structure in support of computer security. The CTSRD Project is a collaboration between the University of Cambridge and SRI International, and part of the DARPA CRASH research programme on clean-slate computer system design.

This position will be an integral part of an international team of researchers spanning multiple institutions across academia and industry. The successful candidate will contribute to low-level aspects of system software: compilers, language run-times, and OS kernels. Responsibilities will include researching the application of novel dynamic techniques to C-language operating systems and applications, including adaptation of the FreeBSD kernel and LLVM compiler suite, and measurement of the resulting system.

An ideal candidate will hold (or be close to finishing) a PhD in Computer Science, Mathematics, or a similar field, with a strong background in low-level system software development, which should include at least one of strong kernel development experience (FreeBSD preferred; Linux acceptable) or compiler internals experience (LLVM preferred; gcc acceptable). Strong experience with the C programming language is critical. Some background in computer security is also recommended.

Candidates must be able to provide evidence of relevant work demonstrated by a research publication track record or industrial experience. Good interpersonal and organisational skills and the ability to work in a team are also essential. This post is intended to be filled as soon as practically possible after the closing date.

Applications should include:

  • Curriculum Vitae
  • Brief statement of the particular contribution you would make to the project
  • A completed form CHRIS6

Completed applications should be sent by post to: Personnel-Admin, Computer Laboratory, William Gates Building, JJ Thomson Avenue, Cambridge, CB3 0FD, or by email to: personnel-admin@cl.cam.ac.uk

Quote Reference: NR10692
Closing Date: 10 January 2012

The University values diversity and is committed to equality of opportunity.

Privacy event on Wednesday

I will be talking in London on Wednesday at a workshop on Anonymity, Privacy, and Open Data about the difficulty of anonymising medical records properly. I’ll be on a panel with Kieron O’Hara, who wrote a report on open data for the Cabinet Office earlier this year, and a spokesman from the ICO.

This will be the first public event on the technology and policy issues surrounding anonymisation since yesterday’s announcement that the government will give wide access to anonymous versions of our medical records. I’ve written extensively on the subject: for an overview, see my book chapter which explores the security of medical systems in general from p 282 and the particular problems of using “anonymous” records in research from p 298. For the full Monty, start here.

Anonymity is hard enough if the data controller is capable and motivated to try hard. In the case of the NHS, anonymisation has always been perfunctory; the default is to remove patient names and addresses but leave their postcodes and dates of birth. This makes it easy to re-identify about 99% of patients (the exceptions are mostly twins, soldiers, students and prisoners). And since I wrote that book chapter, the predicted problems have come to pass; for example, the NHS lost a laptop containing over eight million patients’ records.
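The postcode-plus-date-of-birth point is easy to sanity-check with a back-of-the-envelope calculation; the figures below are rough assumptions of mine, not numbers from the book chapter:

```python
# Assumed figures: n residents share a full UK postcode, and a date of birth
# takes one of d plausible values, so the chance that a given person is NOT
# pinned down uniquely by (postcode, DOB) is roughly 1 - (1 - 1/d)**(n - 1).
n = 40        # assumed residents covered by one full postcode
d = 30_000    # assumed distinct plausible dates of birth (~80 years)

p_not_unique = 1 - (1 - 1 / d) ** (n - 1)
print(f"share of people not uniquely identified by (postcode, DOB): {p_not_unique:.3%}")
# prints roughly 0.13%, i.e. well over 99% of patients are identified by these
# two fields alone, with twins the obvious exception.
```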

Here we go again

The Sunday media have been trailing a speech by David Cameron tomorrow about giving us online access to our medical records and our kids’ school records, and making anonymised versions of them widely available to researchers, companies and others. Here is coverage in the BBC, the Mail and the Telegraph; there’s also a Cabinet Office paper. The measures are supported by the CEO of Glaxo and opposed by many NGOs.

If the Government is going to “ensure all NHS patients can access their personal GP records online by the end of this Parliament”, it will have to compel the thousands of GPs who still keep patient records on their own machines to transfer them to centrally-hosted facilities. Such systems are maintained by people who have to please the Secretary of State rather than GPs, and so they become progressively less useful. This won’t just waste doctors’ time; it will have real consequences for patient safety and the quality of care.

We’ve seen this repeatedly over the lifetime of NPfIT and its predecessor the NHS IM&T strategy. Officials who can’t develop working systems become envious of systems created by doctors; they wrest control, and the deterioration starts.

It’s astounding that a Conservative prime minister could get the idea that nationalising something is the best way to make it work better. It’s also astonishing that a Government containing Liberals who believe in human rights, the rule of law and privacy should support the centralisation of medical records a mere two years after the Joseph Rowntree Reform Trust, a Liberal charity, produced the Database State report, which explained how the centralisation of medical records (and for that matter children’s records) destroys privacy and contravenes human-rights law.

The coming debate will no doubt be vigorous and will draw on many aspects of information security, from the dreadful security usability (and safety usability) of centrally-purchased NHS systems, through the real hazards of coerced access by vulnerable patients, to the fact that anonymisation doesn’t really work. There’s much more here.

Of course the new centralisation effort will probably fail, just like the last two; health informatics is a hard problem, and even Google gave up. But our privacy should not depend on the government being incompetent at wrongdoing. It should refrain from wrongdoing in the first place.