Category Archives: Security engineering

Bad security, good security, case studies, lessons learned

(In)security at the University of Birmingham

I travelled to the University of Birmingham on Friday to give a guest lecture to their undergraduates on Anonymity and Traceability. It was given in a smart new lecture theatre, which had what Birmingham apparently call a lectern PC at the front with buttons to give the speaker control of the room’s AV devices and lighting, along with a proper PC running various Windows applications, so you can plug in your USB flash drive and display your material.

As you can see from the photo, they have a rather trivial security model for using this PC:

Birmingham Lectern PC with text “Username=user” and “Password=user&2006”

The text (apologies for a rather fuzzy photo) says: "Username=user" and "Password=user&2006".

With a little thought, it can be seen that most likely this isn’t really a security issue at all, but a software design issue. I rather suspect that there just isn’t a way of turning off the login function, and the PC can’t be used to access any other important systems — and no-one wants to see lectures delayed if the password isn’t to hand. That’s undoubtedly why they’ve used proper Dymo-style tape for the information, rather than relying on the traditional yellow sticky, which could get lost!

Health database optout – latest news

This morning I debated health privacy on Radio 4’s Today programme with health minister Lord Warner. You can listen to the debate here, and there is an earlier comment by Michael Summers of the Patients’ Association here.

I support a campaign by TheBigOptOut.org which has so far persuaded thousands of people to write to their GPs forbidding the upload of their patient records to central systems. Once they are uploaded, you’ll have to prove ‘substantial mental distress’ to the government (as Lord Warner says) to get them removed or restricted. It is much simpler to tell your GP not to upload them in the first place (and you can always change your mind later if the Government delivers on its claims about safety and privacy).

For more, see TheBigOptOut.org, nhs-it.info and my previous blog posts here, here and here, and our work on children’s databases (children’s safety and privacy might be particularly at risk from the proposals, as I explain in the debate).

23rd Chaos Communication Congress

The 23rd Chaos Communication Congress will be held later this month in Berlin, Germany, on 27–30 December. I will be attending to give a talk on Hot or Not: Revealing Hidden Services by their Clock Skew. Another contributor to this blog, George Danezis, will be talking on An Introduction to Traffic Analysis.

This will be my third time speaking at the CCC (I previously talked on Hidden Data in Internet Published Documents and The Convergence of Anti-Counterfeiting and Computer Security in 2004, then Covert channels in TCP/IP: attack and defence in 2005), and I’ve always had a great time, but this year looks to be the best yet. There are a few highlights in the draft programme, although I am sure there are many great talks I have missed.

It’s looking like a great line-up, so I hope many of you can make it. See you there!

Kids’ databases

The Information Commissioner has just published a report we wrote for him on the UK Government’s plans to link up most of the public-sector databases that contain information on children. We’re concerned that aggregating this data will be both unsafe and illegal. Our report has got coverage in the Guardian, the Telegraph (with a leader), the Daily Mail, the BBC and the Evening Standard.

A backwards way of dealing with image spam

There is a great deal more email spam in your inboxes this autumn (as noted, for example, here, here and here!). That’s partly because far more spam is being generated — perhaps twice as much as just a few months ago.

A lot of this junk is “image spam”, where the advertisement is contained within an embedded picture (almost invariably a GIF file). The filtering systems that almost everyone now uses are having significant problems in dealing with these images and so a higher percentage of the spam that arrives at the filters is getting through to your inbox.
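To see why filters struggle, here is a sketch (my own illustration, not any particular filter’s code) of the crude sort of structural check a content filter is reduced to when the payload is a picture: the message parses cleanly, but the only classifiable content is an opaque GIF.

```python
# Naive image-spam heuristic (illustrative only): flag a message whose
# visible content is little more than an embedded GIF. The threshold
# of 40 characters is an arbitrary assumption for the sketch.
import email
from email import policy

def looks_like_image_spam(raw_message: bytes) -> bool:
    """Return True if the message carries an inline GIF but almost no text."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    has_gif, text_chars = False, 0
    for part in msg.walk():
        ctype = part.get_content_type()
        if ctype == "image/gif":
            has_gif = True
        elif ctype == "text/plain":
            text_chars += len(part.get_content().strip())
    return has_gif and text_chars < 40
```

Spammers randomise the image pixels slightly from message to message, so even this weak structural signal is easy to evade — which is exactly the weakness described above.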

So higher volumes and weaker filtering are combining to cause a significant problem for us all 🙁

But I have an interesting suggestion for filtering the images: it might be a lot simpler to go about it backwards 🙂

So read on!


Traffic Data Retention and Forensic Imaging

Last week I participated in yet another workshop on traffic data retention, at ICRI, with the added twist that traffic data retention is now in ‘European Law’, and shall become actual law in most EU countries very soon. It was a special treat to be talking just after Chief Superintendent Luc Beirens, Head of the Belgian Federal Computer Crime Unit, who tried to sell the idea of retention to a crowd of people from the flagship EU privacy project PRIME.

As usual, Beirens assured us that proper judicial oversight exists and will regulate access to traffic data. Yet a different picture emerged when we got into the details of how cyber-crime investigations are conducted. It turns out that the first thing the police do, to the victims of cyber-crime as well as the suspects, is to take a forensic image of their hard disks. This is a sound precaution: booting up the machine to extract evidence may activate malware on a victim’s machine that erases traces, or an alert system on a suspect’s computer.
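The imaging step itself is simple, which is part of why it is so indiscriminate. Here is a minimal sketch (my own, not the Belgian unit’s documented procedure) of the standard approach: a bit-for-bit copy is taken without mounting or booting the evidence medium, and hashes are recorded so the copy can later be shown to match the original.

```shell
# Stand-in for an evidence drive; in practice this would be a device
# such as /dev/sdX, read through a hardware write-blocker.
dd if=/dev/urandom of=evidence_drive bs=1024 count=64 2>/dev/null

# Raw bit-for-bit image: continue past read errors and pad failed
# blocks so offsets in the image still line up with the original.
dd if=evidence_drive of=evidence.img bs=1024 conv=noerror,sync 2>/dev/null

# Hash both; matching digests show the image is a faithful copy.
sha256sum evidence_drive evidence.img
```

Note that nothing in this process can distinguish retained traffic data from anything else on the disk: the copy is of everything, which is the point made below.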

The obvious question is: how does this policy of automatic forensic imaging and analysis of hard disks interact with traffic data retention? Luc was keen to acknowledge that the investigation procedure would proceed unchanged: an image of a hard disk that may contain retained data would be taken, and forensic tools used on the totality of the disk. To be fair, tools that image or examine only parts of a disk according to a set security policy do not exist.

What does this mean? If you are a victim of cyber-crime, or a company you have given your data to is a victim of cyber-crime, all the data will end up with the police. This will be the case irrespective of judicial oversight, or any other safeguards. You may ask yourself what the chance is that retained data will be kept on a computer that becomes part of an investigation. First, do not underestimate the fact that these machines will be on-line to serve requests, and will therefore be subject to their fair share of attacks. But most importantly, this case will obviously occur as part of any investigation into misuse of, or unauthorised or attempted access to, the traffic data retention systems themselves!

This standard procedure may also explain why companies are so reluctant to call in the high tech crime units to help them investigate cyber-crime. Their procedures are simply incompatible with any security policy with a confidentiality component. Would you report some of your documents being stolen from your home or business, if this meant the police taking a copy of every single paper in the building?

Shishir wins BCS best student award

Security group member Shishir Nagaraja has won the BCS best PhD student award for his paper The topology of covert conflict. The judges remarked that “the work made an important contribution to traffic analysis in an area that had been previously overlooked; the authors used realistic models with clear results and exciting directions for future research.”

New website on NHS IT problems

At http://nhs-it.info, colleagues and I have collected material on the NHS National Programme for IT, which shows all the classic symptoms of a large project failure in the making. If it goes belly-up, it could be the largest IT disaster ever, and could have grave consequences for healthcare in Britain. With 22 other computer science professors, I wrote to the Health Select Committee urging them to review the project. The Government is dragging its feet, and things seem to be going from bad to worse.

Kish's "totally secure" system is insecure

Recently, Kish proposed a “totally secure communication system” that uses only resistors, wires and Johnson noise. His paper—“Totally Secure Classical Communication Utilizing Johnson (-like) Noise and Kirchoff’s Law”—was published in Physics Letters (March 2006).

The above paper was featured in Science magazine (Vol. 309), reported in news articles (Wired News, Physorg.com) and discussed on several weblogs (Schneier on Security, Slashdot). The initial sensation was that quantum communication could now be replaced by a much cheaper alternative. But not quite so …

This paper—to appear in IEE Information Security—shows that the design of Kish’s system is fundamentally flawed. The theoretical model, which underpins Kish’s system, implicitly assumes thermal equilibrium throughout the communication channel. This assumption, however, is invalid in real communication systems.

Kish used a single symbol ‘T’ to denote the channel temperature throughout his analysis. This, however, disregards the fact that any real communication system has to span a distance and endure different conditions. A slight temperature difference between the two communicating ends will lead to security failure—allowing an eavesdropper to uncover the secret bits easily (more details are in the paper).
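The size of the effect is easy to compute. In the resistor-pair scheme, a bit exchange is meant to be secure when the two ends pick different resistors, because the noise statistics on the wire are then identical in both cases — but only at equal temperatures. The sketch below is my own illustration (with illustrative resistor values, not figures from either paper): each end contributes Johnson noise of spectral density 4kTR through the voltage divider formed with the other end’s resistor.

```python
# Wire-voltage noise power in the two-resistor scheme (illustrative).
K_B = 1.380649e-23  # Boltzmann constant, J/K

def wire_voltage_psd(r_a, r_b, t_a, t_b):
    """Voltage noise power spectral density (V^2/Hz) seen on the wire.

    Each end's Johnson noise (4*k*T*R) reaches the wire through the
    voltage divider formed with the other end's resistor.
    """
    from_alice = 4 * K_B * t_a * r_a * (r_b / (r_a + r_b)) ** 2
    from_bob   = 4 * K_B * t_b * r_b * (r_a / (r_a + r_b)) ** 2
    return from_alice + from_bob

R_LOW, R_HIGH = 1e3, 100e3  # ohms; values assumed for illustration

# Equal temperatures: the low-high and high-low cases give identical
# spectra, so an eavesdropper learns nothing from the wire.
lh_equal = wire_voltage_psd(R_LOW, R_HIGH, 300.0, 300.0)
hl_equal = wire_voltage_psd(R_HIGH, R_LOW, 300.0, 300.0)
print(lh_equal == hl_equal)

# A 1 K difference between the ends breaks the symmetry: the two
# cases now differ by a measurable relative amount.
lh = wire_voltage_psd(R_LOW, R_HIGH, 300.0, 301.0)
hl = wire_voltage_psd(R_HIGH, R_LOW, 300.0, 301.0)
print(abs(lh - hl) / lh)
```

With these (assumed) values, a mere 1 K mismatch separates the two “secure” cases by a few tenths of a percent — well within reach of an accurate voltage meter, which is all the attack below requires.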

As a countermeasure, it might be possible to make the temperature difference between the two ends as small as possible—for example, by using external thermal noise generators. However, this gives no security guarantee. Instead of requiring a fast computer, an eavesdropper now merely needs a voltage meter that is more accurate than the equipment used by Alice and Bob.

In addition, the transmission line must maintain the same temperature (and noise bandwidth) as the two ends to ensure “thermal equilibrium”, which is clearly impossible. Kish avoids this problem by assuming zero resistance on the transmission line in his paper. Since the problem with the finite resistance on the transmission line had been reported before, I will not discuss it further here.

To sum up, the mistake in Kish’s paper is that the author wrongly grafted assumptions from one subject onto another. In circuit analysis, it is common practice to assume a uniform room temperature and to ignore wire resistance in order to simplify the calculation; the resulting discrepancy is usually well within the tolerable range. The design of a secure communication system is very different, however, as a tiny discrepancy can severely compromise its security. Basing security upon invalid assumptions is the fundamental flaw in the design of Kish’s system.