Britain’s phone hacking scandal touches many issues of interest to security engineers. Murdoch’s gumshoes listened to celebs’ voicemail messages using default PINs. They used false-pretext phone calls – blagging – to get banking and medical records.
We’ve known for years that private eyes blag vast amounts of information (2001 book, from page 167; 2006 ICO Report). Centralisation and the ‘Cloud’ are making things worse. Twenty years ago, your bank records were available only in your branch; now any teller at any branch can look them up. The dozen people who work at your doctor’s surgery used to be able to keep a secret; can the 840,000 staff with a logon to our national health databases?
Attempts to fix the problem using the criminal justice system have failed. When blagging was made illegal in 1995, the street price of medical records actually fell from £200 to £150! Parliament increased the penalty from fines to jail in 2006 but media pressure scared ministers off implementing this law.
Our Database State report argued that the wholesale centralisation of medical and other records was unsafe and illegal; and the NHS Personal Demographics Service (PDS) database appears to be the main one used to find celebs’ ex-directory numbers. First, celebs can opt out, but most of them are unaware of PDS abuse, so they don’t. Second, you can become a celeb instantly if you are a victim of crime, war or terror. Third, even if you do opt out, the gumshoes can just bribe policemen, who have access to just about everything.
In future, security engineers must pay much more attention to compartmentation (even the Pentagon is now starting to get it), and we must be much more wary about the risk that law-enforcement access to information will be abused.
They can make blagging as illegal as they want, but they can no more stop people doing it than they can stop them breathing. Blagging is what people do.
Guido Fawkes says “We are on the verge of criminalising hundreds of journalists”. A colleague commented that the journos had managed to do that perfectly well themselves.
Ross, would you say that the reason this scandal has been such an issue is that the general public do not realise how much data is available to so many members of the police, NHS, etc?
If so, do you think that this case could be used to publicise the issues around centralised databases in general, and help the cause of campaigns like No2ID, given how these could be abused even more thoroughly than this simple case of phone “hacking” and blagging?
Agree entirely.
Having been involved with compartmentation for some years, I can say there are some seriously hard problems to solve (but then security wouldn’t be so interesting if it were easy). Consider the following (a toy sketch of the bookkeeping follows below):
* characterisation of compartments
* timely creation of compartments, and their mapping to users
* timely maintenance of said mappings
* mapping auditability
* privilege creep in such systems
Healthcare is also a unique case, as in a genuine emergency most individuals would want the system to fail open.
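To make a few of these concrete, here is a minimal, toy sketch of compartment-to-user mappings with expiry, an audit trail, and a break-glass path for the healthcare fail-open case. Every name in it (Grant, AccessRegistry, the TTL) is invented for illustration; this is a sketch of the bookkeeping, not a design.

```python
# Toy model of compartmented access control -- all names invented.
import time
from dataclasses import dataclass, field

@dataclass
class Grant:
    user: str
    compartment: str
    expires: float   # timely maintenance: mappings must age out
    reason: str      # auditability: why was the mapping created?

@dataclass
class AccessRegistry:
    grants: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)

    def grant(self, user, compartment, ttl, reason):
        self.grants.append(Grant(user, compartment, time.time() + ttl, reason))
        self.audit_log.append(("GRANT", time.time(), user, compartment, reason))

    def check(self, user, compartment):
        now = time.time()
        # counter to privilege creep: expired grants simply stop working
        ok = any(g.user == user and g.compartment == compartment
                 and g.expires > now for g in self.grants)
        self.audit_log.append(("ACCESS", now, user, compartment, ok))
        return ok

    def break_glass(self, user, compartment, justification):
        # healthcare fail-open: the access succeeds, but never quietly --
        # a loud audit entry forces after-the-fact review
        self.audit_log.append(("BREAK_GLASS", time.time(), user,
                               compartment, justification))
        return True
```

The point of the break-glass entry is that the emergency path stays open but is never silent; most of the hard problems in the list are about keeping the mappings and the audit honest over time.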
I’ll take a look at that Pentagon info. Thanks for posting.
“general public do not realise how much data is available”
That’s true, but the other point is that most people don’t believe they might become a target. The likes of us habitually change default PINs, because we know the things that have PINs associated with them, set reasonably strong passwords, keep our “core” email secure so that password reset attacks are less likely to work, opt out of the edited electoral register, go ex-directory (XD), have our medical records flagged with the 93C3 opt-out code, etc, etc, etc. So if, by some hideous ill chance, we were victims of newsworthy crime, we would be starting with a vaguely reasonable line of defence against common tabloid hacking attacks (or at least, we might tell ourselves that).
But most people aren’t us. They wouldn’t even know about the things that present risks, still less how to mitigate those risks. Why should they? They’ve got their jobs to do, whereas for security people, it _is_ our jobs. So when they suddenly find themselves in the cross-hairs, it’s too late.
It’s the flip-side of the old “you’re not interesting enough” argument, used to belittle people who worry about the Summary Care Record (SCR). You don’t know you’re interesting to the bad guys until it’s too late to defend yourself.
I find it amazing that the phone companies have not faced more flak from the public for delivering a system that doesn’t enforce the changing of default PINs and turns on remote access to voicemail by default. Perhaps it won’t be long before the first lawsuit is filed against them.
DMJ, that lawsuit would be a little late, since they already did that years ago. Until you set a PIN (and only a handful of nerds even know they can) the voicemail doesn’t work from other phones. Simple.
But remember you’re talking about journalists who were already bribing the police. It’s no trouble for them to bribe a few engineers at mobile phone companies and get in that way. And if you bribe the telco engineers suddenly you’re not restricted to a few voicemail messages, you can get a tap, copies of SMS messages, location data, anything you want.
Hence the emphasis on compartmentalisation. The Titanic could have wallowed for days with her nose in the water after striking an iceberg, but because her compartments were not sealed at the top like a warship’s, the water flooded section after section and she sank the same night. Think about where you work: how many people _really_ have access to personnel records? Sure, the HR staff. And the office manager, cleaners, the IT department, anybody with key J4, the third party maintenance company… And how soon would you know (if ever) if someone copied all those records and gave them to a journalist in exchange for an envelope full of cash?
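On the “how soon would you know” question: if access events are logged at all, bulk copying is at least detectable in principle. A minimal sketch, assuming an audit trail of (timestamp, user, record) events exists; the window and threshold below are arbitrary placeholders, not recommendations:

```python
# Sketch: flag users whose record-access rate looks like bulk copying.
# Assumes a time-ordered audit trail of (timestamp, user, record_id).
from collections import defaultdict

WINDOW = 3600     # one hour, arbitrary
THRESHOLD = 50    # plausible ceiling for legitimate lookups, arbitrary

def flag_bulk_readers(events):
    """events: iterable of (timestamp, user, record_id), time-ordered."""
    recent = defaultdict(list)   # user -> access timestamps in window
    alerts = []
    for ts, user, record_id in events:
        # drop timestamps that have fallen out of the window
        window = recent[user] = [t for t in recent[user] if ts - t < WINDOW]
        window.append(ts)
        if len(window) > THRESHOLD:
            alerts.append((ts, user, len(window)))
    return alerts
```

None of this helps, of course, unless somebody actually reads the alerts.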
I’ve yet to meet a journalist (down South here in the States) who could even program a VCR. Have their techno-capabilities suddenly soared? Apparently so.
@Nick Lamb: “Until you set a PIN (and only a handful of nerds even know they can) the voicemail doesn’t work from other phones. Simple.”
Hmm.
“[Mitnick] was able to get into my voice mail by tricking my mobile operator’s equipment into registering the call as coming from the handset–basically pretending to be me. To do this, he wrote a script using open-source telecom software and used a voice-over-IP provider that allows him to set caller ID, but there also are online services that provide similar capability that non-hackers could subscribe to.”
http://news.cnet.com/8301-27080_3-20077732-245/kevin-mitnick-shows-how-easy-it-is-to-hack-a-phone/
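For the curious, the reason this works is that in SIP (which most VoIP providers speak) the caller ID presented is just a header the originator writes. A minimal, hypothetical illustration; all addresses and numbers are placeholders, and this only builds a message rather than placing a call:

```python
# Illustration that SIP caller ID is caller-asserted: the From header
# below is arbitrary text chosen by whoever constructs the INVITE.
import uuid

def spoofed_invite(victim_number, voicemail_number, proxy_host):
    call_id = uuid.uuid4().hex
    return (
        f"INVITE sip:{voicemail_number}@{proxy_host} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP client.example:5060;branch=z9hG4bK{call_id[:8]}\r\n"
        f"From: <sip:{victim_number}@{proxy_host}>;tag=0001\r\n"  # the 'caller ID'
        f"To: <sip:{voicemail_number}@{proxy_host}>\r\n"
        f"Call-ID: {call_id}@client.example\r\n"
        "CSeq: 1 INVITE\r\n"
        "Content-Length: 0\r\n\r\n"
    )
```

Any voicemail platform that authenticates callers on the presented number is ultimately trusting that From line.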
@igb Like everyone else, I don’t believe that I would ever be the target of a tabloid (or other) attack and I’m not a security professional. Can you point people like me to some resources for learning how to protect myself and my family?
A friend of ours, who was the victim of an inside job credit union fraud, told us that a hacker can get my private info in as little as 7 seconds. That’s really scary, if true.
It’s not my intention to conflate journalists and hackers here but I assume that similar precautions are useful.
While I agree with the general policy concerns, I think that only a fool would miss the fact that good old-fashioned street politics are at the root of the /magnitude/ of this particular incident. Rupert is rich and famous and 80 years old and has trodden on a lot of people’s toes on his way to the top. Payback time. The PM’s ties to him are well known, and considering how little dirt they can get on David, any smear will serve its turn.
Again, I agree with the policy issues addressed. But it’s politics and not policy that has caused this particular incident to blow up.
I liked the proposals in the Mitre report for the new security model, but I take issue with their proposal to expand NetTop & SELinux use. That’s absolutely ridiculous. These depend on a huge, complex TCB & have a steady stream of holes in them. Moreover, Bell points out that NSA developments like NetTop & acceptance of low assurance products effectively killed the COTS high assurance market.
They also refuse to allow B3/A1-certified software to RAMP up to Common Criteria: they are demanding full re-evaluations for EAL6/7. This is contrary to what they promised vendors & ensures existing high assurance approaches stay on the shelf. (Boeing SNS being an exception because they can afford to re-certify to EAL7.)
http://selfless-security.offthisweek.com/presentations/Bell_LBA.pdf
What *should* be done is the recertification, RAMP style, of existing high assurance products. GEMSOS, Boeing SNS, & LOCK come to mind. They should also openly release any old A1 or B3 class code, documentation, etc. that they aren’t using anymore. DTOS, ASOS, BLACKER VPN and Navy’s recently cancelled (but finished) High Assurance Crypto Gateway are good examples.
Finally, they should start buying COTS solutions that actually have the potential to offer real assurance. A perfect, modern example is INTEGRITY Workstation (maybe called Dell SCS now). It combines a high quality RTOS kernel, small trusted components, and a user-mode virtualization layer to run MSL security like NetTop. That design sounds assurable to high levels; SELinux does not. Aesec, owner of GEMSOS, also has Citrix licenses & can build a true MLS thin client workstation using their technology. So, why is NSA investing in SELinux-based solutions?
It’s all… just… ridiculous…
It is interesting to note that compartmentation ranks so high as a countermeasure against this type of security issue. An alternative to the products already mentioned could be PolyXene (www.polyxene.com).
Nick P: But, SELinux and similar software are actually deployable. That’s even why there have been holes found – the grey hats who found many of those holes aren’t looking at your niche proprietary code solutions because nobody uses them, they’re looking at SELinux because it’s out there in the wild.
The wild is what we need to fix. As this story illustrates – it’s not about improving security at some specialist military installation, it’s about every GP’s desktop PC. It’s about raising the baseline.
@ Nick Lamb
“But, SELinux and similar software are actually deployable”
That the systems I’ve mentioned were “deployed” somewhat undermines your claim, yes?
“the grey hats who found many of those holes aren’t looking at your niche proprietary code solutions because nobody uses them”
It’s actually more due to how they are built. You should look up the requirements for an Orange Book A1 system or an EAL7 Common Criteria system. The formal/mathematical development process, extremely modular design, intense independent review, covert channel analysis, & rigorous testing eliminate almost every major flaw (in practice, not theory). The very first of these systems, SCOMP, wasn’t given the A1 rating until it passed five years of review by mathematicians, coders, testers, pentesters, and cryptographers at NSA & other top labs. GEMSOS, used in many products, & Boeing SNS server, deployed currently, went through the same process.
Products like these have been securing high value assets from Internet & internal attacks for a few decades now with no known compromise. High assurance development processes just inherently produce products that are the highest quality achievable. For more info, see some of the looking back papers like “Lessons learned from GEMSOS” or “Lessons learned building the Caernarvon high assurance OS.”
Aside from that, SELinux wasn’t designed to secure an OS. It was a research prototype that was built to prove the Flask architecture & offer a few “tangible” protection benefits in the area of “mandatory access controls.” (SELinux FAQ) The systems it’s used in have been certified to EAL4. This level of assurance indicates confidence that the system protects against “casual or inadvertent attempts to breach security.” Jonathan Shapiro likes to call it “certified insecure.” This is not what we should be promoting, especially when even volunteer projects have achieved better (*cough* OpenBSD *cough*).
So, what can we do? What is practical? Well, if high assurance solutions are available, they should be used if the cost makes sense. Aesec, current owner of GEMSOS, have demoed it in “undefaceable” web servers, the BLACKER VPN, trusted databases, guards, file servers, and MLS-secure Citrix thin clients. Integrity, a medium assurance RTOS, has been used in TONS of embedded deployments and INTEGRITY Global Security offers many practical products. One variant, INTEGRITY-178B, was certified to EAL6+ by NSA & is used in the Dell Secure Consolidated Solution.
The Boeing SNS is still a great firewall/guard, currently in re-evaluation to EAL7. BAE Systems STOP OS has dropped in assurance, but is still the most secure UNIX-like OS out there. The recent variant supports desktop, server, embedded and Linux applications. INTEGRITY, LynxSecure, and PikeOS all have linux & posix app support. Another guy mentioned PolyXene but its assurance (EAL5) is disappointingly lower than many other products & I have to wonder about France’s EAL5 certification process: that they gave Mandrake Linux one says something.
But, what if we need it cheap, legacy, COTS, etc? Well, there are options for that too. The best bet is to go with a product that was used in medium to high assurance certifications by the government. Those products are usually designed more carefully & thoroughly audited. Argus PitBull is MUCH better than SELinux in this regard because it has all the features of a “trusted” workstation. Trustifier relies on a small kernel to do the same for Linux. The Turaya Security Kernel platform uses a microkernel, TPM, sound design, and paravirtualized Linux layer to run legacy applications alongside isolated, security-critical components. OK Labs has done a lot in this area, too.
The point is that there exists software that is provably secure against certain threats. There are also systems designed to be secure from the get-go and that have been extensively audited/tested. All of these systems have years of field use protecting things that hackers wanted & failed to get. There are also techniques that preserve legacy hardware & software investments, while offering much more security than a SELinux install. So, businesses really have no reason to use SELinux when it’s buggy, wasn’t designed to be the TCB of a trusted OS, and there are tons of real-world alternatives with better security in theory & in practice. Avoid SELinux unless you have no alternative.
I think flame wars on the design details of compartmented systems are off topic, guys!
Just making it a crime won’t make blagging go away (as previously stated). Instead you need to structure incentives and disincentives.
1. Make it a crime with fines that far exceed the gain for all criminal participants and beneficiaries.
2. The database keepers (hospitals, doctors, etc.) need to be fined heavily for each loss, and must also notify all people whose records were lost.
3. The database keepers need to be rewarded for good practices (multi-factor auth, limiting who can access records, etc.).
And beyond this off-the-top-of-the-head list, a reasonable study is needed to determine a good combination of stick and carrot.
@ Ross Anderson
To be clear, I wasn’t intending a flame war. I was merely criticizing the defense contractor’s proposed “solution” to these problems and the popular belief that low assurance approaches like SELinux are unavoidable or our only option. Tangent over, though. My bad. 😉
@Jason Sands: Cell-phone providers could easily prevent other phones from accessing your voicemail if you don’t have a PIN. If your cell phone company can’t distinguish between a call from your cell phone and a call from outside with spoofed caller ID, then they suck at security.
Of course, we already knew that your cell phone company sucks at security. Just look at the joke that is femtocell authentication.
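For what a saner policy might look like: only skip the PIN when the home network itself asserts that the call originated on the subscriber’s own handset, never when the presented CLI merely matches. A rough sketch, with all field names invented for illustration:

```python
# Sketch of a voicemail-access policy that does not trust presented
# caller ID. 'network_verified' stands for an origin assertion made by
# the carrier's own switch (the call entered on its own network from
# that SIM), as opposed to the caller-supplied CLI.
from dataclasses import dataclass

@dataclass
class InboundCall:
    presented_cli: str      # attacker-controllable (see the Mitnick quote above)
    network_verified: bool  # asserted by the home network itself
    subscriber: str

def voicemail_requires_pin(call: InboundCall) -> bool:
    # Only a network-verified call from the subscriber's own handset
    # may skip the PIN; a matching CLI alone proves nothing.
    if call.network_verified and call.presented_cli == call.subscriber:
        return False
    return True
```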
Further to your list above of ways personal data is leaked or clandestinely obtained: another angle being exploited to obtain and circulate personal data is the (pernicious) use of access to “sensitive personal data” as defined in s.2 of the Data Protection Act 1998.
Sensitive personal data.
In this Act “sensitive personal data” means personal data consisting of information as to—
(a) the racial or ethnic origin of the data subject,
(b) his political opinions,
(c) his religious beliefs or other beliefs of a similar nature,
(d) whether he is a member of a trade union (within the meaning of the Trade Union and Labour Relations (Consolidation) Act 1992),
(e) his physical or mental health or condition,
(f) his sexual life,
(g) the commission or alleged commission by him of any offence, or
(h) any proceedings for any offence committed or alleged to have been committed by him, the disposal of such proceedings or the sentence of any court in such proceedings.
The demand that this information be imparted to a not fully identified audience is made to potential employees in England: they have to agree to such checks, which are run through a Scottish background-checking service to avoid English law and overseen by a Scottish bank on behalf of its client. If the person wants to be employed (the work involves handling particular goods; the nature of the goods is not important), then it is mandated that the person ‘must’ agree to such checks being made. The personal data thus acquired is then circulated to the bank and its group members; since the bank acts for its overseas client, the overseas client also gets a copy, which is why the person also has to agree to his/her personal data being stored outside the UK, in a country not signed up to or obligated under English law.
Millions of people’s private and personal data and details have either gone missing or, without their knowledge, been accessed by unintended and unauthorized third parties.
To what extent can the use of a distributed personal data service in which the data owner remains in full control of access, and can direct exactly who sees what, and when, mitigate this kind of problem in the NHS specifically, and with personal data in general?
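As a sketch of what “the data owner directs exactly who sees what, and when” might mean mechanically: the subject issues signed, time-limited grants naming a reader and a set of fields, and the service refuses anything not covered by a live grant. HMAC stands in for a real signature scheme here, and none of the names reflect any actual NHS interface:

```python
# Sketch of owner-directed access grants: the data subject issues a
# signed, time-limited capability naming exactly which fields a named
# reader may see. Illustrative only; HMAC is a placeholder signature.
import hmac, hashlib, json, time

def issue_grant(owner_key: bytes, reader: str, fields: list, ttl: int):
    grant = {"reader": reader, "fields": fields,
             "expires": int(time.time()) + ttl}
    body = json.dumps(grant, sort_keys=True).encode()
    tag = hmac.new(owner_key, body, hashlib.sha256).hexdigest()
    return grant, tag

def check_grant(owner_key: bytes, grant: dict, tag: str,
                reader: str, field: str) -> bool:
    body = json.dumps(grant, sort_keys=True).encode()
    expected = hmac.new(owner_key, body, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(tag, expected)   # grant is genuine
            and grant["reader"] == reader        # right reader
            and field in grant["fields"]         # right field
            and grant["expires"] > time.time())  # still live
```

The open question it doesn’t answer is the one raised earlier in the thread: what happens in a genuine emergency, when the owner is in no position to issue a grant and the system must fail open.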