Each year we divide our masters of public policy students into teams and get them to write case studies of public policy failures. The winning team this year wrote a case study of the care.data fiasco. The UK government collected personal health information on tens of millions of people who had had hospital treatment in England and then sold it off to researchers, drug companies and even marketing firms, with only a token gesture of anonymisation. In practice patients were easy to identify. The resulting scandal stalled plans to centralise GP data as well, at least for a while.
Congratulations to Lizzie Presser, Maia Hruskova, Helen Rowbottom and Jesse Kancir, who tell the story of how mismanagement, conflicts and miscommunication led to a failure of patient privacy on an industrial scale, and discuss the lessons that might be learned. Their case study has just appeared in Technology Science, a new open-access journal for people studying conflicts that arise between technology and society. LBT readers will recall several posts reporting the problem, but it’s great to have a proper, peer-reviewed case study that we can give to future generations of students. (Incidentally, the previous year’s winning case study was on a related topic, the failure of the NHS National Programme for IT.)
A fine read, and a good reminder of how dogmatic pursuit of a goal without regard to dissenting views can derail anything.
As I see it, even if they actually do fix all the problems, the project (or whatever it gets renamed to) is so tarnished that persuading people that the problems really have been fixed will be a very hard sell.
At the end there is a box of suggestions for further research, and I believe one area highlighted in the paper is the difficulty of achieving trustworthy anonymisation of the data. Is there actually any method that can provide anonymity with the sort of assurance that would persuade me to cancel my opt-out?
Since I suspect the answer to that will be “no”, what combination of processes and legal protections might provide enough reassurance? Would it, for example, be enough to make any linking of the data with other records in order to de-anonymise it an absolute offence with meaningful consequences (e.g. jail time for those responsible, rather than what is in effect a “pocket money” fine for the organisation concerned)?
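To illustrate why linking is the core worry, here is a minimal sketch, using invented field names and toy data rather than anything from the actual release, of how records stripped of names can still be re-identified by joining them to an identified dataset on quasi-identifiers such as date of birth and postcode:

```python
# Hypothetical illustration only: invented records and field names.
# The point: removing names is not anonymisation if quasi-identifiers
# (date of birth, postcode, ...) survive and can be joined elsewhere.

hospital_extract = [  # "anonymised" release: no names, but dob + postcode kept
    {"dob": "1962-03-14", "postcode": "CB3 0FD", "diagnosis": "type 2 diabetes"},
    {"dob": "1975-11-02", "postcode": "OX1 2JD", "diagnosis": "depression"},
]

public_register = [  # any identified dataset: electoral roll, marketing list, ...
    {"name": "A. Patient",  "dob": "1962-03-14", "postcode": "CB3 0FD"},
    {"name": "B. Resident", "dob": "1980-07-21", "postcode": "CB3 0FD"},
]

# Join on (dob, postcode): anyone unique on that pair is re-identified.
lookup = {(p["dob"], p["postcode"]): p["name"] for p in public_register}
for record in hospital_extract:
    name = lookup.get((record["dob"], record["postcode"]))
    if name:
        print(f"{name} -> {record['diagnosis']}")
```

That sort of join is exactly the kind of linking any legal deterrent would have to bite on.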
One thing I can state with complete certainty: over the last few years, I have become significantly more guarded in consultations with my GP. I definitely do not say things now that I would have said back in the days when there was a genuine expectation of confidentiality.
Coverage in The Register
I like the paper and agree with the analysis.
I suggest that the problem started rather earlier. The Data Protection Act 1998 distinguishes personal data from sensitive personal data. The latter are subject to Schedule 3, which requires ‘explicit consent’ unless certain exceptions apply. Unfortunately, successive governments have weakened that control.
It isn’t true to say that the Data Protection Act requires explicit consent in order for an organisation to process sensitive personal data. Explicit consent is one condition, but there are others. Successive governments have added new conditions, expanding the range of alternatives to consent, but alternatives were there from the start, and the Act contained a gateway from the beginning that allowed ministers to add more. The Act has never been geared exclusively towards consent, even for the use of medical data. If data protection is ever to be taken seriously in the UK, it’s important to talk about what the law actually says, so that we can debate whether it’s adequate or not.