What happens when your car starts getting monthly upgrades like your phone and your laptop? It’s starting to happen, and the changes will be profound. We’ll be able to improve car safety as we learn from accidents, and fixing a flaw won’t mean spending billions on a recall. But if you’re writing navigation code today that will go in the 2020 Landrover, how will you be able to ship safety and security patches in 2030? In 2040? In 2050? At present we struggle to keep software patched for three years; we have no idea how to do it for 30.
Our latest paper reports a project that Éireann Leverett, Richard Clayton and I undertook for the European Commission into what happens to safety in this brave new world. Europe is the world’s lead safety regulator for about a dozen industry sectors, of which we studied three: road transport, medical devices and the electricity industry.
Up till now, we’ve known how to make two kinds of fairly secure system. There’s the software in your phone or laptop which is complex and exposed to online attack, so has to be patched regularly as vulnerabilities are discovered. It’s typically abandoned after a few years as patching too many versions of software costs too much. The other kind is the software in safety-critical machinery which has tended to be stable, simple and thoroughly tested, and not exposed to the big bad Internet. As these two worlds collide, there will be some rather large waves.
Regulators who only thought in terms of safety will have to start thinking of security too. Safety engineers will have to learn adversarial thinking. Security engineers will have to think much more about ease of safe use. Educators will have to start teaching these subjects together. (I just expanded my introductory course on software engineering into one on software and security engineering.) And the policy debate will change too; people might vote for the FBI to have a golden master key to unlock your iPhone and read your private messages, but they might be less likely to vote them a master key to take over your car or your pacemaker.
Researchers and software developers will have to think seriously about how we can keep on patching the software in durable goods such as vehicles for thirty or forty years. It’s not acceptable to recycle cars after seven years, as greedy carmakers might hope; the embedded carbon cost of a car is about equal to its lifetime fuel burn, and cutting average vehicle life from 200,000 miles to 70,000 would mean building nearly three times as many cars to cover the same distance, trebling the car industry’s CO2 emissions. So we’re going to have to learn how to make software sustainable. How do we do that?
Our paper is here; there’s a short video here and a longer video here. The full report is available from the EU here.
LaTeX springs to mind as an example of a software project which has been kept updated for over thirty years, despite being complex. I’m not sure I’d grant it full authority over an engine, but clearly the task of long-term updates is not impossible.
If I were in a guessing mood (which it seems I am!) I’d suggest that this is probably because a sizable fraction of its users are both skilled enough and legally permitted to modify the software to solve problems they encounter. While the notion of relying on auto enthusiasts for long-term software maintenance may chill the blood, consider that it’s a fairly routine procedure to remove all the working fluid from the brake system and replace it: you’re already relying on those drivers not having messed up an involved technical process in a not-immediately-obvious way.
There are so many technical challenges that come to mind when thinking about supporting products over a 5-, 10-, or 30-year lifetime, including:
* If a manufacturer has a responsibility to distribute security & safety related software updates for free, what models are acceptable for distributing software updates with new features, given that testing & supporting many feature combinations will increase the complexity & difficulty of software testing?
* Code signing best practices (key lifetimes, algorithms) actually change pretty rapidly – will crypto libraries continue to keep & maintain ‘obsolete’ algorithms? (See the sketch after this list.)
* How will deployment work? GPRS modems from just a few years ago are now becoming obsolete as 2G mobile networks are decommissioned to free up frequencies for 3G/4G.
* Development tools for low-power microcontroller architectures tend to be pretty stable over decades, though specific support for discontinued parts does often get dropped. But how about an embedded Linux or NetBSD subsystem, used perhaps for a navigation system – how far into the future will it be supported for maintaining applications? It is very hard to keep updating the OS itself once it comes to expect & rely on new hardware features.
* Should product manufacturers be compelled to put their source code in escrow somewhere, so that it is available in case the company goes out of business at some point?
* In the event a car manufacturer goes out of business now, third parties can relatively easily step in to provide physical spare parts for maintenance & repair. Who should or could provide (sell?) software updates for the embedded computer systems?
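On the code signing point above, here is a minimal sketch, assuming Python and the ‘cryptography’ package, of what algorithm-agile update verification might look like. The trust store, manifest format and function names are invented for illustration, not taken from the paper or report; the idea is simply that a device accepts more than one signature scheme at once, so an ageing algorithm can be retired by an ordinary update rather than by stranding the fleet.

# Hedged sketch: algorithm-agile verification of a signed update image.
# The trust store and signature formats here are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Hypothetical trust store: algorithm name -> PEM public keys the device
# still accepts. Retiring "rsa-pss-sha256" later just means shipping one
# update that removes it from this table.
TRUSTED_KEYS = {
    "ed25519": [],          # filled in when the device is provisioned
    "rsa-pss-sha256": [],   # legacy scheme, still accepted for now
}

def verify_update(image: bytes, signature: bytes, algorithm: str) -> bool:
    """Return True if the signature over the update image checks out
    under any key trusted for the named algorithm."""
    for pem in TRUSTED_KEYS.get(algorithm, []):
        key = serialization.load_pem_public_key(pem)
        try:
            if algorithm == "ed25519":
                key.verify(signature, image)
            elif algorithm == "rsa-pss-sha256":
                key.verify(
                    signature,
                    image,
                    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                                salt_length=padding.PSS.MAX_LENGTH),
                    hashes.SHA256(),
                )
            else:
                continue
            return True
        except InvalidSignature:
            continue
    return False

The harder, thirty-year problem is operational rather than cryptographic: someone has to keep the signing keys safe, rotate them, and still be around to populate that trust store long after the original product team has moved on.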
(I’ve skimmed your paper and appreciate some of these points are raised there.)
It will be more complicated if a part of a firm changes hands without someone willing and able to support old models.
http://www.telegraph.co.uk/investing/funds/poor-financial-advice-lost-mum-117k-and-no-comeback/
And here’s a piece on how to hack a Subaru.
This piece by Trevor Pott is first-class and highly relevant; we have to study how the Microsoft update treadmill evolved, and the incentives around that, if we’re to do better next time.
New technology – old problem – no thought given to security:
https://www.wired.com/story/wind-turbine-hack/
The problem is that Putin is engaging in cyberwarfare with the West and no one wants to admit that a state of war exists. A change in mindset is needed.
In some ways I think that cyber-security breaks the traditional certification paradigm of ‘certifying the whole not the part’. The problem is that in demonstrating and certifying safety you are trying to demonstrate that a system a) does what is needed and b) does nothing else. Introduce the possibility of malware and you’ve just invalidated any certification of (b). So say there’s a zero day and we respond by releasing a patch to our certified system. That’s great because the patch fixes the threat to our certification of (b). But we’ve changed the system so does it still do what is needed? Um… maybe? So the patch threatens our certification of (a).
You raise an issue I’ve raised with IT folk when they ask why you can’t just patch that safety system. It certainly needs more thought.
As far as certifying the whole goes, in my industry (large industrial systems) we use certification of products as one element in building a safety case, since the overall system cannot be certified as a whole. Other assurance methods add to the argument, hopefully convincing the regulator/insurer that the system does what it is supposed to – and nothing else.
Bruce Schneier is talking on the same topic; we discussed this on the last panel at SHB and didn’t disagree much. In fact, Bruce has just blogged my paper here.
I talked about the issue of mandating patch lifetimes in a Naked Scientists radio piece and podcast, available here.
Fully agree with the content. Safety and security should be seen as one, but it’s hard to draw a dividing line between them.
Here is my latest talk, on how to regulate AI in practice, given at the recent future of AI conference.
Here’s an op-ed in Prospect Magazine on the topic.
And here’s a great piece in The Economist!
Here are two further relevant talks – one for Computerphile and one for Edge.
Here is my latest podcast on software obsolescence.
And here is a talk I gave in Portugal in January.
Why do we need EU regulation? Well, Tesla won’t give drivers their own crash data without a court order.
I think the problem has many tendrils, ones that I hope other readers will pick up. In academia we have a Murphy versus Malice problem, to borrow from Ross, in the sense that safety engineers usually study Murphy and refuse to treat Malice as part of their concern, while most computer scientists do the converse. This means we have siloed metrics and literature, when we need systems that do both. If you don’t do firmware integrity checks of your safety system, is it safe? Is an unsafe security system useful? Academic isolation of these topics needs to erode.
Additional tendrils for interested parties are:
Legal Enforcement for IoT liability
Consumer Rights in IoT
Managing vulnerabilities inherited by import of libraries
Sustainability of Critical Infrastructure
Firmware Attestation
Quality Assurance
Firmware Signing (authenticity is functional today, but real-world integrity is dicey – see the sketch after this list)
Corporate duty of care
Offensive Persistence
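Picking up the firmware attestation and signing points above, here is a minimal sketch, assuming Python, of the kind of boot-time integrity check meant: hash the installed image and compare it with the value in a manifest that was verified separately (for instance at update time). The file paths and manifest layout are hypothetical.

# Hedged sketch: boot-time firmware integrity check against a manifest.
# Paths and manifest format are invented for illustration only.
import hashlib
import hmac
import json

def firmware_measurement(path: str) -> str:
    """SHA-256 of the firmware image, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as image:
        for chunk in iter(lambda: image.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def integrity_ok(image_path: str, manifest_path: str) -> bool:
    """True only if the measured hash matches the manifest entry."""
    with open(manifest_path) as f:
        expected = json.load(f)["sha256"]
    return hmac.compare_digest(firmware_measurement(image_path), expected)

if __name__ == "__main__":
    if not integrity_ok("/firmware/current.bin", "/firmware/manifest.json"):
        # A real controller would fall back to a known-good image or a safe
        # state, not just refuse to start.
        raise SystemExit("firmware measurement mismatch: refusing to start")

The check itself is the easy part; the ‘dicey’ real-world integrity mentioned above is about whether the manifest, the update channel and the code doing the checking can themselves be trusted over the product’s lifetime.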
Here is the keynote talk I gave at AsiaCCS in Abu Dhabi.