A lot of people are starting to ask about the security and privacy implications of the “Internet of Things”. Once there’s software in everything, what will go wrong? We’ve seen a botnet recruiting CCTV cameras, and a former Director of GCHQ recently told a parliamentary committee that it might be convenient if a suspect’s car could be infected with malware that would cause it to continually report its GPS position. (The new Investigatory Powers Bill will give the police and the spooks the power to hack any device they want.)
So here is the video of a talk I gave on The Internet of Bad Things to the Virus Bulletin conference. As the devices around us become smarter they will become less loyal, and it’s not just about malware (whether written by cops or by crooks). We can expect all sorts of novel business models, many of them exploitative, as well as some downright dishonesty: the recent Volkswagen scandal won’t be the last.
But dealing with pervasive malware in everything will demand new approaches. Our response to the Internet of Bad Things includes our new Cambridge Cybercrime Centre, which will let us monitor bad things online at the scale that will be required.
One of the ways a regulatory (policing) body like the U.S. FCC can respond is to prohibit anyone but the vendor from changing or inspecting compliance-critical software. As we saw with VW, explicitly trusting the crook you’re supposed to be policing can cause horrible problems (;-))
Dave Taht, Vint Cerf and 200-odd other experts recommended the opposite. We argued that the software should explicitly be under the control of the purchaser, who is legally responsible in any case. In addition, we recommended good, IETF-like practices of source control, code inspection and trusted builds.
See http://apps.fcc.gov/ecfs/comment/view?id=60001303221
I’ll post a note to the list pointing to your work in this same field.
The list I spoke of, by the way, is bufferbloat-fcc-advisory@lists.redbarn.org
It’s bufferbloat-fcc-discuss@, actually.