Infrastructure used to be regulated and boring; the phones just worked and water just came out of the tap. Software has changed all that, and the systems our society relies on are ever more complex and contested. We have seen Twitter silencing the US president, Amazon switching off Parler and the police closing down mobile phone networks used by crooks. The EU wants to force chat apps to include porn filters, India wants them to tell the government who messaged whom and when, and the US Department of Justice has launched antitrust cases against Google and Facebook.
Infrastructure – the Good, the Bad and the Ugly analyses the security economics of platforms and services. The existence of platforms such as the Internet and cloud services enabled startups like YouTube and Instagram to soar to huge valuations almost overnight, with only a handful of staff. But criminals also build infrastructure, from botnets to malware-as-a-service. There’s also dual-use infrastructure, from Tor to bitcoin, with entangled legitimate and criminal applications. So crime can scale too. And even “respectable” infrastructure has disruptive uses. Social media enabled both Barack Obama and Donald Trump to outflank the political establishment and win power; they have also been used to foment communal violence in Asia. How are we to make sense of all this?
I argue that this is not simply a matter for antitrust lawyers, but that computer scientists also have some insights to offer, and the interaction between technical and social factors is critical. I suggest a number of principles to guide analysis. First, what actors or technical systems have the power to exclude? Such control points tend to be at least partially social, as social structures like networks of friends and followers have more inertia. Even where control points exist, enforcement often fails because defenders are organised in the wrong institutions, or otherwise fail to have the right incentives; many defenders, from payment systems to abuse teams, focus on process rather than outcomes.
There are implications for policy. The agencies often ask for back doors into systems, but these help intelligence more than interdiction. To really push back on crime and abuse, we will need institutional reform of regulators and other defenders. We may also want to complement our current law-enforcement strategy of decapitation – taking down key pieces of criminal infrastructure such as botnets and underground markets – with pressure on maintainability. It may make a real difference if we can push up offenders’ transaction costs, as online criminal enterprises rely more on agility than on long-lived, critical, redundant platforms.
This was a Dertouzos Distinguished Lecture at MIT in March 2021.
Enjoyed the video on YouTube. Is the presentation file (PDF or similar) available anywhere? Thanks
Follow the link to MIT at the bottom of the post.
There is a throwaway remark here that there is a big gap regarding the legal requirements for software that supports infrastructure (buildings, power) to be maintained. I don’t know about power, but as regards buildings, it’s interesting to note that the powers that Building Control have are quite hardcore. If your building is dangerous, and you are unresponsive to notice of this, then Building Control have the right to demolish all or part of it. Threatening an unresponsive owner with being reported to Building Control therefore has quite a stimulating effect.
I am not sure whether the “danger [..] from the condition of the building or structure” can be read to include security holes in embedded systems, but it’s interesting to speculate that if such a flaw were a safety issue, and you found a sufficiently tech-savvy Building Control Officer, you might get them to “demolish” the embedded system.
Ross,
“We may also want to complement our current law-enforcement strategy of decapitation – taking down key pieces of criminal infrastructure such as botnets and underground markets – with pressure on maintainability.”
More than a decade ago now I worked out a way to have a headless command-and-control system that also provided a significant disconnect between the bot and the herder, and which could not be taken down as long as another piece of infrastructure (the Google cache) remained available. I won’t go into all the ins and outs, and I will only mention the control channel, not the data exfiltration.
In essence the herder would find a blog site that got scraped by Google’s robots and leave a post that contained a unique identifier and some control string. Google then scraped that into its cache. The herder’s bots would search for the unique identifier and pull down the control string and act on it. Provided the herder only ever put one message up per blog and used a different unique identifier each time, stopping the bots getting the control string would depend on Google being able to recognise and not cache the control messages.
Obviously there are complications with doing it for real, but there are known ways to solve those problems.
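The dead-drop mechanism described above can be sketched in a few lines. This is purely illustrative: the tag format, the identifier and the control strings below are invented for demonstration, and the real scheme would of course hide the control string far less obviously in innocuous text.

```python
# Illustrative sketch of a cache dead-drop: the "herder" buries a
# tagged control string in a blog post, and the "bot" later searches
# the cached copy for its one-time identifier. All names and the
# "UID:control" tag format are assumptions for demonstration only.

import re

def embed(uid, control, cover_text):
    """Herder side: append a tagged control string to innocuous text."""
    return f"{cover_text}\n{uid}:{control}\n"

def extract(uid, cached_page):
    """Bot side: search the cached page for its unique identifier
    and return the control string that follows it, if present."""
    m = re.search(re.escape(uid) + r":(\S+)", cached_page)
    return m.group(1) if m else None

post = embed("x7f3q9", "SLEEP-86400", "Lovely weather in Cambridge today.")
assert extract("x7f3q9", post) == "SLEEP-86400"
assert extract("x7f3q9", "some unrelated page") is None
```

The point of the one-time identifier is that the defender cannot build a signature for future messages from past ones; each drop looks like a fresh, unremarkable post.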
The two points to note are,
1, To stop the command and control would be very difficult, if not practically impossible.
2, Trying to track the herder down would be very difficult, if not practically impossible.
Thus the question is one similar to that of banks or other infrastructure that is too important or “too big to fail”: what do you do?
Well, the obvious answer is never to let the likes of Google get “too big to fail”, but that time may be long past…
Ross, ALL,
One thing I wish people would make clear is that the war against E2E encryption or similar is pointless, as even AI cannot solve it in favour of the eavesdroppers/censors. Worse, this has been known since WWII if not earlier, so vast quantities of money are without any doubt being wasted on a holy-grail chase, to the enrichment of those close to the Treasury spigots in every nation or state that has a Treasury.
Put simply, communications has as many layers in it as the two communicating parties wish to put in. An eavesdropping third party can always be kept at one or more layers below the uppermost layer of the two communicating parties, provided neither “betrays the system”, intentionally or otherwise.
As long as the two communicating parties are allowed to communicate, they can implement a secure communications network on top, even if it is a broadcast network with millions listening. The classic example of this was the “Now some messages to our friends…” broadcasts the BBC put out in WWII, where a One Time Phrase acted as a code word to cause some prearranged action to happen.
With prisoners it is assumed that each communication is “gated” by a “censor” who reads through a written communication looking for codes or ciphers etc. If they detect such, then that communication is redacted or stopped.
Thus the high level layer the prisoners use has to have certain characteristics,
1, It is “one time” to stop any form of analytical attack.
2, It must be indistinguishable from innocuous plaintext to limit the number of messages being redacted or stopped.
3, There must be a sequence identifier to indicate when a message has been stopped, redacted, replayed, etc.
4, There must be a way to authenticate the message, to stop impersonation and other message injection attacks.
One Time Phrases actually cover those requirements. However, they lack forward flexibility; that is, all actions need to be predicted and put in the code book, which is realistically not possible.
One Time Pads can allow future flexibility to whatever extent is required, as long as enough KeyMat has previously been securely transferred between the two communicating parties.
However, the output from a One Time Pad generally looks like code, or causes stylisation that looks awkward, especially with traditional stego-type methods.
The hunt for a cipher that takes free plaintext in and outputs free plaintext that reads like ordinary text has been on since before Sir Francis Bacon in Elizabethan times.
Such ciphers are entirely possible; unfortunately they generally have failings. Sir Francis Bacon’s system, using two fonts to send messages in five-bit binary characters, was nice in theory but unworkable in practice, as it looked “odd” and thus would be stopped by a censor.
Can we do better? Well, yes we can. Certain parts of messages are always stylized: the opening greeting and certain niceties/pleasantries become stylized by overuse, as they are not really free text but social formalities/conventions.
For instance, in emails we say “Hi”, “Hello” or similar, often followed by “I hope you are well / OK” or similar. Each of these offers one bit of information; others, such as “We should meet up for a XXX”, can give up to 4 bits for XXX (i.e. tea, coffee, beer, drink, meal, lunch, dinner, brunch, …). Stylistically the lead-in sentence can likewise be altered, so “We should” can be “How about we” or several other variations. These could be selected at random, or to send further bits, or even signals like a parity bit to validate or invalidate the XXX.
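A toy version of this phrase-selection coding might look as follows. The variant tables are invented examples, not a real code book, and a practical system would need many more slots and variants:

```python
# Toy encoder: each slot's choice of stylistic variant carries bits.
# The variant tables below are illustrative, not a real code book.

GREETING = ["Hi", "Hello"]                      # 1 bit
WISH     = ["I hope you are well.",
            "I hope you are OK."]               # 1 bit
DRINK    = ["tea", "coffee", "beer", "lunch"]   # 2 bits

def encode(bits):
    """bits: a string like '0110'; returns an innocuous-looking email opening."""
    g = GREETING[int(bits[0])]
    w = WISH[int(bits[1])]
    d = DRINK[int(bits[2:4], 2)]
    return f"{g}, {w} We should meet up for a {d}."

def decode(msg):
    """Recover the bits from the stylistic choices in the message."""
    g = "0" if msg.startswith("Hi") else "1"
    w = "0" if "well" in msg else "1"
    d = next(i for i, x in enumerate(DRINK) if f"a {x}." in msg)
    return g + w + format(d, "02b")

assert decode(encode("0110")) == "0110"
```

The bandwidth is tiny, of course, but as the comment notes, the point is that the censor sees only ordinary social pleasantries.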
The problem is that the “message in depth” issue will find correlations if things are not randomized for every message. Again, back in WWII and earlier, “codes” got “super enciphered” by One Time Tapes in the CommCen for exactly this reason.
So the system can use a One Time Pad to encipher the bits prior to them being converted to the code word to use in the phrases.
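That superencipherment step is just an XOR of the message bits with one-time key bits before the phrase mapping. A minimal sketch, under the assumption that the pre-shared key is held as a string of ‘0’/‘1’ characters:

```python
# Sketch of superenciphering message bits with a one-time pad before
# they are mapped to phrase choices. The '0'/'1' string representation
# of KeyMat is an assumption for illustration.

import secrets

def otp_encipher(plain_bits, key_bits):
    """XOR plaintext bits with one-time key bits. The same function
    deciphers, since XOR is its own inverse."""
    assert len(key_bits) >= len(plain_bits), "never run short of (or reuse) key"
    return "".join(str(int(p) ^ int(k)) for p, k in zip(plain_bits, key_bits))

def fresh_key(n):
    """Key material must be truly random and used exactly once."""
    return "".join(str(secrets.randbits(1)) for _ in range(n))

key = fresh_key(4)
cipher = otp_encipher("0110", key)          # these bits feed the phrase encoder
assert otp_encipher(cipher, key) == "0110"  # deciphering uses the same key
```

Because each message’s bits are whitened by fresh key before being turned into phrase choices, the “message in depth” correlations mentioned above never appear in the traffic.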
With the use of modern computers to generate messages, whilst the bandwidth is low, the system can be made to meet the requirements.
Which means it is game over for the eavesdropper/censor unless they want to stop all communications.
Even if they put the laughable “Artificial Intelligence” into the communications end points, the communicating parties, having the security end points at a higher level, can just send their message even as a plaintext SMS. SMS messages get so odd-looking anyway that censoring would be at best probabilistic, and not allowing communications would be a source of complaint and attendant resource tie-up that would cause any such system to become economically unviable.
However, the flip side is what the Dutch “secure phone cracking” works on: users are lazy/unknowing/want convenience. The secure apps are put on the same device as the communications end point. This means that using any of a whole number of techniques to do an “end run” attack around the security end point in the app, to get at the plaintext user interface, is almost trivial. Thus it’s game over for the users, unless they take the security end point off of the device; then it’s game over for the eavesdroppers, as the security end point is now out of reach.
However, this does leave the eavesdroppers the meta-data for “Traffic Analysis”, which Gordon Welchman thought up, again in WWII. However, enough is known about TA that users can take steps to make it useless, not just to LEAs, which almost always need the plaintext of messages rather than the meta-data, but also to the IC, who usually only need the meta-data that TA provides.
None of this information is either secret or new; it’s all been in the public domain since WWII, or in the case of TA since the 1980s.
Which begs the question, “Why are people not aware of it?”.
The term is new, but the concept is not. Throughout the history of computing, IT organizations have used their own infrastructure to host applications, data, servers etc. Now most of them rent the infrastructure, with remote servers to host their applications or data. Organizations called service providers exist specifically to provide, manage and maintain the infrastructure on which their client organizations’ applications and data are hosted. The client organization gets access controls to manage its applications and data hosted on the remote server. This is the main idea behind cloud computing.