Depictions of cybercrime often revolve around the figure of the lone ‘hacker’, a skilled artisan who builds their own tools and has a deep mastery of technical systems. However, much of the work involved is now in fact more akin to a deviant customer service or maintenance job. This means that exit from cybercrime communities is less often via the justice system, and far more likely to be a simple case of burnout.
Data ordering attacks
Most deep neural networks are trained by stochastic gradient descent. Now “stochastic” is a fancy Greek word for “random”; it means that the training data are fed into the model in random order.
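To see where that randomness actually lives, here is a minimal sketch of an ordinary training loop (PyTorch assumed, with made-up data): the only source of ordering randomness is the shuffle flag on the data loader.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Made-up data standing in for a real training set.
X = torch.randn(1000, 20)
y = torch.randint(0, 2, (1000,))

# shuffle=True is where the "stochastic" in SGD comes from:
# the sample order is re-randomised at the start of every epoch.
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = torch.nn.Linear(20, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(5):
    for xb, yb in loader:
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
```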
So what happens if the bad guys can cause the order to be not random? You guessed it – all bets are off. Suppose for example a company or a country wanted to have a credit-scoring system that’s secretly sexist, but still be able to pretend that its training was actually fair. Well, they could assemble a set of financial data that was representative of the whole population, but start the model’s training on ten rich men and ten poor women drawn from that set – then let initialisation bias do the rest of the work.
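As a hypothetical illustration of that idea (not the paper's actual code), the dataset itself can stay representative while the loader is given a fixed index order that puts the hand-picked records first; the index lists rich_men and poor_women below are made up for the example. Because the earliest batches determine where gradient descent starts from, the early association can persist even though the rest of the data is balanced.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Same kind of made-up data as above; in the scenario these would be
# credit records with a known mapping from index to applicant.
X = torch.randn(1000, 20)
y = torch.randint(0, 2, (1000,))

# Hypothetical indices of the hand-picked records.
rich_men   = [3, 17, 42, 58, 91, 120, 205, 333, 408, 512]
poor_women = [7, 21, 60, 88, 102, 199, 250, 301, 377, 450]

chosen = rich_men + poor_women
rest = [i for i in range(len(X)) if i not in set(chosen)]
rest = [rest[i] for i in torch.randperm(len(rest)).tolist()]  # the rest looks innocently shuffled

# A DataLoader accepts any sequence of indices as a sampler, so the chosen
# twenty records form the very first batch and training proceeds from there.
loader = DataLoader(TensorDataset(X, y), batch_size=20, sampler=chosen + rest)
```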
Does this generalise? Indeed it does. Previously, people had assumed that in order to poison a model or introduce backdoors, you needed to add adversarial samples to the training data. Our latest paper shows that’s not necessary at all. If an adversary can manipulate the order in which batches of training data are presented to the model, they can undermine both its integrity (by poisoning it) and its availability (by causing training to be less effective, or take longer). This is quite general across models that use stochastic gradient descent.
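To make the availability half of that concrete, here is one hedged illustration of an ordering policy in this general spirit (a sketch, not necessarily the paper's exact attack): instead of shuffling, samples are presented sorted by the loss a surrogate model assigns them, a deterministic order that can make training markedly less effective than true random sampling.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

X = torch.randn(1000, 20)
y = torch.randint(0, 2, (1000,))

# The attacker's surrogate model: a stand-in for the victim, used only to rank samples.
surrogate = torch.nn.Linear(20, 2)
per_sample = torch.nn.CrossEntropyLoss(reduction="none")

with torch.no_grad():
    losses = per_sample(surrogate(X), y)

# A fixed, sorted presentation order: nothing about the data has changed,
# but the "stochastic" in stochastic gradient descent is gone.
order = torch.argsort(losses).tolist()
loader = DataLoader(TensorDataset(X, y), batch_size=32, sampler=order)
```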
This work helps remind us that computer systems with DNN components are still computer systems, and vulnerable to a wide range of well-known attacks. A lesson that cryptographers have learned repeatedly in the past is that if you rely on random numbers, they had better actually be random (remember preplay attacks) and you’d better not let an adversary anywhere near the pipeline that generates them (remember injection attacks). It’s time for the machine-learning community to carefully examine their assumptions about randomness.