Of testing centres, snipe, and wild geese: COVID briefing paper #8

Does the road wind up-hill all the way?
   Yes, to the very end.
Will the day's journey take the whole long day?
   From morn to night, my friend.

Christina Rossetti, 1861: Up-Hill. 

This week’s COVID briefing paper takes a personal perspective as I recount my many adventures in complying with a call for testing from my local council.

So as to immerse the reader in the experience, this post is long. If you don’t have time for that, you can go directly to the briefing.

The council calls for everyone in my street to be tested

On Thursday 13 August my household received a hand-delivered letter from the chief executive of my local council. There had been an increase in cases in my area, and as a result, they were asking everyone on my street to get tested.

Dramatis personae:

  • ME, a knowledge worker who has structured her life so as to minimize interaction with the outside world until the number of daily cases drops a lot lower than it is now;
  • OTHER HOUSEHOLD MEMBERS, including people with health conditions, who would be shielding if shielding hadn’t ended on August 1.

Fortunately, everyone else in my household is also in a position to enjoy the mixed blessing of a lifestyle without social interaction. So, given our habits over the last six months, none of us reacted to the news of an outbreak amongst our neighbours with fear for our own health. Rather, we were, and are, reassured that the local government was taking a lead.

My neighbour, however, was having a different experience. Like most people on our street, he does not have the same privileges I do: he works in a supermarket, he does not have a car, and his only Internet access is through his dumbphone. Days before, he had texted me at the end of his tether, because customers were not wearing masks or observing social distancing. He felt (because he is) unprotected, and said it was only a matter of time before he became infected. Receiving the council’s letter only reinforced his alarm.

Booking the tests

Continue reading Of testing centres, snipe, and wild geese: COVID briefing paper #8

Cambridge Cybercrime Centre: COVID briefing papers

The current coronavirus pandemic has significantly disrupted all our societies and, we believe, it has also significantly disrupted cybercrime.

In the Cambridge Cybercrime Centre we collect crime-related datasets of many types and we expect, in due course, to be able to identify, measure and document this disruption. We will not be alone in doing so — a key aim of our centre is to make datasets available to other academic researchers so that they too can identify, measure and document. What’s more, we make this data available in a timely manner — sometimes before we have even looked at it ourselves!

When we have looked at the data and identified what might be changing (or where the criminals are exploiting new opportunities) then we shall of course be taking the traditional academic path of preparing papers, getting them peer reviewed, and then presenting them at conferences or publishing them in academic journals. However, that process is extremely slow — so we have decided to provide a faster route for getting out the message about what we find to be going on.

Our new series of “COVID Briefing Papers” consists of short-form, open-access reports aimed at academics, policymakers, and practitioners, providing an accessible summary of our ongoing research into the effects which the coronavirus pandemic (and government responses) are having on cybercrime. We’re hoping, at least for a while, to produce a new briefing paper each week … and you can now read the very first, in which Ben Collier explains what has happened to illegal online drug markets… just click here!

Reinforcement Learning and Adversarial thinking

We all know that learning a new craft is hard. We spend a large part of our lives learning how to operate in everyday physics. Much of this learning comes from observing others, and when others can’t help we learn through trial and error.

In machine learning, the process of learning how to deal with the environment is called Reinforcement Learning (RL). By continuous interaction with its environment, an agent learns a policy that enables it to perform better. Observational learning in RL is referred to as Imitation Learning. Both trial and error and imitation learning are hard: environments are not trivial, you often can’t tell the ramifications of an action until far in the future, environments are full of non-determinism, and there is no such thing as a single correct policy.

So, unlike in supervised and unsupervised learning, it is hard to tell if your decisions are correct. Episodes usually comprise thousands of decisions, and you will only know whether you performed well after exploring other options. But experimenting is itself a hard decision: do you exploit the skill you already have, or try something new and explore the unknown?
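The exploit-or-explore trade-off above can be sketched in the simplest RL setting, a k-armed bandit with an epsilon-greedy policy. This is a toy illustration only: the arm values, epsilon, and step count below are invented for the example.

```python
import random

def epsilon_greedy_bandit(true_means, steps=10000, epsilon=0.1, seed=0):
    """Trial-and-error learning on a k-armed bandit: estimate each arm's
    value from noisy rewards, exploiting the best estimate most of the
    time and exploring a random arm with probability epsilon."""
    rng = random.Random(seed)
    k = len(true_means)
    estimates = [0.0] * k   # running estimate of each arm's value
    counts = [0] * k        # how often each arm was pulled
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(k)                            # explore
        else:
            arm = max(range(k), key=lambda a: estimates[a])   # exploit
        reward = rng.gauss(true_means[arm], 1.0)  # noisy feedback
        counts[arm] += 1
        # incremental mean update of the value estimate
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates, counts

estimates, counts = epsilon_greedy_bandit([0.2, 0.5, 0.9])
best = max(range(3), key=lambda a: estimates[a])
```

Even in this trivially small environment the tension is visible: pull the arm that currently looks best and you may never discover a better one; explore too much and you waste reward on arms you already know are poor.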

Despite all these complexities, RL has managed to achieve incredible performance in a wide variety of tasks from robotics through recommender systems to trading. More impressively, RL agents have achieved superhuman performance in Go and other games, tasks previously believed to be impossible for computers. 

Continue reading Reinforcement Learning and Adversarial thinking

Towards greater ecological validity in security usability

When you are a medical doctor, friends and family invariably ask you about their aches and pains. When you are a computer specialist, they ask you to fix their computer. About ten years ago, most of the questions I was getting from friends and family as a security techie had to do with frustration over passwords. I observed that what techies had done to the rest of humanity was not just wrong but fundamentally unethical: asking people to do something impossible and then, if they got hacked, blaming them for not doing it.

So in 2011, years before the FIDO Alliance was formed (2013) and Apple announced its smartwatch (2014), I published my detailed design for a clean-slate password replacement I called Pico, an alternative system intended to be easier to use and more secure than passwords. The European Research Council was generous enough to fund my vision with a grant that allowed me to recruit and lead a team of brilliant researchers over a period of five years. We built a number of prototypes, wrote a bunch of papers, offered projects to a number of students and even launched a start-up, thereby learning a few first-hand lessons about business, venture capital, markets, sales and the difficult process of transitioning from academic research to a profitable commercial product. During all those years we changed our minds a few times about what ought to be done, and we came to understand much better both the problem space and the mindset of the users.

Continue reading Towards greater ecological validity in security usability

Security and Human Behaviour 2020

I’ll be liveblogging the workshop on security and human behaviour, which is online this year. My liveblogs will appear as followups to this post. This year my program co-chair is Alice Hutchings and we have invited a number of eminent criminologists to join us. Edited to add: here are the videos of the sessions.

Three paper Thursday: Ethics in security research

Good security and cybercrime research often creates an impact and we want to ensure that impact is positive. This week I will discuss three papers on ethics in computer security research in the run up to next week’s Security and Human Behaviour workshop (SHB). Ethical issues in research using datasets of illicit origin (Thomas, Pastrana, Hutchings, Clayton, Beresford) from IMC 2017, Measuring eWhoring (Pastrana, Hutchings, Thomas, Tapiador) from IMC 2019, and An Ethics Framework for Research into Heterogeneous Systems (Happa, Nurse, Goldsmith, Creese, Williams).

Ethical issues in research using datasets of illicit origin (blog post) came about because in prior work we had noticed that there were ethical complexities to take care of when using data that had “fallen off the back of a lorry”, such as the backend databases of hacked booter services that we had used. We took a broad look at existing published guidance to synthesise those issues which particularly apply to using data of illicit origin and which we expected to see discussed by researchers:

Continue reading Three paper Thursday: Ethics in security research

How to jam neural networks

Deep neural networks (DNNs) have been a very active field of research for eight years now, and for the last five we’ve seen a steady stream of adversarial examples – inputs that will bamboozle a DNN so that it thinks a 30mph speed limit sign is a 60 instead, and even magic spectacles to make a DNN get the wearer’s gender wrong.

So far, these attacks have targeted the integrity or confidentiality of machine-learning systems. Can we do anything about availability?

Sponge Examples: Energy-Latency Attacks on Neural Networks shows how to find adversarial examples that cause a DNN to burn more energy, take more time, or both. They affect a wide range of DNN applications, from image recognition to natural language processing (NLP). Adversaries might use these examples for all sorts of mischief – from draining mobile phone batteries, through degrading the machine-vision systems on which self-driving cars rely, to jamming cognitive radar.

So far, our most spectacular results are against NLP systems. By feeding them confusing inputs we can slow them down by a factor of over 100. There are already examples in the real world where people pause or stumble when asked hard questions, but we now have a dependable method for generating such examples automatically and at scale. We can also neutralise the performance improvements of accelerators for computer vision tasks, forcing them to operate at their worst-case performance.
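The core idea can be illustrated with a toy black-box search for a costly input. This is only a sketch: the real attacks in the paper run against actual DNNs and measure genuine energy and latency on hardware, whereas here a hand-written cost function stands in for the victim, and the vocabulary and search loop are invented for the example.

```python
import random

def toy_inference_cost(tokens):
    """Stand-in for a victim model: out-of-vocabulary tokens trigger a
    hypothetical slow fallback path (a real attack would time a DNN)."""
    vocab = {"the", "a", "cat", "sat", "mat"}
    return sum(1 if t in vocab else 50 for t in tokens)

def find_sponge(alphabet, length=8, iters=500, seed=0):
    """Black-box hill-climbing for an input that maximises the victim's
    cost: mutate one token at a time, keep the mutation if cost rises."""
    rng = random.Random(seed)
    best = [rng.choice(alphabet) for _ in range(length)]
    best_cost = toy_inference_cost(best)
    for _ in range(iters):
        cand = list(best)
        cand[rng.randrange(length)] = rng.choice(alphabet)
        cost = toy_inference_cost(cand)
        if cost > best_cost:
            best, best_cost = cand, cost
    return best, best_cost

alphabet = ["the", "cat", "sat", "zzqx", "vrbl"]
sponge, cost = find_sponge(alphabet)
```

The attacker needs no gradients and no knowledge of the model's internals: it is enough to observe how long (or how much energy) each query takes and keep the inputs that make things worse.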

One implication is that engineers designing real-time systems that use machine learning will have to pay more attention to worst-case behaviour; another is that when custom chips for accelerating neural network computations use optimisations that widen the gap between worst-case and average-case outcomes, you’d better pay even more attention.

Three Paper Thursday – Analysing social networks within underground forums

One would be hard pressed to find an aspect of life where networks are not present. Interconnections are at the core of complex systems – such as society, or the world economy – allowing us to study and understand their dynamics. Some of the most transformative technologies are based on networks, be they hypertext documents making up the World Wide Web, interconnected networking devices forming the Internet, or the various neural network architectures used in deep learning. Social networks that are formed based on our interactions play a central role in our everyday lives; they determine how ideas and knowledge spread, and they affect behaviour. This is also true for cybercriminal networks present on underground forums, and social network analysis provides valuable insights into how these communities operate, whether on the dark web or the surface web.

For today’s post in the series “Three Paper Thursday”, I’ve selected three papers that highlight the valuable information we can learn from studying underground forums if we model them as networks. Network topology and large-scale structure provide insights into information flow and interaction patterns. These properties, along with the discovery of central nodes and the roles they play in a given community, are useful not only for understanding the dynamics of these networks but also for practical purposes, such as devising disruption strategies.
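As a toy illustration of what finding central nodes involves, here is degree centrality computed over a hypothetical forum interaction graph. All usernames and edges below are invented; real analyses work with far larger scraped datasets and richer metrics than this sketch.

```python
from collections import defaultdict

# Hypothetical interaction edges: (poster, replier) pairs that might be
# extracted from forum threads. The names are purely illustrative.
edges = [("admin", "newbie1"), ("admin", "newbie2"), ("admin", "vendor"),
         ("admin", "lurker"), ("vendor", "newbie1"), ("newbie1", "newbie2")]

def degree_centrality(edges):
    """Fraction of other nodes each node interacts with -- a simple
    first pass at spotting central actors in a community."""
    neighbours = defaultdict(set)
    for u, v in edges:          # treat interactions as undirected
        neighbours[u].add(v)
        neighbours[v].add(u)
    n = len(neighbours)
    return {node: len(nbrs) / (n - 1) for node, nbrs in neighbours.items()}

centrality = degree_centrality(edges)
most_central = max(centrality, key=centrality.get)
```

Degree centrality is the crudest such measure; the papers discussed below draw on a wider toolkit (community structure, roles, information flow), but the underlying move is the same: turn forum interactions into a graph and let its structure reveal who matters.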

Continue reading Three Paper Thursday – Analysing social networks within underground forums

Hiring for the Cambridge Cybercrime Centre

We have just advertised some short-term “post-doc” positions in the Cambridge Cybercrime Centre: https://www.cambridgecybercrime.uk.

We are specifically interested in extending our data collection to better record how cybercrime has changed in response to the COVID-19 pandemic, and we wish to mine our datasets in order to understand whether cybercrime has increased, decreased or been displaced during 2020.

There are a lot of theories being proposed as to what may or may not have changed, often based on handfuls of anecdotes — we are looking for researchers who will help us provide data-driven descriptions of what is (now) going on — which will feed into policy debates as to the future importance of cybercrime and how best to respond to it.

We are not necessarily looking for existing experience in researching cybercrime, although this would be a bonus. However, we are looking for strong programming skills — and experience with scripting languages and databases would be much preferred. Good knowledge of English and communication skills are important.

Since these posts are only guaranteed to be funded until the end of September, we will be shortlisting candidates for (online) interview as soon as possible (NOTE the application deadline is less than ONE WEEK AWAY) and will be giving preference to people who can take up a post without undue delay. The rapid timescale of the hiring process means that we will only be able to offer positions to candidates who already have permission to work in the UK (which, as a rough guide, means UK or EU citizens or those with existing appropriate visas).

We do not realistically expect to be permitted to return to our desks in the Computer Laboratory before the end of September, so successful candidates will need to be able to “work from home” … not necessarily within the UK.

Please follow this link to the advert to read the formal advertisement for the details about exactly who and what we’re looking for and how to apply.

Cybercrime is (often) boring

Much has been made in the cybersecurity literature of the transition of cybercrime to a service-based economy, with specialised services providing Denial of Service attacks, cash-out services, escrow, forum administration, botnet management, or ransomware configuration to less-skilled users. Despite this acknowledgement of the ‘industrialisation’ of much of the cybercrime economy, the picture of cybercrime painted by law enforcement and media reports is often one of ‘sophisticated’ attacks, highly-skilled offenders, and massive payouts. In fact, as we argue in a recent paper accepted to the Workshop on the Economics of Information Security this year (and covered in KrebsOnSecurity last week), cybercrime-as-a-service relies on a great deal of tedious, low-income, and low-skilled manual administrative work.

Continue reading Cybercrime is (often) boring