2019 Cambridge Cybercrime Conference
Jamie Saunders spent 20 years at GCHQ, then was director of cyber policy at the Foreign Office and director of cyber and intelligence at the NCA. Now he’s splitting his time between UCL, Oxford and commercial work. How has cybercrime evolved in five years? Operationally a lot has changed, but strategically not much has. Most of the elite, often Russian speakers, are still making lots of money. Law enforcement has had some tactical wins, but it’s about containment. Mass petty crime is the bedrock: fraud, sextortion and harassment at such scale that law enforcement can’t budge it. The public policy realm shows a wearied resignation; more resource is steadily going into cybercrime, but it’s not headline stuff. There’s no public agitation for more cybercrime fighters. Two things might shift matters strategically: nation-state threats as capabilities proliferate, and technological change. The World Economic Forum has set up a cybercrime centre and has a project, “Future cybercrime 2025”, to identify technologies that might alter the attack/defence balance. David Birch reckons good identity management and electronic money will change things, while Sadie Creese believes in cyborgs. They’ll hold a conference in Atlanta in September and talk about AI. He has to write a report by September 2020; ideas are welcome.
Sergio Pastrana has been measuring the crypto-mining malware ecosystem. Illicit mining takes two forms: web-based cryptojacking, where miners are embedded in scripts, and binary-based, where mining malware is spread by botnets. These bots mine currencies such as Monero in public mining pools. Yuxing Huang calculated in 2014 that such mining would not pay for a botnet, but might provide a useful secondary income stream. Sergio has been analysing the Cambridge CrimeBB dataset from 2007–2018 and built a system that links binary analysis with network analysis to give a profit analysis per campaign. Monero was by far the most commonly mined cryptocurrency, with over 2,000 wallets involved in mining activity, which he could group into campaigns by looking at shared proxies, crime-forum messages and other data. The binary miners he studied accounted for 4.5% of the Monero in circulation and perhaps $80m in profits.
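The clustering step can be illustrated with a toy sketch: treat wallets as linked whenever they share infrastructure such as a proxy, and take the connected components as campaigns. This is a minimal union-find version with made-up wallet and proxy names, not Sergio’s actual pipeline:

```python
class UnionFind:
    """Disjoint-set structure for grouping linked wallets."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def cluster_campaigns(observations):
    """observations: (wallet, proxy) pairs seen in malware samples.
    Wallets sharing a proxy end up in the same campaign."""
    uf = UnionFind()
    proxy_rep = {}  # first wallet seen using each proxy
    for wallet, proxy in observations:
        uf.find(wallet)
        if proxy in proxy_rep:
            uf.union(wallet, proxy_rep[proxy])
        else:
            proxy_rep[proxy] = wallet
    campaigns = {}
    for wallet in list(uf.parent):
        campaigns.setdefault(uf.find(wallet), set()).add(wallet)
    return list(campaigns.values())
```

The real linking combined several signals (shared proxies, forum messages and more), but the connected-components idea carries over.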
Victoria Wang has been studying the not-so-dark web. She got 17 darknet users to answer 10 interview questions; they came up with many positive reasons for using anonymous fora, including the ability to speak freely, which is valuable for people in countries such as Saudi Arabia where a hint of lack of faith can lead to charges of apostasy, with deadly consequences; the fora can also host frank discussion of policy issues. Some started off using fora to buy drugs but shifted over time to community participation. Others denounced the mainstream media as propaganda, arguing that Silk Road brought real social benefits by improving drug quality and safety while reducing street violence.
Jack Hughes has been studying cybercrime activity in an underground gaming forum. He manually selected key actors who had released cracking tools or tutorials, or had advertised DDoS-for-hire; he then used social network analysis, activity metrics and various NLP tools to analyse sentiment. Gamers tend to start off interested in gaming; over time they come to spend most of their time on community activities, then gaming, then market activities. Social network analysis lets him identify bridge actors who can get members to transition from gaming to crime. As for those who dally with crime, most are fickle and lose interest, while a minority are sustainers who keep at it. He developed classifiers to predict who will become a key actor; the best is a random forest, but that’s not fully explainable. Key actors have a higher h-index, higher impact and higher eigenvector centrality, and tend to sustain low-frequency posting on market threads combined with high-frequency posting on gaming threads. However, no single technique was effective by itself. In practical terms, this suggests that law enforcement might try to disrupt low-level sustaining activity in the marketplace.
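One of the features mentioned, eigenvector centrality, can be computed by power iteration on the forum’s interaction graph. A self-contained sketch on a made-up graph in which one member (“kay”) bridges the others; this illustrates the metric, not Jack’s actual code:

```python
import math

def eigenvector_centrality(adj, iterations=100, tol=1e-9):
    """Power iteration on an undirected graph given as {node: set(neighbours)}.
    Returns scores normalised to unit L2 norm."""
    nodes = list(adj)
    x = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        x_new = {n: sum(x[m] for m in adj[n]) for n in nodes}
        norm = math.sqrt(sum(v * v for v in x_new.values())) or 1.0
        x_new = {n: v / norm for n, v in x_new.items()}
        if sum(abs(x_new[n] - x[n]) for n in nodes) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical interaction graph: "kay" is connected to everyone.
graph = {
    "kay": {"a", "b", "c"},
    "a": {"kay", "b"},
    "b": {"kay", "a"},
    "c": {"kay"},
}
scores = eigenvector_centrality(graph)
```

As expected, the bridge actor gets the highest score, since eigenvector centrality rewards being connected to other well-connected members.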
Greg Francis is the acting national lead for preventing cybercrime at the NCA. His focus is preventing serious organised crime, meaning offences that warrant three years or more in prison. He’s interested in whether competent malware writers are sufficiently rare that a focus on them might give results. He has 9 officers in the Prevent team and a further 26 across the country, out of a total of 300 officers working on cybercrime. The average age of arrest for serious organised crime across the NCA is 37, but for cybercrime it’s 19–21. The mechanism appears to be that youngsters with an aptitude for coding can rapidly find themselves in an online milieu that gives them enormous validation, and a huge incentive to learn more. Many have autism spectrum disorder or similar traits, or are at least socially awkward. There’s a huge gap between their low status offline and their high status online. The path from gaming, cheats and modding leads them into fora with even more challenge and even higher rewards. The regulators, such as parents and teachers, have no clue what’s happening other than that their kid is coding, and so have no grounds to intervene. There’s a lot of work to be done on raising parental awareness, and on educating kids who are bored and bright but not intrinsically bad about the Computer Misuse Act (and the possible outcome of extradition to the USA with multi-decade sentences). The NCA has also been developing cease-and-desist letters to put kids on notice, break the online bubble, increase risk perception and create parental awareness. It’s also important to keep bright kids stimulated; bad teachers are also a cause of cybercrime.
Ugur Akyazi has been analysing cybercrime-as-a-service using data from AlphaBay, Hansa and other closed dark markets. His goal is to understand the evolution of criminal business markets (these market closures may be starting to drive users to channels such as Telegram). The data enabled him to develop a framework for analysing the underground service economy, with its offers of CAPTCHA solvers, phone/SMS verification hacks, password cracking, e-whoring and much more. The offerings range from platform rental and standalone services to products sold with remote services or remote support. The long-term aim is to understand trading and communication processes in order to support disruption strategies.
Qiu-Hong Wang has been studying cybersecurity tools that are dual-use, in that they can serve both offence and defence. Do cybersecurity laws have a chilling effect on defensive tool production and use? She studied the effect on hacker forums of a tightening of computer-misuse law in Singapore, which criminalised dealing in items capable of being used to commit an offence. She manually labelled some 50,000 posts as offensive, defensive or neutral; after enforcement, offensive posts dropped from 8.78% to 5.84%, while the defensive variety increased from 6.62% to 12.63%. More sophisticated analysis with a mixed nested logit model disclosed deterrence, substitution and chilling effects.
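Her inference used a mixed nested logit model; as a much simpler illustration, a pooled two-proportion z-test on the offensive-post shares already shows the drop is far beyond chance. The post counts below are invented (an even 25,000/25,000 split before and after), chosen only to match the reported percentages:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-statistic for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)            # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Made-up sample sizes; the 8.78% / 5.84% shares are from the talk.
n_before = n_after = 25_000
off_before = round(0.0878 * n_before)    # offensive posts before enforcement
off_after = round(0.0584 * n_after)      # offensive posts after enforcement
z = two_proportion_z(off_before, n_before, off_after, n_after)
```

With samples this large, the z-statistic lands far above the 1.96 threshold for significance at the 5% level.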
Leonie Tanczer is studying the effect of smart technologies on domestic and sexual abuse. Technology is gendered; look at bicycles, MRI machines or even phones. Women suffer worse in car crashes because crash-test dummies are based on men. Security is no different; there’s a growing body of work on technology-enabled abuse, from cyberstalking and spyware to nonconsensual intimate imagery. She’s worried about “smart abuse”, where ordinary devices like kettles acquire capabilities that can be used to extend coercive control of family members; physical, sexual, emotional and technical violence all come as a package. Technologists often both overestimate and underestimate the capabilities of the devices they produce. Leonie’s approach is action research; she works with the London violence against women and girls consortium, which has 27 shelters across London, and with Privacy International. A survey showed that a typical member of shelter staff encounters tech-related abuse more than once a week: phones, watches, Alexa. In theory such violence is easier to prove, but the police have a six-month backlog for phones and haven’t started to look at IoT devices yet. The police also trivialise digital abuse; in one case a woman whose partner hacked her Gmail was told it wasn’t an offence (although it is). Abuse builds slowly over years, raising the question of how you even detect it. There are further issues in the escape-planning phase and the life-apart phase. Some support services are starting to have ways of categorising tech abuse, but most don’t; so a current effort is to educate them to know what questions to ask and what basic advice to give. We need something in the UK comparable to the Cornell centre.
Diego Silva has been mining the dark web to predict exploits in the wild. Intelligence is the systematic acquisition of information about capabilities and intent, for exploitation by leaders at all levels; for cyber threats this means tactics, techniques and procedures at the top end, and just below that the tools, on which Diego focuses. He uses 11 million CrimeBB posts from 2015–18, and a similar number from Orpheus Cyber, the NIST National Vulnerability Database, China’s Seebug, the marketplace 0Day.today and Packet Storm; threat intelligence comes from Trend Micro, Symantec, Orpheus and AlienVault (now AT&T Security). He analyses how proof-of-concept exploits end up being used for real. As an example, the CVE-2018-8120 exploit used in GandCrab ransomware also turned up in ROKRAT, malware used by the North Korean group APT37. There were 69 hits, mostly on cybercrime.in, before it was seen in the wild in October 2018. GandCrab’s operators announced their retirement in June, saying they’d harvested $2bn and kept $150m for themselves. Diego used the CRISP-DM framework for data mining. His goal is to use forum chatter to predict which CVEs will actually be seen in the wild.
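The first step in any such pipeline is simply extracting and counting CVE identifiers in forum chatter. A minimal sketch of that step (the threshold, function names and sample posts are mine for illustration, not Diego’s actual system):

```python
import re
from collections import Counter

# CVE IDs are "CVE-YYYY-NNNN" with 4 or more digits in the sequence part.
CVE_RE = re.compile(r"CVE-\d{4}-\d{4,7}", re.IGNORECASE)

def cve_mentions(posts):
    """Count CVE identifiers mentioned across a corpus of forum posts,
    normalising case so cve-2018-8120 and CVE-2018-8120 match."""
    counts = Counter()
    for post in posts:
        counts.update(m.upper() for m in CVE_RE.findall(post))
    return counts

def likely_exploited(posts, threshold=50):
    """Flag CVEs whose chatter exceeds a (tunable) mention threshold --
    a crude stand-in for a trained predictor."""
    return {cve for cve, n in cve_mentions(posts).items() if n >= threshold}
```

A real predictor would feed these counts, alongside NVD and threat-intelligence features, into a classifier rather than using a bare threshold.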
Ben Collier has been studying the impact of interventions in DDoS-for-hire markets. Such services enable anyone to launch attacks for a few dollars; how can we stop them? The police try techniques such as arrests, takedowns and messaging, and observe the effects when perpetrators are sentenced. Which works? Well, we can get self-reported data from the DDoS services themselves. Sentencing seems to have no effect at all, and arrests only a small one. Takedowns have a big effect though, especially when multiple takedowns happen at once, as the FBI did at the end of 2018. This completely changed the market structure; many small providers dropped out and one booter now has most of the market. Messaging works surprisingly well: the UK bought £3k of Google ads from January to May 2018 telling people that booters were illegal, and this suppressed demand growth here compared with the USA. Community and networks are important in cybercrime, just as in ordinary crime, so we can use social methods too: kicking booter operators off Hack Forums also had a noticeable effect, as operators could no longer use their activities to generate social capital. Other factors are that the operators are young, with fairly flimsy neutralisations; the misconception that booting is legal, or at least not taken seriously by the police; the lack of a value system or culture; and the fact that it’s a lemons market, dependent on third-party infrastructure. Booting people off games is low-harm and high-volume, so it can draw lots of people in; the relevant Discord servers are extreme right-wing, strongly misogynistic and strongly homophobic. The FBI are good at dealing with a handful of crooks targeting high-value targets, not a sea of spotty 15-year-olds. However, there are some cases of community policing at school level having an effect.
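The UK-versus-USA comparison is essentially a difference-in-differences design: compare demand growth in the market that saw the ads against the one that didn’t. A toy sketch with invented booter-demand figures (the real analysis used attack time-series, not two point measurements):

```python
def growth(pre, post):
    """Relative change between two demand measurements (e.g. attack counts)."""
    return (post - pre) / pre

def did(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences on growth rates: treated minus control.
    A negative result means the treated market grew more slowly."""
    return growth(treated_pre, treated_post) - growth(control_pre, control_post)

# Made-up figures: UK (with the ad campaign) vs USA (without).
effect = did(treated_pre=1000, treated_post=1050,    # UK: +5% growth
             control_pre=4000, control_post=4800)    # USA: +20% growth
```

Here the effect comes out at -0.15, i.e. 15 percentage points of demand growth suppressed relative to the control market, mirroring the direction of the finding in the talk.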
Richard Clayton concluded by describing how the Cambridge Cybercrime Centre collects data and counts things. We have all sorts of spam, phish, malware and honeypot datasets; we have 26 research groups signed up and 50+ researchers worldwide. The most popular by far is the CrimeBB database we’ve collected from Hack Forums since Sergio turned up; many users are from criminology, sociology and psychology. So the next phase will be to make the data more usable by people who are not computer scientists, by enabling them to find out if we have relevant data, whether by using AI to label it or by letting other people add labels and share them.