In this paper (winner of the eCrime 2019 Best Paper award), we consider what can go wrong when you set out to make things better and more secure. Consider this scenario. You are browsing the Internet and see a news headline about one of the presidential candidates. Unsure whether the headline is true, you navigate to a fact-checking website and type in the headline of interest. Some platforms also have fact-checking bots that post periodic updates on false information. You check the story on three fact-checking websites, and the results consistently show that it contains false information. You share the results as a comment on the news article. Within two hours, you receive hundreds of notifications with comments countering your sources with other fact-checking websites.
Such a scenario is increasingly common as we rely on the Internet and social media platforms for information and news. Although cybersecurity countermeasures such as fact-checking are meant to increase security, they can cause confusion and frustration among users by adding extra steps to their daily online routines. As the scenario shows, fact-checking can easily be turned into a mechanism for attack and a way to mark in-group/out-group distinctions, which in turn contributes to group polarisation and fragmentation. We identify these negative effects as unintended harms, and define them as shifts in the expected burden and/or effort placed on a group.
To understand unintended harms, we begin with five scenarios of cyber aggression and deception. We identify common countermeasures for each scenario, and brainstorm the potential unintended harms of each countermeasure. The harms are inductively organised into seven categories: 1) displacement, 2) insecure norms, 3) additional costs, 4) misuse, 5) misclassification, 6) amplification and 7) disruption. Applying this framework to the scenario above, insecure norms, misuse, and amplification are all unintended consequences of fact-checking. Fact-checking can foster a sense of complacency, where any news that has been checked is automatically seen as true. It can also be used as a tool for attacking groups with different political views. Such misuse facilitates amplification: as fact-checking is used to strengthen in-group status, it further exacerbates group polarisation and fragmentation.
To allow practitioners and stakeholders to apply these categories systematically to existing or new cybersecurity measures, we expand them into a functional framework by developing prompts for each harm category. During this process, we identify an underlying need to consider vulnerable groups: practitioners and stakeholders need to consider the impact of a countermeasure on at-risk groups, as well as the possibility that deploying a countermeasure creates new vulnerable groups. Vulnerable groups are user groups who may suffer from a countermeasure while others are unaffected by it or even prosper. One example is older adults, whose unfamiliarity with and less frequent use of technology mean they are often overlooked when risks and/or countermeasures within a system are assessed.
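To make the checklist shape of the framework concrete, below is a minimal sketch in Python of how the prompts might be walked through for a given countermeasure. The prompt wording paraphrases the seven harm categories rather than quoting the paper's actual prompts, and the `review_countermeasure` helper is a hypothetical illustration, not tooling from the paper.

```python
# Illustrative sketch only: the prompt texts below are paraphrases of the
# seven harm categories, not the exact prompts developed in the paper.

HARM_PROMPTS = {
    "displacement": "Could the problem move to another platform, group, or channel?",
    "insecure norms": "Could the measure foster complacency or riskier behaviour?",
    "additional costs": "Does the measure add burden or effort for any group?",
    "misuse": "Could the measure itself be repurposed as a tool for attack?",
    "misclassification": "Could legitimate content or users be wrongly flagged?",
    "amplification": "Could the measure strengthen or spread the original harm?",
    "disruption": "Could the measure break existing, functioning practices?",
}


def review_countermeasure(name: str) -> dict:
    """Walk a proposed countermeasure through each harm prompt,
    collecting free-text answers from the reviewer."""
    answers = {}
    for category, prompt in HARM_PROMPTS.items():
        answers[category] = input(f"[{name}] {category}: {prompt}\n> ")
    # The framework also asks who may be left worse off while others are
    # unaffected: existing at-risk groups or newly created vulnerable groups.
    answers["vulnerable groups"] = input(
        f"[{name}] Which vulnerable groups could this measure affect or create?\n> "
    )
    return answers


# Example: reviewing fact-checking as a countermeasure to false news.
# notes = review_countermeasure("fact-checking")
```

As in the paper's intent, the sketch produces no severity scores; it simply records a reviewer's answers so that possible harms are surfaced and discussed rather than measured.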
It is important to note that the framework does not propose measurements of the severity or likelihood of an unintended harm occurring. Rather, its emphasis is on raising stakeholders' and practitioners' awareness of possible unintended consequences. We envision the framework as a common-ground tool for stakeholders, particularly for coordinating approaches in complex, multi-party services and/or technology ecosystems. We would like to extend a special thank you to Schloss Dagstuhl and the organisers of Seminar #19302 (Cybersafety Threats – from Deception to Aggression), which brought all of the authors together and laid out the core ideas in this paper. A complementary blog post by co-author Dr Simon Parkin can be found at UCL's Bentham's Gaze blog. The accepted manuscript for this paper is available here.