David Modic and I have just published a paper, “The psychology of malware warnings”. We’re constantly bombarded with warnings designed to cover someone else’s back, but what sort of text should we put in a warning if we actually want the user to pay attention to it?
To our surprise, social cues didn’t seem to work. What works best is to make the warning concrete; people ignore general warnings such as that a web page “might harm your computer” but do pay attention to a specific one such as that the page would “try to infect your computer with malware designed to steal your bank account and credit card details in order to defraud you”. There is also some effect from appeals to authority: people who trust their browser vendor will avoid a page “reported and confirmed by our security team to contain malware”.
We also analysed who turned off browser warnings, or would have if they’d known how: they were people who ignored warnings anyway, typically men who distrusted authority and either couldn’t understand the warnings or were IT experts.
“Turning off malware warnings was significantly predicted by IT proficiency (the more familiar the user is with IT, the more likely they are to switch the warnings off)” (p14) vs “Our analysis showed that the more familiar our respondents were with computers, the more likely they were to keep the malware warnings on.” (p23)
I didn’t read closely enough to get the subtlety of these two statements. Could anyone explain?
My interest was piqued by the early mention of overconfidence in one’s ability to avoid fraud as a risk factor for falling victim to it. Was clicking through to a fraudulent site related to whether warnings were kept on, and did the survey ask about respondents’ confidence in their ability to avoid fraud? I did not see a mention of either, but again I may not have read closely enough.
Thanks for posting this interesting and important paper!
“Descriptive analysis shows that across all groups, the leading reason for individuals to turn the warnings off is a high rate of false positives.”
… which is why it’s lamentable that the focus of most security-oriented user interface research is on how to make life harder for all those naughty users who ignore/turn off the warnings, rather than on technological means for reducing the false positive rate.
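As a back-of-the-envelope illustration of why false positives dominate (the numbers here are purely hypothetical, not from the paper): suppose 1 in 10,000 pages a user visits is actually malicious, and the detector flags 99% of malicious pages while also flagging 1% of clean ones. Bayes’ theorem then gives

\[
P(\text{malicious} \mid \text{warning})
  = \frac{0.99 \times 0.0001}{0.99 \times 0.0001 + 0.01 \times 0.9999}
  \approx 0.0098,
\]

i.e. roughly 99% of the warnings the user actually sees would be false alarms. Under assumptions anything like these, even a very accurate detector trains users to click through.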
End of section 3.3: “[The model 1 analysis’] odds ratio indicates that for every unit of increasing level of agreement with the wish to ignore malware warnings, individuals are approximately twice as likely to keep the warnings on.”
That doesn’t make sense. Is it an error in the paper?
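For what it’s worth, here is my reading of standard logistic-regression output, not a claim about the authors’ coding: the odds ratio reported for a predictor is the exponentiated regression coefficient,

\[
\log \frac{P(Y=1)}{1 - P(Y=1)} = \beta_0 + \beta_1 x,
\qquad \mathrm{OR} = e^{\beta_1} \approx 2,
\]

so each one-unit increase in x (here, agreement with wanting to ignore warnings) should roughly double the odds of the outcome coded Y = 1. The quoted sentence only works if Y = 1 means the warnings were turned off, so either the outcome coding or the wording does look inverted.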
“Trust in the team was positively correlated with trust in the company (r(491) = .626, p < .001) and negatively correlated with the trust in authorities in general (r(491) = -.285, p < .001).”
Don't you mean negatively correlated with distrust of authorities in general? (Similarly for the next sentence.)
Nice writeup in TechRepublic.
Brilliant!
The boy who cried “this link may possibly perhaps maybe harm your computer”.
As well as having important implications for usability and design, this paper is also very helpful for information security awareness practitioners.
People routinely filter risk advice and it’s important to improve our understanding of how their filtering actually works.
Thanks!
SSRN tells us we’re now the top download for cognitive science for the past 60 days, and indeed we’re already the tenth-highest of all time for the Journal of Writing Technologies! This is truly astonishing for what we considered a warm-up exercise to get our research programme on deterring deception underway.