Increasing Users' Cyber-Security Compliance by Reducing Present Bias

Principal Investigator(s): 
Serge Egelman

Despite recent advances in automating computer security, there are still situations in which humans must manually perform security tasks, such as enabling automatic updates, rebooting machines to apply those updates, configuring automatic backups, or enrolling in two-factor authentication. Yet even when people view these tasks as important for security, many still choose to ignore them. Two decades of usable security research have shown that such tasks are often seen as annoyances because they are almost never the user's primary objective. It is therefore no surprise that they are postponed, in some cases indefinitely, leaving users vulnerable for extended periods. While removing the human from the loop through increased automation can improve security, many important security tasks still require human action.

Researchers and technology companies have tested various techniques to increase security compliance rates, but the scientists working on this project know of no attempts to address what they consider the root cause of the problem: present bias, the tendency to discount future risks and gains in favor of immediate gratification. Techniques such as commitment devices, persuasion profiling, and affective computing have been shown to counter present bias across multiple research fields, including decision-making and psychology. Drawing on recent findings in behavioral economics on the effectiveness of pre-commitment nudges and persuasion techniques in other realms, such as saving for retirement or charitable giving, the researchers are investigating when and under what conditions commitment nudges, among other techniques aimed at countering present bias, can be used to improve users' security behaviors.

The goal is to perform experiments across specific security application areas impacted by present bias, such as applying system software updates, enrolling in two-factor authentication, and configuring automatic backups. Prior research has already shown that these security applications are affected by present bias, but rigorous empirical research is needed into ways of mitigating it.

Funding provided by NSF