Abstract:
Cyber-attacks, i.e., disruptions of the normal functioning of computers and losses of private information through malicious network events, are becoming widespread. Deception and Intrusion Detection Systems (IDSs) are two promising interventions for securing networks against cyber-attacks. However, little is known about how these interventions interact with human decision-makers in the roles of hackers (people who wage cyber-attacks) and analysts (people who defend networks against cyber-attacks). This thesis aims to understand the decisions of people performing as hackers and analysts in cyber-security games, using both lab-based experiments and computational cognitive models. First, we experimentally investigated the role of the amount and timing of deception on hackers' decisions using a deception game. Results revealed that the average proportion of cyber-attack actions was lower, and that of not-attack actions higher, when deception occurred late and its amount was high. Next, we developed and used a real-world simulation tool called HackIt to replicate our lab-based findings from the deception game. Furthermore, we conducted another experiment to investigate the roles of the availability of IDS alerts and of information about opponents' actions in games involving participants performing as hackers and analysts. Results revealed that IDS availability influenced the proportion of defend actions, while the availability of information about opponents' actions influenced the proportion of attack actions. To understand the cognitive mechanisms that drive hackers' and analysts' decisions in the presence of deception and IDSs, we developed computational cognitive models based upon Instance-Based Learning (IBL) theory, a theory of decisions made by relying upon the recency and frequency of experienced information. Results from an IBL model calibrated to human data in the deception game revealed that hackers relied heavily upon recent information when deception occurred late and its amount was high. Furthermore, we calibrated IBL models to human data collected in games involving IDS alerts. Results revealed that an IBL model that specifically accounted for IDS-alert information in its memory structure accurately explained the human data when IDS alerts were available, whether the IDS was accurate or inaccurate. The IBL models were also compared with models using ACT-R default parameters and with Nash solutions to evaluate their ability to account for hackers' and analysts' decisions across conditions differing in the availability and accuracy of IDSs. We found that the IBL model with calibrated parameters captured human decisions more accurately than both the ACT-R default models and the Nash solutions. We highlight the implications of these results for cyber decision-making in the presence of interventions like deception and IDSs.