Cyber risk management : AI-generated warnings of threats
- Isaac J. Faber.
- [Stanford, California] : [Stanford University], 2019.
- Physical description: 1 online resource.
- Faber, Isaac Justin, author.
- Paté-Cornell, M. Elisabeth (Marie Elisabeth), degree supervisor.
- Lin, Herbert, degree committee member.
- Shachter, Ross D., degree committee member.
- Stanford University. Department of Management Science and Engineering.
- This research presents a warning-systems model in which early-stage cyber threat signals are generated using machine learning and artificial intelligence (AI) techniques. In practice, cybersecurity is most often reactive: because it relies on manual human forensics of machine-generated data, security efforts begin only after a loss has taken place. The current security paradigm can be significantly improved. Cyber-threat behaviors can be modeled as a set of discrete, observable steps called a 'kill chain.' Data produced by observing early kill chain steps can support the automation of manual defensive responses before an attack causes losses. However, early AI-based approaches to cybersecurity have been sensitive to exploitation and burdened by high false-positive rates, resulting in low adoption and low trust from human experts. To address this problem, this research presents a collaborative decision paradigm in which machines make low-impact/high-confidence decisions based on human risk preferences and uncertainty thresholds. Human experts evaluate signals generated by the AI only when decisions exceed these thresholds. This approach unifies core concepts from the disciplines of decision analysis and machine learning by creating a super-agent. An early warning system using these techniques has the potential to avoid more severe downstream consequences by disrupting threats at the beginning of the kill chain.
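The collaborative decision paradigm summarized in the abstract can be sketched as a simple routing rule: the machine acts autonomously only on low-impact, high-confidence signals, and everything else is escalated to a human analyst. The following is a minimal illustrative sketch, not the dissertation's actual implementation; the threshold values, field names, and `route` function are all hypothetical.

```python
# Hypothetical sketch of threshold-based human/AI decision routing.
# Thresholds stand in for the human-set risk preferences and
# uncertainty bounds described in the abstract.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.95  # hypothetical uncertainty threshold
IMPACT_THRESHOLD = 0.20      # hypothetical risk-preference bound

@dataclass
class Alert:
    kill_chain_step: str  # e.g. "reconnaissance", "delivery"
    confidence: float     # model's probability this is a true threat
    impact: float         # estimated cost of acting automatically

def route(alert: Alert) -> str:
    """Machine handles low-impact/high-confidence alerts;
    humans evaluate signals exceeding either threshold."""
    if (alert.confidence >= CONFIDENCE_THRESHOLD
            and alert.impact <= IMPACT_THRESHOLD):
        return "automated-response"
    return "escalate-to-human"

print(route(Alert("reconnaissance", 0.99, 0.05)))  # automated-response
print(route(Alert("delivery", 0.80, 0.50)))        # escalate-to-human
```

Routing decisions this way keeps the false-positive burden on human experts low, since only ambiguous or high-stakes signals reach them, which is the adoption and trust problem the abstract identifies.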
- Submitted to the Department of Management Science and Engineering.
- Thesis (Ph.D.)--Stanford University, 2019.