Free, Open-Source, and Anonymous: Why deep learning regulators are in deep water
- Date created
- May 2019
Item belongs to a collection
Stanford University, Center for International Security and Cooperation, Interschool Honors Program in International Security Studies, Theses
Collected here are the theses written by the CISAC Undergraduate Honors Program students during their senior year at Stanford.
- Digital collection
- 96 digital items
Deep learning models have driven recent breakthroughs in artificial intelligence. Beyond autonomous navigation and game-playing, some deep learning applications, such as facial recognition and deep fakes (counterfeit audio and video created by algorithms), pose challenges to individual privacy and US national security. The Director of National Intelligence's 2019 Worldwide Threat Assessment explicitly cites the growing threat from deep fakes and machine learning systems. Yet because most deep learning software, datasets, and academic papers are publicly accessible, individuals can effortlessly acquire the technical prerequisites to develop deep learning models, build surveillance systems, and wield instruments of mass deception. Thousands of deep fakes of politicians and celebrities have already been shared on the internet. The first step toward combating this threat is understanding how deep learning spreads and how its proliferation differs from that of other dual-use technologies, or technologies with both civilian and military applications. This thesis seeks to fill that gap by examining how all three critical components of deep learning (data, software, and hardware) have become accessible to internet users. We analyze how existing approaches to mitigating risks from dual-use technology, including establishing norms, limiting supply, and controlling exports, may fail to effectively delay or prevent deep learning threats. After tracing how deep fake technology spread from academia to individuals, we present an original experimental study of how technical countermeasures could confuse a facial recognition deep learning model and thereby mitigate risks from deep fakes. Our experiments indicate how US policymakers could leverage both technical and policy mechanisms to delay or undermine malicious deep learning systems.
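The abstract mentions technical countermeasures that confuse a facial recognition model but does not specify the method; the thesis itself details the experiments. As a rough illustration of the general class of techniques (not the thesis's specific approach), a gradient-based perturbation such as the Fast Gradient Sign Method can nudge an image just enough to change a classifier's prediction while remaining nearly imperceptible. The model and parameter values below are illustrative assumptions, sketched in PyTorch:

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Fast Gradient Sign Method (Goodfellow et al.): add a small
    perturbation in the direction that increases the classifier's
    loss, which often flips the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon along the loss gradient's sign,
    # then clip back to the valid [0, 1] pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Illustrative usage against a stand-in classifier (a real face
# recognition system would replace this toy model).
torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
img = torch.rand(1, 3, 8, 8)        # hypothetical 8x8 RGB input
lbl = torch.tensor([3])             # hypothetical true identity label
adv = fgsm_perturb(model, img, lbl, epsilon=0.05)
```

Defensive uses of this idea include perturbing published photos so that scraped datasets yield degraded surveillance or deep-fake models; the effectiveness of any particular perturbation depends heavily on the target architecture.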
- Preferred Citation
- Milich, Andrew Burke. (2019). Free, open-source, and anonymous: Why deep learning regulators are in deep water. Stanford Digital Repository. Available at: https://purl.stanford.edu/fp343bx6008.
- Use and reproduction
- User agrees that, where applicable, content will not be used to identify or to otherwise infringe the privacy or confidentiality rights of individuals. Content distributed via the Stanford Digital Repository may be subject to additional license and use restrictions applied by the depositor.