LEADER 03241nam a22003137i 4500
006 m d
007 cr un
008 161208s2016 xx sm 000 0 eng d
a| CSt c| CSt d| UtOrBLW
a| Osband, Ian. ?| UNAUTHORIZED
a| Deep exploration via randomized value functions h| [electronic resource] / c| Ian Osband.
a| 1 online resource.
a| Submitted to the Department of Management Science and Engineering.
a| Thesis (Ph.D.)--Stanford University, 2016.
a| The "Big Data" revolution is spawning systems designed to make decisions from data. Statistics and machine learning have made great strides in prediction and estimation from any fixed dataset. However, if you want to learn to take actions where your choices can affect both the underlying system and the data you observe, you need reinforcement learning. Reinforcement learning builds upon learning from datasets, but also addresses the issues of partial feedback and long-term consequences. In a reinforcement learning problem the decisions you make may affect the data you get, and even alter the underlying system for future timesteps. Statistically efficient reinforcement learning requires "deep exploration," or the ability to plan to learn. Previous approaches to deep exploration have not been computationally tractable beyond small-scale problems. For this reason, most practical implementations use statistically inefficient methods for exploration such as epsilon-greedy dithering, which can lead to exponentially slower learning. In this dissertation we present an alternative approach to deep exploration through the use of randomized value functions. Our work is inspired by the Thompson sampling heuristic for multi-armed bandits, which suggests, at a high level, to "randomly select a policy according to the probability that it is optimal." We provide insight into why this algorithm can be simultaneously more statistically efficient and more computationally efficient than existing approaches. We leverage these insights to establish several state-of-the-art theoretical results and performance guarantees. Importantly, and unlike previous approaches to deep exploration, this approach also scales gracefully to complex domains with generalization. We complement our analysis with extensive empirical experiments; these include several didactic examples as well as a recommendation system, Tetris, and Atari 2600 games.
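a| The Thompson sampling heuristic named in the abstract can be illustrated with a minimal sketch for Bernoulli multi-armed bandits. This is not the dissertation's randomized value function method; the Beta(1, 1) priors, arm probabilities, and round count below are illustrative assumptions only.

```python
import random

def thompson_bandit(true_probs, n_rounds=2000, seed=0):
    """Bernoulli bandit played with Thompson sampling.

    Each round, sample one plausible mean per arm from its Beta
    posterior and act greedily on the samples -- this implements
    "randomly select a policy according to the probability that
    it is optimal". Returns the pull count per arm.
    """
    rng = random.Random(seed)
    k = len(true_probs)
    alpha = [1] * k  # Beta posterior successes + 1
    beta = [1] * k   # Beta posterior failures + 1
    pulls = [0] * k
    for _ in range(n_rounds):
        # The randomized step: one posterior sample per arm.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        alpha[arm] += reward
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = thompson_bandit([0.2, 0.5, 0.8])
```

Because exploration is driven by posterior sampling rather than uniform dithering, pulls concentrate on the best arm as its posterior sharpens.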
a| Van Roy, Benjamin, e| primary advisor. 4| ths =| ^A2485256
a| Duchi, John, e| advisor. 4| ths =| ^A3262783
a| Johari, Ramesh, d| 1976- e| advisor. 4| ths =| ^A2465976
a| Kochenderfer, Mykel J., d| 1980- e| advisor. 4| ths =| ^A3262323
a| Stanford University. b| Department of Management Science and Engineering. =| ^A2960627
a| 21 22
u| http://purl.stanford.edu/rp457qc7612 x| SDR-PURL x| item
a| DATE CATALOGED b| 20170106
a| 3781 2016 O w| ALPHANUM c| 1 i| 36105223695136 k| INPROCESS l| U-ARCHIVES m| SPEC-COLL r| Y s| Y t| ARCHIVE u| 12/9/2016
a| INTERNET RESOURCE w| ASIS c| 1 i| 11891201-2001 l| INTERNET m| SUL r| Y s| Y t| SUL u| 12/9/2016 x| E-THESIS