Algorithms and generalization for large-scale matrices and tensors
- Responsibility
- Hongyang Zhang.
- Publication
- [Stanford, California] : [Stanford University], 2019.
- Copyright notice
- ©2019
- Physical description
- 1 online resource.
Description
Creators/Contributors
- Author/Creator
- Zhang, Hongyang (Computer scientist), author.
- Contributor
- Goel, Ashish, degree supervisor, thesis advisor.
- Valiant, Gregory, degree supervisor, thesis advisor.
- Charikar, Moses, degree committee member.
- Ma, Tengyu, degree committee member.
- Stanford University. Computer Science Department.
Contents/Summary
- Summary
- Algorithms research has undergone a paradigm shift in the past decade, from the challenge of processing data to understanding data. The need to process large-scale data raises the question of designing simple and efficient algorithms, that can be adaptive to a rich variety of settings. Once the computational challenge is addressed, we are faced with the next question of how to understand, or learn from the data. A fundamental question for learning from the data is generalization, i.e. the ability to make predictions on unknown data. My thesis focuses on studying formats of data including graphs, matrices and tensors, that are rich enough to model a large family of real world settings, yet analytically tractable. In the first part, we study the generalization performance of commonly used gradient-based optimization methods, motivated by their huge success in learning neural networks. First, we study gradient based optimization methods and their generalization performance (or sample efficiency) in over-parameterized matrix models. Our result highlights the role of the optimization algorithm in explaining generalization when there are more trainable parameters than the size of the dataset. Next, we consider the problem of predicting the missing entries of high-dimensional tensor data. We show an interesting representation-sample trade-off in the choice of tensor models for fitting the data. In the second part, we study algorithms for processing large-scale social and information networks. Lots of work has been dedicated to dealing with large graphs in the theory community. However this line of work has typically focused on worst-case analysis. On the other hand, real world networks enjoy many interesting structures as discovered over the past two decades. My thesis has combined ideas from both sides, by incorporating network structures into the algorithm design process. 
I will present several results on static or dynamic data structures for shortest path and personalized PageRank computation, as well as modeling trust on decentralized networks with expansion properties.
Bibliographic information
- Publication date
- 2019
- Copyright date
- 2019
- Note
- Submitted to the Computer Science Department.
- Note
- Thesis (Ph.D.)--Stanford University, 2019.