Online 82. F08CME20002 : Linear Algebra with Application to Engineering Computations. 2008 Fall [2008]
 Stanford University. Department of Computational and Mathematical Engineering (Sponsor)
 Stanford (Calif.), 2008
 Description
 Book — 1 text file
 Summary

Computer-based solution of systems of algebraic equations obtained from engineering problems, and eigensystem analysis: Gaussian elimination, effect of roundoff error, operation counts, banded matrices arising from discretization of differential equations, ill-conditioned matrices, matrix theory, least-squares solution of unsolvable systems, solution of nonlinear algebraic equations, eigenvalues and eigenvectors, similar matrices, unitary and Hermitian matrices, positive definiteness, the Cayley-Hamilton theorem and functions of a matrix, and iterative methods. Prerequisite: familiarity with computer programming, and MATH 51.
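Several items in this summary (elimination, roundoff control, operation counts) come together in one algorithm, sketched below in Python as a hedged illustration; this is not course material, and the function name `solve` is illustrative.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    # Build the augmented matrix [A | b] so row operations update b too.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        # Partial pivoting: swap in the row with the largest |pivot|
        # to limit the growth of roundoff error.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    # Back substitution from the last row upward.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x
```

For an n-by-n system the elimination loop performs roughly n³/3 multiply-adds, which is the kind of operation count the course analyzes.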
 Digital collection
 Stanford University Syllabi
Online 83. Sp08CME10401 : Linear Algebra and Partial Differential Equations for Engineers. 2008 Spring [2008]
 Stanford University. Department of Computational and Mathematical Engineering (Sponsor)
 Stanford (Calif.), 2008
 Description
 Book — 1 text file
 Summary

Linear algebra: matrix operations, systems of algebraic equations, Gaussian elimination, underdetermined and overdetermined systems, coupled systems of ordinary differential equations, eigensystem analysis, normal modes. Fourier series with applications, partial differential equations arising in science and engineering, analytical solutions of partial differential equations. Numerical methods for solution of partial differential equations: iterative techniques, stability and convergence, time advancement, implicit methods, von Neumann stability analysis. Examples and applications from various engineering fields. Prerequisite: CME 102/ENGR 155A.
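The stability topic at the end of this summary can be illustrated with a minimal sketch (not course material; names are illustrative) of the explicit forward-time, centred-space scheme for the heat equation u_t = α·u_xx, whose von Neumann analysis gives the stability limit r = α·Δt/Δx² ≤ 1/2.

```python
def ftcs_step(u, r):
    """One explicit FTCS time step; u holds boundary + interior values."""
    return [u[0]] + [
        u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
        for i in range(1, len(u) - 1)
    ] + [u[-1]]

def run(u, r, steps):
    """Advance the solution `steps` time steps with stability number r."""
    for _ in range(steps):
        u = ftcs_step(u, r)
    return u
```

With r ≤ 1/2 an initial spike decays smoothly; with r > 1/2 the highest-frequency mode is amplified each step and the solution blows up, exactly as the von Neumann analysis predicts.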
 Digital collection
 Stanford University Syllabi
 Stanford University. Department of Computational and Mathematical Engineering (Sponsor)
 Stanford (Calif.), 2008
 Description
 Book — 1 text file
 Summary

Numerical methods from a user's point of view. Lagrange interpolation, splines. Integration: trapezoid, Romberg, Gauss, adaptive quadrature; numerical solution of ordinary differential equations: explicit and implicit methods, multistep methods, Runge-Kutta and predictor-corrector methods, boundary value problems, eigenvalue problems; systems of differential equations, stiffness. Emphasis is on analysis of numerical methods for accuracy, stability, and convergence. Introduction to numerical solutions of partial differential equations; von Neumann stability analysis; alternating direction implicit methods and nonlinear equations. Prerequisite: CME 200/ME 300A.
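The trapezoid and Romberg items above are closely related: one Richardson extrapolation step combines two trapezoid estimates into a higher-order result. A hedged sketch (illustrative names, not course code):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoid rule on [a, b] with n subintervals (O(h^2) error)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def romberg_step(t_h, t_h2):
    """One Richardson extrapolation: combine T(h) and T(h/2) into an
    O(h^4)-accurate estimate, the first column step of a Romberg table."""
    return (4 * t_h2 - t_h) / 3
```

For f(x) = x² on [0, 1], two crude trapezoid estimates combine into the exact value 1/3, since the extrapolated rule is exact for cubics.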
 Digital collection
 Stanford University Syllabi
Online 85. Sp08CME34201 : Parallel Methods in Numerical Analysis. 2008 Spring [2008]
 Stanford University. Department of Computational and Mathematical Engineering (Sponsor)
 Stanford (Calif.), 2008
 Description
 Book — 1 text file
 Summary

Emphasis is on techniques for obtaining maximum parallelism in numerical algorithms, especially those occurring when solving matrix problems and partial differential equations, and the subsequent mapping onto the computer. Implementation issues on parallel computers. Topics: parallel architecture, programming models (MPI; GPU computing with CUDA, quick review), matrix computations, FFT, fast multipole methods, domain decomposition, graph partitioning, discrete algorithms. Prerequisites: 302 or 200 (ME 300A), 213 or equivalent, or consent of instructor. Recommended: differential equations and knowledge of a high-level programming language such as C or C++ (F90/95 also allowable).
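MPI and CUDA themselves cannot be sketched portably here, but the domain-decomposition idea the course covers (partition the data, reduce each piece, combine the partial results) can be shown with the Python standard library; all names are illustrative, not from the course.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(xs, workers=4):
    """Split xs into one chunk per worker, reduce chunks concurrently,
    then combine the partial sums: the shape of a parallel reduction."""
    n = len(xs)
    size = -(-n // workers)          # ceiling division: chunk length
    chunks = [xs[i:i + size] for i in range(0, n, size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(sum, chunks))
    return sum(partials)             # final reduction of partial results
```

In MPI the same pattern would scatter the chunks to ranks and combine them with a collective reduce; the decomposition logic is identical.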
 Digital collection
 Stanford University Syllabi
Online 86. Su08CME10601 : Introduction to Probability and Statistics for Engineers. 2008 Summer [2008]
 Stanford University. Department of Computational and Mathematical Engineering (Sponsor)
 Stanford (Calif.), 2008
 Description
 Book — 1 text file
 Summary

Probability: random variables, independence, and conditional probability; discrete and continuous distributions, moments, distributions of several random variables. Topics in mathematical statistics: random sampling, point estimation, confidence intervals, hypothesis testing, nonparametric tests, regression and correlation analyses; applications in engineering, industrial manufacturing, medicine, biology, and other fields. Prerequisite: CME 100/ENGR 154 or MATH 51.
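The confidence-interval topic can be illustrated with a small hedged sketch (illustrative names, not course code) of the normal-approximation interval for a sample mean.

```python
import math

def mean_ci(xs, z=1.96):
    """Approximate 95% confidence interval for the population mean,
    using the sample variance and the normal approximation."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)   # unbiased sample variance
    half = z * math.sqrt(var / n)                    # half-width of interval
    return m - half, m + half
```

For the sample [2, 4, 4, 4, 5, 5, 7, 9] the mean is 5 and the interval comes out to roughly (3.52, 6.48).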
 Digital collection
 Stanford University Syllabi
 Lee, Chris Pan-Chi.
 2006.
 Description
 Book — vii, 108 leaves, bound.
 Online

 Search ProQuest Dissertations & Theses (not all titles available); Google Books (full view)
SAL3 (off-campus storage), Stacks: 3781 2006 L (Available)
Special Collections, University Archives (request via Aeon): 3781 2006 L (In-library use)
Online 88. Su15CME10801 : Introduction to Scientific Computing. 2015 Summer [2015]
 Stanford University. Department of Computational and Mathematical Engineering (Sponsor)
 Stanford (Calif.), 2015
 Description
 Book — 1 text file
 Summary

Numerical computation for the mathematical, computational, and physical sciences and engineering: error analysis, floating-point arithmetic, nonlinear equations, numerical solution of systems of algebraic equations, banded matrices, least squares, unconstrained optimization, polynomial interpolation, numerical differentiation and integration, numerical solution of ordinary differential equations, truncation error, numerical stability for time-dependent problems, and stiffness. Implementation of numerical methods in MATLAB programming assignments. Prerequisites: MATH 51, 52, 53; prior programming experience (MATLAB or another language at the level of CS 106A or higher). Graduate students should take the course for 3 units; undergraduates should take it for 4 units.
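As a small taste of the nonlinear-equations topic in this summary, here is a hedged sketch of Newton's method (the course uses MATLAB; this Python code and its names are illustrative only).

```python
def newton(f, fprime, x0, tol=1e-12, maxit=50):
    """Newton's method for f(x) = 0; converges quadratically near a
    simple root, doubling the number of correct digits per step."""
    x = x0
    for _ in range(maxit):
        step = f(x) / fprime(x)   # Newton correction f(x)/f'(x)
        x -= step
        if abs(step) < tol:       # stop when the correction is tiny
            break
    return x
```

Solving x² − 2 = 0 from x₀ = 1 reaches the square root of 2 to machine precision in about five iterations.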
 Digital collection
 Stanford University Syllabi
Online 89. Su14CME10801 : Introduction to Scientific Computing. 2014 Summer [2014]
 Stanford University. Department of Computational and Mathematical Engineering (Sponsor)
 Stanford (Calif.), 2014
 Description
 Book — 1 text file
 Summary

Numerical computation for the mathematical, computational, and physical sciences and engineering: error analysis, floating-point arithmetic, nonlinear equations, numerical solution of systems of algebraic equations, banded matrices, least squares, unconstrained optimization, polynomial interpolation, numerical differentiation and integration, numerical solution of ordinary differential equations, truncation error, numerical stability for time-dependent problems, and stiffness. Implementation of numerical methods in MATLAB programming assignments. Prerequisites: MATH 51, 52, 53; prior programming experience (MATLAB or another language at the level of CS 106A or higher).
 Digital collection
 Stanford University Syllabi
Online 90. W14CME10801 : Introduction to Scientific Computing. 2014 Winter [2014]
 Stanford University. Department of Computational and Mathematical Engineering (Sponsor)
 Stanford (Calif.), 2014
 Description
 Book — 1 text file
 Summary

Numerical computation for the mathematical, computational, and physical sciences and engineering: error analysis, floating-point arithmetic, nonlinear equations, numerical solution of systems of algebraic equations, banded matrices, least squares, unconstrained optimization, polynomial interpolation, numerical differentiation and integration, numerical solution of ordinary differential equations, truncation error, numerical stability for time-dependent problems, and stiffness. Implementation of numerical methods in MATLAB programming assignments. Prerequisites: MATH 51, 52, 53; prior programming experience (MATLAB or another language at the level of CS 106A or higher). Graduate students should take the course for 3 units; undergraduates should take it for 4 units.
 Digital collection
 Stanford University Syllabi
Online 91. F13CME32901 : Top Ten Algorithms of the 20th Century. 2013 Fall [2013]
 Stanford University. Department of Computational and Mathematical Engineering (Sponsor)
 Stanford (Calif.), 2013
 Description
 Book — 1 text file
 Summary

A high-level survey course covering one algorithm per week: Metropolis, the simplex method, conjugate gradient, QR, quicksort, the fast Fourier transform, max-cut, the fast multipole method, integer relation detection, and convex/semidefinite programming.
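Of the algorithms listed, quicksort is the one that fits in a few lines; a hedged sketch (illustrative, not course code):

```python
def quicksort(xs):
    """Classic recursive quicksort: partition around a pivot, then sort
    each side. Average O(n log n) comparisons, worst case O(n^2)."""
    if len(xs) <= 1:
        return xs
    pivot = xs[len(xs) // 2]
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

This list-building version trades the in-place partitioning of Hoare's original for clarity; the recursive structure is the same.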
 Digital collection
 Stanford University Syllabi
Online 92. F13CME36201 : An Introduction to Compressed Sensing. 2013 Fall [2013]
 Stanford University. Department of Computational and Mathematical Engineering (Sponsor)
 Stanford (Calif.), 2013
 Description
 Book — 1 text file
 Summary

Compressed sensing is a new data-acquisition theory asserting that one can design nonadaptive sampling techniques that condense the information in a compressible signal into a small amount of data. This revelation may change the way engineers think about signal acquisition. The course covers fundamental theoretical ideas, numerical methods in large-scale convex optimization, hardware implementations, connections with statistical estimation in high dimensions, and extensions such as recovery of data matrices from few entries (as in the Netflix Prize).
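As a toy illustration of the greedy-recovery idea behind sparse reconstruction (a single matching-pursuit step, not one of the course's methods; all names are illustrative):

```python
def mp_step(A, y):
    """One matching-pursuit step: pick the column of A most correlated
    with the measurement y, and return its index and coefficient.
    A is given as a list of column vectors."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    # Normalized correlation of each column with y.
    scores = [abs(dot(col, y)) / dot(col, col) ** 0.5 for col in A]
    j = max(range(len(A)), key=lambda k: scores[k])
    coef = dot(A[j], y) / dot(A[j], A[j])
    return j, coef
```

Iterating this step (subtracting each recovered component from y) is the basic greedy alternative to the convex-optimization recovery the course focuses on.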
 Digital collection
 Stanford University Syllabi
Online 93. Sp13CME21301 : Introduction to parallel computing using MPI, openMP, and CUDA. 2013 Spring [2013]
 Stanford University. Department of Computational and Mathematical Engineering (Sponsor)
 Stanford (Calif.), 2013
 Description
 Book — 1 text file
 Summary

This class will give hands-on experience with programming multicore processors, graphics processing units (GPUs), and parallel computers. The focus is on the message passing interface (MPI, parallel clusters) and the compute unified device architecture (CUDA, GPUs). Topics include: network topologies, modeling communication times, collective communication operations, parallel efficiency, MPI, dense linear algebra using MPI; symmetric multiprocessing (SMP), pthreads, OpenMP; CUDA, combining MPI and CUDA, dense linear algebra using CUDA, sort, reduce, and scan using CUDA. Prerequisites: the C programming language and numerical algorithms (solution of differential equations, linear algebra, Fourier transforms).
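The scan primitive mentioned above has a simple serial specification, sketched here as a hedged reference (illustrative names, not course code); GPU implementations such as the work-efficient parallel scan compute the same output in logarithmic depth.

```python
def inclusive_scan(xs, op=lambda a, b: a + b):
    """Serial reference for the scan (prefix) primitive:
    out[i] = xs[0] op xs[1] op ... op xs[i]."""
    out = []
    acc = None
    for x in xs:
        acc = x if acc is None else op(acc, x)
        out.append(acc)
    return out
```

With addition this is the running sum; any associative operator (max, product) works, which is what makes the primitive so broadly useful on GPUs.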
 Digital collection
 Stanford University Syllabi
 Stanford University. Department of Computational and Mathematical Engineering (Sponsor)
 Stanford (Calif.), 2011
 Description
 Book — 1 text file
 Summary

Basic usage of the Python and C/C++ programming languages is introduced and used to solve representative computational problems from various science and engineering disciplines. Software design principles, including time and space complexity analysis, data structures, object-oriented design, decomposition, encapsulation, and modularity, are emphasized. Usage of campus-wide Linux compute resources: login, file system navigation, editing files, compiling and linking, file transfer, etc. Versioning and revision control, software build utilities, and the LaTeX typesetting software are introduced and used to help complete programming assignments. Prerequisite: introductory programming course equivalent to CS 106A or instructor consent.
 Digital collection
 Stanford University Syllabi
Online 95. Su10CME10801 : Introduction to Scientific Computing. 2010 Summer [2010]
 Stanford University. Department of Computational and Mathematical Engineering (Sponsor)
 Stanford (Calif.), 2010
 Description
 Book — 1 text file
 Summary

Numerical computation for the mathematical, computational, and physical sciences and engineering: error analysis, floating-point arithmetic, nonlinear equations, numerical solution of systems of algebraic equations, banded matrices, least squares, unconstrained optimization, polynomial interpolation, numerical differentiation and integration, numerical solution of ordinary differential equations, truncation error, numerical stability for time-dependent problems, and stiffness. Implementation of numerical methods in MATLAB programming assignments. Prerequisites: MATH 51, 52, 53; prior programming experience (MATLAB or another language at the level of CS 106A or higher). Graduate students should take the course for 3 units; undergraduates should take it for 4 units.
 Digital collection
 Stanford University Syllabi
 Carlsson, John Gunnar.
 2009.
 Description
 Book — ix, 85 leaves, bound.
 Online

 Search ProQuest Dissertations & Theses (not all titles available); Google Books (full view)
SAL3 (off-campus storage), Stacks: 3781 2009 C (Available)
Special Collections, University Archives (request via Aeon): 3781 2009 C (In-library use)
Online 97. Factor models, meanreversion time, and statistical arbitrage [electronic resource] [2016]
 Yeo, Joongyeub.
 2016.
 Description
 Book — 1 online resource.
 Summary

Financial markets are well-defined complex systems, which are continuously monitored and recorded, and generate tremendous amounts of data. This thesis is an attempt to analyze financial markets as high-dimensional and dynamic systems. In dealing with high-dimensional data sets, factor models are often useful for dimension reduction. In the first part of this thesis, we present a new approach to estimating high-dimensional factor models, using the empirical spectral density of residuals. The spectrum of covariance matrices from financial data typically exhibits two characteristic aspects: a few spikes and a bulk. The former represent factors that mainly drive the features, and the latter arises from idiosyncratic noise. Motivated by these two aspects, we consider a minimum distance between two spectra: one from a covariance structure model and the other from the real residuals of financial data, obtained by subtracting principal components. Our method simultaneously provides estimators of the number of factors and information about correlation structures in the residuals. Using free random variable techniques, the proposed algorithm can be implemented and controlled effectively. Monte Carlo simulations confirm that our method is robust to noise and to the presence of weak factors. Furthermore, application to financial time series shows that our estimators capture essential aspects of market dynamics. In the second part of this thesis, we turn our attention to more practical applications using residual dynamics. In this part, we study the risk of mean-reversion in statistical arbitrage. The basic concept of statistical arbitrage is to exploit short-term deviations in returns from a long-term equilibrium across several assets. This kind of strategy relies heavily on the assumption that idiosyncratic returns mean-revert, returning to a long-term mean after a certain amount of time, but literature assessing the risk of this assumption is rare.
In this chapter, we propose a scheme that controls the risk of mean-reversion via portfolio selection and screening. Recognizing that each residual has a different mean-reversion time, the residuals that mean-revert quickly are selected to form portfolios. In addition, further control is imposed by allowing trading activity only when the goodness-of-fit of the estimation for trading signals is sufficiently high. We design the dynamic asset allocation with market- and dollar-neutrality conditions as a constrained optimization problem, which is solved numerically. The improved reliability and robustness of our method are demonstrated through backtesting with real data; the performance of this investment framework is found to be robust to market changes. We further provide answers to puzzles regarding the relations among the number of factors, the length of the estimation window, and transaction costs, crucial parameters that directly affect the investment strategy. In the third part, we consider Wishart processes, which are good candidates for modeling a stochastic covariance matrix. Specifically, we use perturbation theory to derive the eigenvalues and eigenvectors of Wishart processes. These processes have an interesting feature: each eigenvalue is part of a multidimensional stochastic system of non-colliding particles, which has been widely studied in mathematical physics. In addition, we implement a numerical scheme to effectively simulate the eigenvalues of squared Ornstein-Uhlenbeck processes. In this scheme, the Yosida approximation is exploited to resolve the difficulty that arises from the singular drift terms. The results of this chapter contribute to the dynamic modeling of stochastic covariance matrices and their financial applications.
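The mean-reversion time that drives the portfolio screening described above can be estimated from an AR(1) fit to a residual series; the following hedged sketch (illustrative names and simplifications, not the thesis's code) shows the idea on simulated data with zero mean.

```python
import math
import random

def mean_reversion_time(x):
    """Fit AR(1): x[t+1] ~ phi * x[t] by least squares (zero-mean series),
    then convert phi to a mean-reversion timescale of -1/ln(phi) steps."""
    num = sum(x[t + 1] * x[t] for t in range(len(x) - 1))
    den = sum(x[t] * x[t] for t in range(len(x) - 1))
    phi = num / den
    return -1.0 / math.log(phi)
```

A series simulated with phi = 0.9 has a true timescale of about 9.5 steps, and the estimate lands close to that; residuals with short timescales are the fast mean-reverters the chapter selects.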
 Also online at

Special Collections, University Archives (request via Aeon): 3781 2016 Y (In-library use)
 Lee, Jason Dean.
 2015.
 Description
 Book — 1 online resource.
 Summary

This thesis studies two problems in modern statistics. First, we study selective inference, or inference for hypotheses that are chosen after looking at the data. The motivating application is inference for regression coefficients selected by the lasso. We present the Condition-on-Selection method, which allows for valid selective inference, and study its application to the lasso and several other selection algorithms. In the second part, we consider the problem of learning the structure of a pairwise graphical model over continuous and discrete variables. We present a new pairwise model for graphical models with both continuous and discrete variables that is amenable to structure learning. In previous work, authors have considered structure learning of Gaussian graphical models and of discrete models; our approach is a natural generalization of these two lines of work to the mixed case. The penalization scheme involves a novel symmetric use of the group-lasso norm and follows naturally from a particular parametrization of the model. We provide conditions under which our estimator is model-selection consistent in the high-dimensional regime.
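The lasso's variable selection, central to the first part of the abstract, comes from shrinking coefficients exactly to zero; in the univariate (or orthonormal-design) case the solution is the soft-thresholding operator, sketched here as a hedged illustration (not the thesis's code).

```python
def soft_threshold(z, lam):
    """Soft-thresholding: the exact univariate lasso solution,
    shrinking z toward 0 by lam and setting it to 0 inside [-lam, lam].
    Coefficients mapped to exactly 0 are the 'unselected' variables."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0
```

The selective-inference problem the thesis addresses arises precisely because which coefficients survive this thresholding is itself determined by the data.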
 Also online at

Special Collections, University Archives (request via Aeon): 3781 2015 L (In-library use)
Online 99. Algorithms for bipartite matching problems with connections to sparsification and streaming [electronic resource] [2012]
 Kapralov, Mikhail.
 2012.
 Description
 Book — 1 online resource.
 Summary

The problem of finding maximum matchings in bipartite graphs is a classical problem in combinatorial optimization with a long algorithmic history. Graph sparsification is a more recent paradigm of replacing a graph with a smaller subgraph that preserves some useful properties of the original graph, perhaps approximately. Traditionally, sparsification has been used to obtain faster algorithms for cut-based optimization problems. The contributions of this thesis are centered around new algorithms for bipartite matching problems, in which, surprisingly, graph sparsification plays a major role, and efficient algorithms for constructing sparsifiers in modern data models. In the first part of the thesis we develop sublinear-time algorithms for finding perfect matchings in regular bipartite graphs. These graphs have been studied extensively in the context of expander constructions and have several applications in combinatorial optimization. The problem of finding perfect matchings in regular bipartite graphs has seen almost 100 years of algorithmic history, with the first algorithm dating back to König in 1916 and an algorithm with runtime linear in the number of edges discovered in 2000. In this thesis we show that, even though the use of sparsification has traditionally been restricted to cut-based problems, sparsification in fact yields extremely efficient sublinear-time algorithms for finding perfect matchings in regular bipartite graphs when the graph is given in adjacency-array representation. Thus, our algorithms recover a perfect matching (with high probability) without reading the whole input. We present two approaches, one based on independent sampling and another on random walks, obtaining an algorithm that recovers a perfect matching in O(n log n) time, within an O(log n) factor of the output complexity, essentially closing the problem. In the second part of the thesis we study the streaming complexity of maximum bipartite matching.
This problem is relevant to modern data models, where the algorithm is constrained in space and is only allowed a few passes over the input. We are interested in determining the best tradeoff between space usage and the quality of the solution obtained. We first study the problem in the single-pass setting. A central object of our study is a new notion of sparsification relevant to matching problems: we define an ε-matching cover of a bipartite graph as a subgraph that approximately preserves the sizes of matchings between every two subsets of vertices, which can be viewed as a 'sparsifier' for matching problems. We give an efficient construction of a sparse subgraph that we call a 'matching skeleton', which we show is a linear-size matching cover for a certain range of parameters (in fact, for ε > 1/2). We then show that our 'sparsifier' can be applied repeatedly while maintaining a nontrivial approximation ratio in the streaming model with vertex arrivals, obtaining the first … passes, improving upon the previously best known approximation. In the third part of the thesis we consider the concept of spectral sparsification introduced by Spielman and Teng. Here, we uncover a connection between spectral sparsification and spanners, i.e., subgraphs that approximately preserve shortest-path distances. This connection allows us to obtain a quasilinear-time algorithm for constructing spectral sparsifiers using approximate distance oracles, entirely bypassing linear-system solvers, which were previously the only known route to quasilinear-time construction of spectral sparsifiers. Finally, in the last part of the thesis we design an efficient implementation of cut-preserving sparsification in a streaming setting with edge deletions, using only one pass over the data.
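The thesis's sublinear-time algorithms are not reproduced here, but the classical baseline they improve upon, augmenting-path bipartite matching, can be sketched as a hedged illustration (illustrative names, not the thesis's code).

```python
def max_bipartite_matching(adj, n_right):
    """Augmenting-path maximum matching. adj[u] lists the right-side
    vertices adjacent to left vertex u; returns the matching size.
    Runs in O(V * E) time, far from sublinear."""
    match = [-1] * n_right          # match[v] = left vertex matched to v

    def augment(u, seen):
        # Try to match u, possibly re-routing a previously matched vertex.
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match[v] == -1 or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    return sum(augment(u, set()) for u in range(len(adj)))
```

Every vertex and edge may be inspected repeatedly here, which is exactly what the thesis's sampling and random-walk approaches avoid in the regular case.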
 Also online at

Special Collections, University Archives (request via Aeon): 3781 2012 K (In-library use)
Online 100. Locality sensitive hashing for real time and distributed computation [electronic resource] [2012]
 Description
 Book — 1 online resource.
 Summary

Similarity search (SS) involves retrieving objects from a data set that are similar to a query object. In most applications, objects are represented by feature vectors in a high-dimensional space. In this setting, locality-sensitive hashing (LSH) has emerged as the method of choice for implementing large-scale SS. LSH uses random projections to insert data into a number of hash tables; SS queries are answered by performing lookup operations in these hash tables. Although LSH is widely used in data mining and machine learning applications, it requires a large number of hash tables in order to guarantee accurate search results. This makes it difficult to implement LSH in high-speed real-time applications (due to slow query processing) and in distributed systems (due to network load, since each query lookup in a hash bucket requires a network call). In this thesis we describe new techniques to implement SS via LSH in these settings. For real-time SS, we describe a variant of LSH, ternary locality-sensitive hashing (TLSH), for implementing SS on TCAMs (ternary content-addressable memories), a ternary associative memory popular in networking equipment such as routers and switches. Our approach critically exploits the wildcard-matching capability of TCAMs. Our TLSH-based TCAM implementation has near-linear space complexity and answers SS queries in a single lookup, offering an exponential improvement over existing LSH solutions, which have a run time that grows as a fractional power of the data set size and subquadratic space complexity. This enables efficient implementation of SS in real time, at the cost of using a hardware primitive, the TCAM. Our solution works around known LSH lower bounds, as well as the (more generic) cell-probe model lower bounds for near-neighbor and nearest-neighbor problems. The simulations and experiments we perform indicate that our proposed architecture can handle millions of SS queries per second.
For distributed SS, we design a new implementation of LSH, called Layered LSH, for distributed (key, value)-based frameworks. Layered LSH offers an exponential improvement in the network cost per query incurred by simple distributed implementations of LSH. In contrast with simple implementations, whose network cost increases with the desired quality of SS, we prove that the network costs associated with Layered LSH are independent of the search quality. Our experiments using the MapReduce framework indicate that this improvement in network efficiency leads to large improvements in runtime in scenarios where SS needs to be implemented with high accuracy on data sets containing tens of millions of data points. We also prove that despite being network efficient (achieved by co-locating nearby points on the same machine), Layered LSH sends points that are a constant distance apart to different machines with high likelihood. This shows that Layered LSH strikes the right tradeoff between network efficiency and load balance across machines.
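The random-projection construction underlying LSH can be sketched with a generic random-hyperplane family (this is a standard sign-based LSH family, not the thesis's TLSH; all names are illustrative).

```python
import random

def make_hyperplanes(dim, k, seed=0):
    """k random hyperplanes (Gaussian normal vectors) defining a
    sign-based LSH family for angular similarity."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(k)]

def signature(x, planes):
    """k-bit hash: which side of each hyperplane x falls on. Vectors at
    a small angle agree on many bits with high probability, so they
    tend to land in the same hash bucket."""
    return tuple(
        int(sum(p_i * x_i for p_i, x_i in zip(p, x)) >= 0) for p in planes
    )
```

Because only the sign of each projection is kept, the signature is invariant under positive scaling of x; accuracy is then boosted by maintaining many such tables, which is precisely the cost that TLSH and Layered LSH attack.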
 Also online at

Special Collections, University Archives (request via Aeon): 3781 2012 S (In-library use)