Online 1. Code and Data supplement to "Incoherence of Partial Component Sampling in multidimensional NMR" [2016]
 Monajemi, Hatef (Author)
 2016
 Description
 Dataset
 Summary

The data and code provided here are supplementary information for the paper “Incoherence of Partial Component Sampling in multidimensional NMR” by H. Monajemi, D.L. Donoho, J.C. Hoch, and A.D. Schuyler. Please read INSTRUCTION.TXT for reproducing the results of the article. Abstract of the article: In NMR spectroscopy, random undersampling in the indirect dimensions causes reconstruction artifacts whose size can be bounded using the so-called coherence. In experiments with multiple indirect dimensions, new undersampling approaches were recently proposed: random phase detection (RPD) [Maciejewski11] and its generalization, partial component sampling (PCS) [Schuyler13]. The new approaches are fully aware of the fact that high-dimensional experiments generate hypercomplex-valued free induction decays; they randomly acquire only certain low-dimensional components of each high-dimensional hypercomplex entry. We provide a classification of various hypercomplex-aware undersampling schemes, and define a hypercomplex-aware coherence appropriate for such undersampling schemes; we then use it to quantify undersampling artifacts of RPD and various PCS schemes.
 Collection
 Stanford Research Data
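As a minimal illustration of the coherence quantity this abstract refers to, the standard (complex-valued) mutual coherence of a sensing matrix can be computed as below. This is a generic sketch only, not the hypercomplex-aware coherence defined in the paper:

```python
import numpy as np

def mutual_coherence(A):
    """Largest normalized inner product between distinct columns of A."""
    # Normalize each column to unit Euclidean norm.
    G = A / np.linalg.norm(A, axis=0, keepdims=True)
    # Gram matrix of normalized columns; off-diagonal magnitudes are the
    # pairwise coherences, and the maximum bounds reconstruction artifacts.
    gram = np.abs(G.conj().T @ G)
    np.fill_diagonal(gram, 0.0)
    return gram.max()

# Orthonormal columns have coherence 0; a repeated column forces coherence 1.
I = np.eye(4)
print(mutual_coherence(I))                         # 0.0
print(mutual_coherence(np.hstack([I, I[:, :1]])))  # 1.0
```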
Online 2. Phase transitions in deterministic compressed sensing, with application to magnetic resonance spectroscopy [electronic resource] [2016]
 Monajemi, Hatef.
 2016.
 Description
 Book — 1 online resource.
 Summary

Undersampled measurement schemes are ubiquitous throughout science and engineering. Compressed sensing is an undersampling theory that has significantly impacted many engineering applications such as MR imaging, seismic exploration, NMR logging, and NMR spectroscopy. The theory of compressed sensing posits that a "sufficiently sparse" signal can be undersampled yet acquired exactly. This reduction in the number of required samples is achieved by acquiring measurements in a particular way and then reconstructing the signal by a nonlinear algorithm. A fundamental question in compressed sensing is "for a given reconstruction algorithm, how much information is needed to successfully acquire a compressible signal of a given complexity?" To answer this question, researchers have come up with different theories using different mathematical tools such as coherence, restricted isometry property, and phase transition. Amongst these, the phase transition approach pioneered by Donoho and Tanner is the only approach that gives an accurate answer and provides precise predictions that match the experimental observations. The Donoho-Tanner predictions of phase transitions rely on the assumption that the sensing matrix has Gaussian i.i.d. entries, an assumption that is rarely satisfied in engineering applications. Nonetheless, for an important collection of deterministic matrices used in engineering we have shown that the reconstruction of sparse objects by convex optimization works well, in fact just as well as for truly random matrices. In other words, the theories established for Gaussian random matrices are equally applicable to many other non-random matrices.
Though the Gaussian phase transition curves, also known as the “Donoho-Tanner” curves, are universal across many deterministic sensing matrices, there is a suite of undersampling matrices that exhibit a rather different behavior. For instance, in multidimensional Nuclear Magnetic Resonance (NMR) spectroscopy, when free induction decay is measured across multiple time dimensions, one typically acquires anisotropic undersampled measurements by exhaustive sampling in certain time dimensions and partial sampling in others. Moreover, the signals acquired in these experiments belong to the set of hypercomplex numbers, and more recent undersampling techniques such as partial component sampling (PCS) take advantage of this fact by randomly acquiring only certain low-dimensional components of each high-dimensional hypercomplex entry. Little is known about the reconstruction behavior of such undersampling schemes in the relevant mathematical literature. In this thesis, we first provide a classification of various hypercomplex-aware undersampling schemes, and define a hypercomplex-aware measure of coherence that can be used to quantify the size and extent of the corresponding undersampling artifacts. We then show that for NMR undersampling schemes the ability of a convex optimizer to recover the unknown signal from undersampled measurements is significantly worse than the known Gaussian phase transitions. In particular, we show that the compressed sensing phase transitions of anisotropic undersampling schemes in NMR match those of block diagonal matrices. We establish a precise equivalence between NMR undersampling schemes and block diagonal matrices. We then provide finite-N predictions of phase transitions for block diagonal matrices, which are directly applicable to sampling methods in NMR spectroscopy.
 Also online at

Special Collections  Status
University Archives  Request on-site access
3781 2016 M  In-library use
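A single empirical trial of the kind underlying these phase-transition diagrams can be sketched as follows. This is a hypothetical minimal setup using numpy/scipy, not the thesis code; the dimensions N, n, k are illustrative and sit well inside the Gaussian recovery region:

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, y):
    """min ||x||_1 subject to Ax = y, posed as an LP via x = u - v, u, v >= 0."""
    n, N = A.shape
    res = linprog(c=np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=(0, None), method="highs")
    return res.x[:N] - res.x[N:]

rng = np.random.default_rng(0)
N, n, k = 40, 25, 3                          # ambient dim, measurements, sparsity
A = rng.standard_normal((n, N)) / np.sqrt(n)  # Gaussian i.i.d. sensing matrix
x0 = np.zeros(N)
x0[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
x_hat = l1_recover(A, A @ x0)
# Recovery error is tiny when (k/n, n/N) lies inside the recovery region.
print(np.max(np.abs(x_hat - x0)))
```

A phase-transition experiment repeats such trials over a grid of (k/n, n/N) values and records the empirical success frequency.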
Online 3. Code and data supplement for "Sparsity/Undersampling Tradeoffs in Anisotropic Undersampling, with Applications in MR Imaging/Spectroscopy" [2013]
 Monajemi, Hatef (Author)
 2013-2016
 Description
 Dataset
 Summary

The data and code provided here are supplementary material for the Information and Inference paper “Sparsity/Undersampling Tradeoffs in Anisotropic Undersampling, with Applications in MR Imaging/Spectroscopy” by H. Monajemi and D.L. Donoho. Please read the README file for reproducing the results of the article. Abstract of the article: We study anisotropic undersampling schemes like those used in multidimensional NMR spectroscopy and MR imaging, which sample exhaustively in certain time dimensions and randomly in others. Our analysis shows that anisotropic undersampling schemes are equivalent to certain block-diagonal measurement systems. We develop novel exact formulas for the sparsity/undersampling tradeoffs in such measurement systems. Our formulas predict finite-N phase transition behavior differing substantially from the well-known asymptotic phase transitions for classical dense undersampling. Extensive empirical work shows that our formulas accurately describe observed finite-N behavior, while the usual asymptotic predictions based on universality are substantially inaccurate. We also vary the anisotropy, keeping the total number of samples fixed, and for each variation we determine the precise sparsity/undersampling tradeoff (phase transition). We show that, other things being equal, the ability to recover a sparse spectrum decreases with an increasing number of exhaustively sampled dimensions.
 Collection
 Stanford Research Data
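The block-diagonal equivalence described in the abstract can be pictured concretely. The construction below is illustrative only (block count and sizes are arbitrary), not the paper's code:

```python
import numpy as np
from scipy.linalg import block_diag

# Anisotropic undersampling acts independently on each exhaustively sampled
# slice, so the overall measurement operator is block-diagonal.
rng = np.random.default_rng(1)
B, n, N = 4, 3, 8                         # blocks, rows per block, cols per block
blocks = [rng.standard_normal((n, N)) for _ in range(B)]
A = block_diag(*blocks)                   # off-block entries are exactly zero
print(A.shape)                            # (12, 32)
```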
4. Deterministic Matrices Matching the Compressed Sensing Phase Transitions of Gaussian Random Matrices [2012]
 Monajemi, Hatef.
 Stanford, California : Department of Statistics, Stanford University, 2012.
 Description
 Book — 22 pages.
SAL3 (off-campus storage)  Status

For use in Special Collections Reading Room  Request on-site access
260464  In-library use
Online 5. Code and Data supplement to "Efficient Threshold Selection for Multivariate Total Variation Denoising" [2014]
 Sardy, Sylvain (Author)
 [ca. 2014-2017]
 Description
 Dataset
 Summary

The data and code provided here are supplementary information for the paper “Efficient Threshold Selection for Multivariate Total Variation Denoising”. Abstract of the article: Total variation (TV) denoising is a nonparametric smoothing method that has good properties for preserving sharp edges and contours in objects with spatial structures like natural images. The estimate is sparse in the sense that TV reconstruction leads to a piecewise constant function with a small number of jumps. A threshold parameter controls the number of jumps and the quality of the estimation. In practice, this threshold is often selected by minimizing a goodness-of-fit criterion like cross-validation, which can be costly as it requires solving the high-dimensional and non-differentiable TV optimization problem many times. We propose instead a two-step adaptive procedure via a connection to large deviations of stochastic processes. We also give conditions under which TV denoising achieves exact segmentation. We then apply our procedure to denoise a collection of 1D and 2D test signals, verifying the effectiveness of our approach in practice.
 Collection
 Stanford Research Data
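For reference, the discrete 1-D total variation and the jump count that the abstract appeals to can be written down directly. This is a minimal sketch, independent of the paper's code:

```python
import numpy as np

def total_variation(x):
    """Discrete 1-D total variation: sum of absolute successive differences."""
    return np.abs(np.diff(x)).sum()

def jump_count(x, tol=1e-12):
    """Number of change points in a (nearly) piecewise-constant signal."""
    return int((np.abs(np.diff(x)) > tol).sum())

# A step signal with a single jump of height 1: TV = 1, one change point.
step = np.concatenate([np.zeros(5), np.ones(5)])
print(total_variation(step), jump_count(step))   # 1.0 1
```

TV denoising penalizes this functional, so a larger threshold yields a flatter estimate with fewer jumps.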
Online 6. Code and Data supplement to "Deterministic Matrices Matching the Compressed Sensing Phase Transitions of Gaussian Random Matrices." [2012]
 Donoho, David (Author)
 2012
 Description
 Dataset
 Summary

The data and code provided here are supplementary information for the paper “Deterministic Matrices Matching the Compressed Sensing Phase Transitions for Gaussian Random Matrices” by H. Monajemi, S. Jafarpour, Stat330/CME362 Collaboration, and D. L. Donoho. The description of the data is provided in the companion README.TXT file. The data is the outcome of research that started as a course project at Stanford University by participants of the Stat330/CME362 class taught by Donoho in Fall 2011 (Course TA: Matan Gavish). Data collection was a joint effort of 40 researchers listed in the original paper.

In compressed sensing, one takes $n < N$ samples of an $N$-dimensional vector $x_0$ using an $n \times N$ matrix $A$, obtaining undersampled measurements $y = Ax_0$. For random matrices with Gaussian i.i.d. entries, it is known that, when $x_0$ is $k$-sparse, there is a precisely determined phase transition: for a certain region in the ($k/n$, $n/N$) phase diagram, convex optimization $\min \|x\|_1 \text{ subject to } y = Ax,\ x \in X^N$ typically finds the sparsest solution, while outside that region, it typically fails. It has been shown empirically that the same property, with the same phase transition location, holds for a wide range of non-Gaussian random matrix ensembles.

We consider specific deterministic matrices including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Extensive experiments show that for a typical $k$-sparse object, convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian matrices. In our experiments, we considered coefficients constrained to $X^N$ for four different sets $X \in \{[0,1], R_+, R, C\}$. We establish this finding for each of the associated four phase transitions.
 Collection
 Stanford Research Data
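One of the deterministic ensembles named above, “Spikes and Sines”, is simply the identity basis (spikes) concatenated with the unitary DFT basis (sines). A minimal construction, for illustration only:

```python
import numpy as np

# "Spikes and Sines": identity basis concatenated with the unitary DFT basis.
n = 8
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT matrix
A = np.hstack([np.eye(n), F])            # n x 2n deterministic sensing matrix
# Every spike/sine inner product has magnitude 1/sqrt(n): maximal incoherence
# between two orthobases.
mu = np.max(np.abs(np.eye(n) @ F))
print(A.shape, round(mu, 4))             # (8, 16) 0.3536
```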