Online 1. Code and Data supplement to "Deterministic Matrices Matching the Compressed Sensing Phase Transitions of Gaussian Random Matrices." [2012]
- Donoho, David (Author)
- 2012
- Description
- Dataset
- Summary
The data and code provided here are supplementary information for the paper “Deterministic Matrices Matching the Compressed Sensing Phase Transitions for Gaussian Random Matrices” by H. Monajemi, S. Jafarpour, the Stat330/CME362 Collaboration, and D. L. Donoho. The description of the data is provided in the companion README.TXT file. The data are the outcome of research that started as a course project at Stanford University by participants of the Stat330/CME362 class taught by Donoho in Fall 2011 (course TA: Matan Gavish). Data collection was a joint effort of 40 researchers listed in the original paper.

In compressed sensing, one takes $n < N$ samples of an $N$-dimensional vector $x_0$ using an $n \times N$ matrix $A$, obtaining undersampled measurements $y = Ax_0$. For random matrices with Gaussian i.i.d. entries, it is known that, when $x_0$ is $k$-sparse, there is a precisely determined {\it phase transition}: for a certain region in the ($k/n$, $n/N$) phase diagram, the convex optimization $\text{min } ||x||_1 \text{ subject to } y = Ax,\ x \in X^N$ typically finds the sparsest solution, while outside that region it typically fails. It has been shown empirically that the same property -- with the same phase transition location -- holds for a wide range of non-Gaussian \textit{random} matrix ensembles.

We consider specific deterministic matrices including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Extensive experiments show that for a typical $k$-sparse object, convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian matrices. In our experiments, we considered coefficients constrained to $X^N$ for four different sets $X \in \{[0,1], R_+, R, C\}$. We establish this finding for each of the associated four phase transitions.
- Collection
- Stanford Research Data
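The convex optimization described in the summary can be sketched numerically. The following is a minimal illustration, not part of the supplement itself: it treats the nonnegative case $X = R_+$, where $\ell_1$ minimization reduces to a linear program, and uses a Gaussian sensing matrix for simplicity (the dataset concerns deterministic matrices). The problem size and sparsity here are chosen well inside the success region of the phase diagram.

```python
# Sketch (assumed setup, not the supplement's code): l1 recovery for X = R_+,
# posed as the linear program  min 1^T x  s.t.  A x = y, x >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, N, k = 40, 80, 5                 # delta = n/N = 0.5, rho = k/n = 0.125

A = rng.standard_normal((n, N)) / np.sqrt(n)   # Gaussian sensing matrix (illustrative)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.uniform(1, 2, k)  # k-sparse nonnegative object
y = A @ x0                                      # undersampled measurements

# For x >= 0, ||x||_1 = sum(x), so the l1 problem is a plain LP.
res = linprog(c=np.ones(N), A_eq=A, b_eq=y, bounds=(0, None))
x_hat = res.x
print("recovery error:", np.linalg.norm(x_hat - x0))
```

Deep inside the success region the LP typically recovers $x_0$ to solver precision; near or above the phase transition the same call typically returns a different, denser feasible point.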
Online 2. Code and data supplement for "Sparsity/Undersampling Tradeoffs in Anisotropic Undersampling, with Applications in MR Imaging/Spectroscopy" [2013]
- Monajemi, Hatef (Author)
- 2013 - 2016
- Description
- Dataset
- Summary
The data and code provided here are supplementary material for the Information and Inference paper “Sparsity/Undersampling Tradeoffs in Anisotropic Undersampling, with Applications in MR Imaging/Spectroscopy” by H. Monajemi and D. L. Donoho. Please read the README file to reproduce the results of the article.

Abstract of the article: We study anisotropic undersampling schemes like those used in multi-dimensional NMR spectroscopy and MR imaging, which sample exhaustively in certain time dimensions and randomly in others. Our analysis shows that anisotropic undersampling schemes are equivalent to certain block-diagonal measurement systems. We develop novel exact formulas for the sparsity/undersampling tradeoffs in such measurement systems. Our formulas predict finite-N phase transition behavior differing substantially from the well-known asymptotic phase transitions for classical dense undersampling. Extensive empirical work shows that our formulas accurately describe observed finite-N behavior, while the usual asymptotic predictions based on universality are substantially inaccurate. We also vary the anisotropy, keeping the total number of samples fixed, and for each variation we determine the precise sparsity/undersampling tradeoff (phase transition). We show that, other things being equal, the ability to recover a sparse spectrum decreases with an increasing number of exhaustively-sampled dimensions.
- Collection
- Stanford Research Data
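The block-diagonal equivalence stated in the abstract can be checked in a few lines. The sketch below (an illustrative construction, not the article's code) samples a 2-D signal exhaustively in dimension 1 and at a random subset of positions in dimension 2; taking an FFT along the exhaustively sampled dimension decouples the measurements into independent 1-D partial-Fourier blocks, one per frequency in that dimension.

```python
# Sketch of the block-diagonal equivalence for anisotropic undersampling:
# exhaustive sampling in dim 1 + random sampling in dim 2, then an FFT
# along dim 1, splits the 2-D system into independent 1-D blocks.
import numpy as np

rng = np.random.default_rng(1)
N1, N2, n2 = 8, 16, 6
S = np.sort(rng.choice(N2, n2, replace=False))   # random sample positions in dim 2

x_freq = rng.standard_normal((N1, N2))           # spectrum (dense; the identity needs no sparsity)
x_time = np.fft.ifft2(x_freq)                    # time-domain signal
y = x_time[:, S]                                 # all of dim 1, subset S of dim 2

# FFT along the exhaustively sampled dimension undoes that part of the ifft2:
lhs = np.fft.fft(y, axis=0)
# ...leaving, per frequency f1, an independent partial inverse DFT of x_freq[f1, :]
rhs = np.stack([np.fft.ifft(x_freq[f1])[S] for f1 in range(N1)])
print(np.allclose(lhs, rhs))
```

Because each of the $N_1$ blocks is an ordinary $n_2 \times N_2$ partial-Fourier system, sparse recovery can be analyzed block by block, which is the structure the article's finite-N formulas exploit.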