Search results

1,017 results

Collection
VISTA Lab
This site houses sample data and code for the publication: Takemura, H., Rokem, A., Winawer, J., Yeatman, J.D., Wandell, B.A., and Pestilli, F. A major human white-matter pathway between dorsal and ventral visual cortex. All code in this repository is written in MATLAB (MathWorks) and, together with the included data, can be used to reproduce several of the figures from the publication. Code and data are provided to help ensure that the computational methods are reproducible by other researchers.
Collection
Stanford Research Data
In this code supplement we offer a MATLAB software library that includes:
- A function that calculates the optimal shrinkage coefficient with known or unknown noise level.
- Scripts that generate each of the figures in the paper.
- A script that generates figures similar to Figure 7, comparing AMSE to MSE in various situations.
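For orientation, here is a minimal sketch of how a Frobenius-loss optimal singular-value shrinker of this kind is applied when the noise level is known; the shrinker formula is the standard one from the Gavish-Donoho line of work on optimal shrinkage, and every variable name below is illustrative rather than this library's actual API.

% Minimal sketch: denoise a low-rank matrix by optimal singular-value
% shrinkage with KNOWN noise level sigma (illustrative, not this library's
% API). For an m-by-n matrix with aspect ratio beta = m/n (m <= n), the
% Frobenius-loss shrinker is
%   eta(y) = sqrt((y^2 - beta - 1)^2 - 4*beta) / y  for y > 1 + sqrt(beta),
%   eta(y) = 0                                      otherwise.
m = 50; n = 100; beta = m / n; sigma = 1;
X0 = 5 * randn(m, 3) * randn(3, n) / sqrt(3);   % rank-3 signal
Y  = X0 + sigma * randn(m, n);                  % noisy observation

[U, S, V] = svd(Y, 'econ');
y = diag(S) / (sigma * sqrt(n));                % normalized singular values
eta = zeros(size(y));
keep = y > 1 + sqrt(beta);
eta(keep) = sqrt((y(keep).^2 - beta - 1).^2 - 4*beta) ./ y(keep);

Xhat = U * diag(sigma * sqrt(n) * eta) * V';    % denoised estimate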
Collection
Code supplement to "Efficient analytical fragility function fitting using dynamic structural analysis."
Example code and spreadsheets to illustrate the calculations from the paper "Efficient analytical fragility function fitting using dynamic structural analysis."
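As context for those calculations, here is a minimal sketch of a maximum-likelihood lognormal fragility fit to multiple-stripe counts of the kind the paper addresses; the counts are made up for illustration, and this is a sketch rather than the supplement's own code.

% Minimal sketch: fit P(collapse | IM = x) = normcdf(log(x/theta)/beta)
% to per-stripe counts by maximum likelihood (requires the Statistics
% Toolbox for normcdf; the constant binomial coefficient is omitted).
x = [0.2 0.4 0.6 0.8 1.0]';   % IM levels (made-up values)
n = [40 40 40 40 40]';        % analyses per level
z = [0 4 13 26 34]';          % observed collapses per level

% Collapse probability at each level, clamped away from 0 and 1 so the
% log-likelihood stays finite. Optimize over log-parameters to keep
% theta and beta positive.
pf  = @(q) min(max(normcdf(log(x / exp(q(1))) / exp(q(2))), eps), 1 - eps);
nll = @(q) -sum(z .* log(pf(q)) + (n - z) .* log(1 - pf(q)));
qhat = fminsearch(nll, log([0.6 0.4]));

theta_hat = exp(qhat(1));     % median collapse capacity
beta_hat  = exp(qhat(2));     % lognormal dispersion
fprintf('theta = %.3f, beta = %.3f\n', theta_hat, beta_hat);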
Collection
Code supplement to "Efficient analytical fragility function fitting using dynamic structural analysis."
Example code to illustrate the calculations from the paper "Ground-motion intensity and damage map selection for probabilistic infrastructure network risk assessment using optimization."
Collection
Research Datasets for MPEG
Camera-equipped mobile devices, such as mobile phones or tablets, are becoming ubiquitous platforms for deployment of visual search and augmented reality applications. A visual database is typically stored on remote servers. Hence, for a visual search, information must be either uploaded from the mobile device to the server or downloaded from the server to the mobile device. With relatively slow wireless links, the response time of the system critically depends on how much information must be transferred.

MPEG is considering standardizing technologies that will enable efficient and interoperable design of visual search applications. In particular, we are seeking technologies for visual content matching in images or video. Visual content matching includes matching of views of objects, landmarks, and printed documents that is robust to partial occlusions as well as changes in vantage point, camera parameters, and lighting conditions.

There are a number of component technologies that are useful for visual search, including the format of visual descriptors, the descriptor extraction process, and indexing and matching algorithms. At a minimum, the format of descriptors and parts of their extraction process should be defined to ensure interoperability. It is envisioned that a standard for compact descriptors will ensure interoperability of visual search applications and databases, enable a high level of performance in implementations conformant to the standard, simplify the design of visual search applications, enable hardware support for descriptor extraction and matching in mobile devices, and reduce the load on wireless networks carrying visual search-related information. It is envisioned that such a standard will provide a complementary tool to the suite of existing MPEG standards, such as MPEG-7 visual descriptors. To build a full visual search application, this standard may be used jointly with other standards, such as MPEG Query Format, HTTP, XML, JPEG, JPSec, and JPSearch.
Collection
Research Datasets for MPEG
MPEG is currently developing a standard titled Compact Descriptors for Visual Search (CDVS) for descriptor extraction and compression. In this work, we develop comprehensive patch-level experiments for a direct comparison of low bitrate descriptors for visual search. For evaluating different compression schemes, we propose a data set of matching pairs of image patches from the MPEG-CDVS image-level data sets.
Collection
Payne Paleobiology Lab Data Files
These data were used to produce the figures and analyses presented in the Proceedings B paper by Payne et al., published in 2014.
Collection
Pleistocene Lake Surprise
Data Repository Item #2014221 for the paper "Rise and fall of late Pleistocene pluvial lakes in response to reduced evaporation and precipitation: Evidence from Lake Surprise, California" by Daniel E. Ibarra, Anne E. Egger, Karrie L. Weaver, Caroline R. Harris, and Kate Maher. Included in the document are the analytical methods, a discussion of the runoff coefficient, and supporting figures and tables.
Collection
Pleistocene Lake Surprise
Data tables (main text and supplement) for the paper "Rise and fall of late Pleistocene pluvial lakes in response to reduced evaporation and precipitation: Evidence from Lake Surprise, California" by Daniel E. Ibarra, Anne E. Egger, Karrie L. Weaver, Caroline R. Harris, and Kate Maher. All tables are provided as an xlsx file.
Collection
Research Datasets for Image, Video, and Multimedia Systems Group at Stanford
We present the CNN2h dataset, which can be used for evaluating systems that search videos using image queries. It contains 2 hours of video and 139 image queries with annotated ground truth (based on video frames extracted at 10 frames per second). The annotations also include: i) 2,951 pairs of matching image queries and video frames, and ii) 21,412 pairs of non-matching image queries and video frames (which were verified to avoid visual similarities). Please read the "README" file for a description of the files included here.
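As a usage illustration, here is a minimal sketch of scoring a matcher against the annotated pair lists; the two score vectors are placeholders standing in for similarity scores produced by a system under test, not files shipped with the dataset.

% Minimal sketch: evaluate a query-to-frame matcher using the annotated
% matching and non-matching pairs. The random scores below are
% placeholders for the similarities your own system would assign.
scores_pos = 0.5 + 0.5 * rand(2951, 1);   % scores for the matching pairs
scores_neg = 0.6 * rand(21412, 1);        % scores for the non-matching pairs

t   = 0.55;                  % decision threshold
tpr = mean(scores_pos > t);  % true-positive rate on matching pairs
fpr = mean(scores_neg > t);  % false-positive rate on non-matching pairs
fprintf('TPR = %.3f, FPR = %.3f at threshold %.2f\n', tpr, fpr, t);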
Collection
Research Datasets for Image, Video, and Multimedia Systems Group at Stanford
This collection includes several image sets used for developing algorithms for mobile visual search with images containing visual text. It includes single-word patch data as well as image-level data.
Collection
Stanford Research Data
Biases pose a major confound when inferring perception from behavior. Signal detection theory, a powerful theoretical framework for accounting for bias effects in two-alternative forced-choice tasks, cannot be applied, without fundamental modifications, to tasks with more than two alternatives. Here, we introduce a multidimensional signal detection model (the m-ADC model) for measuring perceptual sensitivity while accounting for choice bias in multialternative detection tasks. Our model successfully explains behaviors in diverse tasks and provides a powerful tool for decoupling the effects of sensitivity from those of bias in studies of perception, attention, and decision-making that increasingly employ multialternative designs.
The accompanying files contain:
1) Supplemental Data demonstrating key analytical results regarding the m-ADC model (Sridharan et al., J. Vis., 2014): Appendices E-F, Figures S1-S4, and Tables S1-S3.
2) MATLAB scripts for maximum-likelihood and Markov chain Monte Carlo estimation of m-ADC model parameters.
Update (August 2014): Scripts (item #2) will be uploaded soon! In the meantime, if you would like to use the model to fit data in your studies, please email the corresponding author at "dsridhar AT stanford DOT edu".
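For reference, here is the familiar two-alternative computation that the m-ADC model generalizes: a minimal sketch of textbook yes/no signal detection estimates of sensitivity and criterion from hit and false-alarm counts. The counts are made up, and this is textbook SDT rather than the deposited m-ADC scripts.

% Minimal sketch: standard yes/no SDT estimates that the m-ADC model
% extends to m alternatives (requires the Statistics Toolbox for norminv).
hits = 76; misses = 24;   % responses on signal trials (made-up counts)
fas  = 22; crs    = 78;   % responses on noise trials (made-up counts)

H = hits / (hits + misses);   % hit rate
F = fas  / (fas + crs);       % false-alarm rate

dprime    = norminv(H) - norminv(F);            % sensitivity
criterion = -0.5 * (norminv(H) + norminv(F));   % choice bias
fprintf('d'' = %.2f, c = %.2f\n', dprime, criterion);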
Collection
Lobell Laboratory
Field-level data on maize and soybean yields, sowing dates, and associated weather variables used for the study in the Related Publication below. One hundred fields were randomly sampled from each county in each year, with a different random sample used each year. All information that could be used to identify individual producers, such as latitude and longitude, has been removed to comply with USDA policies on personally identifiable data.
Collection
VISTA Lab
The file in this repository contains demo data for the LiFE software package: http://francopestilli.github.io/life/ The data set can be used in combination with the function life_demo.m: http://francopestilli.github.io/life/doc/scripts/life_demo.html
The file contains:
(1) Diffusion imaging data acquired at the Center for Neurobiological Imaging, Stanford University.
(2) High-resolution anatomical T1w MRI images of the same brain, coregistered to the diffusion data.
(3) Three connectomes generated from the diffusion data in (1) using the tractography toolbox MRtrix (http://www.brain.org.au/software/mrtrix/). The three connectomes were created using different tractography algorithms: two were generated using constrained spherical deconvolution (CSD) models with either probabilistic or deterministic tractography, and the third was generated using a tensor model with deterministic tractography.
Collection
Research Datasets for Image, Video, and Multimedia Systems Group at Stanford
The Google IO Dataset contains slide and spoken text data crawled from presentations in the Google IO Conference (2010-2012), with manually labeled ground truth relevance judgements. The dataset is particularly suitable for studying information retrieval using multi-modal data.
Collection
Stanford Research Data
This deposition contains the data and code underlying the paper "The Phase Transition of Matrix Recovery from Gaussian Measurements Matches the Minimax MSE of Matrix Denoising" by David Donoho, Matan Gavish, and Andrea Montanari, in press, PNAS 2013.
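As an illustration of the recovery problem the paper studies, here is a minimal sketch of low-rank matrix recovery from Gaussian measurements by nuclear-norm minimization, assuming the CVX toolbox (http://cvxr.com/cvx) is installed; the dimensions and measurement count are arbitrary, and this is not the deposited code.

% Minimal sketch: recover a low-rank matrix from random Gaussian linear
% measurements by nuclear-norm minimization (assumes CVX is installed).
m = 20; n = 20; r = 2;            % matrix size and rank
d = 260;                          % number of Gaussian measurements
X0 = randn(m, r) * randn(r, n);   % ground-truth low-rank matrix
A  = randn(d, m * n) / sqrt(d);   % Gaussian measurement operator
y  = A * X0(:);                   % linear measurements

cvx_begin quiet
    variable X(m, n)
    minimize( norm_nuc(X) )
    subject to
        A * vec(X) == y
cvx_end

fprintf('relative recovery error = %.2e\n', ...
        norm(X - X0, 'fro') / norm(X0, 'fro'));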
Collection
Stanford Research Data
The online Supplemental Material contains high-resolution versions of all images used in the perceptual experiments. Also included is MATLAB code for performing the analyses reported in the article, including the natural scene statistics analysis, the image manipulation, and the perceptual experiment analysis (raw response data from both experiments are provided).
Collection
VISTA Lab
This site houses sample data and code for the publication: Winawer, J., Kay, K.N., Foster, B.L., Rauschecker, A.M., Parvizi, J., and Wandell, B.A. (2013). Asynchronous broadband signals are the principal source of the BOLD response in human visual cortex. Current Biology 23(13). doi:10.1016/j.cub.2013.05.001. All code in this repository is written in MATLAB (MathWorks) and, together with the included data, can be used to reproduce several of the figures from the publication. Code and data are provided to help ensure that the computational methods are reproducible by other researchers.
Collection
Research Datasets for Image, Video, and Multimedia Systems Group at Stanford
We introduce the Stanford Streaming MAR dataset. The dataset contains 23 different objects of interest, divided into four categories: Books, CD covers, DVD covers, and Common Objects. We first record one video for each object where the object is in a static position while the camera is moving. These videos are recorded with a hand-held mobile phone with different amounts of camera motion, glare, blur, zoom, rotation, and perspective change. Each video is 100 frames long, recorded at 30 fps with resolution 640 x 480. For each video, we provide a clean database image (no background noise) for the corresponding object of interest. We also provide 5 more videos of moving objects recorded with a moving camera. These videos help to study the effect of background clutter when there is relative motion between the object and the background. Finally, we record 4 videos that contain multiple objects from the dataset. Each video is 200 frames long and contains 3 objects of interest which the camera captures one after the other. We provide ground-truth localization information for 14 videos, where we manually define a bounding quadrilateral around the object of interest in each video frame. This localization information is used in the calculation of the Jaccard index (see the sketch after this list).
1. Static single object:
1.a. Books: Automata Theory, Computer Architecture, OpenCV, Wang Book.
1.b. CD Covers: Barry White, Chris Brown, Janet Jackson, Rascal Flatts, Sheryl Crow.
1.c. DVD Covers: Finding Nemo, Monsters Inc, Mummy Returns, Private Ryan, Rush Hour, Shrek, Titanic, Toy Story.
1.d. Common Objects: Bleach, Glade, Oreo, Polish, Tide, Tuna.
2. Moving object, moving camera: Barry White Moving, Chris Brown Moving, Titanic Moving, Titanic Moving - Second, Toy Story Moving.
3. Multiple objects:
3.a. Multiple Objects 1: Polish, Wang Book, Monsters Inc.
3.b. Multiple Objects 2: OpenCV, Barry White, Titanic.
3.c. Multiple Objects 3: Monsters Inc, Toy Story, Titanic.
3.d. Multiple Objects 4: Wang Book, Barry White, OpenCV.
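For concreteness, here is a minimal sketch of the Jaccard index (intersection over union) between a ground-truth quadrilateral and a detected one, using MATLAB's polyshape (R2017b or later); the two quadrilaterals are made up for illustration.

% Minimal sketch: Jaccard index between a ground-truth and a detected
% quadrilateral, as used in the localization evaluation described above.
gt   = polyshape([0 4 4 0], [0 0 3 3]);   % ground-truth quadrilateral (made up)
pred = polyshape([1 5 5 1], [1 1 4 4]);   % detected quadrilateral (made up)

inter   = area(intersect(gt, pred));
uni     = area(union(gt, pred));
jaccard = inter / uni;                    % 1 = perfect overlap, 0 = disjoint
fprintf('Jaccard index = %.3f\n', jaccard);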
Dataset
3 computer discs ; 4 3/4 in. Digital: Data file; cdb.
Green Library: Social Science Data and Software, Velma Denning Room (non-circulating)
HG2040.5 .U5 H56 2003A DISC 1: In-library use
HG2040.5 .U5 H56 2003A DISC 2: In-library use
HG2040.5 .U5 H56 2003A DISC 3: In-library use