Search results

1,023 results

Collection
VISTA Lab
This site houses sample data and code for the publication Takemura, H., Rokem, A., Winawer, J., Yeatman, J.D., Wandell, B.A., and Pestilli, F., "A major human white-matter pathway between dorsal and ventral visual cortex." All code in this repository is written in MATLAB (MathWorks) and, together with the included data, can be used to reproduce several of the figures from the publication. Code and data are provided in support of the goal of making computational methods reproducible by other researchers.
Collection
Stanford Research Data
In this supplement we include the data and R code necessary to reproduce our model fit for the eucalypt data from the paper, in two steps: first, constructData.R converts raw data into the processed data frames contained in allData.RData; second, modeling.R loads allData.RData and fits the model from our article. A "lightweight version," biasCorrectionNoGrids.zip, omits the raw environmental grid data required to execute constructData.R but includes allData.RData and modeling.R, so modeling.R can still be executed. See the readme file for more details. The data provided in this archive are described in Online Appendix C of Fithian et al. (2014). The presence-only species data are sourced from the Atlas of Living Australia and the Atlas of NSW Wildlife, Office of Environment and Heritage (OEH), both publicly available. The presence-absence data were downloaded from the Flora Survey Module of the Atlas of NSW Wildlife, Office of Environment and Heritage (OEH), and we thank them for permission to archive the data here. Any further use of these data should cite Fithian et al. (2014) and acknowledge the data sources.
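The two-step pipeline described above can be sketched from the command line, assuming R is installed and the scripts sit alongside the data files (script names are taken from the description; the exact working-directory layout is documented in the readme):

```shell
# Step 1: convert raw data into the processed data frames in allData.RData.
# Requires the raw environmental grid data, which the lightweight archive omits.
Rscript constructData.R

# Step 2: load allData.RData and fit the model from the article.
# Works in both the full and the lightweight archives.
Rscript modeling.R
```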
Collection
Stanford Research Data
In this code supplement we offer a MATLAB software library that includes:
- A function that calculates the optimal shrinkage coefficient under known or unknown noise level.
- Scripts that generate each of the figures in the paper.
- A script that generates figures similar to Figure 7, comparing AMSE to MSE in various situations.
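For intuition about what such a shrinkage function computes, here is a minimal Python sketch of one well-known optimal singular-value shrinker (the Frobenius-loss shrinker for a low-rank matrix observed in white noise of known level). Whether this matches the exact shrinker implemented in the MATLAB library is an assumption; treat it as illustrative only.

```python
import math

def optimal_shrink(y, beta):
    """Frobenius-loss optimal shrinker applied to a noise-normalized
    singular value y, where beta = m/n is the matrix aspect ratio
    (0 < beta <= 1). Singular values at or below the noise-bulk edge
    1 + sqrt(beta) carry no recoverable signal and are set to 0."""
    if y <= 1 + math.sqrt(beta):
        return 0.0
    return math.sqrt((y**2 - beta - 1) ** 2 - 4 * beta) / y

# A singular value well above the bulk edge is shrunk toward its
# estimated signal component.
print(optimal_shrink(3.0, 1.0))  # sqrt(5) ~ 2.236
```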
Collection
Code supplement to "Efficient analytical fragility function fitting using dynamic structural analysis."
Example code and spreadsheets to illustrate the calculations from the paper "Efficient analytical fragility function fitting using dynamic structural analysis."
Collection
Code supplement to "Efficient analytical fragility function fitting using dynamic structural analysis."
Example code to illustrate the calculations from the paper "Ground-motion intensity and damage map selection for probabilistic infrastructure network risk assessment using optimization."
Collection
Research Datasets for MPEG
Camera-equipped mobile devices, such as mobile phones or tablets, are becoming ubiquitous platforms for deploying visual search and augmented reality applications. A visual database is typically stored on remote servers; hence, for a visual search, information must be either uploaded from the mobile device to the server or downloaded from the server to the mobile device. With relatively slow wireless links, the response time of the system critically depends on how much information must be transferred. MPEG is considering standardizing technologies that will enable efficient and interoperable design of visual search applications. In particular, we are seeking technologies for visual content matching in images or video. Visual content matching includes matching of views of objects, landmarks, and printed documents that is robust to partial occlusions as well as changes in vantage point, camera parameters, and lighting conditions. A number of component technologies are useful for visual search, including the format of visual descriptors, the descriptor extraction process, and indexing and matching algorithms. At a minimum, the format of descriptors and parts of their extraction process should be defined to ensure interoperability. It is envisioned that a standard for compact descriptors will ensure interoperability of visual search applications and databases, enable a high level of performance in implementations conformant to the standard, simplify the design of visual search applications, enable hardware support for descriptor extraction and matching functionality in mobile devices, and reduce the load on wireless networks transmitting visual-search-related information. Such a standard would provide a complementary tool to the suite of existing MPEG standards, such as the MPEG-7 visual descriptors.
To build a full visual search application, this standard may be used jointly with other standards, such as the MPEG Query Format, HTTP, XML, JPEG, JPSec, and JPSearch.
Collection
Research Datasets for MPEG
MPEG is currently developing a standard titled Compact Descriptors for Visual Search (CDVS) for descriptor extraction and compression. In this work, we develop comprehensive patch-level experiments for a direct comparison of low bitrate descriptors for visual search. For evaluating different compression schemes, we propose a data set of matching pairs of image patches from the MPEG-CDVS image-level data sets.
Collection
Payne Paleobiology Lab Data Files
These data were used to produce the figures and analyses presented in the Proceedings B paper by Payne et al., published in 2014.
Collection
Pleistocene Lake Surprise
Data Repository Item #2014221 for the paper "Rise and fall of late Pleistocene pluvial lakes in response to reduced evaporation and precipitation: Evidence from Lake Surprise, California" by Daniel E. Ibarra, Anne E. Egger, Karrie L. Weaver, Caroline R. Harris and Kate Maher. Included in the document are the analytical methods, a discussion of the runoff coefficient, and supporting figures and tables.
Collection
Pleistocene Lake Surprise
Data tables (main text and supplement) for the paper "Rise and fall of late Pleistocene pluvial lakes in response to reduced evaporation and precipitation: Evidence from Lake Surprise, California" by Daniel E. Ibarra, Anne E. Egger, Karrie L. Weaver, Caroline R. Harris and Kate Maher. All tables are provided as an xlsx file.
Collection
Research Datasets for Image, Video, and Multimedia Systems Group at Stanford
We present the CNN2h dataset, which can be used for evaluating systems that search videos using image queries. It contains 2 hours of video and 139 image queries with annotated ground truth (based on video frames extracted at 10 frames per second). The annotations also include: i) 2,951 pairs of matching image queries and video frames, and ii) 21,412 pairs of non-matching image queries and video frames (which were verified to avoid visual similarities). Please read the "README" file for a description of the files included here.
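Ground truth of this form (labeled matching and non-matching query-frame pairs) supports a standard precision/recall evaluation. A minimal sketch follows, with a hypothetical pair representation (the dataset's actual file layout is described in its README):

```python
def precision_recall(predicted_pairs, matching_pairs):
    """Score a retrieval system against ground truth.

    predicted_pairs: set of (query_id, frame_id) pairs the system reports
        as matches.
    matching_pairs: set of ground-truth matching (query_id, frame_id) pairs.
    """
    true_positives = len(predicted_pairs & matching_pairs)
    precision = true_positives / len(predicted_pairs) if predicted_pairs else 0.0
    recall = true_positives / len(matching_pairs) if matching_pairs else 0.0
    return precision, recall

# Toy example with hypothetical query and frame IDs: two of three
# predictions are correct, and two of three true matches are found.
truth = {("q1", "f10"), ("q1", "f11"), ("q2", "f42")}
pred = {("q1", "f10"), ("q2", "f42"), ("q3", "f7")}
print(precision_recall(pred, truth))  # (0.666..., 0.666...)
```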
Collection
Research Datasets for Image, Video, and Multimedia Systems Group at Stanford
This collection includes several image sets used for developing algorithms for mobile visual search with images containing visual text. It includes single word patch data and also image level data.
Collection
Stanford Research Data
Biases pose a major confound when inferring perception from behavior. Signal detection theory, a powerful theoretical framework for accounting for bias effects in binary choice detection tasks, cannot be applied, without fundamental modifications, to detection tasks with more than two alternatives. Here, we introduce a multidimensional signal detection model (the m-ADC model) for measuring perceptual sensitivity while accounting for choice bias in multialternative detection tasks. Our model successfully explains behaviors in diverse tasks and provides a powerful tool for decoupling the effects of sensitivity from those of bias in studies of perception, attention and decision-making that increasingly employ multialternative designs.

Contents:
1) Supplemental Data demonstrating key analytical results regarding the m-ADC model (Sridharan et al., J. Vis., 2014): Appendices E-F, Figures S1-S4, and Tables S1-S3.
2) Matlab scripts for maximum-likelihood and Markov-chain Monte-Carlo estimation of m-ADC model parameters (fit_mADC.m, MLE_4ADC.m).

Update (September 2014): MATLAB scripts have been uploaded! The scripts are specifically for fitting four-alternative tasks (4-ADC tasks) [1,2]. The scripts can also be modified to fit a four-alternative forced-choice task (see fit_mADC.m for instructions). If you would like to fit a task with a different number of alternatives (e.g., 2-ADC, 3-ADC, 5-ADC, etc.), please feel free to email the corresponding author at "dsridhar AT stanford DOT edu".

References:
[1] Sridharan, D., Ramamurthy, D.L., and Knudsen, E.I. (2013). Spatial probability dynamically modulates visual target detection in chickens. PLoS One 8, e64136.
[2] Steinmetz, N.A., and Moore, T. (2014). Eye movement preparation modulates neuronal responses in area V4 when dissociated from attentional demands. Neuron 83, 496-506.
Collection
Code supplement to "Efficient analytical fragility function fitting using dynamic structural analysis."
This folder contains example code and data to illustrate the efficient transportation model using iterative traffic assignment described in Chapter 2 of M. Miller, "Seismic risk assessment of complex transportation networks," PhD thesis, Stanford University, 2014. The compressed folder contains the following files:
bd.py -- a function for building the travel demand
bridge_metadata_NBI.xlsx -- background data about the case study road bridges
input/20140114_master_bridge_dict.pkl -- sample data for the SF Bay Area road components (bridges)
input/20140114_master_transit_dict.pkl -- sample data for the SF Bay Area BART components
input/BATS2000_34SuperD_TripTableData.csv -- average daily trips between different superdistricts; see http://analytics.mtc.ca.gov/foswiki/Main/DataDictionary for more info
input/graphMTC_CentroidsLength3int.gpickle -- the graph of the SF Bay Area highways and key local roads
input/sample_ground_motion_intensity_map_JUST_THREE.txt -- ground-motion intensity map data for just three ground-motion intensity maps. The columns are: simulation number, fault id, magnitude, annual occurrence rate (SUPER USEFUL), then Sa (NOT logSa) at site new ID 1, site new ID 2, ..., site new ID n
input/sample_ground_motion_intensity_maps_road_only_filtered.txt -- same columns as the previous file, but with a full hazard-consistent set of events
input/superdistricts_centroids_dummies.csv -- a centroidal/dummy link node for each superdistrict (for traffic assignment)
input/superdistricts_clean.csv -- a few nodes in each superdistrict (for traffic assignment)
ita.py -- the core function that does the iterative traffic assignment
mahmodel_road_only.py -- the main file, with only road damage considered
mahmodel.py -- alternative main file that also keeps track of which transit components are damaged
make_bridge_dict.py -- a sample file showing how to create your own master_bridge_dict.pkl
output -- a folder for output; see the README for more details
README_quick_traffic_model.txt -- documentation for this folder
transit_to_damage.py -- helper functions for translating damaged components to nonoperational transit lines for the case study
util.py -- helper functions
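The column layout described above (simulation number, fault id, magnitude, annual occurrence rate, then one Sa value per site) can be parsed with a short sketch like the following. The whitespace-delimited layout is an assumption; check the actual files before relying on it.

```python
def parse_intensity_map_line(line):
    """Split one row of a ground-motion intensity map file into named fields.

    Columns: simulation number, fault id, magnitude, annual occurrence
    rate, then Sa (not logSa) at site new ID 1, 2, ..., n.
    """
    fields = line.split()
    return {
        "simulation": int(fields[0]),
        "fault_id": int(fields[1]),
        "magnitude": float(fields[2]),
        "annual_rate": float(fields[3]),
        "sa_by_site": [float(x) for x in fields[4:]],
    }

# Toy row with three sites (values are made up for illustration).
row = parse_intensity_map_line("12 3 7.05 0.0004 0.21 0.18 0.33")
print(row["magnitude"], row["sa_by_site"])  # 7.05 [0.21, 0.18, 0.33]
```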
Collection
Lobell Laboratory
Field-level data on maize and soybean yields, sow dates, and associated weather variables used for the study in the Related Publication below. One hundred fields were randomly sampled from each county in each year, with a different random sample used each year. All information that could be used to identify individual producers, such as latitude and longitude, has been removed to comply with USDA policies on personally identifiable data.
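The sampling scheme described above (an independent simple random sample of 100 fields per county per year) can be sketched as follows; the data structure and key names are hypothetical, not taken from the dataset itself:

```python
import random

def sample_fields(fields_by_county_year, n=100, seed=None):
    """Draw an independent simple random sample of up to n fields per
    county-year, so a different random sample is used each year.

    fields_by_county_year: dict mapping (county, year) -> list of field
        records (hypothetical structure for illustration).
    """
    rng = random.Random(seed)
    sampled = {}
    for key, fields in fields_by_county_year.items():
        # Sample without replacement; keep all fields if fewer than n exist.
        sampled[key] = rng.sample(fields, min(n, len(fields)))
    return sampled

# Toy example: request 3 fields per county-year from pools of 5 and 2.
data = {("Story, IA", 2010): list(range(5)), ("Story, IA", 2011): list(range(2))}
out = sample_fields(data, n=3, seed=0)
print([len(v) for v in out.values()])  # [3, 2]
```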
Collection
Luo Lab Dorsal Raphe Tracing Images
Images of Sert-cre and Gad2-cre rabies tracing brains (every ~3rd section), showing inputs to dorsal raphe serotonin and GABA neurons, respectively, as described in Weissbourd et al., Neuron, 2014.
Collection
VISTA Lab
The file in this repository contains demo data for the LiFE software package: http://francopestilli.github.io/life/ The data set can be used in combination with the function life_demo.m: http://francopestilli.github.io/life/doc/scripts/life_demo.html The file contains: (1) diffusion imaging data acquired at the Center for Neurobiological Imaging, Stanford University; (2) high-resolution anatomical T1w MRI images of the same brain, coregistered to the diffusion data; and (3) three connectomes generated from the same diffusion data in (1) using the tractography toolbox MRtrix (http://www.brain.org.au/software/mrtrix/). The three connectomes were created using different tractography algorithms: two were generated using constrained spherical deconvolution (CSD) models with either probabilistic or deterministic tractography, and the third was generated using a tensor model and deterministic tractography.
Collection
Research Datasets for Image, Video, and Multimedia Systems Group at Stanford
The Google IO Dataset contains slide and spoken text data crawled from presentations in the Google IO Conference (2010-2012), with manually labeled ground truth relevance judgements. The dataset is particularly suitable for studying information retrieval using multi-modal data.
Collection
City Nature
Data for greenness, "paved-ness" and multiple demographic variables for 2661 US neighborhoods, developed for the City Nature project (Stanford University) in 2013. This subset of the accompanying hoods3155lite data is used for the "Naturehoods Explorer" parallel coordinates, map and streetview visualization.
Collection
Stanford Research Data
This deposit contains the data and code underlying the paper "The Phase Transition of Matrix Recovery from Gaussian Measurements Matches the Minimax MSE of Matrix Denoising" by David Donoho, Matan Gavish, and Andrea Montanari, in press, PNAS, 2013.