Search results

1,137 results

Collection
A tide prediction and tide height control system for laboratory mesocosms
This archive contains materials related to a tide prediction system and laboratory aquarium tide height control system. It includes a set of tide prediction libraries meant to run on Arduino microcontrollers. The libraries are based on data generated by the National Oceanic and Atmospheric Administration's National Ocean Service and compiled by David Flater for the open source program XTide. The data from XTide were then adapted to generate individual libraries for a variety of sites around the US mainland, Alaska, Hawaii, Puerto Rico and the Virgin Islands. We also provide a set of R scripts to generate new libraries for additional NOAA tide station sites that are not included in this repository; see the folder "Generate_new_site_libraries" in the archive for the scripts and a description of the library generation process. Diagrams and parts lists for the mechanical portion of the tide control rack are also provided, along with the R code and raw data for the plant growth analysis.
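As a rough illustration of how a harmonic tide prediction library of this kind works (a minimal sketch, not the published library's actual API; the constituent amplitudes, phases, speeds, and datum offset below are placeholder values), predicted height is a datum offset plus a sum of cosine constituents evaluated at hours since a reference epoch:

```python
import math
from datetime import datetime, timezone

# Hypothetical harmonic constituents for one station (amplitude in m,
# phase in degrees, speed in degrees per hour). Real site libraries store
# dozens of constituents plus yearly node factors and equilibrium arguments.
CONSTITUENTS = [
    ("M2", 0.98, 220.1, 28.9841042),
    ("S2", 0.23, 240.5, 30.0000000),
    ("K1", 0.35, 110.7, 15.0410686),
    ("O1", 0.30, 100.2, 13.9430356),
]
DATUM_OFFSET = 1.40  # height of mean water level above the datum (m), placeholder

def tide_height(when: datetime, epoch: datetime) -> float:
    """Predicted tide height (m) at `when`, with time measured in hours from `epoch`."""
    hours = (when - epoch).total_seconds() / 3600.0
    height = DATUM_OFFSET
    for _name, amp, phase_deg, speed_deg_per_hr in CONSTITUENTS:
        height += amp * math.cos(math.radians(speed_deg_per_hr * hours - phase_deg))
    return height

epoch = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(round(tide_height(datetime(2024, 6, 1, 12, tzinfo=timezone.utc), epoch), 2))
```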
Collection
Stanford Center for Reservoir Forecasting (SCRF)
The SCRF Benchmark Reservoir is a synthetic fractured reservoir created using Paradigm SKUA version 2013.2. This synthetic dataset is intended to serve as a test bed for algorithms and workflows aimed at prediction of subsurface geology, reservoir modeling and forecasting in fractured reservoirs. The Benchmark reservoir has a three-layered subsurface geology reflecting aeolian, fluvial and coastal environments, and major sealing faults that dissect the domain into a “core”, a “graben” and a “horst” area. It is populated with relevant facies properties, porosity and permeability. Fracture intensity and orientation distributions are computed from geomechanical constraints. The influence of these fractures on elastic properties and seismic responses is evaluated based on computation of the effective elastic stiffness tensor. Workflows and details of the models used in generating the Benchmark Reservoir are documented under "Major Workflows and Models" as data_generation.docx. The data set is an appendix to the following manuscript, to be submitted to Computers and Geosciences, which can be found as a PDF file under "Associated Manuscript Draft1": Roy, A., Shin, Y., Li, P., Aydin, O., Jung, A., Mukerji, T. and Caers, J., "A Benchmark Dataset for Fractured Reservoirs". There are three files in this data set: 1) BM_ReservoirModel.sprj is the Benchmark SKUA project user interface; 2) BM_REservoirModel.prj.7z is a zip file containing the Benchmark SKUA project folder; 3) BM_Properties.7z is a zip file containing data files on properties, faults and horizons from the Geologic Grid. Files (1) and (2) can be accessed and opened in SKUA; please download and unzip (2) before opening the project. The SKUA project contains wells, a structural model (faults and horizons), geologic properties (e.g. facies), petrophysical properties, fracture intensity and orientation, seismic attributes (e.g. velocities) and a reference DFN (not part of the main Benchmark). There are two grids in the project: a) the Geologic Model, which contains the facies models, petrophysical properties, elastic properties and seismic attributes; and b) the Flow Model, a smaller and coarser grid covering part of the area modeled by (a) and containing the DFN, with relevant properties (e.g. fracture intensity, orientation) copied from (a); flow responses are simulated in this grid.
Collection
Stanford Geospatial Center Teaching Data
This shapefile was created from the Clowns of America, International Membership Database (anonymized), obtained in 2007 from Clowns of America, International, for use in teaching. It was created by geocoding the ZipCode field of the original table using OpenRefine and the Geonames.org PostalCodes API. Attributes include those from the original data table ('City', 'ZipCode', 'Clown_Name', and 'Country'), as well as attributes added during the geocoding process ('admname1','adm1','adm2','placname','longitude','latitude') and an attribute 'Clown-Na_1', which holds the values of the 'Clown_Name' field after a "Cluster and Edit" operation performed in OpenRefine to collapse variant spellings, so that, for example, "Co Co" and "Co-Co" both become "CoCo" for use in name frequency analysis.
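The geocoding step described above was done in OpenRefine, but the same lookup can be reproduced directly against the GeoNames postal code service. The Python sketch below is only an illustration of that call, not part of the original workflow; the GEONAMES_USER value is a placeholder (a registered GeoNames account is required) and the returned field mapping is assumed, chosen to mirror the attribute names listed above:

```python
import requests

GEONAMES_USER = "demo"  # placeholder; replace with a registered GeoNames username

def geocode_zip(zip_code, country="US"):
    """Look up one postal code with the GeoNames postalCodeSearchJSON endpoint."""
    resp = requests.get(
        "http://api.geonames.org/postalCodeSearchJSON",
        params={"postalcode": zip_code, "country": country,
                "maxRows": 1, "username": GEONAMES_USER},
        timeout=10,
    )
    resp.raise_for_status()
    matches = resp.json().get("postalCodes", [])
    if not matches:
        return None
    m = matches[0]
    # Map the response onto attribute names like those in the shapefile (illustrative).
    return {"placname": m.get("placeName"), "admname1": m.get("adminName1"),
            "longitude": m.get("lng"), "latitude": m.get("lat")}

print(geocode_zip("94305"))
```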
Collection
Software and data produced by Baker Research Group
This page provides data and code to document the referenced paper, which examines four methods by which ground motions can be selected for dynamic seismic response analyses of engineered systems when the underlying seismic hazard is quantified via ground motion simulation rather than empirical ground motion prediction equations. Even with simulation-based seismic hazard, a ground motion selection process is still required in order to extract a small number of time series from the much larger set developed as part of the hazard calculation. Four specific methods are presented for ground motion selection from simulation-based seismic hazard analyses. One of the four methods provides a ‘benchmark’ result (i.e. using all simulated ground motions), enabling the consistency of the other three, more efficient selection methods to be assessed.
Collection
Stanford Research Data
Included in this supplement to Lau et al. (in revision) is the R code used to explore controls on δ238U and [U] in seawater (U model.R) (Figures 1 and S6). We also include the R code used to explore the links between the OMZ and the carbon cycle (Figures 4 and S8). We make this code available without restriction for any purpose as long as the original paper is properly cited.
Collection
Stanford Research Data
Included in this supplement is the R code used to calculate Pliocene changes in EAIS mass and global sea level as a function of temperature change, based on the LR04 benthic stack, using Eqs. 1 and 2 of Winnick and Caves (2015). We make this code available without restriction for any purpose as long as the original paper is properly cited.
Dataset
1 online resource (6 data files) Digital: data file.
This data set contains two sets of data: nationwide tax and deed data for all counties in the United States, covering approximately 145 million residential and commercial properties. Data are collected from U.S. County Assessor and Recorder offices, then cleaned and normalized by CoreLogic.
Collection
Stanford Research Data
The mechanisms of perceptual decision-making are frequently studied through measurements of reaction time (RT). Classical sequential-sampling models (SSMs) of decision-making posit RT as the sum of non-overlapping sensory, evidence accumulation, and motor delays. In contrast, recent empirical evidence hints at a continuous-flow paradigm in which multiple motor plans evolve concurrently with the accumulation of sensory evidence. Here we employ a trial-to-trial reliability-based component analysis of encephalographic data acquired during a random-dot motion task to directly image continuous flow in the human brain. We identify three topographically distinct neural sources whose dynamics exhibit contemporaneous ramping to time-of-response, with the rate and duration of ramping discriminating fast and slow responses. Only one of these sources, a parietal component, exhibits dependence on strength-of-evidence. The remaining two components possess topographies consistent with origins in the motor system, and their covariation with RT overlaps in time with the evidence accumulation process. After fitting the behavioral data to a popular SSM, we find that the model decision variable is more closely matched to the combined activity of the three components than to their individual activity. Our results emphasize the role of motor variability in shaping RT distributions on perceptual decision tasks, suggesting that physiologically plausible computational accounts of perceptual decision-making must model the concurrent nature of evidence accumulation and motor planning.
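For context, the serial-stage assumption of classical sequential-sampling models can be pictured with a toy drift-diffusion simulation, in which RT is the sum of a non-decision term (lumping sensory and motor delays) and the accumulator's first-passage time. The sketch below is generic and is not the model fitted in this study; all parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm_rt(drift=0.3, threshold=1.0, noise=1.0,
                    non_decision=0.35, dt=0.001, n_trials=5000):
    """Simulate RTs from a basic drift-diffusion model:
    RT = non-decision time + first-passage time of the evidence accumulator."""
    rts, choices = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(non_decision + t)
        choices.append(1 if x > 0 else 0)
    return np.array(rts), np.array(choices)

rts, choices = simulate_ddm_rt()
print(f"mean RT: {rts.mean():.3f} s, accuracy: {choices.mean():.2%}")
```

In this serial formulation the motor stage contributes only a fixed delay, which is exactly the assumption the continuous-flow evidence above calls into question.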
Collection
Stanford Research Data
The data package contains 10 anonymized datasets of scalp-recorded EEG in MATLAB (.mat) format. Each .mat file contains EEG data from one experimental subject. Data matrices have been preprocessed and are in the form used as input for classification. Dimensionality reduction/PCA has not been performed.
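The .mat files can be inspected from Python as well as MATLAB, assuming they are stored in a version scipy can read (i.e. not the v7.3/HDF5 format). The sketch below is only a generic loading example; the local directory name "eeg_data" is hypothetical and the variable names inside each file are not specified here, so they should be checked from the loaded keys:

```python
from pathlib import Path
from scipy.io import loadmat

# Hypothetical loading loop over the ten subject files.
for mat_path in sorted(Path("eeg_data").glob("*.mat")):
    mat = loadmat(str(mat_path))
    # Skip scipy's internal header entries and list the stored variables.
    variables = [k for k in mat if not k.startswith("__")]
    print(mat_path.name, variables)
```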
Collection
VISTA Lab
This site houses sample data and code for the publication Takemura, H., Caiafa, C. F., Wandell, B. A., and Pestilli, F., "Ensemble tractography" (under review). All code in this repository is written in MATLAB (MathWorks) and, together with the included data, can be used to reproduce several of the figures from the publication. Code and data are provided as part of the goal of ensuring that computational methods are reproducible by other researchers. Note: this version of the repository is still in progress and does not include the LiFE and ET code in the newest release. We are also preparing the GitHub repository that will host the updated version of the scripts for reproducing the figures in this paper. GitHub repository (currently private, but to be made public upon publication): https://github.com/vistalab/EnsembleTractography
Collection
Stanford Research Data
Comparison of the magnetic characteristics of three major active regions before and after a major flare.
Database topics
Statistical and Numeric Data; Government Information: International and Foreign
Dataset
1 online resource. Digital: data file.
A single downloadable .csv file provides estimates of "violent deaths" from 2004 onwards. The violent deaths indicator combines national-level statistics on homicide with data on fatalities that occurred in armed conflict. The database covers more than 189 countries and territories and is constantly updated. Estimates of violent deaths between 2007 and 2012 are at the core of the analysis presented in the third and latest edition of the Global Burden of Armed Violence, launched in May 2015. The database combines data from a wide range of sources that report the number of people who died in violent events across both conflict and non-conflict settings. Typical sources include hospitals, mortuaries, and the police, as well as organizations that document casualties in areas affected by armed conflict.
Database topics
Government Information: State and Local; Statistical and Numeric Data
Dataset
1 online resource.
Collection
Software and data produced by Baker Research Group
We identify potential data sources for fling-step and discuss their value, compile a dataset of simulated and recorded ground motions containing fling, extract fling pulses from these ground motions, and derive a predictive model for fling amplitude and period that is compared to existing empirical models. Fling is the result of permanent static offset of the ground during an earthquake, but is usually ignored because ground motion records from accelerometers contain errors that make it difficult to measure static offsets. However, some data sources include fling, such as specially processed recordings, ground motion simulations, and high-rate global positioning systems (GPS). From these data, we extract fling pulses using the pattern search global optimization algorithm. The resulting displacement amplitudes and periods are used to create a new predictive equation for fling parameters, are compared to existing empirical models for pulse period, fling amplitude, and surface displacement along the fault, and are found to match reasonably well.
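As a loose illustration of what extracting a fling pulse from a displacement trace involves, the sketch below fits a smooth-ramp pulse to a synthetic record by bounded global optimization. Both the tanh ramp functional form and the differential-evolution optimizer are stand-ins chosen for illustration only; the study itself uses a pattern search algorithm and its own pulse parameterization, and all numbers here are made up:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Synthetic ground displacement with a permanent (fling) offset plus noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 40, 2000)                  # time (s)
true_amp, true_t0, true_tau = 0.8, 12.0, 2.5  # offset (m), center (s), rise time (s)
disp = 0.5 * true_amp * (1 + np.tanh((t - true_t0) / true_tau))
disp += 0.02 * rng.standard_normal(t.size)

def pulse(params):
    amp, t0, tau = params
    return 0.5 * amp * (1 + np.tanh((t - t0) / tau))

def misfit(params):
    return np.sum((disp - pulse(params)) ** 2)

# Bounded global search for the pulse parameters.
result = differential_evolution(misfit, bounds=[(0, 2), (0, 40), (0.1, 10)], seed=0)
amp, t0, tau = result.x
print(f"fling amplitude ~ {amp:.2f} m, centered at {t0:.1f} s, rise time ~ {tau:.1f} s")
```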
Collection
Climate Timeseries for the NEON Regions
Historical monthly values of average temperature (degrees C) for the whole United States (including Alaska, Puerto Rico, and Hawaii), the contiguous United States, and each of 20 NEON ecoregions (ecoregion numbers are described in Anderegg & Diffenbaugh 2015 or on the NEON website). Data are derived from the Hadley Center CRU TS3.22 dataset. See Anderegg & Diffenbaugh 2015 for methods.
Collection
Climate Timeseries for the NEON Regions
Historical monthly values of precipitation (mm) for the whole United States (including Alaska, Puerto Rico, and Hawaii), the contiguous United States, and each of 20 NEON ecoregions (ecoregion numbers are described in Anderegg & Diffenbaugh 2015 or on the NEON website). Data are derived from the Hadley Center CRU TS3.22 dataset.
Dataset
1 online resource
At approximately 1 km resolution (30″ × 30″), LandScan is the finest-resolution global population distribution data available and represents an ambient population (averaged over 24 hours). The LandScan algorithm uses spatial data and imagery analysis technologies and a multi-variable dasymetric modeling approach to disaggregate census counts within an administrative boundary. Since no single population distribution model can account for differences in spatial data availability, quality, scale, and accuracy, as well as differences in cultural settlement practices, LandScan population distribution models are tailored to match the data conditions and geographical nature of each individual country and region.
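The dasymetric step can be pictured as areal weighting: an administrative unit's census total is spread over its grid cells in proportion to a relative-likelihood surface built from ancillary layers. The sketch below illustrates only that basic idea, with entirely made-up numbers; LandScan's actual models combine many variables and are tailored per country and region:

```python
import numpy as np

# Toy dasymetric disaggregation for one administrative unit.
census_total = 120_000
weights = np.array([
    [0.0, 0.2, 0.9, 0.7],
    [0.1, 0.5, 1.0, 0.6],
    [0.0, 0.0, 0.3, 0.2],
])  # relative settlement likelihood per ~1 km cell; 0 = uninhabitable

# Allocate the census count to cells in proportion to their weights.
cell_population = census_total * weights / weights.sum()

print(cell_population.round(0))
print("check:", cell_population.sum())  # sums back to the census total
```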
This collection consists of a published article as well as supplementary data pertaining to global areas that were the subject of interstate territorial conflicts from 1947 to 2000.
Collection
Stanford Project for Open Knowledge in Epidemiology (SPOKE)
Mathematically modeling the Expansion of the National Salt Reduction Initiative: A Mathematical Model of Benefits and Risks of Population-level Sodium Reduction
Collection
Stanford Project for Open Knowledge in Epidemiology (SPOKE)
These files include an R script to download and organize MEPS data in order to study a nationally representative panel of patient visits and expenditures, as well as a user-friendly adaptation of the CCM financing model that can be used by a single clinic or modified (as labeled) to accept the national MEPS data for several clinics.