Fracture mechanics., Algorithms., and Strains and stresses.
Stress concentrations near grain boundaries, precipitates, and similar micro-heterogeneities nucleate instabilities that lead to macroscale fracture. Because it is not practical to model each flaw explicitly, their ensemble effect is modeled statistically. Accounting for this aleatory uncertainty requires smaller specimens (e.g., small finite elements) to have generally higher and more variable strengths, which is necessary for the initial failure probability of a finite domain to be unaffected by its discretization into elements. Localization itself, which may be attributed to constitutive instability, requires realistic numerical perturbations to predict bifurcations such as radial cracking in axisymmetric problems. These perturbations, stemming from microscale heterogeneity, are incorporated in simulations by imposing statistical spatial variability in the parameters of an otherwise conventional (deterministic and scale-independent) damage model. This approach is attractive for its algorithmic simplicity and straightforward calibration from standard strength tests. In addition, it incurs virtually no loss of efficiency or robustness relative to deterministic models and accommodates general three-dimensional loading. Despite these advantages, significant challenges remain and are discussed. It is demonstrated, however, that including aleatory uncertainty with its associated scale effects significantly improves predictiveness on large-scale computational domains, where it is impractical to resolve each crack or localization zone. The original document contains color images. Prepared in collaboration with Sandia National Laboratories, Computational Shock and Multiphysics, Albuquerque, NM, and the University of Utah, Mechanical Engineering, Salt Lake City, UT. Published in Int. J. Numer. Meth. Engng., 2014. Sponsored in part by DoE.
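The size dependence of element strength described above can be sketched with a weakest-link (Weibull) model, a common choice for this kind of statistical strength assignment. This is a minimal illustration, not the paper's implementation; the reference strength sigma0, reference volume v0, and Weibull modulus m are invented values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_strengths(n_elems, elem_volume, v0=1.0, sigma0=100.0, m=10.0):
    """Draw per-element strengths from a Weibull law whose scale grows as
    element volume shrinks (weakest-link size effect), so the failure
    probability of a fixed domain stays independent of the mesh.
    All parameter values here are illustrative, not from the paper."""
    scale = sigma0 * (v0 / elem_volume) ** (1.0 / m)
    return scale * rng.weibull(m, size=n_elems)

small = sample_strengths(100_000, elem_volume=0.1)   # fine mesh: small elements
large = sample_strengths(100_000, elem_volume=10.0)  # coarse mesh: large elements
```

With this scaling, the fine-mesh elements are both stronger on average and more widely scattered in absolute terms, which is exactly the behavior the abstract requires of smaller specimens.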
Water quality management for agricultural production is a complex problem: both hydrological and economic factors must be considered when designing strategies to reduce nutrient runoff from agricultural activities. This dissertation comprises three chapters that investigate cost-effective ways to mitigate water pollution from agricultural nonpoint sources and explore farmers' incentives when participating in water quality trading programs. Chapter 1 investigates landscape targeting of best management practices (BMPs) based on a topographic index (TI) to determine how targeting would affect the costs of meeting nitrogen (N) loading goals for the Mahantango watershed, Pennsylvania. We use the results from two climate models and the mean of an ensemble of seven climate models to estimate expected climate changes, and the Soil and Water Assessment Tool-Variable Source Area (SWAT-VSA) model to predict crop yields and N export. Costs of targeted and uniform placement of BMPs across the entire study area (4.23 km²) are compared under historical and future climate scenarios. We find that, with a goal of reducing N loadings by 25%, spatial targeting could reduce costs by an average of 30% relative to uniform BMP placement under three historical climate scenarios; cost savings from targeting reach 38% under three future climate scenarios. Chapter 2 scales the study area up to the Susquehanna watershed (71,000 km²). We examine the effects of targeting the required reductions in N runoff within counties, across counties, and both within and across counties. We set the required N reduction to 35%. Using the uniform strategy to meet the required N reduction as the baseline, results show that the cost of achieving the regional 35% N reduction goal can be reduced by 13%, 31%, and 36% with cross-county targeting, within-county targeting, and combined within- and across-county targeting, respectively. Results from Chapters 1 and 2 suggest that
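The intuition behind the cost savings from targeting can be sketched as a continuous-knapsack comparison: treat the fields with the lowest cost per kilogram of N removed first, versus applying BMPs at uniform intensity everywhere. The field costs and N reductions below are invented numbers for illustration, not data from the study.

```python
def targeted_cost(fields, goal):
    """Greedy targeting: treat fields in order of cost per kg N removed,
    allowing fractional treatment of the marginal field."""
    cost = removed = 0.0
    for c, n in sorted(fields, key=lambda f: f[0] / f[1]):
        frac = min(1.0, (goal - removed) / n)
        cost += c * frac
        removed += n * frac
        if removed >= goal:
            break
    return cost

def uniform_cost(fields, goal):
    """Uniform placement: every field treated at the same intensity."""
    total_cost = sum(c for c, _ in fields)
    total_n = sum(n for _, n in fields)
    return total_cost * (goal / total_n)

# Hypothetical fields: (treatment cost in $, kg N removed if fully treated).
fields = [(100.0, 50.0), (100.0, 10.0), (100.0, 5.0)]
goal = 0.25 * sum(n for _, n in fields)  # 25% reduction target
```

Because N export varies across the landscape while treatment costs are similar, concentrating BMPs on the high-export fields meets the same loading goal at a fraction of the uniform-placement cost.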
This dissertation compiles two major innovations that rely on a common technique known as multidimensional scaling (MDS). MDS is a dimension-reduction method that maps high-dimensional data to low-dimensional representations. Project 1: Visualizations are useful when learning from high-dimensional data. However, visualizations, like any data summary, can be misleading when they do not incorporate measures of uncertainty, e.g., uncertainty from the data or from the dimension-reduction algorithm used to create the visual display. We incorporate uncertainty into visualizations created by a weighted version of MDS called WMDS. Uncertainty exists in these visualizations in the variable weights, the coordinates of the display, and the fit of WMDS. We quantify these uncertainties using Bayesian models in a method we call Informative Probabilistic WMDS (IP-WMDS). Visually, we display estimated uncertainty in the form of color and ellipses; practically, these uncertainties reflect how much trust to place in WMDS. Our results show that these displays of uncertainty highlight different aspects of the visualization, which can help inform analysts. Project 2: Analysis of network data has emerged as an active research area in statistics. Much of the ongoing research has focused on static networks, which represent a single snapshot or aggregated historical data that does not change over time. However, most networks result from temporally evolving systems that exhibit intrinsic dynamic behavior. Monitoring such temporally varying networks to detect anomalous changes has applications in both the social and physical sciences. In this work, we simulate data from models that rely on MDS, and we evaluate the use of summary statistics for anomaly detection by incorporating principles from statistical process monitoring. In contrast to most previous studies, we deliberately incorporate temporal auto-correlation in our study. Other considerations in our comprehensive assessment
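The deterministic core that weighted and probabilistic variants such as WMDS build on is classical (Torgerson) MDS: double-center a matrix of squared distances and eigendecompose the result. A minimal sketch follows; this is the textbook algorithm, not the authors' code.

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed n points in R^k given an n-by-n
    Euclidean distance matrix, via double centering of the squared
    distances and eigendecomposition of the resulting Gram matrix."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    b = -0.5 * j @ (d ** 2) @ j           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)        # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:k]      # keep the k largest
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```

When the distances truly come from points in R^k, the embedding reproduces the pairwise distances exactly, up to rotation and reflection; for higher-dimensional data it yields the low-dimensional version the abstract describes.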
Smart grids allow operators to monitor the grid continuously, detect incidents as they occur, and trigger corrective actions. To do so, operators require a deep understanding of the actual situation within the grid. However, some parameters of the grid may not be known with absolute confidence, so reasoning over the grid despite this uncertainty requires considering all possible states. In this paper, we propose an approach to enumerate only the valid potential grid states, allowing invalid assumptions that would poison the results of a given computation procedure to be discarded. We validate our approach on a real-world topology from the power grid in Luxembourg and show that the estimation of cable load is negatively affected, in both computation time and accuracy, by invalid fuse-state combinations.
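The enumerate-and-prune idea can be sketched as follows, assuming fuses are modeled as open/closed booleans. The validity rule shown is a hypothetical illustration, not a constraint from the paper; the point is that filtering with any such predicate shrinks the state space the downstream load estimation must consider.

```python
from itertools import product

def enumerate_valid_states(n_fuses, is_valid):
    """Enumerate only the fuse-state combinations that pass a validity
    check, pruning assumptions that would distort load estimation.
    True = fuse closed, False = fuse open."""
    return [s for s in product((False, True), repeat=n_fuses) if is_valid(s)]

# Hypothetical rule: the two fuses feeding a parallel cable pair must not
# both be open, or the downstream segment would be unsupplied.
valid = enumerate_valid_states(3, lambda s: s[0] or s[1])
```

Here 2 of the 8 raw combinations are discarded before any load computation, and the saving compounds as the number of uncertain fuses grows.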