Search results

347 catalog results

Book
xiv, 167 leaves, bound.
SAL3 (off-campus storage), Special Collections
Book
1 online resource.
Fecal indicator bacteria (FIB), such as enterococci (ENT) and E. coli (EC), are used as a proxy for pathogens in recreational waters around the world. When concentrations of FIB exceed standards in recreational waters, the beaches are closed or posted with a health advisory. FIB standards are based on epidemiology studies and are designed to be protective of human health. These epidemiology studies typically have been conducted at beaches where the sources of pollution were treated wastewater or urban runoff, presumably contaminated with raw sewage. However, non-fecal reservoirs of FIB have been identified that have no clear connection to human fecal contamination, such as river sediments, sands, and decaying aquatic plants (wrack). This dissertation examines two of these non-fecal reservoirs of FIB, beach sands and wrack, and their impacts on coastal water quality. Understanding these impacts has direct implications for current and future beach management decisions. The first research chapter in this dissertation investigates a transport pathway by which ENT in beach sands can serve as a source of ENT to the coastal ocean. Although beach sands have been identified as a potential reservoir for FIB and pathogens, little is understood about the mechanisms by which these microbes move between sands and the ocean. Two transport pathways have been suggested: 'over-beach transport' and 'through-beach transport'. In through-beach transport, microbes are eluted from sands by infiltrating water, transported down through the vadose zone, and out to the ocean through submarine groundwater discharge. This study focuses on the first component of through-beach transport, utilizing a combination of field and laboratory experiments. The results show that ENT are readily eluted from beach sands during infiltration events in the vadose zone.
ENT detachment kinetics and potential mechanisms are revealed through use of a computer model utilizing the results of the laboratory column experiments. Detachment kinetics that are proportional to the rate of change in water content are found to best describe the observed behavior, suggesting that detachment may be caused by thin-film expansion and/or air-water interface scouring. The second research chapter in this dissertation presents a two-year microbial source tracking (MST) study aimed at identifying the sources of the high FIB concentrations observed at Cowell Beach in Santa Cruz, CA. Cowell Beach consistently has the worst summertime microbial water quality of any beach in California. Local agencies had been unable to identify the source of this pollution but believed that a non-fecal source, namely wrack, was responsible. Potential sources investigated included a river, a storm drain, a wharf, a harbor, sand, wrack, and contaminated groundwater. The microbial pollution was identified as originating from a shoreline source, ruling out the river, storm drain, wharf, and harbor as relevant sources. Based on a 24 h study and near-shore modelling results, two separate sources were identified as being dominant: sand for ENT and contaminated groundwater for EC. Wrack was found to be only a minor source, contributing less than 2% of the FIB compared to the dominant sources. The final research chapter presented in this dissertation assesses the impact of beach grooming for wrack removal on FIB concentrations at Cowell Beach. Beach grooming of wrack is considered a potential remediation strategy for beaches with high concentrations of FIB in the water and large amounts of wrack on the beach. No study prior to this work had investigated these impacts. The impacts were studied on both seasonal and short-term time scales. Grooming was generally found to have negligible impacts on concentrations of FIB in the ocean at Cowell Beach.
Grooming did however increase nutrient concentrations (phosphate, silicate and dissolved inorganic nitrogen) and turbidity in the ocean. Grooming was found to be an ineffective remediation strategy at Cowell Beach. The research presented in this dissertation documents how FIB from beach sands and wrack can impact coastal microbial water quality. This dissertation provides evidence for FIB from sands being transported to the ocean via through-beach transport. This dissertation also provides evidence that wrack is a minor source of FIB to the ocean and that beach grooming for wrack removal does not provide relevant improvements to coastal water quality.
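The detachment law identified above, with kinetics proportional to the rate of change in water content, can be sketched numerically. Everything in this sketch (the rate coefficient, the water-content history, and the initial ENT density) is a hypothetical illustration of the model's form, not the dissertation's calibrated column model.

```python
import numpy as np

def simulate_detachment(s0, theta, dt, k):
    """Attached-cell density S over time, with a detachment rate
    proportional to the rate of change of water content theta:
    dS/dt = -k * |d(theta)/dt| * S (the form hypothesized above).
    s0: initial attached ENT density; theta: water-content series;
    dt: time step; k: detachment coefficient (hypothetical value)."""
    s = np.empty(len(theta))
    s[0] = s0
    for i in range(1, len(theta)):
        dtheta_dt = (theta[i] - theta[i - 1]) / dt
        # forward-Euler update: detachment only while theta is changing
        s[i] = s[i - 1] * (1.0 - k * abs(dtheta_dt) * dt)
    return s

# Hypothetical infiltration event: water content rises from 0.10 to 0.35
t = np.linspace(0.0, 1.0, 101)
theta = 0.10 + 0.25 * np.clip(t / 0.2, 0.0, 1.0)  # wetting front passes by t = 0.2
s = simulate_detachment(1e4, theta, dt=t[1] - t[0], k=2.0)
```

Under this form, cells detach only while the wetting front passes; once the water content is steady, the attached population stops changing, which is the qualitative signature of rate-of-change-driven detachment.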
Special Collections
Book
1 online resource.
This dissertation explores several physical processes and the associated scalar transport at the land-sea interface. In particular, it examines three issues with important implications for human health and ecosystems, namely, the surf zone entrainment of pollutants from coastal discharges, the flow and hydrodynamics over giant kelp forests, and the cross-shore transport by internal waves. Chapter 2 develops a novel, quantitative framework for estimating the surf zone entrainment of pollution at a wave-dominated open beach. Using physical arguments, a dimensionless parameter is derived, equal to the quotient of the surf zone width and the cross-flow length scale of the discharge. Numerical modeling of a non-buoyant discharge at an alongshore-uniform beach with a constant slope is conducted using a wave-resolving hydrodynamic model. Based on 144 numerical experiments, an empirical relationship is established between the surf zone entrainment rate and the dimensionless parameter. The empirical relationship can reasonably explain seven measurements of surf zone entrainment at three coastal discharges. Chapter 3 characterizes the flow and subtidal momentum balance near a giant kelp forest in Santa Barbara, California, from a two-month field study. In the alongshore direction, the kelp forest diverts the predominantly tidal flow towards the outer edge and creates a net offshore flow from the interior. It also significantly changes the vertical profile of the alongshore flow. A simple model that accounts for the vertical distribution of kelp density is proposed and can reasonably predict the velocity profiles within the kelp forest. The subtidal alongshore momentum balance at the 7 m isobath is mainly between the bottom pressure gradient force, surface wind stress, and drag imposed by the kelp forest. In the cross-shore direction, the flow is dominated by a vertically-sheared two-layer flow, which is damped by the forest.
In the kelp-free region, the subtidal cross-shore momentum balance is mainly between the pressure gradient force, wave radiation stress gradient, and bottom stress. Across the kelp forest, the subtidal momentum balance cannot be resolved by the field data, potentially due to inaccurate measurements of the advection terms. A simplified numerical model of tidal flow over an idealized kelp forest, with no Coriolis, wave, or wind effects, suggests that the major subtidal momentum terms include the pressure gradient force, kelp-induced drag, and nonlinear advection terms. From the numerical model, the drag inside the kelp is estimated to be about 26 times that in the kelp-free region. Chapter 4 evaluates the potential impact of cross-shore internal wave transport on the surf zone water quality at Huntington Beach in southern California. The study presents physical measurements and a comprehensive set of surf zone water quality measurements collected during the summers of 2005 and 2006. Internal waves were found to be an important transport mechanism of nutrient-rich subthermocline waters to the very near shore in the Southern California Bight. Internal waves may also facilitate the transport of fecal indicator bacteria (FIB) into the surf zone, or enhance the persistence of land-derived FIB. Wavelet analysis of water temperature data reveals that internal waves are highly variable, with most of the energy concentrated around diurnal and semi-diurnal frequencies. The arrival of cold subthermocline water within 1 km of the surf zone is characterized by strong baroclinic, onshore flow near the bottom of the water column. The bottom, cross-shore, baroclinic current is proposed as a new proxy to measure the shoreward transport potential of internal waves. This internal wave proxy is positively correlated with phosphate concentrations in both years, silicate concentrations in 2005, and fecal indicator bacteria measurements in 2006.
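The entrainment framework of Chapter 2 hinges on a single dimensionless parameter, the quotient of the surf zone width and the cross-flow length scale of the discharge. A minimal sketch follows; since the abstract does not give the fitted empirical relationship, the saturating curve below is a purely illustrative placeholder.

```python
def surf_zone_parameter(surf_zone_width_m, discharge_length_scale_m):
    """The dimensionless parameter of Chapter 2: quotient of the surf
    zone width and the cross-flow length scale of the discharge."""
    return surf_zone_width_m / discharge_length_scale_m

def entrainment_fraction(param, scale=1.0):
    """Hypothetical placeholder curve mapping the parameter to an
    entrainment fraction in (0, 1); the actual empirical relationship
    was fit to 144 numerical experiments and is not reproduced here."""
    return param / (param + scale)

# e.g. a 100 m wide surf zone and a 400 m cross-flow plume scale (made up)
p = surf_zone_parameter(100.0, 400.0)
f = entrainment_fraction(p)
```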
Special Collections
Book
1 online resource.
Conceptual building design involves decisions that have significant life cycle environmental impact and life cycle cost implications. Designers are aware of the importance of creating sustainable buildings, yet mechanisms are lacking that effectively inform designers of the impacts of design decisions, specifically during the conceptual phase. A method is proposed that provides life cycle impact feedback to designers during the conceptual design phase in a way that better enables designers to make decisions leading to less carbon-intensive and less costly buildings. Critical to the method is the development of a set of heuristics requiring very few inputs. These heuristics calculate life cycle impacts for many building design alternatives using an automated approach, thereby allowing designers to understand the full range of impacts possible for a given problem formulation. The method is configured to provide feedback specifically for design decisions made sequentially, as is typical of the building design process. In this way, designers understand the life cycle impact implications of each decision and can easily modify decisions in a way that better aligns with their desired performance objectives. The proposed method's primary contributions to knowledge are: (1) the reduction of inputs required for life cycle assessment during the conceptual building design stage to only three; (2) the formalization of heuristics corresponding to the three inputs that approximate the life cycle impact of conceptual design decisions; (3) the integration of heuristics with automation in a way that allows for rapid exploration of the full design space; and (4) the sequential presentation of life cycle environmental impact and life cycle cost feedback on conceptual building design decisions.
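The automated exploration described above can be sketched as an exhaustive enumeration over a small design space. The three inputs chosen here (structural system, window-to-wall ratio, story count) and every coefficient are hypothetical stand-ins; the dissertation's actual three inputs and heuristics are not specified in the abstract.

```python
from itertools import product

# Hypothetical per-m2 (embodied carbon, cost) factors for each option.
STRUCTURE = {"steel": (250.0, 1800.0), "concrete": (300.0, 1500.0), "timber": (150.0, 1700.0)}
GLAZING = {0.3: (20.0, 120.0), 0.5: (35.0, 200.0)}  # window-to-wall ratio
STORIES = (2, 4, 8)

def enumerate_designs(floor_area_m2):
    """Enumerate every combination of the three inputs and return sorted
    (carbon_kgCO2e, cost_usd, design) tuples, mimicking the automated
    full design-space exploration described above."""
    results = []
    for (struct, (c1, k1)), (wwr, (c2, k2)), n in product(
            STRUCTURE.items(), GLAZING.items(), STORIES):
        carbon = floor_area_m2 * (c1 + c2) * (1 + 0.02 * n)  # toy story penalty
        cost = floor_area_m2 * (k1 + k2) * (1 + 0.01 * n)
        results.append((carbon, cost, (struct, wwr, n)))
    return sorted(results)

designs = enumerate_designs(1000.0)
best_carbon = designs[0]  # lowest embodied-carbon alternative
```

Because the heuristics are cheap, all 18 alternatives can be evaluated at once, letting a designer see the full range of outcomes before committing to any single decision.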
Special Collections
Book
1 online resource.
Growth of major population centers near seismically active faults has significantly increased the probability of a large earthquake striking close to a big city in the near future. This, coupled with the fact that near-fault ground motions are known to impose larger demands on structures than ground motions far from the fault, makes the quantitative study of near-fault seismic hazard and risk important. Directivity effects cause pulse-like ground motions that are known to increase the seismic hazard and risk in the near-fault region. These effects depend on the source-to-site geometry parameters, which are not included in most ground-motion models used in probabilistic seismic hazard assessment computations. In this study, we develop a comprehensive framework to study near-fault ground motions and account for their effects in seismic hazard assessment. The proposed framework is designed to be modular, with separate models to predict the probability of observing a pulse at a site, the probability distribution of the period of the observed pulse, and a narrow-band amplification of the spectral ordinate conditioned on the period of the pulse. The framework also allows deaggregation of hazard with respect to the probability of observing a pulse at the site and the period of the pulse. This deaggregation information can be used to aid in ground-motion selection at near-fault sites. A database of recorded ground motions with each record classified as pulse-like or non-pulse-like is needed for an empirical study of directivity effects. Early studies of directivity effects used manually classified pulses. Manual classification of ground motions as pulse-like is labor intensive, slow, and risks introducing subjectivity into the classifications. To address these problems we propose an efficient algorithm to classify multi-component ground motions as pulse-like and non-pulse-like.
The proposed algorithm uses the continuous wavelet transform of two orthogonal components of the ground motion to identify pulses in arbitrary orientations. The proposed algorithm was used to classify each record in the NGA-West2 database, which created the largest set of pulse-like motions ever used to study directivity effects. The framework to include directivity effects in seismic hazard assessment, as proposed in this study, requires a ground-motion model that accounts for directivity effects in its prediction. Most current directivity models were developed as corrections to already existing ground-motion models and were fitted using ground-motion model residuals. Directivity effects are dependent on magnitude, distance, and the spectral acceleration period. This interaction of directivity effects with magnitude and distance makes separating distance and magnitude scaling from directivity effects challenging. To properly account for directivity effects in a ground-motion model, they need to be fitted as part of the original model and not as a correction. We propose a method to include the effects of directivity in a ground-motion model and also develop models to make unbiased predictions of ground-motion intensity, even when the directivity parameters are not available. Finally, following the approach used to model directivity effects, we developed a modular framework to characterize ground-motion directionality, which causes the ground-motion intensity to vary with orientation. Using the expanded NGA-West2 database we developed new models to predict the ratio between the maximum and median ground-motion intensity over all orientations. Other models to predict the distribution of orientations of the maximum intensity relative to the fault, and the relationship between these orientations at different periods, are also presented.
The models developed in this dissertation allow us to compute response spectra that are expected to be observed in a single orientation (e.g., fault-normal, or the orientation of maximum intensity at a period). The proposed spectra are expected to be a more realistic representation of single-orientation ground motion than the median or maximum spectra over all orientations that are currently used.
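The orientation-independent pulse classification idea, scanning rotations of the two orthogonal components and matching wavelets against each rotated trace, can be sketched as follows. The wavelet family (a simple Ricker wavelet), the widths, and the energy threshold here are illustrative assumptions, not the dissertation's calibrated algorithm.

```python
import numpy as np

def ricker(n, a):
    """Ricker (Mexican-hat) wavelet of scale a on an n-sample grid; a
    stand-in for the wavelets used in the actual classification."""
    t = np.arange(n) - (n - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def pulse_indicator(v1, v2, angles=np.linspace(0, np.pi, 90, endpoint=False),
                    widths=(5, 10, 20, 40), threshold=0.6):
    """Rotate the two-component record through many orientations,
    correlate each rotated trace with wavelets of several widths, and
    flag the record as pulse-like if the best wavelet captures more
    than `threshold` of the trace energy (threshold is illustrative)."""
    best = 0.0
    for ang in angles:
        v = v1 * np.cos(ang) + v2 * np.sin(ang)
        energy = np.dot(v, v)
        if energy == 0:
            continue
        for w in widths:
            wav = ricker(len(v), w)
            wav /= np.linalg.norm(wav)
            coef = np.dot(v, wav)  # projection onto unit-norm wavelet
            best = max(best, coef ** 2 / energy)
    return best, best > threshold

# Synthetic example: one velocity pulse oriented 30 degrees from component 1
pulse = ricker(200, 15.0)
v1, v2 = pulse * np.cos(np.pi / 6), pulse * np.sin(np.pi / 6)
score, is_pulse = pulse_indicator(v1, v2)
```

By Cauchy-Schwarz the score lies in (0, 1], so the threshold compares like with like across records of different amplitude, which is the point of normalizing by trace energy.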
Special Collections
Book
1 online resource.
Life safety and collapse prevention have always been primary goals of earthquake engineering. Although the collapse risk of structures has not been explicitly quantified until recently, interest in doing so has risen significantly. A primary reason for this is the advent of performance-based earthquake engineering, which considers uncertainties in the seismic hazard and structural response and seeks to engineer structures that will achieve a desired level of performance in terms of expected monetary losses, downtime and casualties. Collapse risk assessment of a structure involves combining information about the seismic hazard at the site with the behavior of the structure. Typically an intensity measure (IM) is used to describe the level of ground motion shaking and quantify the seismic hazard. The behavior of the structure is then characterized through nonlinear response history analysis by subjecting the structure to many different ground motions at various intensity levels. A collapse fragility curve, which describes how the probability of collapse increases as a function of the ground motion intensity, is constructed based on analysis results and combined with the seismic hazard curve to compute the mean annual frequency of collapse. 
This dissertation focuses on evaluating the effects of IM selection and computational approach on the computed collapse risk, and contributions are made in the following areas: (1) quantifying the uncertainty in the collapse risk estimate due to the number of ground motions used in structural analysis; (2) characterizing the ability of different IMs to efficiently predict structural collapse and provide reliable estimates of the collapse risk; (3) explaining why certain IMs perform better than others with respect to the previous point; and (4) because the spectral acceleration averaged over a period range has been identified as a promising IM for collapse risk assessment, providing recommendations on the period ranges that maximize the efficiency of this IM as a function of structural properties.
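The combination of a collapse fragility curve with a hazard curve described above can be sketched numerically: the mean annual frequency of collapse is the fragility integrated against the magnitude of the hazard-curve slope. The power-law hazard curve and the lognormal fragility parameters below are hypothetical.

```python
import numpy as np
from math import erf, log, sqrt

def lognormal_cdf(x, median, beta):
    """Collapse fragility P(collapse | IM = x): lognormal CDF with the
    given median and dispersion beta (illustrative values below)."""
    return 0.5 * (1.0 + erf(log(x / median) / (beta * sqrt(2.0))))

def mean_annual_freq_collapse(im, hazard, median, beta):
    """Combine fragility and hazard curves as described above:
    lambda_c = integral of P(C | im) * |d(hazard)/d(im)| d(im)."""
    frag = np.array([lognormal_cdf(x, median, beta) for x in im])
    dlam = -np.gradient(hazard, im)  # hazard curve decreases with IM
    integrand = frag * dlam
    # trapezoidal rule
    return float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(im)) / 2.0)

# Hypothetical power-law hazard curve and fragility parameters
im = np.linspace(0.01, 3.0, 600)
hazard = 1e-4 * im ** -2.0  # mean annual rate of exceeding each Sa level
lam_c = mean_annual_freq_collapse(im, hazard, median=1.0, beta=0.4)
```

For this power-law hazard and lognormal fragility the integral also has a closed form, which makes the construction a convenient check on the numerical machinery before applying it to real hazard curves.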
Special Collections
Book
1 online resource.
Fracture in steel structures represents a critical limit state in evaluating the safety and resiliency of civil infrastructure during earthquakes. This importance was demonstrated by the widespread fractures observed in older steel connections during the 1994 Northridge Earthquake, and in modern connections during the 2011 Christchurch Earthquake. The application of traditional crack-tip fracture mechanics to structural design provisions has successfully delayed the onset of Northridge-type brittle fracture. However, the extreme strain capacity in modern ductile connections increases the relevance of ductile fracture. Recent developments in 'local' fracture models have proven successful at predicting ductile fracture under many conditions. However, the application of these models has been limited by their narrow scope and the difficulty of evaluating the necessary continuum parameters. The current objective in the structural engineering community of replacing full-scale experiments with advanced finite element simulations requires accurate models and calibration techniques to evaluate cyclic plasticity and fracture predictions. Motivated by the above requirements, the objectives of the present study are to (1) further the understanding of the ductile fracture mechanism for all stress states, (2) develop robust methods for the calibration of constitutive parameters and local fracture models in highly plastic materials, and (3) develop a new damage-based model to predict ductile fracture under all relevant structural conditions (especially those with low stress triaxiality). These objectives are accomplished through an extensive experimental program, including 48 monotonic and cyclic specimens in geometries designed to effectively interrogate the fracture criteria. A total of six specimen designs are tested, including three original designs developed for the current study.
Complementary finite element analyses are used to evaluate the local fracture criteria, and micrographic examination and void cell simulations provide insight into the fracture mechanism at varying stress states. The data from these experiments and the derived fracture model demonstrate the importance of the deviatoric stress state, in addition to the hydrostatic pressure, in the fracture ductility of steel. Specifically, material in a plane strain condition is found to exhibit about 50% more fracture ductility than material in an axisymmetric stress condition. Through meta-analysis of test data from this and previous studies, ductile fracture is found to be prohibited under negative (compressive) hydrostatic pressure.
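The plane-strain versus axisymmetric distinction drawn above is conventionally expressed through two standard stress-state measures, the triaxiality (hydrostatic-to-deviatoric ratio) and the Lode parameter; a small sketch with arbitrary example stresses:

```python
import numpy as np

def triaxiality(stress):
    """Stress triaxiality T = sigma_mean / sigma_vonMises, the ratio of
    hydrostatic to deviatoric stress referenced in the abstract."""
    s = np.asarray(stress, dtype=float)
    mean = np.trace(s) / 3.0
    dev = s - mean * np.eye(3)  # deviatoric part
    von_mises = np.sqrt(1.5 * np.sum(dev * dev))
    return mean / von_mises

def lode_parameter(principal):
    """Lode parameter L = (2*s2 - s1 - s3)/(s1 - s3): L = -1 for an
    axisymmetric tension state, L = 0 for plane-strain (generalized
    shear), the two states contrasted in the abstract."""
    s1, s2, s3 = sorted(principal, reverse=True)
    return (2.0 * s2 - s1 - s3) / (s1 - s3)

uniaxial = np.diag([100.0, 0.0, 0.0])  # axisymmetric: T = 1/3, L = -1
plane_strain = [100.0, 50.0, 0.0]      # middle stress midway: L = 0
```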
Special Collections
Book
1 online resource.
Performance-based earthquake engineering (PBEE) quantifies the seismic hazard, predicts the structural response, and estimates the damage to building elements, in order to assess the resulting losses in terms of dollars, downtime, and deaths. This dissertation focuses on the ground motion selection that connects seismic hazard and structural response, the first two elements of PBEE, to ensure that the ground motion selection used to obtain structural response results is consistent with probabilistic seismic hazard analysis (PSHA). Structure- and site-specific ground motion selection typically requires information regarding the system characteristics of the structure (often through a structural model) and the seismic hazard of the site (often through characterization of seismic sources, their occurrence frequencies, and their proximity to the site). As the ground motion intensity level changes, the target distribution of important ground motion parameters (e.g., magnitude and distance) also changes. With the quantification of contributing ground motion parameters at a specific spectral acceleration (Sa) level, a target response spectrum can be computed using one or multiple ground motion prediction models (GMPMs, previously known as attenuation relations). Ground motions are selected from a ground motion database, and their response spectra are scaled to match the target response spectrum. These ground motions are then used as seismic inputs to structural models for nonlinear dynamic analysis, to obtain the structural response under such seismic excitations. This procedure to estimate structural response at a specific intensity level is termed an intensity-based assessment.
When this procedure is repeated at different intensity levels to cover the frequent to rare levels of ground motion (expressed in terms of Sa), a risk-based assessment can be performed by integrating the structural response results at each intensity level with their corresponding seismic hazard occurrence (through the seismic hazard curve). This dissertation proposes a more rigorous ground motion selection methodology which will carefully examine the aleatory uncertainties from ground motion parameters, incorporate the epistemic uncertainties from multiple GMPMs, make adaptive changes to ground motions at various intensity levels, and use the Conditional Spectrum (CS) as the new target spectrum. The CS estimates the distribution (with mean and standard deviation) of the response spectrum, conditioned on the occurrence of a target Sa value at the period of interest. By utilizing the correlation of Sa values across periods, the CS removes the conservatism from the Uniform Hazard Spectrum (which assumes equal probabilities of exceedance of Sa at all periods) when used as a target for ground motion selection, and more realistically captures the Sa distributions away from the conditioning period. The variability of the CS can be important in structural response estimation and collapse prediction. To account for the spectral variability, aleatory and epistemic uncertainties can be incorporated to compute a CS that is fully consistent with the PSHA calculations upon which it is based. Furthermore, the CS is computed based on a specified conditioning period, whereas structures under consideration may be sensitive to multiple periods of excitation. Questions remain regarding the appropriate choice of conditioning period when utilizing the CS as the target spectrum. 
To advance the computation and the use of the CS in ground motion selection, contributions have been made in the following areas: The computation of the CS has been refined by incorporating multiple causal earthquakes and GMPMs. Probabilistic seismic hazard deaggregation of GMPMs provides the essential input for such refined CS computation that maintains the rigor of PSHA. It is shown that when utilizing the CS as the target spectrum, risk-based assessments are relatively insensitive to the choice of conditioning period when ground motions are carefully selected to ensure hazard consistency. Depending on the conditioning period, the structural analysis objective, and the target response spectrum, conclusions regarding appropriate procedures for selecting ground motions may differ.
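The Conditional Spectrum calculation described above can be sketched directly from its definition: the conditional mean at each period shifts by the inter-period correlation times the conditioning epsilon, and the conditional standard deviation shrinks by sqrt(1 - rho^2), vanishing at the conditioning period. The GMPM medians, dispersions, and the correlation model below are illustrative placeholders, not the models used in the dissertation.

```python
import numpy as np

def conditional_spectrum(periods, mu_ln, sigma_ln, t_star, eps_star, corr):
    """Conditional Spectrum: mean and standard deviation of ln Sa at each
    period, conditioned on ln Sa(t_star) being eps_star standard
    deviations above its GMPM mean.
    mu_ln, sigma_ln: unconditional GMPM ln-mean and ln-std arrays;
    corr(t1, t2): inter-period Sa correlation model."""
    rho = np.array([corr(t, t_star) for t in periods])
    cs_mu = mu_ln + rho * eps_star * sigma_ln
    cs_sigma = sigma_ln * np.sqrt(1.0 - rho ** 2)
    return cs_mu, cs_sigma

# Illustrative correlation decaying with log-period separation (a stand-in
# for published Sa correlation models, not the one used in the dissertation)
corr = lambda t1, t2: np.exp(-abs(np.log(t1 / t2)))

periods = np.array([0.1, 0.5, 1.0, 2.0])
mu = np.log(np.array([0.8, 0.5, 0.3, 0.15]))  # hypothetical GMPM medians (g)
sig = np.array([0.6, 0.6, 0.65, 0.7])
cs_mu, cs_sig = conditional_spectrum(periods, mu, sig,
                                     t_star=1.0, eps_star=2.0, corr=corr)
```

The shrinking standard deviation away from the conditioning period is exactly the spectral variability the text notes can matter for collapse prediction: ignoring it (as a mean-only target does) understates record-to-record dispersion.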
Special Collections
Book
1 online resource.
Biomass burning is the largest source of anthropogenic aerosols in the Southern Hemisphere. In the Amazon Basin, burning is used to clear forests, remove crop residue, and mobilize nutrients. Over the last decade, trends in biomass burning over forest and savanna/agricultural lands in the Amazon have changed dramatically. We find that between the early 2000s and the late 2000s, the ratio of forest to savanna/agricultural fires more than halved over South America, in turn changing the optical properties of aerosols in the region. This change from forest to savanna burning is attributed in part to better forest fire management, changing agricultural practices along the Amazon frontier, and reduced deforestation rates. Interannual precipitation variability over forest and savanna lands is also shown to play an important role. Biomass burning aerosols over the Amazon have a substantial effect on cloud properties and the regional radiative balance. Remote sensing observations of aerosols and clouds over Brazil illustrate that meteorological variability and aerosol-cloud overlap, ignored in previous studies, must be accounted for to correctly determine aerosol-cloud interactions from satellite observations. When accounting for these confounding variables, we find that microphysical aerosol effects, which serve to increase cloud cover and optical thickness, dominate for low levels of aerosol loading (aerosol optical depth (AOD) < 0.3-0.5), whereas radiative effects, which serve to decrease cloud cover and optical thickness, dominate for higher levels of aerosol loading (AOD > 0.3-0.5). We find a similar result using high-resolution nested model simulations over the Amazon Basin, which include physical representations of direct, indirect, semi-direct, and cloud absorption effects. Simulations including and excluding biomass burning emissions are used to establish causation of the remotely sensed correlations. 
A two-regime relationship, defined by dominance of microphysical aerosol effects at low AODs and dominance of radiative effects at high AODs, is modeled for a variety of cloud variables including cloud optical thickness, cloud liquid droplet number, cloud fraction, and precipitation. These competing effects also exhibit a strong diurnal signal -- microphysical effects dominate in the early morning whereas radiative effects dominate in the late afternoon and night. By finding consistent relationships between remotely sensed observations and modeling results, we conclude that remotely sensed correlations between aerosols and clouds are not largely dominated by retrieval artifacts such as the hygroscopic growth of aerosol particles near clouds, brightening of aerosols near clouds, darkening of clouds below absorbing aerosols, and cloud contamination of aerosol retrievals over the Amazon, and that the complex aerosol-cloud relationships determined in this and previous studies over the Amazon can be attributed to genuine physical interactions between aerosols and clouds. In the Appendix, the same 3-D modeling tools used in the Amazon biomass burning study are applied to assess the health effect from the Fukushima nuclear disaster on March 11th, 2011. Radioactive emissions for the month following the accident are determined from worldwide observations by the Comprehensive Nuclear-Test-Ban Treaty Organization. Modeled worldwide airborne concentrations are used to determine inhalation and external atmospheric exposure, modeled deposition rates are used to determine external ground-level exposure, and ingestion exposure from contaminated food and water is extrapolated from previous Chernobyl studies all assuming a linear no-threshold model of human exposure. We estimate an additional 280 (30--2400) cancer-related mortalities and 390 (50--3800) cancer-related morbidities incorporating uncertainties associated with the exposure-dose and dose-response models used in the study. 
A hypothetical accident at the Diablo Canyon Power Plant in California, USA, with identical emissions to Fukushima, is studied to analyze the influence of location and seasonality on the impact of a nuclear accident. Owing to differing meteorological conditions, this hypothetical accident may cause up to ~45% more mortalities than Fukushima, despite a lower local population density.
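The linear no-threshold (LNT) model used above for the health-effect estimates scales expected cancer outcomes linearly with collective dose. Both numbers in this sketch, the risk coefficient and the collective dose, are hypothetical placeholders chosen only to illustrate the arithmetic; the dissertation propagates uncertainty through both the exposure-dose and dose-response models.

```python
def lnt_mortalities(collective_dose_person_sv, risk_per_sv=0.05):
    """Linear no-threshold estimate: expected cancer mortalities are
    proportional to collective dose (person-Sv). The risk coefficient
    is a hypothetical placeholder, not the study's calibrated value."""
    return collective_dose_person_sv * risk_per_sv

# e.g. a hypothetical collective dose of 5600 person-Sv at 0.05 deaths/Sv
deaths = lnt_mortalities(5600.0)
```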
Special Collections
Book
1 online resource.
We perform a study of residence time and circulation on the tropical fringing reef on the north shore of Moorea, French Polynesia. This study was motivated by the dependence of many important biological factors on residence time, notably phytoplankton biomass, nutrient availability, and larval recruitment. Three features common to fringing reef systems were identified as important controls on residence time: 1) the strength of wave-driven flow, 2) the dynamics of a jet exiting the reef pass, and 3) thermal buoyancy-driven exchange. Through field observations we determine that wave-driven flow is responsible for the majority of the volume flow through the system. Our analysis of the field observations shows that water exiting through the reef passes was often re-entrained by the wave-driven flow; this provides an important retention mechanism for reef water. The amount of water retained by a wide reef pass was investigated with field measurements and a simplified numerical model of the field site. We find that alongshore flow, jet strength, jet buoyancy, and jet-to-reef area ratios are all important factors influencing retention. In normal winter field conditions, the amount of water re-entrained ranged from 20 to 50 percent of exiting water. Additionally, we find that the exchange in the back bay of the system is primarily determined by variations in depth that create horizontal thermal, and therefore buoyancy, gradients. The horizontal buoyancy gradients are an important mechanism for exchange in parts of the reef less affected by the wave-driven circulation. The difference in heating between the reef and the ocean also maintains a stratified exchange flow in the pass; the dynamics that determine the interface and mixing at the jet are controlled by this thermal stratification.
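The retention mechanism described above can be folded into a bulk residence-time estimate: if a fraction of the exiting water is re-entrained by the wave-driven flow, the net flushing rate drops accordingly. The reef volume and flow rate below are hypothetical, and this simple mass-balance form is an illustration, not the dissertation's method.

```python
def residence_time_hours(volume_m3, outflow_m3s, reentrained_frac):
    """Bulk residence time V / Q_net for a reef where a fraction of the
    water exiting the pass is re-entrained (the study reports 20-50%
    re-entrainment in normal winter conditions)."""
    q_net = outflow_m3s * (1.0 - reentrained_frac)  # net export rate
    return volume_m3 / q_net / 3600.0

# Hypothetical reef volume and outflow, at the two ends of the reported range
low = residence_time_hours(2e6, 100.0, 0.2)
high = residence_time_hours(2e6, 100.0, 0.5)
```

Even at fixed volume and outflow, moving re-entrainment from 20 to 50 percent lengthens the bulk residence time by a factor of 1.6, which is why retention matters for the biological factors listed above.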
Special Collections
Book
1 online resource.
Polybrominated diphenyl ethers (PBDEs) are widely used as flame retardants and are receiving increasing attention as persistent organic pollutants (POPs) because of their ubiquitous presence and persistence in the environment, bioaccumulation, and toxicity. Nanoscale zerovalent iron (nZVI) is a strong reducing agent for an array of organic contaminants. This research focused on PBDE reaction kinetics, pathways, and mechanisms with several nZVI-relevant remediation materials that may safely mitigate PBDEs. One objective of this research was to synthesize or develop nZVI remediation materials, including nZVI, palladized bimetallic nanoparticles (nZVI/Pd), and nZVI/Pd impregnated activated carbon (nZVI/Pd-AC). A second objective was to provide a more in-depth understanding and evaluation of the reaction kinetics, pathways, and mechanisms of PBDEs with those materials. To realize these goals, this research characterized the materials synthesized, analyzed the effectiveness of each material for PBDE debromination, and evaluated the effects of catalyst and particle properties, as well as the reaction preferences and mechanisms involved. nZVI was synthesized and characterized to comprehensively assess the degradation rates, preferences, and mechanisms for reaction with PBDEs. nZVI debrominated the selected PBDEs into lower-brominated compounds and diphenyl ether, a completely debrominated form of PBDEs. The effectiveness of nZVI towards debromination increased with increasing bromine substituents in PBDEs. To assess the degradation pathway, the reaction of 2,3,4-tribromodiphenyl ether (BDE 21) was investigated more thoroughly, and a preferential susceptibility of the meta-bromine to attack by nZVI was observed. Stepwise debromination from n-bromo- to (n-1)-bromodiphenyl ether was observed as the dominant reaction process, although simultaneous multistep debromination was likely for di-BDEs that have two bromines adjacent to each other on the same phenyl ring.
The heat of formation (Hf) and the energy of the lowest unoccupied molecular orbital (ELUMO) are useful descriptors of relative reaction rates among PBDE homologue groups. A good correlation between PBDE reactivity and the respective ELUMO indicated that the main debromination mechanism by ZVI is direct electron transfer. The effects of particle properties and the catalyst were studied with two commercially available nZVI slurries, N25 and N25S (with an organic stabilizer), and palladium (Pd). The decrease in the activity of laboratory-synthesized nZVI was likely a result of the drying and stabilization processes. The organic stabilizer polyacrylic acid on the commercial nZVI slowed PBDE reduction, probably because it hinders sorption and surface reaction. Palladization of nZVI promoted reaction kinetics, with an optimum Pd loading at 0.3 Pd/Fe wt%, and changed the reaction preference to para-bromines, resulting in PBDEs with lower estrogenic potencies. A wide range of environmentally abundant PBDEs were debrominated to DE within one week by N25 and nZVI/Pd, with nZVI/Pd reacting more completely and effectively. Stepwise major PBDE debromination pathways by unamended and palladized Fe0 were compared. In addition to galvanic couple formation between Pd and iron, Pd was found to induce a greater role for H-atom transfer. Moreover, steric hindrance and rapid sequential debromination of adjacent bromines play an important role in the pathways for palladized nZVI, indicating the importance of surface precursor-complex formation. nZVI/Pd-AC particles were synthesized and characterized to evaluate their effectiveness in PBDE debromination. Difficulty with in-situ synthesis of a significant fraction of zero-valent iron within the microporous material was demonstrated. 
X-ray fluorescence mapping of nZVI/Pd-AC showed that Pd deposits mainly on the outer part of the particles, while Fe was present throughout the activated carbon particles. While BDE 21 sorbed onto the activated carbon composites quickly, debromination was slower than reaction with freely dispersed nZVI/Pd. According to the distribution of reaction intermediates, BDE 21 reacted on both iron and palladium surfaces. The results demonstrated that activated carbon retards the reaction, owing to the heterogeneous distribution of nZVI and Pd on the AC and/or immobilization of hydrophobic organic contaminants at the sorption sites. Overall, the results of this research suggest that nZVI and nZVI/Pd are feasible PBDE remediation materials that can fully debrominate PBDEs. Increasing the surface activity of nZVI is essential for effective debromination. Palladium promotes debromination kinetics and reduces the toxicity of by-products during debromination. Activated carbon (AC) can effectively reduce PBDE concentrations in the liquid phase through strong sorption, though it retards reaction by reducing the availability of PBDEs to the nZVI particles impregnated in the AC.
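The degradation rates compared across materials in this abstract are typically reported as pseudo-first-order rate constants. As an illustrative sketch only (the data below are hypothetical, not from the dissertation), such a constant can be fit from batch-reactor concentration decay:

```python
import numpy as np

def pseudo_first_order_k(times_h, concentrations):
    """Fit ln(C/C0) = -k_obs * t by least squares through the origin
    and return the observed rate constant k_obs (1/h)."""
    t = np.asarray(times_h, dtype=float)
    c = np.asarray(concentrations, dtype=float)
    y = np.log(c / c[0])
    # minimizing sum((y + k*t)^2) gives k_obs = -sum(t*y) / sum(t*t)
    return -np.sum(t * y) / np.sum(t * t)

# Hypothetical normalized BDE disappearance data in a batch reactor
t = [0, 1, 2, 4, 8]                 # hours
c = [1.0, 0.61, 0.37, 0.14, 0.02]   # C/C0
k = pseudo_first_order_k(t, c)      # 1/h
half_life = np.log(2) / k           # hours
```

Comparing such k values across homologue groups is one way the relative reactivity trends (e.g., faster rates for more highly brominated congeners) described above could be quantified.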
Special Collections
Book
1 online resource.
The private sector -- irrespective of industry -- has begun to increasingly embrace the (environmental) sustainability "megatrend". Corporate leaders agree that sustainable development should be explored at the strategic level; however, firms continue to struggle with rationally operationalizing sustainability. Part of the challenge of determining the shape and extent to which concepts of sustainability are incorporated into operational decisions is the form and interpretation of environmental information. This includes two key components: i) information which links operational decisions to environmental impact, including how impact may affect the firm; and ii) what value is placed on prospects that incorporate such information. In this dissertation, I develop a firm-level ecosystem service valuation framework designed to address these environmental information and representation gaps. Adopting a "systems" approach from industrial ecology, I link firm operational decisions to their impact on owned ecosystems by representing changes in natural capital within the strict requirements of international accounting norms. I develop the valuation framework through four distinct aspects. In the first, I identify and explain how the developed framework addresses the needs of firms to include ecosystem services within operational decisions. This lays the groundwork for more detailed investigations in the other three aspects. In the second aspect, I use ecological models in two cases to demonstrate the ability of the framework to provide fundamental knowledge of a given ecosystem service and its operational limit state(s). The first case -- total suspended solids removal from stormwater runoff via soil -- demonstrates the level of ecosystem service characterization required to enable a market-based valuation. 
The second case -- phosphorus removal from wastewater effluent via estuary -- provides a much richer example by explicitly demonstrating the inclusion of limit states as part of a comprehensive understanding of the performance of an ecosystem service under various operational loadings. I next turn to the third aspect, which provides a method for the application of a market-based value, using the welfare economics concept of functional substitutability. I conclude with a presentation of my fourth aspect, where I compare the unit value of phosphorus removal yielded by traditional accounting and that of thermoeconomics, in order to explore a more rational determination of engineered service value used within functional substitutability for ecosystem service valuation.
Special Collections
Book
1 online resource.
Recent years have seen tremendous growth in research and development in science and technology, and an increasing emphasis on obtaining Intellectual Property (IP) protection for one's innovations. Information pertaining to IP for science and technology is siloed into many diverse sources and consists of laws, regulations, patents, court litigations, scientific publications, and more. Although a great deal of legal and scientific information is now available online, the scattered distribution of the information, combined with its enormous size and complexity, makes any attempt to gather relevant IP-related information on a specific technology a daunting task. In this thesis, we develop a knowledge-based software framework to facilitate retrieval of patents and related information across multiple diverse and uncoordinated information sources in the US patent system. The document corpus covers issued US patents, court litigations, scientific publications, and patent file wrappers in the biomedical technology domain. A document repository is populated with issued US patents, court cases, scientific publications, and file wrappers in XML format. Parsers are developed to automatically download documents from the information sources, extract metadata and textual content from the downloaded documents, and populate the XML repository. A text index is built over the repository using Apache Lucene to facilitate search and retrieval of documents. Based on the document repository, the underlying methodology to search across multiple information sources in the patent system is discussed. The methodology is divided into two major parts. First, we develop a knowledge-based query expansion methodology to tackle domain terminological inconsistencies in the documents. Relevant knowledge is retrieved from external sources such as domain ontologies. 
Since our goal is to retrieve a collection of relevant documents across multiple sources, we develop a patent system ontology to provide interoperability between the different types of documents and to facilitate information integration. We discuss the Information Retrieval (IR) framework which combines the knowledge-based query expansion methodology with the patent system ontology to provide a multi-domain search methodology. A visualization tool based on term co-occurrence is developed that can be used to browse the document repository through class hierarchies of domain ontologies. The knowledge-based query expansion methodology is evaluated through formal measures such as precision and recall. A simple term-based search is used as a baseline reference for comparison. Additionally, the results from related works are also used for comparison. A series of common questions asked during patent prior art searches and infringement analysis are generated to evaluate the patent system ontology. A summary of the results and analysis is provided.
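The formal evaluation measures named above, precision and recall, compare a retrieved document list against a ground-truth relevant set. A minimal sketch (the document identifiers below are hypothetical) is:

```python
def precision_recall(retrieved, relevant):
    """Precision = fraction of retrieved documents that are relevant;
    recall = fraction of relevant documents that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# Hypothetical result list for one prior-art query
p, r = precision_recall(
    retrieved=["US7601", "US8123", "US9050", "US6002"],
    relevant=["US7601", "US9050", "US5500"],
)
```

Query expansion typically raises recall (more relevant documents retrieved) at some cost in precision, which is why both measures are reported against the term-based baseline.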
Special Collections
Book
1 online resource.
Atmospheric models that solve chemistry in three dimensions (3-D) generally do not explicitly model organic chemistry due to computer time constraints. Organic species are grouped together based on their structures, which can result in inaccuracies for gas- and aqueous-phase chemistry, gas-to-aqueous transfer, and secondary organic aerosol (SOA) formation because reactivity, diffusion and vapor pressures can differ significantly between species in the same group. Here, we develop an atmospheric box model that uses a near-explicit chemical mechanism, the Master Chemical Mechanism (MCM) version 3.1 (MCM 2002), and an extensive aqueous-phase chemical mechanism, the Chemical Aqueous Phase Reactive Mechanism (CAPRAM 3.0i), in a sparse-matrix Gear-based solver, SMVGEAR II, to solve gas-phase and aqueous-phase tropospheric organic chemistry accurately and quickly enough for 3-D. We first examine the speed and accuracy of solving the MCM v. 3.1 with SMVGEAR II. The MCM has over 13,500 organic reactions and over 4,600 species. SMVGEAR II is a sparse-matrix vectorized Gear solver that reduces computation time significantly on scalar and vector machines, which is necessary for solving such a large mechanism. Although we use a box model for this study, we determine and demonstrate in a separate study that the speed of the MCM with SMVGEAR II allows the MCM to be used in three-dimensional models. We validate the MCM by comparing model results with smog chamber data for four organic species -- two alkenes and two aromatics. The model predictions match the smog chamber data very well for all cases except for toluene, where further development of the mechanism is needed. The steps for incorporating the aqueous-phase chemical mechanism and the gas-to-aqueous transfer method into SMVGEAR II are discussed in detail. CAPRAM 3.0i treats aqueous chemistry among 390 species and 829 reactions (including 51 gas-to-aqueous phase reactions). 
We couple gas- and aqueous-phase species through time-dependent dissolutional growth and dissociation equations. This method is validated with a smaller mechanism against results from a previous model intercomparison. When the smaller mechanism is compared with the full MCM-CAPRAM mechanism, some concentrations are still similar but others differ due to the greater detail in chemistry. We also expand the mechanism to include gas-aqueous transfer of two acids, glycolic acid and glyoxylic acid, and modify the glyoxal Henry's law constant from recent measurements. Glyoxal is important for SOA modeling. The average glyoxal partitioning in the cloud changes from 67% aqueous-phase to 87% aqueous-phase with the modifications. The addition of gas-aqueous transfer reactions increases the average gas-phase percentage of glycolic acid to 19% and of glyoxylic acid to 16%. This gas-phase and aqueous-phase chemistry module is a useful tool for studying detailed air pollution and SOA formation in clear-sky, cloudy, or foggy conditions. The increased use of ethanol in transportation fuels warrants an investigation of its consequences. An important component of such an investigation is the temperature dependence of ethanol and gasoline exhaust chemistry. We use the model with species-resolved tailpipe emissions data for E85 (15% gasoline, 85% ethanol fuel blend) and gasoline vehicles to compare the impact of each on nitrogen oxides, organic gases, and ozone as a function of ambient temperature and background concentrations, with and without a fog, using Los Angeles in 2020 as a base case. We use two different emissions sets -- one a compilation of exhaust and evaporative data taken near 24 ºC and the other from exhaust data taken at -7 ºC -- to determine how atmospheric chemistry and emissions are affected by temperature. We include diurnal effects by examining two full-day scenarios. 
We find that, accounting for chemistry and dilution alone without a fog, the average ozone concentrations through the range of temperatures tested are higher with E85 than with gasoline by ~7 parts per billion (ppb) at higher temperatures (summer conditions) to ~39 ppb at low temperatures and low sunlight (winter conditions) for an area with a high nitrogen oxide (NOx) to non-methane organic gas (NMOG) ratio. The results suggest that E85's effect on health through ozone formation becomes increasingly significant relative to gasoline at colder temperatures due to the change in exhaust emission composition at lower temperatures. Although ozone concentrations are not usually a concern for cold climates, the increase in ozone concentrations with E85 may be significant enough to exceed 35 ppb, the threshold mixing ratio above which short-term health effects occur. The increased risk of mortality due to short-term exposure to ozone is estimated to be 0.0004 per ppb above the threshold. In some areas, ozone concentrations may even exceed the 8-hr National Ambient Air Quality Standard (NAAQS) for ozone (75 ppb). Acetaldehyde and formaldehyde concentrations are also much higher with E85 at cold temperatures, which is a concern because both are carcinogens. These results could have implications for wintertime use of E85. Peroxy acetyl nitrate (PAN), another air pollutant of concern, increases with E85 by 0.3 to 8 ppbv. The sensitivity of the results to box size, initial background concentrations, background emissions, and water vapor is also examined. We continue this study to investigate the air quality impacts when a morning fog is present under summer and winter conditions. We find that E85 slightly increases ozone compared with gasoline in the presence or absence of a fog under summer conditions but increases ozone significantly relative to gasoline during winter conditions, although winter ozone is always lower than summer ozone. 
A new finding here is that a fog during summer may increase ozone after the fog disappears, due to chemistry alone. Temperatures are high enough in the summer to increase peroxy radical (RO2) production with the morning fog, which leads to higher ozone after fog dissipation. A fog on a winter day decreases ozone after the fog. Within a fog, ozone is always lower than if no fog occurs. The sensitivity of the results to fog parameters like droplet size, liquid water content, fog duration, and photolysis are investigated and discussed. The results suggest that E85 and gasoline both enhance pollution, with E85 enhancing pollution significantly more at low temperatures.
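The short-term health-risk figure cited in this record (an increased mortality risk of 0.0004 per ppb of ozone above the 35 ppb threshold) is a simple linear dose-response; a minimal sketch of that arithmetic (the 69 ppb example value is illustrative, not from the dissertation) is:

```python
def excess_mortality_risk(ozone_ppb, threshold_ppb=35.0, risk_per_ppb=4e-4):
    """Linear excess short-term mortality risk above the ozone threshold,
    using the 0.0004-per-ppb estimate cited in the abstract."""
    return max(0.0, ozone_ppb - threshold_ppb) * risk_per_ppb

# Illustrative: a 69 ppb episode sits 34 ppb above the threshold
risk = excess_mortality_risk(69.0)
```

Under this linear model, the ~39 ppb wintertime ozone increase attributed to E85 matters only to the extent that it pushes total concentrations past the 35 ppb threshold, which is why the abstract emphasizes threshold exceedance rather than the increment alone.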
Special Collections
Book
1 online resource.
California is increasing the percentage of its electrical energy supply from renewable energy resources. The motivation to shift from fossil fuel fired electric power plants to renewables is to mitigate the health and environmental consequences of combusting fossil fuels. The primary challenge to supplying the demand for electricity with renewables is the variable and uncertain generation of electric power from renewable energy resources. This dissertation focuses on the contribution of renewable resources themselves to mitigate their own variability and uncertainty through the synergistic combination of co-located offshore wind and wave energy and the quantification of specific time of day impacts of wind power to the California electric power system. Large untapped renewable energy resources of offshore wind and wave energy exist for California to meet its renewable energy goals, and these resources are quantified as time series of electric power production to further explore their benefits. Existing grid integration methodologies are, for the first time, extended to combined offshore wind and wave energy farms in the U.S. and the benefits of co-locating offshore wind turbines and wave energy devices are quantified. The primary benefits of combining offshore wind and wave energy identified are: (1) a reduction in the hours of no power output and a resulting increase in the capacity value of the combined farms to the electric power system; (2) a reduction in the hourly variability of power output which reduces the operating reserve requirement to manage variable power output from renewables; (3) a reduction in transmission capacity required to interconnect an offshore farm which reduces capital costs and creates a farm with a more consistent power output over a smaller range. 
The variability and uncertainty of onshore wind power are quantified for the California power system when it builds the projected wind capacity to meet its 33% Renewable Portfolio Standard by 2020. The variability and uncertainty are combined with the existing variability and uncertainty in the demand for electric power to identify the net, if any, increase in the system variability and uncertainty that would require additional resources to balance the system given rapid changes (variability) and forecast errors in demand, generation, and transmission (uncertainty). The analysis included a diurnal examination of the variability and uncertainty that more accurately reflects the characteristics of the thermal wind regimes of California. The key results were (1) the California system should see no net increase in variability from that already present in the system; (2) the system will see more, but not greater, variability during the afternoon hours from the pattern of California wind power output; (3) the forecast error will increase for the California system over current forecast errors with the addition of wind power, requiring additional resources like operating reserve to manage the large errors, but (4) the daily cycle of these greater forecast errors mitigates some of the challenges they may present because of the state of the power system and the generators online when these errors occur.
Special Collections
Book
1 online resource.
Coral reefs are among the most biologically diverse and economically important ecosystems on the planet. While there are a number of factors that contribute to a healthy coral reef, turbulent mixing generated over corals has been shown to be important. Mixing over corals controls many biologically important processes such as grazing rates by benthic organisms, mass transfer of dissolved constituents, larval dispersal and waste removal. While elevated levels of turbulence are often found right above coral beds, corals are often located in environments with extreme heating throughout the day, resulting in high levels of stratification which can limit vertical exchange. Motivated by the importance of the dynamically evolving turbulent structure found over coral reefs, I aimed to quantify the level of mixing found over changing coral reef conditions and compare it to mixing found in surface ocean environments. Three field sites were used for this analysis: a deep-water site approximately 1 km off the coast of Eilat, Israel; a fringing coral reef (Eilat, Israel); and a back-reef site (Palau). High-resolution microstructure profiling data in the top 40 m of a deep ocean environment showed the changing dynamics of a surface mixed layer and the layer below during heating, cooling, and windy conditions (Eilat, Israel). While this provided a baseline from which to compare coral mixing rates, it also allowed us to compare the varying parameterizations of mixing efficiency and vertical diffusivity. The single value of mixing efficiency (usually 0.17-0.20) was found to overestimate the mixing efficiency in most of the water column regardless of the mixing efficiency parameterization used. We outlined the difficulties with using the different parameterizations (mixing efficiency and vertical diffusivity) under changing conditions and when caution should be taken. Filtering that decreased amplified Thorpe scales in weakly stratified conditions was applied. 
Additionally, a new averaging method that groups turbulence parameters with similar Thorpe length scales was applied to all data presented. This allows bulk estimates of turbulent Froude and Reynolds numbers over a given mixing region. For this specific data set we found that the Shih et al. [2005] parameterization for mixing efficiency and the Osborn [1980] parameterization for vertical diffusivity were able to calculate the largest number of mixing efficiency estimates while reflecting the changing dynamics of the water column. A six-meter tower supporting six ADVs over a fringing coral reef in Eilat, Israel provided turbulence data for the second field study. Twenty thermistors spaced along the tower provided temperature and overturning length scales. This field work was unique in that the ADVs were tethered to shore so that data could be viewed in real time and there were no data storage or battery limitations. In addition, fast conductivity and temperature sensors were located on two of the ADVs, allowing direct measurement of buoyancy frequency as well as mixing efficiency. Concentrations of phytoplankton at four corners of a defined control volume were also measured during this study, allowing the coupling of phytoplankton grazing rates and turbulence quantities over the reef. Unfortunately, noise associated with the fast conductivity sensor limited the applicability of the direct measurements of buoyancy flux. However, preliminary data suggest that the Shih et al. [2005] mixing efficiency parameterization needs to be adjusted at high values of the turbulent activity number. This site showed that flows over a coral reef are highly turbulent and that surface stratification events reach down to the corals. Estimates of the production of turbulent kinetic energy calculated from the Reynolds stress and as the sum of the buoyancy flux and dissipation were in agreement only 40% of the time. 
This indicates that advection and transport played a key role in production estimates at this site. With regard to grazing rates, using a more detailed sliced control volume methodology allowed us to put a vertical cap on our flux estimates based on measurements of vertical diffusivity. These biological measurements show that enhanced turbulence near the bed enables high rates of exchange which decrease in the upper part of the water column. The field site in Palau provided a good coral comparison site to the Eilat coral site. Due to its back-reef location, velocities at this site were much smaller; however, coral coverage was much higher (approximately 70% vs. approximately 14% in Eilat) and the corals were much taller (approximately 1 m vs. approximately 0.1 m in Eilat). Despite the very different field site conditions, dissipation (a key factor in calculating mixing efficiency and vertical diffusivity rates) remained high at this site, indicating that coral roughness plays a large role in mixing over corals even when other forcing parameters are decreased. Measurements of turbulence at this site were coupled with measurements of DIC, ALK and pH. The experimental setup at this site is unique in that sixteen tubes were fixed at the four corners of the control volume and extended to a stationary boat. Through the use of continuous pumping, chemical samples were taken continuously throughout the study (approximately 6 days). While turbulence characterization of coral reefs remains the main focus of this work, preliminary results seem to indicate a correlation between DIC and pH measurements and vertical diffusivity. The coupling of biological measurements with turbulence data shows that mixing is a key mechanism for the health and sustainability of coral reef ecosystems.
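The Osborn [1980] parameterization cited in this record estimates vertical eddy diffusivity from the turbulent kinetic energy dissipation rate, the buoyancy frequency, and a mixing efficiency in the canonical 0.17-0.20 range discussed above. A minimal sketch (the input values are illustrative, not measurements from these field sites) is:

```python
def osborn_diffusivity(epsilon, n_squared, gamma=0.2):
    """Osborn (1980) vertical eddy diffusivity: K_rho = gamma * epsilon / N^2.

    epsilon   -- TKE dissipation rate (W/kg)
    n_squared -- buoyancy frequency squared, N^2 (1/s^2)
    gamma     -- mixing efficiency (canonical 0.17-0.20 value)
    """
    return gamma * epsilon / n_squared

# Illustrative values for a turbulent, weakly stratified layer
K = osborn_diffusivity(epsilon=1e-6, n_squared=1e-4)  # m^2/s
```

The sensitivity of K to the assumed constant gamma is exactly why the abstract stresses that a single mixing-efficiency value can overestimate mixing through much of the water column.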
Special Collections
Book
1 online resource.
While the potential of renewable energy resources to supply large portions of the United States energy demand has been demonstrated in resource assessments, the variability and uncertainty in renewable resource availability is anticipated to pose technological challenges to large-scale grid integration. This dissertation focuses on the effects of resource intermittency on renewable portfolio performance, particularly for systems with very high penetrations of renewables, in which today's operational heuristics and rules of thumb no longer apply. We present a renewable portfolio planning tool that designs low cost and low carbon renewable portfolios and utilizes Monte Carlo methods to simulate system operation. The model simulates power output from wind turbines, concentrating solar power plants, rooftop photovoltaics, geothermal plants, hydroelectric plants, and natural gas turbines, while treating resource and demand forecast errors, forced outages, spinning reserves, and a reliability constraint. The model was applied to the California ISO operating area in order to identify a portfolio capable of reliably meeting the 2005-06 demand with an 80% reduction in operational carbon dioxide emissions. The model was also used to investigate several renewable deployment scenarios in order to develop useful parameterizations for portfolio performance as functions of the renewable fleet installed capacities. At low to moderate penetrations, renewable portfolio performance can be predicted by the expected capacity factor. However, at very high penetrations, renewable portfolio performance is depressed by both the need to curtail in hours when renewable power exceeds the demand for electricity and an increasing need for spinning reserves. Complete decarbonization of the closed system under study is found to rely on the deployment of energy storage fleets large enough to decouple in time the availability of renewable power and the demand for electricity. 
Preliminary results from quasi-stochastic portfolio planning simulations suggest that the competitiveness of energy storage will initially be driven by its ability to provide zero-emissions reserves. Furthermore, it is concluded that for fully decarbonized portfolios, future modeling efforts should focus on the appropriate treatment of longer term uncertainty in renewable resource availability and the effects of information-limited dispatch decisions on optimal planning. The modeling work described in this dissertation also suggests that achieving very high penetrations of renewables will rely on: improved conventional fleet and demand-side flexibility; the inclusion of curtailment controls in PV inverters; new market designs that fully capture the values of online reserve capacity and renewable curtailment; significant investments in transmission and distribution infrastructure; and new communications systems between renewable facilities and intermittency-mitigating technologies like energy storage and demand response systems.
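The curtailment effect described in this record, where portfolio performance falls once renewable generation exceeds demand in some hours, reduces to a simple hourly energy balance. A minimal sketch (hourly values are hypothetical, and storage, reserves, and exports are ignored) is:

```python
def curtailment_fraction(renewable_mw, demand_mw):
    """Fraction of available renewable energy curtailed: any hourly
    generation in excess of demand is spilled in this simplified view."""
    curtailed = sum(max(0.0, g - d) for g, d in zip(renewable_mw, demand_mw))
    available = sum(renewable_mw)
    return curtailed / available if available else 0.0

# Hypothetical four-hour window at very high renewable penetration
frac = curtailment_fraction(
    renewable_mw=[900, 1200, 1500, 800],
    demand_mw=[1000, 1000, 1000, 1000],
)
```

This is why, as the abstract notes, capacity factor alone predicts performance only at low to moderate penetrations: at high penetrations the curtailed fraction grows and energy storage becomes necessary to shift the excess in time.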
Special Collections
Book
1 online resource.
Offshore wind power has the potential to provide substantial carbon-free electricity near large urban load centers, yet it represents only about 1.5% of the total installed wind power capacity (about 200 gigawatts) worldwide in late 2011, with nearly all of those installations in Europe. Despite thousands of megawatts of potential in the coastal areas of the US and hundreds of megawatts proposed off the US East Coast, a single offshore turbine has yet to be built. This dissertation analyzes the offshore wind energy resource of 74% of the contiguous US at high temporal and spatial resolution using mesoscale weather modeling, to determine the potential of this renewable energy resource to drastically reduce US carbon emissions. It was found that all of the electricity used in California and all of the electricity used on the East Coast could be generated using offshore winds. In addition, unlike their onshore counterparts, which generally peak at night, offshore winds were found to provide ample resource during daytime hours, when electricity demand is highest.
Special Collections
Book
1 online resource.
Due to the pressing challenges of climate change and energy security, clean energy technologies have been widely regarded as providing important channels to reduce carbon emissions and to alleviate the reliance on fossil fuels. It is imperative to analyze the underlying dynamics and mechanisms of the diffusion of clean energy technologies, to identify key factors influencing the diffusion, and to evaluate the impacts of the diffusion process. This dissertation empirically analyzes the diffusion of wind energy and energy-efficient building technologies, using China and the U.S. as examples. Chapter 1 introduces clean energy technologies as well as the key mechanisms, entities and issues involved in the diffusion of these technologies. Chapter 2 quantifies the effect of technology acquisition mechanisms -- purchasing production licenses from foreign manufacturers, joint design with foreign design firms, joint ventures and domestic R&D -- on wind turbine manufacturers' technology levels (as measured by turbine size, in MW). It also examines the impacts of government policies and manufacturers' business diversification on technology levels. The results from econometric modeling studies indicate that technology acquisition mechanisms are statistically significant factors influencing both technology upgrading and catch-up. In Chapter 3, the learning-by-doing and learning-by-searching rates of wind energy in China are quantified. The two types of learning investigated are associated with about 4% price reduction per doubling of installed capacity, providing an estimate of the evolution of the price of wind power, a technology widely used in other markets, which in China has benefited from technology leapfrogging, established supply chains, and operational experience in other countries. This chapter also identifies that wind turbine manufacturing localization and wind farm economies of scale are significantly associated with reductions in the price of wind power in China. 
Chapter 4 discusses the rebound effects of energy efficiency. A key ongoing debate on energy efficiency concerns the extent of rebound effects: does greater efficiency lead to higher or lower energy use than there would have been without those improvements? The chapter analyzes the rebound effects of energy efficiency in the commercial building sector, building a structural model of a building's decision to adopt an energy-efficient building technology and of its subsequent energy demand. The results show that energy-efficient technologies save energy even after rebound effects. This provides a quantitative argument for governments to promote the diffusion of energy-efficient technologies.
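The roughly 4% price reduction per doubling of installed capacity quantified in Chapter 3 corresponds to a standard one-factor experience (learning) curve. A minimal sketch of that relationship (function name and example values are illustrative, not from the dissertation) is:

```python
import math

def learning_curve_price(p0, cumulative_capacity, base_capacity, learning_rate):
    """One-factor experience curve: price falls by `learning_rate`
    per doubling of cumulative installed capacity.
    price = p0 * (capacity / base_capacity) ** b, with b = log2(1 - LR)."""
    b = math.log2(1.0 - learning_rate)  # experience exponent (negative)
    return p0 * (cumulative_capacity / base_capacity) ** b

# Illustrative: price after three doublings at a 4% learning rate
p = learning_curve_price(p0=1.0, cumulative_capacity=8.0,
                         base_capacity=1.0, learning_rate=0.04)
```

Two-factor variants add a cumulative R&D (learning-by-searching) term with its own exponent, which is the decomposition the chapter estimates.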
Special Collections
Book
1 online resource.
I have developed a novel and broad-ranging approach to the characterization of complex microbial communities by analyzing the abundance and expression of key functional genes. The method is based on a tiling oligonucleotide DNA microarray, implementing an unprecedented number of probes per gene by tiling probe sequences across genes of interest at 1X--2X coverage. This design helps avoid false-positive gene identification in samples of DNA or RNA extracted from complex microbial communities. I have implemented this method using hydrogenase genes to investigate anaerobic microbial communities where H2 is an important intermediate (the Hydrogenase Chip), reductive dehalogenase genes to investigate ecosystems containing organohalide-respiring microorganisms (the Reductive Dehalogenase Chip), and the genome of the colorless sulfur bacterium Thiovulum to study its in-situ gene expression (the Thiovulum Chip). The Hydrogenase Chip revealed key organohalide-respiring and sulfate-reducing microorganisms in laboratory microcosms that simulate the bioremediation of tetrachloroethene and identified Microcoleus chthonoplastes as a key H2-producing microbe in phototrophic microbial mats. The Reductive Dehalogenase Chip revealed population dynamics of the organohalide-respiring strains present in a long-term laboratory microcosm to an unprecedented level of detail, and showed that different Dehalococcoides strains may differ in their sensitivity to hydrogen sulfide. Experiments with the Thiovulum Chip were ultimately unsuccessful, but revealed mRNA degradation that correlated with secondary structure stability in a way that was informative for future environmental transcriptomic experiments. Independent quantitative PCR analysis on selected hydrogenase genes showed that the tiling DNA microarray approach is semiquantitative. 
We also determined that as microbial community complexity increases, specificity must be traded for sensitivity in analyzing data from tiling DNA microarrays. This work on a range of questions in different ecosystems has determined the necessary conditions for the successful implementation of the tiling DNA microarray approach.
Special Collections