
52,870 catalog results

Book
1 online resource.
Turbulent, dispersed multiphase flows are of critical importance in a wide range of application areas. Experimental investigations are indispensable in the study of such flows, as experiments can provide reliable domain-specific knowledge and/or validation for computational fluid dynamics (CFD) tools. However, only a limited set of experimental techniques is currently available for studying particle-laden flows: pointwise measurements provide high temporal resolution but poor spatial coverage, while laser-based techniques can allow for 2D or 3D measurements, but only in geometrically simple flows. Magnetic Resonance Imaging (MRI) is a powerful tool that can provide fully quantitative, 3D experimental data without the need for optical access. Currently, MRI can provide the time-averaged, 3-component velocity and/or scalar concentration fields in turbulent single-phase flows of arbitrary geometric complexity. In recent years, MRI has been applied to the study of single-phase flows across a broad range of problems from the engineering, environmental, and medical arenas. MRI data sets are particularly well suited for validating CFD simulations of complex 3D flows because comprehensive data coverage can be obtained in a relatively short time. The present work describes development, validation, and application of a new diagnostic, wherein MRI is used to obtain the 3D mean volume fraction field for solid microparticles dispersed in a turbulent water flow. The new method is referred to as Magnetic Resonance Particle concentration, or MRP. This technique was designed to maintain the same advantages as existing MRI-based techniques: quantitative data can be obtained in 3D for fully turbulent flow in arbitrarily complex geometries. MRP is based on a linear relationship between the MRI signal decay rate and particle volume fraction (Yablonskiy and Haacke, 1994). The MRP method and underlying physics were validated through several studies, increasing in complexity from a single particle suspended in a gel to a fully turbulent channel flow seeded uniformly with particles. The channel flow case showed that the signal decay rate varied linearly with particle volume fraction, and that the measured proportionality constant was within 5% of the value predicted by the theory of Yablonskiy and Haacke (1994). This good agreement was observed for two fully turbulent Reynolds numbers, 6,300 and 12,200, and over most of the measurement domain. However, the measured proportionality constant was lower than expected in the furthest upstream portion of the channel; several potential reasons for this discrepancy were identified, but none could be proven conclusively at this stage. Following the validation experiments, MRP was applied to three application cases drawn from real-world flows of interest. First, the dispersion of two particle streaks in a model human nasal passage was studied. The results showed that almost all particles reaching the upper portions of the nasal passage (e.g., the olfactory region) entered the nose near the nostril tip, even at high breathing rates where the flow was not laminar. The second case involved MRP concentration measurements for a particle streak in a generic gas turbine blade internal cooling passage. Results in this case provided evidence that small dust-like particles ingested into a cooling passage may behave inertially in the presence of fine flow features, such as the recirculation regions behind ribbed flow turbulators.
In the final case, the performance of a particle separator device proposed by Musgrove et al. (2009) was quantified using both MRP and a sample-based analysis performed outside the MRI environment. The two techniques were in agreement regarding the poor overall effectiveness of the separator, and the 3D MRP data were used to examine the particle transport physics and suggest potential design improvements. Taken together, results from the three test cases showed that MRP can provide quantitative, 3D particle concentration data in application-relevant flows, leading to unique insights that would not be possible with existing measurement techniques.
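As a hedged illustration of the measurement principle described above (the symbols and the form of the proportionality constant are illustrative; the exact prefactor of Yablonskiy and Haacke (1994) is not reproduced here), the linear model underlying MRP can be written as
\[
\Delta R_2^{*} \;=\; R_2^{*}(\phi) - R_2^{*}(0) \;=\; k\,\phi, \qquad k \propto \gamma\, B_0\, \lvert \Delta\chi \rvert,
\]
where \(\phi\) is the local particle volume fraction, \(\gamma\) the gyromagnetic ratio, \(B_0\) the main field strength, and \(\Delta\chi\) the susceptibility difference between particles and fluid. A measured decay-rate map then converts to a concentration map as \(\phi(\mathbf{x}) = \big(R_2^{*}(\mathbf{x}) - R_{2,0}^{*}\big)/k\), with \(k\) taken either from theory or from a calibration flow of known seeding density.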
Book
1 online resource.
The promise of optical antennas is the ability to tame light to behave in ways not achievable using traditional optical components. For example, our results here demonstrate that careful engineering of optical antennas allows the strong, even perfect, absorption of light in ultra-thin geometries, i.e., geometries much thinner than the wavelength of light. Enabled by geometry-sensitive antenna resonances, this absorption behavior can also be realized for a broad selection of colors. A detailed theoretical analysis of the observed perfect absorption phenomenon reveals the role of incoherently interacting degenerate electric and magnetic resonances in overcoming the well-known absorption limit for infinitesimally thin films. With another set of experiments, we show that strongly absorbed optical energy in aluminum nanoantennas can be used to heat them efficiently above their melting temperature and stimulate an explosive exothermic oxidation reaction called the melt-dispersion mechanism. Importantly, we see that engineering the specific geometry of the constituent particles allows an unprecedented control of aluminum ignition, both spectrally and spatially, through the fine tuning of the optical antenna resonances.
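As a hedged sketch of the absorption limit referred to above (a standard symmetric thin-sheet argument, not the dissertation's derivation), consider an infinitesimally thin film with only an electric response, for which the transmission and reflection coefficients obey \(t = 1 + r\). The absorbance is then
\[
A \;=\; 1 - |r|^2 - |t|^2 \;=\; -\,2\,\mathrm{Re}(r) - 2|r|^2 \;\le\; \tfrac{1}{2},
\]
with the maximum \(A = 1/2\) reached at \(r = -1/2\). If the sheet also supports a degenerate magnetic response, the constraint \(t = 1 + r\) no longer holds, so \(r = 0\) and \(t = 0\) can be satisfied simultaneously and complete absorption becomes possible.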
Book
1 online resource.
Reservoir simulation is an important tool for understanding and predicting subsurface flow and reservoir performance. In applications such as production optimization and history matching, thousands of simulation runs may be required. Therefore, proxy methods that can provide approximate solutions in much shorter times can be very useful. Reduced-order modeling (ROM) methods are a particular type of proxy procedure that entail a reduction of the number of unknown variables in the nonlinear equations. This dissertation focuses on two of the most promising proper orthogonal decomposition (POD)-based ROM methods, POD-TPWL and POD-DEIM. A separate (non-ROM) technique to accelerate nonlinear convergence for oil-water problems is presented in the appendix.
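As a minimal, hedged sketch of the proper orthogonal decomposition step common to both methods (illustrative Python with placeholder data; not the dissertation's POD-TPWL or POD-DEIM implementation):

import numpy as np

# Snapshot matrix: each column is a saved full-order reservoir state (e.g., cell
# pressures/saturations) from training runs. Sizes and data here are placeholders.
X = np.random.rand(5000, 200)            # 5000 unknowns, 200 snapshots

X_mean = X.mean(axis=1, keepdims=True)   # center the snapshots
U, s, _ = np.linalg.svd(X - X_mean, full_matrices=False)

energy = np.cumsum(s**2) / np.sum(s**2)  # retained snapshot "energy"
r = int(np.searchsorted(energy, 0.999)) + 1
Phi = U[:, :r]                           # POD basis, r << 5000

# Reduced coordinates of a full-order state x, and its approximate reconstruction.
x = X[:, [0]]
z = Phi.T @ (x - X_mean)
x_approx = X_mean + Phi @ z

Roughly speaking, POD-TPWL then linearizes the reduced equations around saved training states, while POD-DEIM interpolates the nonlinear terms at a small set of selected grid blocks; that is where the bulk of the speedup comes from.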
Book
1 online resource.
The playwright Eugene O'Neill (1888-1953) produced a body of work—thirty-one full-length plays, and twenty-one one-act plays—that was ambitious in its stylistic innovations, and daring in its thematic concerns. As recounted by historians, biographers, and critics, O'Neill's private life informed his writing process, with his various ailments serving as prompts for the stage. Whenever the theme of addiction appears, as it does in his final plays The Iceman Cometh (1939), Long Day's Journey Into Night (1941), and A Moon for the Misbegotten (1943), scholars note the autobiographical aspect as generative for his artistic output. Current literature on O'Neill paints the playwright as a depressive figure whose dysfunctional upbringing made a lifelong impression on him, and celebrates him for his distinctly sincere expression of suffering and strife. O'Neill wrote his final plays during a period of renewed interest in addiction science in the nation following World War II. After the failed enterprise of Prohibition, scientists, politicians, and the public instated a new treatment paradigm that placed responsibility on the individual who drinks problematically. This approach helped solidify addiction as a biological—rather than a social or cultural—phenomenon. As a result, the disease model of addiction offered an identity that was receptive to addiction treatment. While the previous century saw the construction of the addict as a person with a weak will, the disease model of addiction constituted the addict's condition as an illness treatable through performative acts. O'Neill's final plays, then, reflect more than the playwright's direct experience with addiction. They reveal the nation's ambivalence towards the concept of addiction as a disease, and toward the addict as a sick person. Spectators in post-WWII America labeled O'Neill's final plays as autobiographical not only because he drew from his personal experiences while writing them, but also because such an approach was seen as critical to the addict's recovery. As a result, theater scholars continue to position O'Neill as an artist who utilized the theatre as a transformative tool to address how addiction impacted his own life. Through his compassionate depictions of characters suffering from a disease, O'Neill's late plays showed how recovery depended upon theatrical acts of self-reflection, self-narrative, and self-actualization. They also reflect how spectators saw autobiography as the discursive mode for recovering from addiction. In this sense, I claim that it was not so much that O'Neill's plays were true to life and accepted as such, but that their content on addiction necessitated a search for disclosure in the first place. Rather than explore how the theatre allowed O'Neill to channel his suffering, I instead consider why live performance serves as a durable site for legitimating addiction as an illness. As O'Neill's late plays show, self-disclosure does not lead to an emancipatory experience devoid of coercion. Despite claims made by the medical field and mutual aid groups of its liberatory potential, these acts of recovery encourage self-governance, and trade on the morally inflected narratives about the addict circulating since the nineteenth century. In this first study to consider the collaborative relationship between Eugene O'Neill and the medical field, I examine how the playwright's representation of addicts directly influenced doctors and scientists who engaged with his work from 1939 until the present.
Book
1 online resource.
Our climate is changing. Although the extent and rate of change are still uncertain, it is abundantly clear that the climate of the future will not resemble the climate of the past and will pose significant risks to people around the world. How well people adapt to address these risks will be determined by their adaptive capacity, their ability to instigate and implement change. Understanding and building adaptive capacity may therefore be key to reducing long-term vulnerability to global change. This dissertation clarifies the concept of adaptive capacity, synthesizes the substantial but largely unconnected body of research on adaptive capacity to date, and introduces a new methodological approach to conducting meta-analyses of adaptation science. I express a definition of adaptive capacity in mathematical terms that summarizes current theories on how adaptability is built, ties the concept to related concepts in adaptation, and poses questions about theoretical limits and thresholds for adaptation. I apply computational text analysis and network analysis tools to develop a concept model of adaptive capacity that identifies and organizes 158 determinants of adaptive capacity into 8 categories according to the functional role they play in building capacity. I propose a modular theory of adaptive capacity, in which all eight functional categories are critical but multiple pathways exist to achieve each function. This modular theory reconciles a theoretical debate in the literature and connects insights from existing theories with empirical findings from the field. I propose a new framework, the Adaptive Capacities Framework (ACF), based on the eight functional categories, that enables assessment of adaptive capacity across scales and within multi-scalar systems. Results demonstrate the fragmented nature of adaptive capacity research to date and propose new directions for future research. The dissertation also provides insights for practitioners seeking to prioritize adaptation efforts.
Book
1 online resource.
The technological revolution that started with digital electronics more than 50 years ago has pushed the limits of scalability in device fabrication. Device features are currently at the nanometer scale, which requires structures to be built with atomic level accuracy. Advances in nanotechnology have opened up an exciting area of research that allows for molecular level control of device surfaces. Organic molecules can provide the tailorability required to continue the progress of semiconductor technologies. Organic functionalization provides a pathway to control the surface at the molecular level. Fundamental understanding of the adsorption phenomena between organic molecules and the surface is critical to achieve a stable inorganic/organic interface for the creation of hybrid nanostructures. This thesis aims to expand our current toolkit on functionalization to molecules that have multiple functionalities that can react with the surface. The reaction mechanism of these molecules is complex as there are several driving forces that can play a role during adsorption and influence the final reaction products. This thesis covers the adsorption of multifunctional molecules on the Ge(100)-2×1 surface using a combination of experimental and theoretical techniques: Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy, and density functional theory (DFT) calculations. We explored the adsorption of four different molecules: 1,2,3-benzenetriol (C6H6O3); 1,3,5-benzenetriol (C6H6O3); 2-hydroxymethyl-1,3-propanediol (C4H10O3); pyrazine (C4H4N2). These are the first reported studies of these molecules on the Ge(100) surface. In order to understand the extent to which molecular geometry affects surface coverage, a detailed comparison of the adsorption of two triol molecules was carried out: 1,3,5-benzenetriol, which has a rigid phenyl backbone, and 2-hydroxymethyl-1,3-propanediol, which has a flexible alkyl backbone. DFT results showed that the rigid backbone exhibits a higher degree of strain, which translates to a loss of exothermicity in the reaction coordinate. Experiments showed that the flexibility of the alkyl backbone provides higher rotational degrees of freedom and an enhancement in surface coverage. In order to understand the effect of intermolecular interactions, the adsorption of 1,2,3-benzenetriol was explored. Interestingly, we found that at high coverage, intramolecular hydrogen bonding in singly and dually bonded adsorption products breaks to form intermolecular hydrogen bonding with a nearby adsorbate, which provides enhanced stabilization of the surface adduct. This additional stabilization may lower the reactivity of unreacted functional groups even if an empty nearby Ge site is available for reaction. The distribution of products resulted in primarily bidentate adsorbates, leaving an unreacted moiety at the surface. We also found evidence of coverage and temperature effects on adsorbed pyrazine molecules on the Ge surface. It was observed that this molecule adsorbs on Ge through both carbon and nitrogen moieties and the product distribution changes as a function of coverage and temperature. At low coverage, incoming molecules react primarily through C-cycloaddition reactions and N dative bonds. However, as the density of adsorbates increases, new incoming molecules adsorb primarily through the nitrogen moiety. Furthermore, as the temperature increases, the product distribution changes from primarily non-activated dative-bonded products to activated cycloaddition products.
Overall, the studies in this thesis provide new insight into competition and selectivity in adsorption of multifunctional molecules on the Ge(100)-2×1 surface.
Book
1 online resource.
It is estimated that 2.4% of the US population has an artificial hip or knee as of 2010, and the prevalence of metallic implants continues to grow. Current imaging evaluation of complications near implants relies on X-Ray, which has limited contrast in soft tissues and thus limited sensitivity for early disease. Magnetic Resonance Imaging (MRI) is a safe medical imaging modality known for its excellent soft tissue contrast, which could be a useful tool for accurate, early and non-invasive assessment of complications. However, severe magnetic field variations induced by metal often render conventional MRI techniques non-diagnostic near the implants. Multi-spectral imaging (MSI) techniques resolve metal-induced field perturbations, but they suffer from long scan times that delay their widespread clinical adoption, and residual frequency-encoding artifacts that cause resolution loss close to the metal. This dissertation focuses on techniques to improve MRI near metal in terms of scan efficiency, artifact correction and delineation of implants. First, a signal model of MSI was introduced to compactly represent the signal distribution in the spectral dimension and enable accelerations of MSI scans. The model-based reconstruction was demonstrated to provide 3-fold acceleration beyond conventional acceleration techniques including parallel imaging and partial Fourier reconstruction. Next, a deep-learning-based reconstruction was presented to reduce the reconstruction times of optimization-based reconstruction and improve the reconstructed image quality of accelerated imaging near metal. Then, the frequency-encoding artifacts induced by metal, including signal hyper-intensities, signal oscillations and resolution loss, were analyzed and alternating-gradient MSI acquisitions were introduced to correct these artifacts. Finally, a method for susceptibility mapping inside signal voids was presented to delineate the geometry and material of metallic implants.
Book
1 online resource.
Photochemistry studies chemical reactions caused by absorption of light. Developing theoretical and computational tools for photochemistry will not only help better understand photochemical processes such as photosynthesis and vision, but can also provide guidelines about how molecular photodevices can be better designed. Therefore, the goal of my graduate research is to develop a set of computational tools for studying photochemical processes. Physical systems have a hierarchical structure, i.e., basic particles such as nuclei and electrons interact, leading to the formation of molecules, and molecules interact and change conformations, giving rise to chemical reactions. Naturally, the corresponding theoretical methods should also follow this hierarchy. At the bottom level, we need molecular integrals to describe different types of interactions between basic particles. I introduced the automated code engine (ACE) that generates optimized codes for computing integrals on graphics processing units, and developed several variants of tensor hyper-contraction (THC) approximations. ACE reduces the computational prefactor of integral evaluations whereas THC reduces the formal scaling. On top of the integrals, we then need electronic structure methods to describe the energies and forces for a molecule at any given nuclear configuration; including electron correlation is the key to having an accurate description. Here, I first developed single-reference THC-MP2 to capture dynamic correlations, and then developed the multi-reference THC-CASPT2 method to incorporate static correlations simultaneously. These methods were later generalized to THC-MSPT2 to enable descriptions of excited states and conical intersections, both of which are critical for photochemistry. Finally, given the electronic structure methods, we then need methods to explore the potential energy surfaces. In particular, critical point search methods locate the important configurations (e.g., Franck-Condon points, conical intersections), while molecular dynamics methods generate trajectories describing how the molecules move and interact with each other. By interfacing the electronic structure methods that I developed with the geomeTRIC geometry optimizer and the G-AIMS non-adiabatic dynamics framework, a complete toolbox for understanding photochemistry is provided.
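As a hedged sketch of the tensor hyper-contraction idea mentioned above (the standard THC factorization form from the literature, with illustrative index names):
\[
(pq|rs) \;\approx\; \sum_{P,Q} x_p^{P}\, x_q^{P}\; Z_{PQ}\; x_r^{Q}\, x_s^{Q},
\]
so the rank-four electron-repulsion tensor is replaced by products of rank-two factors, allowing contractions that formally scale as \(O(N^5)\) in conventional MP2 to be reorganized into sequences of lower-scaling matrix operations.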
Book
1 online resource.
Silicon Photonics is considered to be essential for the sustained growth of the semiconductor industry moving forward. Ubiquitous mobile devices and the Internet of Things (IoT) are driving the data needs of the end user exponentially, which has led to numerous data centers and perilously large power consumption in each of them. More than 3.4 billion people in the world have access to the internet today and this number is increasing steadily day by day. Together, we generate more than 50 Terabytes (50,000 Gigabytes) of internet traffic per second at any given point in time. This number was just 100 Gigabytes per second in 2002 and it is expected to grow much faster going into the future. As for the power consumption, US data centers alone consumed about 70 billion kilowatt-hours of electricity in 2014, representing 2 percent of the country's total energy consumption, according to a study. That's equivalent to the amount consumed by about 6.4 million average American homes that year. This represents a 4 percent increase in total data center energy consumption from 2010 to 2014, a marked slowdown compared with the preceding five years, during which total US data center energy consumption grew by 24 percent, and with the first half of the last decade, when it grew by nearly 90 percent. It is well established that the copper cables that transfer data from one end of the data center to the other are the bottleneck, reducing the overall bandwidth of the system and driving up its power consumption. This bottleneck is getting worse day by day owing to the ever-increasing data needs. Silicon Photonics-based 'optical interconnects' are the best solution to remove this bottleneck. Optical interconnects use photons instead of electrons for communication and therefore have the potential to offer very large bandwidths at minimal power consumption. In the very near future, all the copper wires in the data-center ecosystem will have to be replaced by these optical interconnects if we are to meet the data needs within the prescribed power budget. In order to build such a platform, where conventional machines in the data center work in tandem with novel interconnects based on photonic devices, all the optical components need to be integrated seamlessly on a silicon chip. Modulators are the most important optical component of such a platform since they act as optical switches that control the flow of photons. In the first part of the dissertation, a silicon-compatible germanium (Ge) electro-absorption modulator with the best reported energy-delay product is demonstrated. The figures of merit, along with the design principles, are discussed in detail, while the fabrication methodology is briefly touched upon. Experimentally measured characteristics are then shown to be best in class and to match the data requirements of data centers with minimal energy consumption. In the second part of the dissertation, we focus on developing an efficient silicon-compatible light emitter based on strained Ge technology. Detailed theoretical calculations lay down a roadmap for room-temperature lasing from Ge. These calculations also prove that the loss mechanisms involved in the light emission process from Ge have been inadequately modeled until now, and show that a particular loss mechanism known as inter-valence-band absorption is a major barrier to the realization of a strained Ge laser. CMOS-compatible fabrication techniques to introduce large uniaxial strain in Ge are then discussed.
Finally, a low-threshold Ge laser at a temperature of 83 K is demonstrated. In the final part of the dissertation, the first demonstration of a 'truly' silicon-compatible three-dimensional (3D) photonic crystal is discussed. Using the methodology developed, a broadband omnidirectional reflector is also demonstrated on silicon. This methodology is also shown to be particularly well suited for 3D waveguides and optical cavities.
Book
1 online resource.
Accurate observations and descriptions of the role of heterogeneity in water and CO2 transport and immobilization in porous and fractured geologic media are important for understanding and modeling multiphase conditions present in geologic carbon storage reservoirs. The growth in in-situ imaging has led to remarkable advancements in understanding and quantification of this complex fluid transport behavior. Despite these advancements, commonly used imaging and experimental methods such as clinical computed tomography, micro computed tomography, nuclear magnetic resonance, and optically imaged micromodels each have limitations. Each modality faces individual challenges with observing fluid advection, dispersion, and diffusion in 3D geologic porous media. In this work, micro-positron emission tomography (micro-PET), in combination with other imaging methods, is utilized to study a number of challenging transport problems in earth science.
Book
1 online resource.
Proteins are frequently characterized as molecular machines, with atomic-level motions driving biological function. Past decades have witnessed a dramatic increase in the tools available to probe these dynamics, but few methods enable us to resolve these collective motions with high spatial resolution. This dissertation investigates the potential of x-ray diffuse scattering from protein crystals to meet this critical need. Specifically, I review the models of correlated disorder that have previously been suggested to account for this signal and describe algorithms for processing the diffuse scattering in experimental diffraction data. These models and algorithms are applied to dissect the physical origins of the diffuse scattering observed from three protein crystals. Though considerable progress is still required for the analysis of diffuse scattering to become a routine biophysical method for studying protein dynamics, the framework and findings described in this dissertation make concrete steps toward that end.
Book
1 online resource.
The trucking industry is an irreplaceable sector of our economy. Over 80% of the world population relies on it for the transportation of commercial and consumer goods. In the US alone, this industry is responsible for over 38% of fuel consumption as it distributes over 70% of our freight tonnage. In the design of these vehicles, particular emphasis has been placed on equipping them with a strong engine, a relatively comfortable cabin, a spacious trailer, and a flat back to improve loading efficiency. The geometrical design of these vehicles makes them prone to flow separation, and at highway speeds overcoming aerodynamic drag accounts for over 65% of their energy consumption. The flat back on the trailer causes flow to separate, which generates a turbulent wake. This region is responsible for a significant portion of the aerodynamic drag, and currently the most popular solution is the introduction of flat plates attached to the back of the trailer to push the wake downstream. These passive devices improve the aerodynamic performance of the vehicle, but leave opportunities for significant improvement that can only be achieved with active systems. The current procedure to analyze the flow past heavy vehicles and design add-on drag reduction devices focuses on the use of wind tunnels and full-scale tests. This approach is very time consuming and incredibly expensive, as it requires the manufacturing of multiple models and the use of highly specialized facilities. This dissertation presents a computational approach to designing Active Flow Control (AFC) systems to reduce drag and energy consumption for the trucking industry. First, the numerical tools were selected by studying the capabilities of various numerical schemes and turbulence model combinations using canonical bluff bodies. After various numerical studies and comparisons with experimental results, the Jameson-Schmidt-Turkel (JST) scheme in combination with the Shear-Stress-Transport (SST) turbulence model was chosen. This combination of tools was used to study the effect of AFC in the Ground Transportation System (GTS) model, which is a simplified representation of a tractor-trailer introduced by the US Department of Energy to study the separation behind this type of vehicle and the drag it induces. Using the top view of the GTS model as a two-dimensional representation of a heavy vehicle, the effect that the Coanda jet-based AFC system has on the wake and integrated forces has been studied. These two-dimensional studies drove the development of the design methodology presented, and produced the starting condition for the three-dimensional Coanda surface geometry and the jet velocity profile. In addition, the influence on wake stability that this system demonstrated when operating near its optimum drag configuration allowed time to be decoupled from the three-dimensional design process. A design methodology that minimizes the number of required function evaluations was developed by leveraging insights obtained from previous studies; using the physical changes in the flow induced by the AFC system to eliminate the need for time integration during the design process; and leveraging surrogate model optimization techniques. This approach significantly reduces the computational cost during the design of AFC drag reduction systems and has led to the design of a system that reduces drag by over 19% and power by over 16%.
In the US trucking fleet alone, these energy savings constitute 8.6 billion gallons of fuel that will not be burned and over 75 million tons of CO2 that will not be released into the atmosphere each year.
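The surrogate-model optimization step mentioned above can be sketched as follows; this is a generic illustration in which the drag function, the two design variables, and the sample counts are placeholders, not the dissertation's actual CFD setup:

import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

# Placeholder for an expensive CFD evaluation of the drag coefficient as a function
# of two AFC design variables (e.g., jet velocity ratio and Coanda radius ratio).
# The quadratic form below is purely illustrative and stands in for a flow solve.
def evaluate_drag(x):
    return 0.25 + 0.05 * (x[0] - 1.2) ** 2 + 0.08 * (x[1] - 0.3) ** 2

bounds = [(0.5, 2.0), (0.1, 0.6)]

# A small set of space-filling samples stands in for the initial CFD runs.
rng = np.random.default_rng(0)
samples = rng.uniform([b[0] for b in bounds], [b[1] for b in bounds], size=(12, 2))
values = np.array([evaluate_drag(x) for x in samples])

for _ in range(5):                                             # surrogate-update iterations
    surrogate = RBFInterpolator(samples, values)               # fit an RBF surrogate to the data so far
    res = minimize(lambda x: float(surrogate(x[None, :])[0]),  # minimize the cheap surrogate
                   x0=samples[np.argmin(values)], bounds=bounds)
    samples = np.vstack([samples, res.x])                      # run the "CFD" only at the suggested optimum
    values = np.append(values, evaluate_drag(res.x))

best_design = samples[np.argmin(values)]

The point of the loop is that the cheap surrogate, not the flow solver, is minimized at each step, and the solver is called only at the surrogate's suggested optimum, which keeps the number of expensive function evaluations small.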
Book
1 online resource.
Understanding the aerodynamic interactions between turbines in a wind farm is essential for maximizing power generation. In contrast to horizontal-axis wind turbines (HAWTs), for which wake interactions between turbines in arrays must be minimized to prevent performance losses, vertical-axis wind turbines (VAWTs) in arrays have demonstrated beneficial interactions that can result in net power output greater than that of turbines in isolation. These synergistic interactions have been observed in previous numerical simulations, laboratory experiments, and field work. This dissertation builds on previous work by identifying the aerodynamic mechanisms that result in beneficial turbine-turbine interactions and providing insights into potential wind farm optimization. The experimental data presented indicate increased power production of downstream VAWTs when they are positioned offset from the wake of upstream turbines. Comparison with three-dimensional, three-component flow measurements demonstrates that this enhancement is due to flow acceleration adjacent to the upstream turbine, which increases the incident freestream velocity on appropriately positioned downstream turbines. A low-order model combining potential flow and actuator disk theory accurately captures this effect. Laboratory and field experiments were used to validate the model's predictive capabilities, and an evolutionary algorithm was deployed to investigate array optimization. Furthermore, changes in upstream turbine performance are related to variations in the surrounding flow field due to the presence of the downstream rotor. Finally, three-dimensional vortex interactions behind pairs of VAWTs are observed to replenish momentum in the array's wake. These effects are described along with their implications for wind farm design.
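A minimal, hedged sketch of the kind of low-order model described above, representing upstream-rotor blockage with a 2-D point source and the wake with a standard actuator-disk induction factor (the thrust coefficient, source strength, and top-hat wake shape are illustrative, not the dissertation's calibrated model):

import numpy as np

U_inf = 1.0                              # freestream velocity (normalized)
CT = 0.6                                 # assumed rotor thrust coefficient (illustrative)
a = 0.5 * (1.0 - np.sqrt(1.0 - CT))      # actuator-disk induction factor, from CT = 4a(1 - a)
D = 1.0                                  # rotor diameter
Q = a * U_inf * D                        # source strength mimicking rotor blockage (assumed scaling)

def streamwise_velocity(x, y):
    """Uniform flow plus a 2-D point source at the upstream rotor location (0, 0),
    with a crude top-hat wake deficit applied directly downstream of the rotor."""
    r2 = x**2 + y**2
    u = U_inf + Q / (2.0 * np.pi) * x / r2   # source term: flow accelerates around the rotor
    if x > 0.0 and abs(y) < D / 2.0:         # simple far-wake deficit inside the wake strip
        u -= 2.0 * a * U_inf
    return u

print(streamwise_velocity(3.0 * D, 1.0 * D))   # offset from the wake: slightly accelerated flow
print(streamwise_velocity(3.0 * D, 0.0))       # directly downstream: velocity deficit

Sampling such a velocity field at candidate downstream-turbine positions reproduces the qualitative effect reported above: positions offset from the wake see a slightly accelerated freestream, while in-wake positions see a deficit.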
Book
1 online resource.
This dissertation investigates various affective influences on decision making under risk and uncertainty. The first essay of the dissertation shows that the relationship between physiological arousal and risk-taking is more nuanced than previously found. The results demonstrate that lower levels of physiological arousal increase sensitivity to expected values of risky prospects, leading to more adaptive decisions, instead of increasing or decreasing risk seeking across the board. The second essay of the dissertation elucidates how anticipated guilt can lead to choices of uncertain options over certain ones, establishing how choosing uncertain outcomes can serve as a guilt reduction mechanism. Finally, the third essay investigates neural affective correlates of consumer disengagement from consumption episodes that have uncertain rewards that are known only as they unfold over time and exhibits how data from a neural focus group can improve forecasts of market-level behavior.
Book
1 online resource.
This dissertation explores the role of affect in sociolinguistic style. Styles -- or clusters of socially meaningful linguistic features -- are central to the projection of social types or personae. Linguistic styles convey behaviors and stances associated with these personae, and reflect and reproduce macro-social categories like gender, age, race and class. But styles also index affect; we can imagine displays of a Valley Girl's exasperation, a surfer's laid-back attitude, or a politician's cheerful smarm. Such qualities are more than ephemeral moods; they are durative dimensions of stylistic practice. Further, because bodily practices like posture, comportment, and facial expression are taken to display emotions and attitudes, the expression of affect is an embodied phenomenon involving multiple semiotic channels. To that end, here I examine both linguistic and bodily practice, demonstrating that we gain a richer understanding of meaning-making by incorporating affect into our theory of style. My data are drawn from a year of fieldwork at a public arts high school in the San Francisco Bay Area. Through ethnographic, phonetic, and visual analysis of interviews with 24 students, I show how speakers construct styles around affective qualities like 'chill' or 'tough.' These styles correspond to students' orientation to their artistic pursuits and to the institution more broadly. An analysis of variation in young men's use of creaky voice quality, speech rate, and seated interview posture reveals that these features are used to display energetic affective styles of 'chill' on the one hand, and its locally-rendered ideological opposite, 'loud', on the other. And these styles position students in the social landscape; high-energy or 'loud' affect corresponds with more institutionally-oriented stances, whereas 'chill' affect corresponds with a less institutionally-oriented stance (albeit one deeply invested in artistic pursuit, outside the scope of the school's curriculum). A second analysis focuses on the tandem use of retracted /l/ and a raised variant of the LOT vowel by students in the technical theater (or 'tech') discipline. Unlike other disciplines, these students engage in manual labor, constructing sets and operating equipment for school productions, and are described by their peers as 'handy' and 'badass' -- producing a cumulative image of embodied toughness. Notably, these two variables are both characterized by a retracted tongue dorsum. I suggest that tech students share a general articulatory setting which conditions their use of otherwise unrelated phonetic features, and that this articulatory setting indexes tech students' embodied toughness. In a final analysis, I explore the connection between contextualized interactional meaning and more durative enregistered meanings of three of these variables: creaky voice quality, retracted /l/, and raised LOT. In other words, I ask whether speakers use creaky voice to convey chill, or retracted /l/ to convey toughness, in situated interactional moments. I explore the potential social meanings of these features as used by two speakers in ethnographic interviews. Some extreme realizations of these features do emerge in moments when tough or chill affective displays are particularly salient. However, this is not the case for all such tokens, suggesting that variables need not always index specific meanings in interaction in order for holistic, thematic meanings to become enregistered within a community.
Taken together, these analyses show that linguistic variation and bodily comportment are used to convey affect in stylistic practice. This work demonstrates that a more explicit focus on the intertwining semiotics of affect can enrich our understanding of the socio-indexical potential of linguistic variation.
Book
1 online resource.
Training a machine learning model today involves minimizing a loss function on datasets that are often gigantic, and so almost all practically relevant training algorithms operate in an online manner by reading in small chunks of the data at a time and making updates to the model on-the-fly. As a result, online learning, a popular way to analyze optimization algorithms operating on datastreams, is at the heart of modern machine learning pipelines. In order to converge to the optimal model as quickly as possible, online learning algorithms all require some user-specified parameters that reflect the shape of the loss or statistics of the input data. Examples of such parameters include the size of the gradients of the losses, the distance from some initial model to the optimal model, and the amount of variance in the data, among others. Since the true values for these parameters are often unknown, the practical implementation of online learning algorithms usually involves simply guessing (called 'tuning'), which is both inefficient and inelegant. This motivates the search for parameter-free algorithms that can adapt to these unknown values. Prior algorithms have achieved adaptivity to many different unknown parameters individually - for example, one may adapt to unknown gradient sizes given a known distance to the optimal model, or adapt to an unknown distance given a known bound on gradient size. However, no algorithm could adapt to both parameters simultaneously. This work introduces new lower bounds, algorithms, and analysis techniques for adapting to many parameters at once. We begin by proving a lower bound showing that adapting to both the size of the gradients and the distance to the optimal model simultaneously is fundamentally much harder than adapting to either individually, and proceed to develop the first algorithm to meet this lower bound, obtaining optimal adaptivity to both parameters at once. We then expand upon this result to design algorithms that adapt to more unknown parameters, including the variance of the data, different methods for measuring distances, and upper or lower bounds on the second derivative of the loss. We obtain these results by developing new techniques that convert non-parameter-free optimization algorithms into parameter-free algorithms. In addition to providing new and more adaptive algorithms, the relative simplicity of non-parameter-free algorithms allows these techniques to significantly reduce the complexity of many prior analyses.
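As a hedged, textbook-style illustration of why these parameters matter (the standard tuned online gradient descent bound, not this work's algorithms): for gradients bounded by \(G\) and a comparator at distance at most \(D\) from the starting point, online gradient descent with step size \(\eta\) guarantees
\[
\mathrm{Regret}_T \;\le\; \frac{D^2}{2\eta} + \frac{\eta}{2} G^2 T,
\qquad \eta = \frac{D}{G\sqrt{T}} \;\Rightarrow\; \mathrm{Regret}_T \le D\,G\sqrt{T}.
\]
Choosing \(\eta\) optimally requires knowing both \(D\) and \(G\) in advance; the parameter-free algorithms developed here aim to match this optimally tuned rate, up to logarithmic factors, without being told either quantity.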
Book
1 online resource.
Linkage disequilibrium (LD) is the non-random association of alleles at different genetic loci. This dissertation consists of three projects that relate to the analysis and application of LD on various topics within population and statistical genetics. Various measures of LD have been proposed in the literature, each with different arguments favoring its use. Chapter 2 employs a theoretical approach to examine mathematical properties of five different measures of LD. These results help place the use of various LD statistics into their proper contexts, and provide a mathematical basis for comparing their values. Next, the presence of LD in genomes can be leveraged for a number of different applications in statistical genetics. Chapter 3 examines one such example in genetic imputation. Specifically, we ask the question of how to optimally select a subset of a study sample for sequencing when choosing an internal reference panel for imputation, in order to maximize the eventual imputation accuracy. We compare two algorithms—maximizing phylogenetic diversity (PD) and minimizing average distance to the closest leaf (ADCL)—and conclude that while both algorithms give better imputation results as compared to randomly selecting haplotypes to be included in the reference panel, imputation accuracy is the highest when minimizing ADCL is used as the method for panel selection. Finally, LD in genomes can produce genetic signatures that may be suggestive of certain demographic processes. Genetic linkage results in the preservation of homozygous segments in the genome that are produced as the result of genomic sharing, which can then be detected as runs of homozygosity (ROH). Chapter 4 analyzes the distribution of ROH lengths in a sample of worldwide Jewish and non-Jewish populations, and employs a model-based clustering method to classify the ROH in a given population into three classes (short, intermediate, and long) based on length. Furthermore, for a subset of the Jewish populations in this study, we were able to obtain estimates of demographic rates of consanguinity (as indicated by the rates of close-relative unions). We find that the level of consanguinity in those populations is predictive of long ROH, thus finding genetic signatures of mating patterns that existed in a population's history. Making use of theoretical, computational, and statistical approaches, these chapters together provide a wide-ranging account of different aspects of LD, as related to their respective applications within the field.
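As a hedged aside, the most common measures of LD for two biallelic loci (standard population-genetics definitions; the abstract does not enumerate the five measures analyzed in Chapter 2) are
\[
D = p_{AB} - p_A\,p_B, \qquad
r^2 = \frac{D^2}{p_A(1-p_A)\,p_B(1-p_B)}, \qquad
D' = \frac{D}{D_{\max}},
\]
where \(p_A\) and \(p_B\) are allele frequencies, \(p_{AB}\) is the haplotype frequency, and \(D_{\max} = \min\{p_A(1-p_B),\,(1-p_A)p_B\}\) when \(D > 0\) (and \(\min\{p_A p_B,\,(1-p_A)(1-p_B)\}\) when \(D < 0\)).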
Book
1 online resource.
Analytical chemistry is a metrological science that develops, optimizes and applies analytical measurements in order to solve complex problems and to facilitate educated and effective decision-making processes. Throughout the history of science, analytical chemists have expanded the field beyond routine characterization of the compositions of samples into a much broader discipline through the development of new analytical methods for scientific advances, improvement upon established methods, and extension of existing methods to completely new sample types. In addition, many aspects of analytical chemistry have evolved through time, such as analytical instruments, reagents, detection limits, dimensions of analytical information, and the more recent introduction of mathematical models, computer science, and big data into chemical analyses. Regardless of these evolutions, the fundamental principle of analytical chemistry, that is, to use analytical measurements as universal vehicles to obtain information, persists throughout the history of analytical chemistry. This dissertation introduces a three-step "analytical chemistry approach" to solve scientific problems based on the fundamental principle of analytical chemistry. The three steps include: (1) frame a research question; (2) identify analytical method(s) that can acquire data to answer the research question; (3) use the analytical method(s) to obtain data and answer the research question. This dissertation demonstrated the remarkable versatility of the analytical chemistry approach by applying it to solve a wide spectrum of scientific problems, ranging from bioorthogonal catalysis with therapeutics and diagnostics applications to soil organic carbon characterization with fundamental impacts on the global carbon cycle, highlighting the paramount importance of analytical chemistry in solving problems and advancing science. Chapter 1 is an introduction to analytical chemistry, the evolution of the field, and the three-step analytical chemistry approach used throughout this dissertation. Chapter 2 showed how the analytical chemistry approach was used to develop a general method to evaluate metal-catalyzed reactions in living systems. In this chapter, a Ru-based bioorthogonal pre-catalyst was used to activate a caged aminoluciferin probe in cellular environments. Upon catalytic cleavage, the activated aminoluciferin is turned over by its target enzyme, luciferase, in cells to produce a bioluminescence readout. With the ability to amplify and/or target imaging readouts, this system opens up many new opportunities in research, imaging, diagnostics, and therapy. By using the three-step analytical chemistry approach, key factors that affect product distribution for the catalytic reaction were found, and the location of the catalytic reaction was identified as extracellular. Chapter 3 and Chapter 4 of this dissertation demonstrated the versatility of the analytical chemistry approach by shifting focus from bioorthogonal catalysis to soil organic carbon. In Chapter 3, the analytical chemistry approach was applied to develop the SOC-fga method, which combines Fourier-transform infrared spectroscopy (FT-IR) and bulk carbon X-ray absorption spectroscopy (XAS) to quantitatively characterize the compositions of soil organic carbon (SOC) across a subalpine watershed in East River, CO, without going through traditional alkaline extractions and chemical treatments that alter SOC compositions.
A large degree of variability in SOC functional group abundances was observed between sites at different elevations. The ability to identify the composition of organic carbon in soils quantitatively across biological and environmental gradients will greatly enhance our ability to resolve the underlying controls on SOC turnover and stabilization. Chapter 4 built on the findings of Chapter 3 to further evaluate the SOC-fga method with density fractionation and cross polarization/magic angle spinning (CP/MAS) 13C NMR spectroscopy. This chapter summarized the strengths and weaknesses of the SOC-fga method and 13C NMR and set up a platform to launch future work on SOC turnover mechanisms. Finally, Chapter 5 concluded this dissertation with summaries of findings, future directions, and ending remarks on analytical chemistry.
Book
1 online resource.
This thesis investigates two independent aspects of spacetime symmetries. The first part of my thesis is about the angular momentum conservation law in light-front quantum field theory. We prove the light-front Poincare invariance of the angular momentum conservation law and the helicity sum rule for relativistic composite systems. We show that the light-front wavefunction (LFWF), which describes the internal structure of a bound state, is in fact frame independent, in contrast to instant-form wavefunctions. In particular, we demonstrate that j3, the intrinsic angular momentum projected onto the light-front direction, is independent of the bound state's 4-momentum and the observer's Lorentz frame. The frame independence of j3 is a feature unique to the front form. The angular momentum conservation law leads directly to a nonperturbative proof of the constraint A(0)=1 and the vanishing of the anomalous gravitomagnetic moment B(0)=0. Based on the conservation of angular momentum, we derive a selection rule for orbital angular momentum which can be used to eliminate certain interaction vertices in QED and QCD. We also generalize the selection rule to any renormalizable theory and show that there exists an upper bound on the change of orbital angular momentum in scattering processes at any fixed order in perturbation theory. The second part of my thesis investigates an extended conformal symmetry for Abelian gauge theory in general dimensions. Maxwell theory in d ≠ 4 spacetime dimensions is an example of a scale-invariant theory which does not possess conformal symmetry -- the special conformal transformation (SCT) explicitly breaks the gauge invariance of the theory. We construct a non-local gauge-invariant extension of the SCT, which is compatible with the BRST formalism and defines a new symmetry of the physical Hilbert space of the Maxwell theory for any dimension d ≥ 3. We prove the invariance of Maxwell theory in d ≥ 3 by explicitly showing that the gauge-invariant two-point correlation functions, the action, and the classical equation of motion are unchanged under such a transformation.
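As a hedged sketch of the helicity sum rule discussed above (standard light-front form; notation here is illustrative), for an n-constituent Fock state the conserved projection reads
\[
J^z \;=\; \sum_{i=1}^{n} s_i^{z} \;+\; \sum_{i=1}^{n-1} l_i^{z},
\]
i.e., the total light-front helicity equals the sum of the constituents' spin projections plus the n-1 relative orbital angular momentum projections, Fock state by Fock state and independently of the Lorentz frame.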
Book
1 online resource.
Animal Empires: The Perfection of Nature Between Europe and the Americas, 1492-1615 demonstrates how Renaissance patrons, naturalists, and husbandmen developed useful but dangerous ideas to make sense of natural diversity -- nobility, race, and species -- during the consolidation of the sixteenth-century Spanish Empire. Using the three major techniques at their disposal -- relocation, cultivation and training, and selective breeding -- elites began a colossal experiment. They sought to create an improved version of Christian nature both in European courts and then, on a larger scale than ever before imagined, in the Americas. Case studies focus on breeding theories and practices in Mantua, Naples, Madrid, Peru, and the Valley of Mexico. Starting in the heart of Renaissance Italy and ending high in the Andes, this project integrates disparate fields of investigation -- ranging from Renaissance aesthetics and animal studies to the histories of the Spanish Empire and of biology -- to reveal an ideal of nature grandly envisioned and prosaically enacted through imperial conquest.