Search results

153,090 results

Collection
Undergraduate Honors Theses, Graduate School of Education
Despite the emergence of formal “critical whiteness studies” in academia over the last twenty years, there remains a dearth of research on the teaching of whiteness studies. Additionally, there remains a lack of formal engagement with the topics of race and whiteness, especially by white students in universities across the United States. This paper investigates the potential for implementing anti-racist critical whiteness studies at Stanford University, seeking to understand how the field of whiteness studies is taught on college campuses and what Stanford students think about studying race and whiteness. The paper analyzes results from a survey of 200 Stanford students that aimed to collect student reactions to studying race and whiteness, as well as students’ impressions of their own qualification and comfort in discussing race with friends and family. Results indicate a strong desire among the Stanford student body to study race, and provide an argument for Stanford University to establish critical whiteness studies within a framework of anti-racism education. The paper proceeds to analyze eighteen syllabi for classes on whiteness, concluding that while there are many ways to teach about whiteness, there is a distinct and common approach: using high-level concepts, such as race as a social construction with intersectional implications and a universalized understanding of whiteness, to make whiteness visible to students in the class. Significant tensions remain around analyzing racism without centering or privileging whiteness. However, taken together, Stanford students’ endorsement of race and whiteness studies and the sample syllabi’s formal structure for teaching about anti-racist critical whiteness provide a case for Stanford University to implement such a curriculum.
Book
1 online resource.
At the restriction point (R), mammalian cells irreversibly commit to divide. R has been viewed as a point in G1 after growth factor signaling initiates a positive feedback loop of Cdk activity. However, recent studies cast doubt on this model by claiming R occurs prior to positive feedback activation in G1 or even before completion of the previous cell cycle. Here we reconcile these results and show that whereas many commonly used cell lines do not exhibit a G1 R, primary fibroblasts have a G1 R that is defined by a precise Cdk activity threshold and the activation of cell cycle-dependent transcription. A simple threshold model, based solely on Cdk activity, predicted with more than 95% accuracy whether individual cells had passed R. That a single measurement accurately predicted cell fate shows that the state of complex regulatory networks can be classified by a few critical protein activities.
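The single-threshold classifier this abstract describes is simple enough to sketch. Assuming per-cell Cdk activity measurements and ground-truth labels for R passage (both hypothetical here, with made-up distributions), the following Python snippet scans candidate cutoffs and reports the accuracy of classifying cells on Cdk activity alone:

```python
import numpy as np

def best_threshold(cdk_activity, passed_r):
    """Scan candidate cutoffs and return the Cdk activity threshold
    that best separates cells that passed R from those that did not."""
    best_acc, best_cut = 0.0, cdk_activity.min()
    for cut in np.sort(cdk_activity):
        pred = cdk_activity >= cut            # classify on activity alone
        acc = np.mean(pred == passed_r)
        if acc > best_acc:
            best_acc, best_cut = acc, cut
    return best_cut, best_acc

# Hypothetical single-cell data: Cdk activity in arbitrary units,
# True = cell passed the restriction point.
rng = np.random.default_rng(0)
activity = np.concatenate([rng.normal(0.4, 0.1, 200), rng.normal(1.0, 0.2, 200)])
passed = np.concatenate([np.zeros(200, bool), np.ones(200, bool)])
cut, acc = best_threshold(activity, passed)
print(f"threshold = {cut:.2f}, accuracy = {acc:.1%}")
```

On well-separated data such as this toy example, a single cutoff classifies nearly all cells correctly, which is the flavor of result the abstract reports for primary fibroblasts.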
Collection
Undergraduate Honors Theses, Graduate School of Education
This paper examines the impact of an elementary charter school in Florida that targets low-income, black and Hispanic boys. Using student-level demographic data, charter school students were matched with traditional public school students who shared the same demographic characteristics and had similar third-grade starting test scores. The results showed the following impact on the change in scores from third to fourth grade: in math, attending the charter school is associated with testing .72 standard deviations lower than students in the traditional public school; in reading, the charter school is associated with testing .19 standard deviations lower. Both of these results are statistically significant. This model can be used in different scenarios in order to understand where certain demographics of students are succeeding, and can thus lead to further causal research to identify and disseminate successful tactics and pedagogies throughout the school system.
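The abstract does not spell out the matching procedure; as a rough illustration, this Python sketch pairs each charter student with a demographically identical public-school student whose third-grade score is closest, then averages the gap in score gains. Column names (score3, score4) and the tolerance are hypothetical:

```python
import numpy as np
import pandas as pd

def matched_gain_gap(charter, public, demo_cols, tol=5.0):
    """Pair each charter student with a public-school student who has
    identical demographics and the closest third-grade score (within
    tol points), then average the gap in third-to-fourth-grade gains,
    expressed in standard deviations of the public-school gain."""
    gaps = []
    for _, s in charter.iterrows():
        same = public[(public[demo_cols] == s[demo_cols]).all(axis=1)]
        close = same[(same["score3"] - s["score3"]).abs() <= tol]
        if close.empty:
            continue                      # no acceptable match; drop student
        m = close.iloc[(close["score3"] - s["score3"]).abs().argmin()]
        gaps.append((s["score4"] - s["score3"]) - (m["score4"] - m["score3"]))
    gain_sd = (public["score4"] - public["score3"]).std()
    return np.mean(gaps) / gain_sd        # standardized effect estimate
```

This is a nearest-neighbor sketch under assumed data layout, not the paper's exact estimator.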
Book
1 online resource.
Additive manufacturing (AM) is the automated layer-wise fabrication of 3D objects directly from geometrical computer models. In the music technology lab, AM affords the rapid testing of enclosures and components for both sound-capturing and sound-producing instruments. These instruments, typically outsourced to manufacturing hubs, are now produced near the site of musical experimentation through desktop fabrication. Parametric modeling and rapid fabrication with AM accelerate the design cycle for the production of instruments for music research. AM alters the site of manufacturing, the duration between the digital sketch and its materialization, and the material constitution of the instruments we might deploy; these shifts have consequences for what gets made and in turn alter artistic practices that integrate such tool-making tools. To illustrate these affordances of AM, I describe a suite of instruments within a taxonomy of usage categories that moves from reproduction of known instruments, to augmentation of found ones, and finally to a phase of invention. Invention arises not simply from putting such machines in the service of novel ideas but rather from carefully examining the material outcomes of the layering process itself. The term AM is used in place of numerous alternatives to emphasize the anisotropic grain of the printed object. Although the additive method affords powerful flexibility in the shapes it yields, its performance in various acoustic and electroacoustic scenarios depends in part on this grain. The practice, therefore, navigates trade-offs between malleable fabrication methods for organizing material and the quality of the resulting forms for organizing sound.
Book
1 online resource.
Ongoing advances in DNA sequencing technologies have undoubtedly shifted genomics into the realm of big data. To cope with this data explosion and enable rapid advances in biology and medicine, we must develop scalable and efficient methods for genomic data analysis that can leverage both domain-specific knowledge and the latest computing platforms, such as multi-core and accelerator-based architectures and cloud computing. This thesis presents several algorithms developed with this goal in mind, focusing on key tasks performed in resequencing studies, metagenomics, and cancer genomics. First, we consider the ubiquitous and compute-intensive task of read mapping. We propose the read mapping algorithm BALAUR, which can outsource a significant portion of the computation to the public cloud while preserving the privacy of genomic data. BALAUR uses the MinHash technique and a coarse-grained voting scheme to map the reads, relying on the high degree of similarity between the reads and their best reference matches. BALAUR has a runtime similar to state-of-the-art read mappers in short read mapping and achieves significant speedups over existing approaches in long read mapping. We also propose the read mapping algorithm BWBBLE, which maps reads to a large collection of genomes with the goal of eliminating reference bias and boosting mapping accuracy. BWBBLE creates a compressed linear representation of the collection, taking advantage of the high redundancy across the genomes. BWBBLE then uses this representation, along with an adaptation of the Burrows-Wheeler Transform search algorithm, to efficiently map the reads (since all the reads can be treated independently, this is an embarrassingly parallel task). This results in a significant reduction in space and runtime requirements compared to mapping to each genome individually. Next, we consider several resource-intensive tasks in metagenomics. We introduce the framework GATTACA for fast and accurate metagenomic binning. GATTACA uses a lightweight approach for estimating co-abundance profiles across a cohort of metagenomic samples (subsequently used as features for binning), replacing read mapping with a k-mer counting procedure. This approach can result in up to several orders of magnitude speedup in abundance estimation; it has also been parallelized for shared memory systems. By creating compact indices of metagenomic samples, GATTACA enables easy reuse across studies via fast downloads and small disk footprints. GATTACA also incorporates a fast procedure for metagenomic sample comparison based on MinHash that can be used for cohort selection. Finally, we focus on cancer genomics, specifically the critical task of inferring tumor heterogeneity and evolution. We introduce a novel algorithm, LICHeE, that uses variant frequencies from deep sequencing data across multiple tumor samples to infer the underlying cancer lineage tree and decompose the samples into subclonal populations. LICHeE is fast and scales efficiently with the number of input samples and variants, relying on a combinatorial formulation of the task as a search for spanning trees in a constraint network.
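The MinHash technique that both BALAUR and GATTACA build on rests on a standard construction: hash every k-mer under several hash functions and keep each minimum, so that two sketches agree in a slot with probability equal to the Jaccard similarity of the k-mer sets. A minimal, self-contained Python sketch of that idea; parameters k and num_hashes and the use of salted BLAKE2 are illustrative choices, not the thesis' implementation:

```python
import hashlib

def minhash_sketch(seq, k=20, num_hashes=64):
    """MinHash sketch of a DNA sequence: for each of num_hashes salted
    hash functions, keep the minimum hash value over all k-mers."""
    kmers = {seq[i:i + k] for i in range(len(seq) - k + 1)}
    sketch = []
    for seed in range(num_hashes):
        salt = seed.to_bytes(8, "little")
        def h(km):
            d = hashlib.blake2b(km.encode(), digest_size=8, salt=salt).digest()
            return int.from_bytes(d, "little")
        sketch.append(min(h(km) for km in kmers))
    return sketch

def jaccard_estimate(s1, s2):
    """Fraction of slots where two sketches agree estimates the Jaccard
    similarity of the underlying k-mer sets."""
    return sum(a == b for a, b in zip(s1, s2)) / len(s1)
```

Comparing fixed-size sketches instead of full k-mer sets is what makes this approach attractive for privacy-aware cloud offloading and for fast sample-to-sample comparison.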
Book
1 online resource.
Biofilms are multicellular communities consisting of microorganisms enmeshed in an extracellular matrix of biopolymers. The matrix provides the community structure and cohesiveness and allows it to adhere to a variety of interfaces. Formation of a biofilm is advantageous to the microbial community: it provides protection from external assaults (desiccation, oxidizing agents, predation) and from host immune defenses, facilitates close cell-to-cell interactions for DNA exchange, and creates nutrient gradients that give rise to metabolic diversity within the community. These factors allow biofilms to persist in a variety of settings, ranging from large-scale industrial equipment to medical implants in the human host. In fact, many infections are now appreciated to be biofilm-related and are difficult to treat by traditional means such as antibiotics. To combat unwanted biofilms, a current strategy is to take a biophysical approach and interfere with the biofilm structure by disrupting the extracellular matrix. This strategy could revoke the survival advantages provided to the microorganisms by existence in the biofilm community. It also avoids the life-or-death pressure placed on microorganisms by traditional antibiotic treatment, which gives rise to drug-resistant mutations. However, to achieve this goal of targeting the extracellular matrix, we require an improved understanding of the underlying mechanical properties of the biofilm structure. In this work, we describe the use of modified rheological methods to quantify mechanical interactions relevant at all stages of the biofilm lifecycle, including initial microbial adhesion to interfaces, maturation of the biofilm structure, and microbial dispersal. A Live Cell Monolayer Rheometer (LCMR) was used to study adhesion of uropathogenic Escherichia coli to bladder epithelial cells, the initial step in bladder infection. Quantitative mechanical measurements defined the contributions of bacterially produced type 1 pili, curli, and cellulose to bladder cell adhesion, and revealed an important role for cellulose in mediating these interactions. This novel use of live cell rheology can be expanded to study a variety of other relevant host-pathogen interactions. In a separate study, interfacial shear rheology was used to study the maturation of biofilms formed at the air-liquid interface by Vibrio cholerae, the causative agent of cholera. It was discovered that, of several known extracellular matrix components in the V. cholerae biofilm, a specific matrix protein called Bap1 contributed significantly to maintenance of biofilm elasticity, biofilm hydrophobicity, and development of a mature biofilm structure. Finally, mechanical measurements relevant to biofilm dispersal were performed using a custom-built device to apply large deformations to Bacillus subtilis biofilms formed at the air-liquid interface. These measurements revealed that biofilms exhibit non-uniform deformation due to inhomogeneous mechanical properties within the structure and can have both viscoelastic and viscoplastic characteristics. Together, these studies produced new tools for the field of biofilm mechanics and provided quantitative measurements of mechanical interactions relevant to all stages of the biofilm lifecycle.
Book
1 online resource.
In this work, we introduce and apply several new techniques for oil/gas reservoir optimization under uncertainty. As the first contribution, we develop a general methodology for optimal closed-loop field development (CLFD) under geological uncertainty. CLFD involves three major steps: optimizing the field development plan based on current geological knowledge, drilling new wells and collecting hard (well) data and production data, and updating multiple geological models based on all of the available data. In the optimization step, the number, type, locations and controls for new wells (and future controls for existing wells) are optimized using a hybrid Particle Swarm Optimization–Mesh Adaptive Direct Search algorithm. The objective in the examples presented is to maximize expected (over multiple realizations) net present value (NPV) of the overall project. History matching is accomplished using an adjoint-gradient-based randomized maximum likelihood (RML) procedure. Different treatments are presented for history matching Gaussian and channelized models. Because the CLFD history matching component is fast relative to the optimization component, we generate a relatively large number of history matched models. Optimization is then performed using a representative subset of these realizations. We introduce a systematic optimization with sample validation (OSV) procedure, in which the number of realizations used for optimization is increased if a validation criterion is not satisfied. The CLFD methodology is applied to two- and three-dimensional example cases. Results show that the use of CLFD increases the NPV for the 'true' (synthetic) model by 10%–70% relative to that achieved by optimizing over a large number of prior realizations. The CLFD framework includes several components, and different approaches for history matching, optimization, model selection and economic evaluation can be applied. In our second contribution, we address the problem of selecting a subset of representative geological realizations from a large set. Towards this goal, we introduce a general framework, based on clustering, for selecting a representative subset of realizations for use in simulations involving 'new' sets of decision parameters. Prior to clustering, each realization is represented by a low-dimensional feature vector that contains a combination of permeability-based and flow-based quantities. Calculation of flow-based features requires the specification of a (base) flow problem and simulation over the full set of realizations. Permeability information is captured concisely through use of principal component analysis. By computing the difference between the flow response for the subset and the full set, we quantify the performance of various realization-selection methods. The impact of different weightings for flow and permeability information in the cluster-based selection procedure is assessed for a range of examples involving different types of decision parameters. These decision parameters are generated either randomly, in a manner that is consistent with the solutions proposed in global stochastic optimization procedures such as GA and PSO, or through perturbation around a base case, consistent with the solutions considered in pattern search optimization.
We find that flow-based clustering is preferable for problems involving new well settings (e.g., time-varying well bottom-hole pressures) or small changes in well configuration, while both permeability-based and flow-based clustering provide similar results for (new) random multiwell configurations. We also investigate the use of efficient tracer-type simulations for obtaining flow-based features, and demonstrate that this treatment performs nearly as well as full-physics simulations for the cases considered. The various procedures are applied to select realizations for use in production optimization under uncertainty, which greatly accelerates the optimization computations. Optimization performance is shown to be consistent with the realization-selection results for cases involving new decision parameters. In the third contribution, we introduce a methodology for the joint optimization of economic project life and well controls. We present a nested formulation for this joint optimization problem in which we maximize NPV, subject to the constraint that the rate of return of operations is greater than the minimum attractive rate of return (MARR), or hurdle rate. The methodology provides the optimal project life and the optimal well controls such that the maximum NPV is obtained at the end of the project life and the rate of return of the project is essentially equal to the MARR. Application of this procedure avoids situations where NPV increases slowly in time but the benefit relative to the capital employed is extremely low. We demonstrate the successful application of this treatment for production optimization on two- and three-dimensional reservoir models.
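The realization-selection step lends itself to a compact illustration. Below is a minimal Python sketch, assuming the per-realization feature vectors (PCA-compressed permeability plus flow-response quantities) have already been computed; it clusters them with k-means and keeps the realization nearest each centroid as the representative. This is an illustrative stand-in, not the dissertation's exact procedure or weighting scheme:

```python
import numpy as np
from sklearn.cluster import KMeans

def select_representatives(features, n_select, seed=0):
    """Cluster low-dimensional feature vectors (one row per geological
    realization) and return, for each cluster, the index of the
    realization closest to the cluster centroid."""
    km = KMeans(n_clusters=n_select, n_init=10, random_state=seed).fit(features)
    reps = [int(np.argmin(np.linalg.norm(features - c, axis=1)))
            for c in km.cluster_centers_]
    return sorted(set(reps))
```

Optimizing over the selected subset rather than all history-matched realizations is what yields the large speedups the abstract describes, provided the subset's aggregate flow response tracks that of the full set.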
Book
1 online resource.
This dissertation studies operational and policy-making problems that arise in agricultural supply chains. In the first two chapters, we explore the role of government interventions in poverty alleviation and in the promotion of sustainable production, respectively. In the third chapter, we investigate farmers' land allocation decisions in the face of a diversification option. Chapter 1 is motivated by policymakers' efforts to alleviate poverty and maintain food security by supporting poor farmers in developing countries. We investigate the effectiveness of three types of interventions, price support, cost support and yield enhancement efforts, as well as different policy implementation methods, such as announcing the total budget or the unit support, in terms of their impact on farmers' incomes, consumer surplus, and return on government spending. We show that price and cost support interventions are equivalent if the total budget is public information. On the other hand, if the government announces the unit support, the benefit to different stakeholders along the agricultural supply chain depends on the market distortion created by the intervention. Specifically, in this case, price support results in greater distortion, benefiting consumers more than cost support, whereas the converse holds for farmers. Furthermore, we find that under yield-enhancing efforts, farmers may incur losses due to the interplay of several market and crop characteristics. Lastly, we show that interventions cannot always generate a positive return from the government's perspective. Chapter 2 explores the role of policy instruments in promoting sustainable practices in agricultural production. We investigate the effectiveness of a number of policy instruments, namely taxes and subsidies, in terms of their impact on the adoption of sustainable practices, producers' profits, consumer surplus and return on government spending. Our findings indicate that while using only taxes encourages the adoption of sustainable production, social welfare decreases as a result. Utilizing only subsidies outperforms policies that involve both taxes and subsidies in achieving higher social welfare, but the converse is true in achieving a higher adoption rate. We show that zero-expenditure policies result in a decline in social welfare unless producers face financial constraints in making the costly transition to sustainable practices. Finally, we conduct a numerical study using data on conventional and organic egg production in Denmark and make policy recommendations for achieving the target adoption rate set by the Danish government. Chapter 3 investigates farmers' production decisions when facing different crop options. We examine the value of crop diversification for farmers and the impact of this flexibility on the supply chain. Our results indicate that farmers' land allocation decisions depend on the total farm space (capacity) available and that diversification may not be the equilibrium outcome if the capacity is low. We find that as the profitability gap between the alternative crops increases, monoculture outweighs diversification for higher capacity values. When diversification is the equilibrium outcome, yield variability of both crops adversely affects farmers and the supply chain. On the other hand, buyers are better off when the alternative crop has high yield variability.
In fact, buyers may benefit from an increase in the yield variability of the crop they buy if farmers are incentivized to limit the production of the alternative crop in order to extract the maximum revenue from the market.
Book
1 online resource.
The reentry blackout phenomenon affects most spacecraft entering a dense planetary atmosphere from space, due to a plasma layer that surrounds the spacecraft. This plasma layer is created by the ionization of ambient air due to shock and frictional heating generated by the moving reentry vehicle and, in some cases, is further enhanced by contamination from ablation products. The highly mobile electrons in the plasma cause a strong attenuation of incoming and outgoing electromagnetic waves, including those used for command and control, communication, and telemetry, over a period referred to as the "blackout period". The blackout period may last up to several minutes and, at reentry speeds that may be on the order of 10 km/s, poses a serious safety hazard for the payload on board the spacecraft, especially for human spaceflight. In this work, we present a method for alleviating reentry blackout using electric fields applied in a pulsed fashion. We study the reentry plasma's interaction with electronegative voltage pulses using computer simulations that incorporate models of the plasma's response to the applied electric field and interactions between the plasma sheath and the spacecraft surface. The simulations show how one can create pockets of depleted electron density in the reentry plasma sheath that may be used as "communication windows", thereby circumventing reentry blackout. Several parametric sweeps are also performed in order to design a blackout alleviation system. Finally, we present a discussion of experimental efforts to verify the simulation results and conclude with a conceptual design for a reentry communications blackout alleviation system based on the exclusive use of electric fields.
Collection
Undergraduate Theses, Department of Biology, 2016-2017
Illnesses that elicit an innate maternal immune activation (MIA) during pregnancy can expose the fetal brain to proinflammatory cues that diffuse across the placenta. Fetal exposure to MIA has been shown to disrupt normal neurodevelopment and is linked to the progression of a variety of neurodevelopmental disorders, including autism, schizophrenia, cerebral palsy, and mental retardation. However, the diverse range of MIA-induced neurological abnormalities that underlies its role in altering neurodevelopment has yet to be identified. Some studies have suggested that MIA can induce abnormalities in the abundance and distribution of neuronal subtypes in the cortex, while others demonstrate systematic brain atrophy across many brain regions. These cellular changes could contribute to overall circuit dysfunction, perhaps contributing to the pathogenesis of disease. My interest is in the specific relationship between MIA and schizophrenia. MIA-induced changes in neuronal subtypes have been documented in the cerebral cortex, but there is no information available on the effects of MIA on neuronal subtypes within the striatum, an additional region thought to be critical in the etiology of schizophrenia. This study specifically investigates MIA’s effect on the abundance of a particular type of inhibitory interneuron implicated in schizophrenia, the parvalbumin-positive interneuron (PVI), which plays an important role in modulating striatal output by delivering inhibition to striatal medium spiny projection neurons (MSNs). As the selective activation of MSNs is thought to be critical to behavior selection and cognition, deficits in striatal PVIs could contribute to abnormal cognitive processing loops initiated in the striatum, perhaps contributing to dysregulation of behaviors implicated in schizophrenia. My study indicates that MIA does indeed result in a reduction in the density of PVIs in the striatum, perhaps analogous to the lowered PVI density observed in the cortex of schizophrenics. This suggests a role for MIA in striatal patterning that could contribute to dysregulation or pathology by altering the cellular composition of schizophrenia-related neural networks.
Book
1 online resource.
A very large number of practical optimization problems can be expressed as the minimization of a convex objective function over a nonconvex set, specifically discrete sets such as the set of integer points. These problems arise in a variety of fields such as statistics and modeling, data analysis, and control. Two general approaches have been used for solving mixed-integer optimization problems. One approach is to solve the problem globally using methods such as branch-and-bound, which suffer from exponential worst-case time complexity. Another approach is to use heuristics, such as semidefinite relaxation hierarchies, which terminate in polynomial time but do not guarantee finding the global solution. In this dissertation, we discuss a generic system of heuristic solutions for such problems. We describe an implementation of these methods in a package called NCVX, an extension of CVXPY, a Python package for formulating and solving convex optimization problems. Our heuristics, which employ convex relaxations, convex restrictions, local neighbor search methods, and the alternating direction method of multipliers (ADMM), require the solution of a modest number of convex problems and are meant to apply to general problems without much tuning. Several numerical examples, such as regressor selection, circle packing, the traveling salesman problem, and factor analysis modeling, are studied. We also describe a fast optimization algorithm for approximately minimizing convex quadratic functions over the intersection of affine and separable constraints. We discuss the favorable computational aspects of our algorithm, which allow it to run quickly even on very modest computational platforms such as embedded processors. We study numerical examples including hybrid vehicle control and power converter control, and we compare our methods with existing open-source and commercial alternatives. Finally, we discuss how randomized linear programming heuristics can be used to solve graph isomorphism problems. We motivate our heuristics by showing guarantees under some conditions, and present numerical experiments that show the effectiveness of these heuristics in the general case.
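The relax-and-polish pattern behind several such heuristics can be shown concretely in CVXPY. The sketch below applies it to regressor selection (least squares with at most k nonzero coefficients): solve an l1-constrained convex relaxation, keep the k largest-magnitude entries, then re-fit on that support. The l1 budget and problem data are arbitrary illustrative choices, and this is not the NCVX package's API, just the underlying idea:

```python
import cvxpy as cp
import numpy as np

# Regressor selection: minimize ||A x - b||_2 with at most k nonzeros in x.
rng = np.random.default_rng(1)
m, n, k = 50, 20, 5
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)

# Step 1: convex relaxation -- replace the cardinality constraint
# with an l1-ball constraint (budget chosen heuristically).
x = cp.Variable(n)
relax = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)),
                   [cp.norm1(x) <= np.sqrt(k)])
relax.solve()

# Step 2: "round" by keeping the k largest-magnitude entries,
# then polish by re-fitting restricted to that support.
support = np.argsort(-np.abs(x.value))[:k]
x_polish = cp.Variable(n)
polish = cp.Problem(cp.Minimize(cp.sum_squares(A @ x_polish - b)),
                    [x_polish[np.delete(np.arange(n), support)] == 0])
polish.solve()
print("heuristic objective:", polish.value)
```

Each step is a modest convex solve, which is the defining property of this family of heuristics.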
Collection
Undergraduate Theses, Department of Biology, 2016-2017
Anthropogenic climate change is causing coral reefs around the world to rapidly deteriorate and disappear, in large part due to thermal stress from increased ocean temperatures. Recent evidence suggests that certain coral species may be better at adapting and acclimatizing to thermal changes than others. Corals form mutualistic relationships with diverse clades of dinoflagellate algal symbionts in the genus Symbiodinium. Symbiotic plasticity within this relationship allows the corals to adapt to changes in their environment; there have been many well-documented cases of symbiont clade switching within coral colonies in response to thermal stress. The majority of coral reef research, especially as it pertains to symbiont switching, is short-term and therefore does not provide much insight into the large-scale global changes that are likely to occur in the wake of climate change. By evaluating symbiont switching in coral colonies around the island of Ofu in American Samoa, this study aimed to (i) determine the frequency with which symbiont clades shift within coral colonies, (ii) better understand the influence of climatic factors in promoting the switching of symbiont clades, specifically with respect to mass bleaching events, and (iii) provide some of the first temporally significant evidence for symbiont switching in situ. Using DNA extraction and amplification techniques, samples taken from the same coral colonies over the span of several years were assessed for hosted symbiont type and analyzed with respect to species, location, and season; genetically identical replicates were also transplanted to different locations. It was found that corals exposed to greater fluctuations in daily temperature are more likely to host heat-resistant symbionts. Additionally, some coral colonies appear to be able to switch their symbiont composition over short time periods (~2 months), effectively undergoing rapid adaptation to environmental changes. These findings align with a growing consensus that two simultaneous factors, heat-resistant symbionts and the ability to rapidly acclimatize to changes in the environment, are necessary for corals to resist heat stress. Understanding the mechanisms that confer differences in resistance responses between corals to environmental stress may allow us to better protect coral reef ecosystems as climate change intensifies.
Book
1 online resource.
Cellular state is an old concept. However, scientists have only recently begun the systematic manipulation of cells to characterize and understand the functions of myriad states. As biotechnology advances enable innovative and large-scale measurements of cellular components, new biostatistical tools are required to make sense of the increased data size and complexity, which in turn augment our knowledge of cellular states. In this dissertation, I discuss my contributions to the study of cellular states from theoretical and computational angles: 1) modeling and inference of regulatory gene networks with systems of nonlinear deterministic and stochastic differential equations; 2) partition-assisted clustering analysis of high-dimensional single-cell mass cytometry data; and 3) the alignment of subpopulations of cells across cytometry samples by similarity in the associated network structures. These contributions cement a platform that furthers the discussion of cellular states by framing it in both mechanistic and quantitative terms. This platform adds layers of biostatistical knowledge to the biosciences and enhances the discovery of cellular state properties.
Book
1 online resource.
Recent mobile vision applications demand energy-efficient real-time object detection. Specialized hardware design is needed to push the limits of both performance and energy efficiency. While such hardware has been demonstrated for backend detection, current imager frontends consume a significant fraction of total system energy. Therefore, additional system-level energy savings may be achieved by reducing the energy requirements of frontend image capture. At the same time, it is crucial that the energy-saving techniques used do not significantly degrade object-detection performance. This dissertation studies the effects of frontend imager parameters on object detection performance and energy consumption. A simulation framework, including a large-scale RAW image database for object detection, is developed, and simulation results quantifying the tradeoff between pixel bit depth and HOG-based object detection performance are presented. A custom version of HOG features based on 2-bit pixel ratios is introduced and shown to achieve superior object detection performance for the same estimated energy compared to conventional HOG features. A frontend hardware implementation capable of extracting these features at multiple scales is proposed, and a system-level energy analysis is performed. This energy analysis suggests a potential 19X reduction in I/O energy and 3.3X reduction in backend detection energy compared to conventional object detection pipelines.
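The abstract does not define the 2-bit pixel-ratio features precisely, so the following Python sketch only illustrates the general idea under assumed details: the ratio of each pixel to its horizontal neighbor is quantized to four levels, giving a crude, low-precision gradient-like signal of the kind a HOG-style pipeline could consume. The threshold values are arbitrary placeholders:

```python
import numpy as np

def two_bit_ratio_features(img, thresholds=(0.5, 1.0, 2.0)):
    """Quantize the ratio between horizontally adjacent pixels into four
    levels (2 bits per comparison), a coarse stand-in for gradient
    information that demands far less precision from the imager readout."""
    left = img[:, :-1].astype(float) + 1e-6     # avoid division by zero
    right = img[:, 1:].astype(float) + 1e-6
    return np.digitize(right / left, thresholds).astype(np.uint8)  # values 0..3
```

Because ratios are scale-invariant, features of this kind are also less sensitive to global illumination than raw intensities, which helps when bit depth is cut aggressively.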
Book
1 online resource (42 pages) : color illustrations. Digital: text file.
Collection
Government Information United States Federal Collection
Book
1 online resource.
This thesis studies the class of NP-hard mixed-integer optimization problems with quadratic objective function and constraints that are not necessarily convex. We propose the Suggest-and-Improve framework, which encapsulates and generalizes a number of known techniques and provides heuristics for computing approximate solutions to quadratically constrained quadratic programs for which no specialized methods are available. Using numerical examples, we show that the Suggest-and-Improve framework can yield strong upper and lower bounds on the optimal value. We also prove a bound on the semidefinite relaxation applied to the NP-hard problem of minimizing a convex quadratic function over the integer lattice. Finally, we discuss a family of concave quadratic inequalities that can be added to general mixed-integer quadratic programs. This technique allows the Suggest-and-Improve framework to be applied to any mixed-integer quadratically constrained quadratic program.
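As a concrete, if toy, instance of the Suggest-and-Improve pattern, the Python sketch below tackles a Boolean quadratic program (a nonconvex QCQP via the constraints x_i^2 = 1): the suggest step draws a random feasible sign vector, and the improve step greedily flips coordinates until it reaches a 1-flip local minimum. This is an illustrative reading of the framework, not code from the thesis:

```python
import numpy as np

def objective(P, q, x):
    return x @ P @ x + q @ x

def suggest(n, rng):
    """Suggest: any feasible point of the nonconvex QCQP
    minimize x'Px + q'x subject to x_i**2 == 1 (x in {-1, 1}^n)."""
    return rng.choice([-1.0, 1.0], size=n)

def improve(P, q, x):
    """Improve: greedily flip single coordinates while doing so lowers
    the objective; terminates at a 1-flip local minimum."""
    best = objective(P, q, x)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            x[i] = -x[i]                 # trial flip
            val = objective(P, q, x)
            if val < best - 1e-12:
                best, improved = val, True
            else:
                x[i] = -x[i]             # revert the flip
    return x

rng = np.random.default_rng(0)
n = 30
P = rng.standard_normal((n, n)); P = (P + P.T) / 2
q = rng.standard_normal(n)
candidates = [improve(P, q, suggest(n, rng)) for _ in range(20)]
best_x = min(candidates, key=lambda x: objective(P, q, x))
print("best objective found:", objective(P, q, best_x))
```

Restarting from many suggested points and keeping the best improved solution gives an upper bound on the optimal value; lower bounds come from relaxations such as the semidefinite one discussed in the abstract.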
Book
1 online resource.
This dissertation consists of three pieces related by the use of recursion as a rhythm generator, the idea of assemblage as a piece made out of different simultaneous pieces, and an intuitive investigation of flexible structures.
Book
1 online resource.
Proteins must achieve their native conformations in order to function and avoid aberrant interactions within the cell. The folded state forms rapidly for proteins with simple topologies. However, the folding of many large proteins with complex folds is assisted by a diverse array of molecular chaperones. The chaperonins are a unique class of essential protein chaperones found in all domains of life. These complexes are composed of two 7- to 9-membered rings that undergo dramatic conformational changes upon ATP binding and hydrolysis. Two classes of chaperonins exist, termed group I and group II. Group I chaperonins exist in bacteria and endosymbiotic organelles, while group II chaperonins are found in all eukaryotes and archaea. Both families promote the folding of substrates in an ATP-dependent manner by encapsulating them within discrete central chambers. This thesis focuses on detailing the mechanism of a model group II chaperonin from the archaeon Methanococcus maripaludis. Work was performed to define the native folding substrates of the complex as well as to detail the cooperative mechanism that controls all group II chaperonin cycling. A key allosteric interface was identified using a mathematical approach that predicts functionally important residues based on patterns of covariation found in multiple sequence alignments of a protein. Biochemical dissection of mutations at this interface reveals that the chaperonins have evolved to be less cooperative than attainable. Early evidence is presented suggesting that the N- and C-terminal tails of the chaperonin likely serve as coordinators of nucleotide cycling.
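The covariation analysis mentioned here is not specified in the abstract; a common baseline is column-pair mutual information over a multiple sequence alignment, sketched below in Python. Real methods (e.g., statistical coupling analysis or direct coupling analysis) add corrections this toy version omits:

```python
import numpy as np
from collections import Counter

def column_mutual_information(msa, i, j):
    """Mutual information between columns i and j of a multiple sequence
    alignment (a list of equal-length strings); high-MI column pairs are
    candidates for functionally coupled residue positions."""
    n = len(msa)
    col_i = [s[i] for s in msa]
    col_j = [s[j] for s in msa]
    p_i, p_j = Counter(col_i), Counter(col_j)
    p_ij = Counter(zip(col_i, col_j))
    # MI = sum over residue pairs of p(a,b) * log2( p(a,b) / (p(a) p(b)) )
    return sum((c / n) * np.log2((c / n) / ((p_i[a] / n) * (p_j[b] / n)))
               for (a, b), c in p_ij.items())
```

Ranking all column pairs by such a score highlights positions that co-vary across evolution, which is the signal used to flag allosteric interfaces like the one dissected in this thesis.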
Book
1 online resource.
Complex three-dimensional chip architectures are gaining more widespread usage as computing devices continue to shrink. These complex architectures are necessary in order to continue scaling down the size of each individual transistor and to make transistors more efficient. In manufacturing such complex architectures, traditional material deposition methods such as physical or chemical vapor deposition are no longer as effective, because they are somewhat directional. A method that is gaining widespread use is atomic layer deposition (ALD). ALD offers multiple advantages, such as exquisite thickness control of deposited films, three-dimensional film conformality, and control over composition. However, certain useful films grown via ALD have an initial growth nucleation stage that is not well understood. This limitation prevents us from growing the thinnest possible pinhole-free films. Understanding the nucleation stage would allow us to continue scaling our devices down further and even perform selective-area deposition. To shed some light on the nucleation stage, in this dissertation we have utilized the unique capability of a custom-built combined scanning tunneling microscope (STM) and ALD system to topographically observe the nucleation stage of ALD in situ. We used existing wet etch techniques to create atomically flat hydrogen-terminated silicon, and created a method for using remote plasma to create atomically flat suboxide-terminated silicon, both for use as suitable flat substrates for STM observation of ALD nucleation. With these atomically flat substrates available to us, we observed, using a variety of characterization techniques, the nucleation stage of two ALD systems: ALD ZnO and ALD Ru. Though we selected two specific systems to study, this technique could be applied to other chemistries of interest. We further demonstrate the ability of this STM-ALD tool to perform both bottom-up and top-down lithography: by activating sites for ALD, by decomposing ALD precursors at specific locations, or by etching away deposited layers using the STM tip.
Book
1 online resource.
Understanding the kinetics of shock-compressed SiO2 is of great importance for mitigating optical damage in high-intensity lasers and for understanding meteoroid impacts. Experimental work has placed some thermodynamic bounds on the formation of high-pressure phases of this material, but the formation kinetics and underlying microscopic mechanisms are yet to be elucidated. In this study, by employing multi-scale molecular dynamics simulations of shock-compressed fused silica and quartz, we find that silica transforms into a poor glass former that subsequently exhibits ultrafast crystallization within a few nanoseconds. We also find that, as a result of the formation of such an intermediate disordered phase, the transition between silica polymorphs obeys a homogeneous reconstructive nucleation and grain growth model. We construct a quantitative model of nucleation and grain growth, and compare its predictions with the high-pressure silica crystal grain sizes observed in laser-induced damage and meteoroid impact events. Moreover, we have studied quantum nuclear effects in high-pressure silica crystallization. While quantum nuclear effects play important roles in shock-induced chemical reactions and phase transitions, they are absent in classical atomistic shock simulations. To address this shortcoming, we couple the shock simulation with a colored-noise Langevin thermostat. We find that this semiclassical approach gives shock temperatures as much as 7% higher than classical simulations near the onset of crystallization in silica. We have also studied the impact of this approach on the kinetics of crystallization and on the position of the high-pressure silica melt line.
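The quantitative nucleation-and-grain-growth model itself is not given in the abstract, but the classical Kolmogorov-Johnson-Mehl-Avrami (KJMA) form for homogeneous nucleation at a constant rate with a constant 3-D growth speed captures the basic kinetics such models build on. A minimal Python sketch, with illustrative parameter values:

```python
import numpy as np

def kjma_crystallized_fraction(t, nuc_rate, growth_rate):
    """Crystallized volume fraction under homogeneous nucleation at
    constant rate I and isotropic 3-D growth at constant speed G
    (KJMA kinetics): X(t) = 1 - exp(-(pi/3) * I * G**3 * t**4)."""
    return 1.0 - np.exp(-(np.pi / 3.0) * nuc_rate * growth_rate**3 * t**4)

# Example: fraction transformed over a nanosecond-scale window
# (rates are arbitrary demo values, not fitted to the simulations).
t = np.linspace(0.0, 5.0, 6)
print(kjma_crystallized_fraction(t, nuc_rate=0.01, growth_rate=1.0))
```

Fitting the nucleation and growth rates of such a model to simulation data is what allows grain-size predictions to be compared against laser-damage and meteoroid-impact observations.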