Search results

133,425 results

Book
1 online resource.
We study two new cryptographic primitives inspired by recent advances in multilinear maps: private constrained pseudorandom functions (PRFs) and order-revealing encryption (ORE). We show how these primitives have direct applications in searchable symmetric encryption, watermarking, deniable encryption, private information retrieval, and more. To construct private constrained PRFs, we first demonstrate that our strongest notions of privacy and functionality can be achieved using indistinguishability obfuscation. Then, for our main constructions, we build private constrained PRFs for bit-fixing constraints and for puncturing constraints from concrete algebraic assumptions over multilinear maps. We also construct the first implementable ORE scheme that provides what is known as "best-possible" semantic security. In our scheme, there is a public algorithm that given two ciphertexts as input, reveals the order of the corresponding plaintexts and nothing else. Our constructions are inspired by obfuscation techniques, but do not use obfuscation. Finally, we also show how to build efficiently implementable ORE from PRFs, achieving a simulation-based security notion with respect to a leakage function that precisely quantifies what is leaked by the scheme.
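PRF-based ORE schemes of this kind typically work bit by bit: each plaintext bit is masked by a PRF evaluated on the preceding bit prefix, and the public comparison algorithm locates the first differing position. The sketch below is a minimal toy illustration of that idea, not the dissertation's actual construction; the use of HMAC-SHA256 as the PRF, the 8-bit plaintext space, and all names are illustrative assumptions.

```python
# Toy prefix-based ORE sketch: each bit is masked mod 3 by a PRF of its
# prefix, so comparison leaks only the first differing bit position.
import hmac, hashlib

N_BITS = 8  # illustrative 8-bit plaintext space

def prf(key: bytes, data: bytes) -> int:
    # HMAC-SHA256 stands in for the PRF; output reduced mod 3
    return hmac.new(key, data, hashlib.sha256).digest()[0] % 3

def encrypt(key: bytes, m: int) -> list:
    bits = [(m >> (N_BITS - 1 - i)) & 1 for i in range(N_BITS)]
    ct = []
    for i in range(N_BITS):
        prefix = bytes([i] + bits[:i])          # position plus bit prefix
        ct.append((prf(key, prefix) + bits[i]) % 3)
    return ct

def compare(ct1: list, ct2: list) -> int:
    """Public comparison: -1 if m1 < m2, 0 if equal, 1 if m1 > m2."""
    for u, v in zip(ct1, ct2):
        if u != v:
            # at the first differing bit the PRF masks agree, so the
            # mod-3 offset reveals which plaintext bit was the 1
            return -1 if v == (u + 1) % 3 else 1
    return 0

key = b"example-secret-key"
assert compare(encrypt(key, 13), encrypt(key, 200)) == -1
assert compare(encrypt(key, 42), encrypt(key, 42)) == 0
```

The leakage here is exactly the index of the most significant differing bit, which is the kind of precisely quantified leakage function the abstract refers to.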
Book
1 online resource.
Since the 1986 discovery of LaBaCuO by Bednorz and Müller, the field of high-temperature (high-Tc) superconductivity has remained the biggest intellectual challenge in condensed matter physics, with theoretical (and sometimes experimental!) consensus elusive for almost thirty years. The underlying problem of understanding how large numbers of interacting particles can form various ordered states is tremendously daunting. This thesis begins with a historical overview of superconductivity, beginning with experimental discoveries, followed by an introduction to some theoretical concepts. In particular we discuss some details about electron-phonon coupling. As the systems being studied become more complex, the tools needed to fabricate and study them must also increase in complexity. Pushing forward new experimental discoveries in high-Tc therefore requires advancing both materials growth and characterization. The characterization techniques we introduce in this thesis are angle resolved photoemission spectroscopy (ARPES) and resonant inelastic x-ray scattering (RIXS). We will discuss ARPES in detail, but present only a small work on RIXS in a later chapter, as a detailed treatment of the latter is beyond the scope of this thesis. Next we introduce the fabrication technique of molecular beam epitaxy (MBE), and discuss the design and implementation of an experimental chamber capable of performing in-situ ARPES studies of films grown via MBE. The capabilities of this chamber are demonstrated through growth and measurement of films from two classes of materials: topological insulators and iron-based superconductors. In topological insulators, we show the capability of using a thermal cracking chalcogenide source to grow intrinsically doped films, which may be useful for future studies and devices. 
Recently, it had been discovered that single-unit-cell-thick (1UC) iron selenide (FeSe) films grown on strontium titanate (STO) demonstrate a large increase in superconducting transition temperature compared to bulk iron selenide. We use our MBE-ARPES chamber to grow and study FeSe films of varying thicknesses down to 1UC. We then discuss spectroscopic signatures of cross-interfacial coupling between electrons in the 1UC iron selenide and the STO substrate. This electron-phonon coupling is unprecedented and until recently had not been resolved with such clarity in any other solid-state system. It is furthermore unusual in that it can enhance superconductivity in many different channels. We calculate the enhancement of Tc in 1UC FeSe/STO due to this coupling and find good agreement with experimental results, and suggest that such coupling can be broadly used to enhance Tc in other films. The thesis concludes with some future prospects and directions for study. We make the case that MBE-ARPES, and more generally the improved fabrication and characterization allowed by it, will be important to definitively elucidate the interactions between particles that make up novel phases in the solid state. Lastly, in the appendix we discuss some properties of a new six-axis in-vacuum manipulator, as well as cover some theoretical details of the FeSe and RIXS experiments.
Book
1 online resource.
The diversity in the kinds of vehicles that are appearing in the commercial space transportation sector raises questions regarding the applicability of the licensing procedures and methodologies that are in place to protect public safety. These licensing procedures are designed to limit risks to public safety in case of a space vehicle explosion. Concerns arise because the methods currently used are derived from expendable launch vehicles (ELVs) developed during the Space Shuttle era, and thus they might not be fully applicable to future vehicles, which include new types of ELVs, suborbital vehicles, reusable launch vehicles (RLVs), and a number of hybrid configurations. This dissertation presents a safety analysis tool, called the Range Safety Assessment Tool (RSAT), that quantifies the risks to people on the ground due to a space vehicle explosion or breakup. This type of problem is characterized by the complexity and uncertainty in the physical modeling. RSAT has been used to analyze both launch and reentry scenarios and can be applied to many possible vehicle configurations. The Space Shuttle Columbia accident was modeled with RSAT, and the results were compared with simulations performed by the Columbia Accident Investigation Board (CAIB). A methodology to perform sensitivity and optimization studies is also presented. This methodology leverages previous work done in active subspaces and Gaussian process regression to generate surrogate models. The proposed sensitivity and optimization methodologies were used to analyze a commercial ELV. The results show that the methodology can handle a large number of stochastic inputs and identify opportunities to decrease risk.
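The surrogate-modeling step mentioned above can be illustrated generically: a Gaussian process is fit to a handful of expensive simulator runs, and the cheap surrogate is then evaluated many times for sensitivity and optimization studies. The sketch below shows only the GP posterior mean with an RBF kernel; the toy target function, kernel length scale, and all numbers are invented for illustration and are not RSAT's.

```python
# Minimal GP-regression surrogate: fit to a few "simulator" evaluations,
# then predict cheaply at many new inputs.
import numpy as np

def rbf(a, b, length=0.2):
    # squared-exponential kernel between two 1-D point sets
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

x_train = np.linspace(0.0, 1.0, 12)
y_train = np.sin(2 * np.pi * x_train)   # stand-in for expensive risk simulations

jitter = 1e-6                            # numerical regularization
K = rbf(x_train, x_train) + jitter * np.eye(len(x_train))

x_test = np.linspace(0.0, 1.0, 50)
k_star = rbf(x_test, x_train)
y_pred = k_star @ np.linalg.solve(K, y_train)   # GP posterior mean

# the surrogate tracks the true function closely between training points
err = np.max(np.abs(y_pred - np.sin(2 * np.pi * x_test)))
```

In a real study the training inputs would first be projected onto an active subspace of the stochastic inputs, so that the surrogate is built over the few directions that dominate the risk output.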
Book
1 online resource.
The frontier challenges that must be solved before brain-machine interfaces (BMIs) can be used as clinically useful motor prostheses differ depending on the degree of function being restored. Two-dimensional cursor control (i.e., for communication) has recently reached high levels of peak performance in pre-clinical studies, but translation is hampered by less than reliable performance due to unstable neural signals. Meanwhile, control of robotic arms remains poor, despite some impressive glimpses at what the future could be, because we lack fundamental understanding of how the brain incorporates the BMI into its motor schema. This hampers our ability to accurately decode intended arm movements. My dissertation focused on both sets of problems in pre-clinical macaque BMI studies. Chapters 2 and 3 provide solutions for improving BMI robustness. I first describe a machine learning approach to building decoder algorithms that are robust to the changing neural-to-kinematic mappings that plague translational BMI efforts. We developed a multiplicative recurrent neural network decoder that could exploit the large quantities of data generated by a chronic BMI — data that has heretofore gone unused. I then describe a neural engineering approach for increasing the device lifespan by providing high performance control even after losing spike signals. I developed a method for decoding local field potentials (LFPs) as a longer-lasting alternative or complementary BMI control signal. This led to the highest-performing LFP-driven BMI and the first 'hybrid' BMI which decoded kinematics from spikes and LFPs together. Chapter 4 looks ahead to challenges that will be encountered when BMI-controlled limbs operate in the physical world by describing how error signals impact ongoing BMI control. I perturbed the kinematics of monkeys performing a BMI cursor task and found that visual feedback drove responses starting 70 ms after the perturbation in the same motor cortical population driving the BMI. 
However, this initial response did not cause unwanted BMI output because it was limited to a decoder null space in which activity does not affect the BMI. When activity changed in output-potent dimensions starting 115 ms after perturbation, it caused corrective BMI movement. This elegant arrangement may hint at a broader computational strategy by which error processing is separated from output.
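The null-space idea here is linear-algebraic: for a linear decoder represented by a matrix D, any change in neural activity v with Dv = 0 is invisible to the BMI, while changes along the row space of D ("output-potent" directions) move the cursor. A toy sketch with an invented 2x4 decoder matrix (the values and dimensions are illustrative, not the study's actual decoder):

```python
# Null vs output-potent directions of a hypothetical linear BMI decoder.
import numpy as np

# Invented decoder: reads 2-D cursor velocity from four neurons.
D = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])

# The SVD's trailing right singular vectors span the decoder null space.
_, _, Vt = np.linalg.svd(D)
null_basis = Vt[2:]            # rank(D) == 2, so 2 null dimensions remain

v_null = null_basis[0]         # activity change invisible to the BMI
v_potent = Vt[0]               # activity change that drives BMI output

print(np.allclose(D @ v_null, 0))      # True: no cursor movement
print(np.allclose(D @ v_potent, 0))    # False: moves the cursor
```

In these terms, the early visual-feedback response lived in the span of `null_basis`, while the later corrective response (115 ms on) had components along the potent directions.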
Book
1 online resource.
There is a strong consensus that the climate is changing, that human activities are the dominant cause of this change, and that continued climate change will have negative impacts on human societies. To analyze energy and climate policy remedies, researchers have developed a diverse collection of integrated assessment models (IAMs) that represent the linked energy, economic, and earth systems in an interdisciplinary framework. Some IAMs are cost-benefit models designed to compute optimal policy interventions, while others are cost-effectiveness models used to determine the technology pathways that enable an emissions or climate goal to be achieved at least cost. Although IAM representations of technological change are critical determinants of model outcomes, underlying processes are poorly understood and models typically feature fairly crude formulations. The goal of the three projects that constitute this dissertation is to develop more advanced representations of technological change that capture a wider range of endogenous drivers. Scenario analyses based on these representations reveal their implications for energy and climate policy, as well as technology transitions this century. Chapter 2 describes the development of a system of technology diffusion constraints that endogenously respects empirically observed spatial diffusion patterns. Technologies diffuse from an advanced core to less technologically adept regions, with adoption experiences in the former determining adoption possibilities in the latter. Endogenous diffusion constraints are incorporated into the MESSAGE framework and results suggest that IAMs based on standard exogenous diffusion formulations are overly optimistic about technology leapfrogging potential in developing countries. 
Findings also demonstrate that policies which stimulate initial deployment of low-carbon technologies in advanced economies can be justified from a global common goods perspective even if they fail the cost-benefit test domestically. In Chapter 3, learning-by-doing is formulated as a firm-level rather than an industry-level phenomenon. Wind and solar PV manufacturers strategically choose output levels in an oligopoly game with learning and inter-firm spillovers. This game-theoretic representation of renewable technology markets is coupled to MESSAGE so that the energy system planner can only invest in wind and solar PV capacity at the equilibrium prices the market would charge for the desired quantities. Findings illustrate that the most ambitious emissions reduction pathways include widespread solar PV diffusion, which only occurs if competitive markets and spillovers combine to reduce prices sufficiently. The relationship between price and cumulative capacity is similar to that between unit cost and cumulative capacity under competitive markets, but a combination of market power, strong climate policy, and weak spillovers can cause prices to rise with cumulative capacity even though unit costs decline. The bilevel modeling framework of Chapter 4 is built to determine the optimal combination of technology-push and demand-pull subsidies for a given technology policy application. Firms (inner agents) solve a two-stage stochastic profit maximization problem in which they choose process and product R&D investments in the first stage, then choose output levels in the second stage. The policymaker (outer agent) seeks to identify the combination of policies that induces the firms to reach an equilibrium with the highest possible expected welfare. Numerical simulation results show that technology policy can enhance welfare under a wide range of parameter settings. Spillovers reduce product R&D expenditures but generally improve welfare by making R&D more effective. 
Welfare decreases with competition in the no-policy case, but increases with competition if optimal technology policies can be imposed. Each of the three projects focuses on a distinct aspect of technological change, but the formulations developed for these studies reflect several important themes: endogenous mechanisms, multiple decision-making agents, game-theoretic interactions, market power, spillovers, regional heterogeneity, and uncertainty. While the research presented in this dissertation advances the modeling of technological change, a number of formidable challenges remain. The final chapter discusses some of these challenges and ideas for future research to address them.
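The unit-cost side of learning-by-doing is conventionally modeled with a Wright's-law experience curve: unit cost falls by a fixed fraction (the learning rate) with each doubling of cumulative installed capacity. A minimal sketch with illustrative numbers, not MESSAGE's or the dissertation's calibration:

```python
# Wright's-law experience curve: cost = c0 * (K / K0) ** (-b).
# c0, k0, and b below are invented for illustration; b = 0.32 corresponds
# to roughly a 20% learning rate.
def unit_cost(cum_capacity, c0=1000.0, k0=1.0, b=0.32):
    """Unit cost ($/kW, say) after cum_capacity (GW, say) deployed."""
    return c0 * (cum_capacity / k0) ** (-b)

learning_rate = 1 - 2 ** (-0.32)        # fractional cost drop per doubling

cost_1 = unit_cost(1.0)
cost_2 = unit_cost(2.0)
# each doubling of cumulative capacity cuts unit cost by the learning rate
print(round(1 - cost_2 / cost_1, 3))
```

Chapter 3's point is that under market power the *price* path can decouple from this cost path: even with `unit_cost` falling along the curve, equilibrium prices can rise with cumulative capacity.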
Book
1 online resource.
"'Agents Wanted': Sales, Gender, and the Making of Consumer Markets in America, 1830-1930" is a history of capitalism and a gender history that explores both a business model and women's conflicted engagement with it. The agency method of distributing consumer goods became widespread during the nineteenth century. With a gathering force in the antebellum decades and real abandon after the Civil War, entrepreneurs recruited individuals into agency networks and assigned them territories in which to cultivate demand for new kinds of mass-produced consumer goods—lavishly illustrated books, family magazines, engravings, patent medicines, and more. Agents not only persuaded people to buy but as independent contractors they also shouldered risks and carried out quotidian economic practices that enabled businesses to function. This dissertation examines three sites where agency distribution was particularly visible—the periodical, subscription book, and patent medicine industries. The agency economy recruited diverse participants into the work of selling. It offered possibilities not only to men struggling to make their way in a changing economy but also to women. In the gender-segmented and highly unequal nineteenth-century labor market, it provided a rare venue that valued the labor of men and women equally. It kindled hopes for economic independence and offered a tool for salvaging a productive home-based family economy. While most agents were men, women's minority perspective illuminates how the system functioned and how appeals to older cultural values both facilitated new economic developments and came under pressure. Women found bridges into agency work in cultural practices of hospitality, patronage, and charity to widows and via fraternal networks. They experienced obstacles as well, including negative class and moral associations. 
After 1870, their selling coincided with increased agitation for woman suffrage and temperance and a movement towards a freer and more commodified sexuality. These historical conjunctions of politics, sexuality, and economics informed women's interaction with selling and entrepreneurs' efforts to attract sales workers. The agency model changed over time. In the periodical industry, distinct distribution channels developed, including a system of clubbing that rewarded women's sales labor with consumer goods. After the Civil War, entrepreneurs, including E.C. Allen, elaborated agency, using advertising, merchandise premiums, and inexpensive second-class postal rates to recruit masses of agents and transform their names into commodities. With agents' help, periodical publishers built nationwide readerships, platforms that other entrepreneurs used to fulfill distribution dreams. A case study of the Viavi Company shows a patent medicine concern and its female sales workers shaping agency into direct selling—the purview of companies like Avon Products, Inc.—and in the process forwarding a commercial maternalism. Cultural representations of sellers played a role in agency transformations. Stereotypes of male and female book agents informed women's approach to selling while working to limn sales as a male pathway to business success. The comedic trope of the female drummer, or commercial traveler, evolved in ways that helped to alleviate concerns about women's ability to balance work and domestic life. In revisiting this nearly forgotten business meaning of the word "agency," this project makes gender central to the new history of capitalism and illuminates the importance of the small-scale actions of sometimes unlikely economic actors.
Book
1 online resource.
Bubble generation and air entrainment on ocean surfaces and behind ships are complex phenomena which usually accompany turbulent flows. Non-linear wave-breaking events entrain air and generate turbulence. Turbulence consequently fragments the entrained air into smaller bubbles. This process drastically increases the flux of air into the oceans and rivers, which is important for both aerating the water bodies and reducing greenhouse gases from the atmosphere. Wave breaking and bubble generation behind ships also have important effects on the hydrodynamics of ships and on their performance. The bubbly flow resulting from ship passage generates ship trails which remain for several minutes thereafter. Although turbulence is responsible for the fragmentation of larger bubbles into smaller ones, it cannot be the cause of the generation of micron-size bubbles. These bubbles are observed in ship wakes and natural waves and are associated with liquid-liquid impact events. These phenomena, due to their complexity, are far from being completely understood. In addition, a quantitative connection between the large-scale non-linear wave-breaking events and the micron-size bubble generation caused by impact events is still missing. The large separation of scales between these two phenomena makes elucidation of the problem very challenging. The aim of this study is to use direct numerical simulations of turbulent hydraulic jumps, as a canonical representation of non-linear breaking waves, to study the air entrainment and large bubble generation. Furthermore, this study provides statistics of liquid-liquid impact events, which are precursors to micro-bubble generation in these flows. As far as we know, the present work is the first direct numerical simulation of turbulent hydraulic jumps, as well as the first attempt to obtain interface impact statistics in a stationary turbulent breaking wave. 
In addition to bubble generation, we investigate turbulence statistics such as mean and turbulent velocity fluctuations, Reynolds stress tensors, turbulence production terms, energy spectra and one-dimensional energy budget of the flow. Finally, we present investigation of the effect of relevant non-dimensional parameters such as Weber number and Reynolds number on both large bubbles and impact statistics in these flows.
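For reference, the non-dimensional parameters named above have their standard definitions (the symbols here are the conventional ones, not necessarily the thesis's notation):

```latex
\mathrm{We} = \frac{\rho U^{2} L}{\sigma}, \qquad
\mathrm{Re} = \frac{\rho U L}{\mu}
```

where $\rho$ is the liquid density, $U$ and $L$ are characteristic velocity and length scales, $\sigma$ is the surface tension, and $\mu$ is the dynamic viscosity. The Weber number weighs inertia against surface tension (governing bubble fragmentation), while the Reynolds number weighs inertia against viscosity (governing the turbulence itself).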
Book
1 online resource.
Functional Magnetic Resonance Imaging (fMRI) is a powerful noninvasive tool that extends MRI technology to mapping brain activity. fMRI is used to measure brain activity by detecting vascular changes associated with neuronal activation. Currently, the most widely used methods to acquire fMRI images are T2*-weighted Gradient-Echo (GRE) sequences. These methods exhibit excellent sensitivity to blood oxygenation level-dependent (BOLD) contrast. However, GRE sequences require a long echo time (TE) for good BOLD sensitivity and use long, single-shot readouts for efficiency, resulting in signal dropout and image distortion in regions near air-tissue interfaces such as the orbitofrontal cortex and inferior temporal regions. Recent studies have shown that pass-band steady-state free precession (pb-SSFP) fMRI is a promising alternative. Pb-SSFP fMRI has several advantages over conventional GRE-echo-planar imaging (EPI) acquisitions, including small-vessel BOLD sensitivity, reduced image distortion, and reduced signal dropout from susceptibility field gradients due to the short TE and rapid acquisition. However, banding artifacts remain a challenge for whole-brain imaging, as current solutions are impractical for many functional studies. Recently, an improved pb-SSFP fMRI technique called alternating-SSFP (alt-SSFP) was proposed. This technique permits whole-brain, banding-artifact-free SSFP fMRI in a single scan. However, many challenges need to be overcome to make the method practical and robust for human fMRI studies. Therefore, a complete and practical alt-SSFP fMRI image acquisition sequence and image reconstruction method is developed for whole-brain fMRI. First, methods regarding RF catalyzation, k-space trajectory design, and parallel imaging are developed for alt-SSFP to ensure signal stability, achieve whole-brain coverage, and maintain sufficiently high temporal resolution. 
In addition, the alt-SSFP sequence's inherent bright fat signal combined with the echo-planar k-space trajectory causes chemical-shift artifacts. A short spatial-spectral RF pulse is designed to reduce artifacts associated with the bright fat signal and increase temporal SNR for alt-SSFP fMRI. Lastly, the alternate banding patterns of alt-SSFP are used to improve the conditioning of parallel imaging for image reconstruction in a method called Extended Parallel Imaging, which would allow greater acceleration for higher temporal and/or spatial resolution. Artifact-suppressed images from breath-hold and visual stimulus studies show that the alt-SSFP fMRI method permits whole-brain imaging with excellent blood oxygen level-dependent sensitivity and fat suppression. In addition, image reconstruction with Extended Parallel Imaging increases temporal SNR for alt-SSFP fMRI and improves activation maps in highly accelerated cases. These combined developments result in a practical pb-SSFP fMRI method capable of functional imaging in regions currently inaccessible to conventional fMRI acquisition methods. This could potentially become a powerful tool for better understanding how different parts of the brain are interconnected, and for studying the brain in its entirety.
Book
1 online resource.
This dissertation examines the group of American painters who lived and worked in Düsseldorf, Germany, in the decades before the American Civil War. By emphasizing that such important works of art as Emanuel Leutze's "Washington Crossing the Delaware," Richard Caton Woodville's "War News From Mexico," and Albert Bierstadt's "Roman Fish Market, Arch of Octavius" were all produced in this small city on the Rhine, "Amerikanischer Malkasten" seeks to reframe the historiography of American art before the Civil War by arguing that the American experience in Düsseldorf was not marginal or incidental to that story, but rather essential to it. In recent years Americanists have rejected or refined narratives of American art that assert its isolated, nationalistic character in favor of ones that see American art as deeply engaged with the wider world. However, the American Civil War remains something of a dividing line for many scholars, who view the generation of artists who came to maturity in the 1840s and 1850s as essentially provincial in character, by contrast with later generations of cosmopolitans. Through in-depth case studies of Leutze, Woodville, and Bierstadt, this dissertation upends this traditional periodization in two ways. First, it shows how these individual artists engaged with a wide variety of transatlantic thought, including theatrical melodrama; the interrelated discourses of "Bildung" and Self-Culture; and the marketing of the Wild West, in particular its American Indian inhabitants. Second, and most importantly, it articulates a vision of American art before the Civil War that emphasizes not only its cosmopolitanism, but also the ways in which these and other artists were possessed of an essentially outward-facing and communal ethos, which I contrast with the more idiosyncratic and personal vision favored by later generations. 
Ultimately, "Amerikanischer Malkasten" redefines both the history of American art of the antebellum period and how historians of American art conceive of, and speak about, that history.
Book
1 online resource.
Can we enable anyone to create anything? The physical computing tools of a rising Maker Movement are enabling the next generation of artists, designers, clinicians, and children to create complex electronic prototypes. However, technical novices often struggle with the circuitry and programming required to make a smart device. Affordable sensors, actuators, and novice microcomputer toolkits are the building blocks of the field we refer to as Creative Computing, and here we examine the core properties of toolkits specifically designed to enhance creative problem solving. In this dissertation I explore the question: "How can we support technical novices in crossing the gap between idea and electronic prototype?" In doing so, I document the tradeoffs that influence the usability of an electronics toolkit, demonstrate the ability to systematically measure the prototyping experience with design tools, and illustrate a significant increase in a novice designer's ability and confidence with electronics through a one-hour design exercise. We examine each of these areas through a series of prototyping experiments with novices, and show how toolkits combining (1) modular hardware, (2) hackable software, and (3) accessible low-resolution materials such as paper, can encourage novices to: make more prototypes, generate more novel ideas, and increase creative confidence.
Book
1 online resource.
Various cellular processes are dependent on the regulation of protein activities. In particular, some cellular processes, including migration, division and differentiation, require sophisticated coordination of protein activities in space and time. Therefore, a method capable of perturbing protein activities with precise spatial and temporal control is indispensable for understanding cellular behaviors. However, conventional means, such as genetic or pharmacological perturbation, are either relatively slow to implement, difficult to design, or spatiotemporally uncontrollable. Approaches based on inducible chemical genetics enable the control of protein activities by using chemical inducers to trigger engineered proteins. However, these approaches suffer from irreversibility and cannot achieve spatial control. The emerging area of optogenetic actuators provides opportunities to temporally and spatially regulate protein activities. Several pairs of light-induced dimerization proteins have been developed, including the light-oxygen-voltage domain, phytochrome B and cryptochrome 2 (CRY2). CRY2 and its binding partner CIB1 are the optogenetic dimerizers utilized in my research due to their rapid kinetics, reversibility, and lack of need for an exogenous cofactor. In my thesis, I develop optogenetic strategies to optically manipulate two processes, signaling pathways and organelle activity, both of which are determined by protein activities. The light-induced CRY2/CIB1 interaction is complicated by the light-induced CRY2 homo-oligomerization. Therefore, in this thesis, I also study the dual characteristics of CRY2 homo-oligomerization and hetero-dimerization. This thesis consists of three chapters. In the first chapter, I demonstrate the strategies designed to optically activate the RAF/ERK and AKT/FOXO signaling pathways with temporal control, which can help to dissect and resolve the signaling pathways in a quantitative manner. 
Next, I construct an optogenetic method that exploits light to manipulate organelle distribution and reshaping with reversibility and subcellular spatial precision. This method will be useful to establish a direct linkage between organelle distribution/shaping and cellular functions. Finally, I characterize the CRY2 homo-oligomerization and its interplay with CRY2/CIB1 hetero-dimerization. The results can serve as a guide to the usage of CRY2/CIB1 system.
Book
1 online resource.
This dissertation explores the fraught relationship between art and science during the Cold War through the work of artist, designer, and visual theorist Gyorgy Kepes (1906-2001). Faced with a crisis of confidence in the contemporary relevance of the arts, Kepes cultivated collaborations with the sciences at Chicago's New Bauhaus and especially at the Massachusetts Institute of Technology (MIT), where he taught from 1946 until his retirement in 1974. Kepes evocatively termed his interdisciplinary mission "interthinking" and "interseeing." This study examines the aesthetics and politics of these powerful, and still timely, ideas. It asks: Can the "two cultures"—art and science—actually work together for a common purpose? Or are they fundamentally incompatible, condemned to mutual skepticism of their respective motivations and methodologies? What is the purpose of art in a world dominated by science and technology? Scholars have previously described Kepes's ambitions as a reactionary glorification of the imagery and ideology of science. This study does not ignore the regressive associations of Kepes's work, but it also considers its progressive potential and provides a more nuanced, and conflicted, account of Kepes's contradictory practice. Using new archival evidence, this dissertation puts forth two major arguments. First, it demonstrates how Kepes developed a hitherto unrecognized paradigm for aesthetic practice in a scientific context: the "artist under technocracy." This figure operated within rather than against a scientific institution; seeking refuge, he retreated from the art studio to the research laboratory. Second, it situates Kepes as the major artistic figure within a startling constellation of experts engaged in sophisticated war and weapons research. While many artists renounced this technocratic culture—many also encouraged Kepes to do so—Kepes instead chose to remain part of it. 
He attempted to shift and shape this technocratic culture from a unique position under it. This study tracks these themes across four chronological chapters and examines Kepes's study of camouflage during World War II; his development of visual design at MIT in the 1950s; his work in the 1960s on an unfinished and unpublished magnum opus he called "The Light Book"; and the Center for Advanced Visual Studies that he founded at MIT in 1967. "Artist under Technocracy" provides a genealogy for interdisciplinary phenomena that are commonplace in both the academy and the art world today, such as the study of visual culture and the use of new media.
Book
1 online resource.
In pursuit of a fully sustainable energy economy, there has been increased effort to develop artificial photosynthesis technologies capable of creating clean fuels and chemicals from greenhouse gases with only the energy of the sun. Photoelectrochemical cells have been developed since the 1970s as all-in-one devices capable of exactly that task, but progress has been impeded by fundamental challenges. Principal among these is a trade-off: the most chemically stable materials, owing to their wide bandgaps, are necessarily the least efficient at converting solar energy. In 2011, our group proposed a novel solution to this problem with a nano-layered composite structure capable of achieving both ends: atomic layer deposited (ALD) metal oxide protected cells where a top layer is chemically passivating and a bottom layer is efficient at solar energy conversion. In this work, I have carried out a rigorous study of ALD-TiO2 protection for metal-insulator-silicon devices, interrogating each layer of the structure and building state-of-the-art analytical models for performance of each component, as well as for the cell as a whole. The catalyst layer was studied first, demonstrating that this protection layer technology is a general solution, applicable with a wide range of catalysts. Second, the protection layer was probed and I discovered bulk-limited leaky conduction for the ALD-TiO2. I proposed a model of trap-mediated hopping conduction in the TiO2 in series with tunneling through the ultrathin SiOx, a model which has become the center point of much ongoing research and remains the accepted theory in the field for the anomalous, hole-conductive TiO2. Third, I probed the SiOx/Si interface, showed that conduction across the SiOx was well described by tunneling, and developed models that investigate the transition from leaky to capacitive structures. 
From here I developed engineering solutions, including ALD-SiO2 and oxygen-scavenging methods, for fabricating ultrathin SiOx layers in these water-splitting devices. In studying the device as a whole, I have developed a general theory for a so-called 'leaky capacitor' and applied it to understand the photovoltage loss observed in insulator-protected devices. This understanding allowed for the development of general design principles for maximizing photovoltage in MIS cells of varying architecture, spanning so-called Type 0 photoelectrochemical cells to fully separated PV-electrolyzer systems. In applying these design principles, I have demonstrated the highest photovoltage reported to date for single-junction silicon water-splitting cells, in both a so-called Type 1 Schottky-junction and a Type 2 pn-junction photoanode. Taken together, this work represents a complete scientific investigation and understanding of protected silicon photoelectrode operation, and the corresponding engineering advances culminating in record photoanode performance.
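The 'leaky capacitor' picture described above — a photovoltage limited by leakage through the insulating layer — can be illustrated with a minimal RC sketch. This circuit form and every parameter value below are my own illustrative assumptions, not the thesis's actual model:

```python
import math

def leaky_capacitor_voltage(i_ph, r_leak, c, t):
    """Voltage across a leaky capacitor driven by a constant
    photocurrent i_ph (A/cm^2). Charging is shunted by the leakage
    resistance r_leak (ohm*cm^2), so the voltage saturates at
    i_ph * r_leak rather than growing without bound.
    All values are illustrative assumptions."""
    tau = r_leak * c  # RC time constant (s)
    return i_ph * r_leak * (1.0 - math.exp(-t / tau))

# Assumed example: 10 mA/cm^2 photocurrent, 50 ohm*cm^2 leakage,
# 1 uF/cm^2 capacitance; at long times the voltage settles at
# i_ph * r_leak = 0.5 V, so a leakier insulator means a lower
# achievable photovoltage.
v_ss = leaky_capacitor_voltage(0.010, 50.0, 1e-6, 1.0)
```

The point of the sketch is only the qualitative trade-off the abstract describes: leakier structures (smaller r_leak) saturate at a lower voltage.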
Book
1 online resource.
Most microbes live as multi-cellular communities termed biofilms. This lifestyle protects microbes against harsh conditions including antibiotic treatment and host immune responses. Within biofilms, microbial cells are entangled in a self-secreted extracellular matrix (ECM) that is rich in biopolymers such as fibrillar proteins and polysaccharides. This extracellular material is key to the characteristic properties of biofilms. Despite the prevalent roles that biofilms play in infections, a molecular-level understanding of the insoluble matrix components, and of the interactions between ECM components, has yet to be achieved. Biofilms and ECM are neither soluble nor crystalline, which poses challenges to analysis by traditional biochemical techniques. Solid-state nuclear magnetic resonance (NMR) is uniquely suited to study such complex systems because it provides quantitative information about chemical composition and also the spatial relationships of the components without requiring degradative sample preparation. Using solid-state NMR, we previously elucidated that the insoluble ECM produced by a uropathogenic strain of Escherichia coli called UTI89 is composed of two biopolymers: a functional amyloid called curli and modified cellulose. The purpose of this study is to obtain quantitative information about microbial biofilm composition and structural information about biofilm constituents. Additionally, we aim to achieve an understanding of how the chemical and biophysical properties of specific ECM components contribute to the overall ECM architecture. To this end, we pursued three intersecting avenues with a primary focus on the bacterial strain UTI89, although we also determined quantitative parameters of additional microbial biofilms. In the first approach, we explored the dye binding properties of the biofilm constituent and functional amyloid called curli.
The ability to specifically stain ECM components has been a key step in traditional investigations of biofilms, and we sought to provide a foundation to similarly study curli. In the second approach, we developed a means to spectroscopically annotate chemically complex ECM composition of the important human pathogens Vibrio cholerae and Aspergillus fumigatus using solid-state NMR. Finally, we provided novel biophysical and structural details of specific ECM components to better understand how these biopolymers interact to form robust ECM networks. Looking forward, we have begun to utilize solid-state NMR to provide a global accounting of the architecture of the UTI89 ECM. Together these studies have provided important quantitative parameters of biofilm composition and structural information of ECM components. Our analysis has wide-ranging implications for understanding the fundamental mechanisms of biofilm formation and for the development of functional biopolymeric materials.
Book
1 online resource.
Parallel transmit (PTx) systems have been proposed to compensate for B1 (RF) field non-uniformity, to improve excitation pulse performance, and to minimize specific absorption rate (SAR), mostly for high field MRI. These systems require prior knowledge of the radiofrequency (RF) field of each channel to perform calibration and B1 shimming to cancel out spatial B1 variations. A second class of applications emerging for PTx at 1.5T and 3T is in interventional MRI and implant RF safety. The goal here is to minimize RF coupling to insulated conductive structures such as guidewires, pacemakers, and deep brain stimulator leads. Lastly, dynamically polarized 13C imaging requires prior knowledge of spatial flip angles to optimize SNR, with flip angle errors resulting in costly re-polarization times. In all these cases, the imaging workflow would be improved if B1 mapping could be avoided or minimized. Existing B1 mapping methods suffer from a variety of issues including long scan times, limited B1 dynamic range, and high specific absorption rate (SAR). Moreover, experimental B1 mapping techniques assume nothing about the coil geometry or location, yet the coil structure is known a priori. The goal of this work is to create a "good enough" estimator of B1 fields without mapping, using techniques that enable registration between the physical and simulation domains. It is demonstrated that co-registration by fluorine or proton fiducials for coil localization, combined with on-coil RF current sensing, provide the necessary inputs to scale and transform a pre-computed library of B1 simulations into the physical domain to estimate the B1 maps of a known coil geometry with a simple rapid pre-scan calibration. In this dissertation, it is then shown that the estimated B1 maps can be used to calculate complex RF shim weights in an RF shimming application without performing MRI B1 mapping.
All the experiments in this dissertation were performed on a GE 1.5T scanner using a Medusa console and a cylindrical four-channel transmit/receive array prototype. Our group is particularly interested in local transmit array applications at 1.5T and 3T for RF implant safety and MRI interventions, where the intent is to minimize the extent of RF coupling and exposure. In both applications, local transmit arrays could minimize coupling, but the flexible or variable array layouts and load impedance variations call for B1+ calibration aids to simplify their use in setting local RF shim weights. To explore these issues, the feasibility of extending the forward and reverse polarization method is investigated. It is shown that pre-spoiler gradients combined with reverse polarization can significantly suppress the background signal and successfully visualize the conductive structure inside a body. Finally, it is shown that a 3D model of the conductive structure can be extracted by applying an edge-detection filter to reverse-polarized images.
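The core scaling step described above — mapping a pre-computed B1 simulation onto sparse calibration measurements — can be sketched as a complex least-squares fit. The function name and toy data below are illustrative assumptions; the actual workflow also involves spatial registration via the fiducials and per-channel current sensing, which this sketch omits:

```python
import numpy as np

def fit_b1_scale(b1_sim, b1_meas):
    """Least-squares complex scale factor s such that
    b1_meas ~= s * b1_sim at a set of calibration points
    (e.g. fiducial locations). Closed form for a single
    complex unknown: s = <sim, meas> / <sim, sim>."""
    s = np.conj(b1_sim) @ b1_meas
    s /= np.conj(b1_sim) @ b1_sim
    return s

# Toy check (assumed values): measurements are the simulated map
# scaled by an unknown complex gain of 0.8 * exp(i*pi/4).
sim = np.array([1 + 1j, 2 - 0.5j, 0.3 + 2j])
true_s = 0.8 * np.exp(1j * np.pi / 4)
meas = true_s * sim
s_hat = fit_b1_scale(sim, meas)
b1_estimate = s_hat * sim  # library map scaled into the physical domain
```

With noiseless toy data the fitted gain recovers the true one exactly; in practice the fit would be regularized by the number of fiducial points and sensor measurements available.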
Book
286 pages : 19 illustrations ; 23 cm.
Green Library
Book
1 online resource.
This thesis is concerned with understanding the biogeography of the human oral microbiota, defining the types and extent of spatial patterns observed in the communities of the oral cavity, as well as the underlying causal mechanisms governing those patterns. First, I provide an overview of spatial ecology and, in a review, discuss how applying the context, principles and methods of spatial ecology to the study of the human microbiota will enable us as a field to move beyond the descriptive statistics that have dominated our biogeographic surveys since the time of Leeuwenhoek (Chapter 1). Second, I review extant literature surveying the biogeography of the human microbiome across all major body site habitats, providing a critical review that yields insight into the limitations of existing biogeographic surveys (Chapter 2). Third, I identify an ecological gradient that structures the microbial communities inhabiting the exposed surfaces of teeth, running from the front of the mouth to the back (Chapter 3). This is the first demonstration of a non-successional ecological gradient in the human oral cavity, a departure from previous reports that communities are categorically distinct from one another, varying simply by tooth number or tooth class. Fourth, I demonstrate that the anterior-to-posterior ecological gradient observed for supragingival communities is shared by communities inhabiting multiple tissues, including the alveolar mucosa, keratinized gingiva, and buccal mucosa, highlighting the importance of examining multiple spatial scales in studies of the human microbiota (Chapter 4). Fifth, I test the hypothesis that reduced salivary flow homogenizes the observed spatial structure of microbial communities (Chapter 5).
This is the first demonstration that, in healthy humans, salivary flow mechanistically drives the segregation of human oral microbial communities, maintaining the spatial architecture of bacterial communities across teeth in healthy humans. Taken together, these studies provide novel insight into the biogeography of the human oral microbiota, and how it pertains to human health and disease.
Book
1 online resource.
This dissertation is concerned with the governmental effects of the technocratic, often avowedly apolitical choices that inform the design and implementation of public health initiatives in postcolonial Africa. It seeks to shed light on the interactions between the mutual making of 'global health' and 'local' authority in specific national historical context and the consequences of these processes for ordinary people's lived experiences as beneficiaries, recipients, participants in, 'targets' of health promotion, and as citizens. The dissertation is based on 18 months of ethnographic and archival research on rural health promotion in Malawi. Specific chapters address the intersections of global health initiatives and local governance in the areas of home hygiene promotion and sanitation surveillance, maternal health, community participation in primary health care, and vaccine refusals.
Book
1 online resource.
Relating the remotely sensed elastic properties of rock to fluid saturation has been a debated (if not unanswered) question of rock physics. This question often arises during time-lapse seismic interpretation, where the goal is to understand where the injected or original pore fluids are located and in what quantities (Landro, 2001; Lumley, 2001). The Gassmann (1951) fluid substitution equation remains the cornerstone of fluid substitution. The problem of applying this equation at partial saturation is that one needs to provide the effective bulk modulus of the mixture of the fluid phases. One way of arriving at this effective bulk modulus is to assume that the fluid phases coexist at the pore scale and each pore contains a fraction S_w of water and a fraction 1 - S_w of gas, where S_w is the water saturation. In this uniform-saturation case, the harmonic average of the bulk moduli of the fluid phases can be used as the effective bulk modulus of the mixture. Because harmonic averaging produces the lower bound for the bulk modulus of the mixture, the uniform saturation assumption produces the lower bound for the bulk modulus of rock at partial saturation (Mavko et al., 2009). However, this assumption is not always valid. For example, Domenico (1976) demonstrated in the laboratory that the elastic properties of partially saturated rock deviate from those computed from this theory. Later, this finding was confirmed by, e.g., Cadoret (1993) and Brie (1995). In the former work, CT scans of the partially saturated limestone samples revealed that the fluid phases were distributed in patches whose size was much larger than the individual pore size. Coincidentally, the measured velocity in these samples exceeded that predicted by the uniform saturation assumption. In the latter work, a similar situation was found in well log data, especially well pronounced in gas sands. In contrast to uniform saturation, the situation revealed by Cadoret (1993) is called patchy saturation.
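The harmonic (Reuss) average of the fluid-phase moduli described above is a one-line computation. The default moduli below are typical brine and gas values that I have assumed for illustration, not figures from the thesis:

```python
def wood_fluid_modulus(s_w, k_water=2.25e9, k_gas=0.04e9):
    """Effective bulk modulus (Pa) of a pore-scale water/gas mixture
    under the uniform-saturation assumption: the harmonic (Reuss)
    average 1 / (S_w/K_w + (1 - S_w)/K_g).
    Default moduli are assumed illustrative values."""
    return 1.0 / (s_w / k_water + (1.0 - s_w) / k_gas)

# Even at 90% water saturation the soft gas phase dominates the
# harmonic average, which is why uniform saturation gives the
# lower bound on the partially saturated rock modulus.
k_mix = wood_fluid_modulus(0.9)
```

This strong pull toward the gas modulus is the reason seismic velocity is so insensitive to saturation under the uniform assumption until S_w approaches 1.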
Several empirical (e.g., Domenico, 1976; Brie et al., 1995) and theoretical (e.g., Dvorkin and Nur, 1998; Sengupta, 2000) equations help relate water saturation to the elastic properties of a rock volume with patchy saturation. These equations typically produce the upper bound for the anticipated bulk modulus of partially saturated rock. The difference between these bounds (as computed, respectively, from the uniform and patchy saturation assumptions) can be large, especially in soft rock and at high water saturation. To assess this uncertainty, Knight et al. (1998) offer a process-driven theory based on the premise of capillary-pressure equilibrium in wet rock subject to gas injection. This theory is physics-based and does not require an a priori assumption about fluid distribution in the pore space. In fact, it produces the fluid phase distribution in the rock as a function of water saturation. It also does not require any adjustable parameters and can directly use measurable rock properties, such as porosity, permeability, density, and the elastic P- and S-wave velocities. This theory is based on experiment-supported understanding of how fluid invades porous rock. Sen and Dvorkin (2011) have compared and contrasted the existing popular methods of fluid substitution with the theory developed in Knight et al. (1998), showing that it may be particularly useful for decreasing the uncertainty associated with large elastic bounds that may make saturation analysis difficult. In addition to understanding the fluid distribution, it is crucial to understand the effect of the fluid distribution on elastic properties and its behavior with frequency. Data for energy exploration are collected from ultrasonic frequencies (laboratory measurements) all the way down to near-zero frequency (seismic data). Pore fluids give rise to frequency dependence of velocity and amplitude attenuation due to dispersion. The measured data cannot be compared directly due to this frequency dependence.
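The gap between the two bounds can be made concrete with a hedged sketch: Gassmann substitution with the Reuss fluid mix gives the uniform (lower) bound, while a saturation-weighted arithmetic (Voigt-style) average of the fully water- and gas-saturated endmembers is one common patchy (upper) approximation. All input moduli and the porosity below are illustrative values I have assumed, not data from the thesis:

```python
def gassmann_ksat(k_dry, k_mineral, k_fluid, phi):
    """Gassmann (1951) saturated bulk modulus from the dry-rock
    modulus, mineral modulus, fluid modulus, and porosity (all Pa)."""
    num = (1.0 - k_dry / k_mineral) ** 2
    den = phi / k_fluid + (1.0 - phi) / k_mineral - k_dry / k_mineral**2
    return k_dry + num / den

def saturation_bounds(s_w, k_dry, k_min, k_w, k_g, phi):
    """Lower (uniform) and upper (patchy) estimates of the
    saturated bulk modulus at water saturation s_w.
    Uniform: Gassmann with the harmonic (Reuss) fluid mix.
    Patchy: saturation-weighted average of the fully saturated
    endmembers -- one standard approximation, assumed here."""
    k_fl = 1.0 / (s_w / k_w + (1.0 - s_w) / k_g)
    k_uniform = gassmann_ksat(k_dry, k_min, k_fl, phi)
    k_patchy = (s_w * gassmann_ksat(k_dry, k_min, k_w, phi)
                + (1.0 - s_w) * gassmann_ksat(k_dry, k_min, k_g, phi))
    return k_uniform, k_patchy

# Assumed soft-sand example: dry rock 5 GPa, quartz mineral 36 GPa,
# brine 2.25 GPa, gas 0.04 GPa, porosity 0.3, at 50% water saturation.
k_u, k_p = saturation_bounds(0.5, 5e9, 36e9, 2.25e9, 0.04e9, 0.3)
```

In this assumed soft-rock case the two bounds differ by several GPa at intermediate saturation, which is exactly the interpretation uncertainty the passage describes.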
In order to correctly analyze seismic data, especially when using laboratory or well log data as a constraint, it is critical to consider how the elastic wave velocity behaves with frequency. The question still remains of how to accurately relate seismic properties to fluid saturations, particularly in reservoirs containing fluids with significantly contrasting elastic properties (e.g., gas reservoirs, steam flood injections, CO2 injections). In addition, with current workflows, it is necessary to pre-assign an assumed fluid distribution (uniform or patchy). This can lead to significantly different saturation analysis results, and it can be difficult to determine which distribution type is most appropriate for the reservoir of interest. Here, I summarize a methodology to improve the process of fluid substitution and saturation analysis of seismic data. Preliminary work has been done by Sen and Dvorkin (2011) to show the benefits of applying the methodology established in Knight et al. (1998) to understand the elastic behavior of fluid distributions, and to model behavior which falls between the uniform and patchy saturation bounds based on the capillary-pressure equilibrium concept. The thesis is threefold, consisting of the following: (1) development of a rock physics model that can describe the fluid distribution within porous media and relate the behavior of elastic wave velocity with frequency, (2) use of the model to match existing laboratory data measured by Cadoret (1992), and (3) application of the workflow to a real seismic data set (BHP Macedon reservoir) to derive a probabilistic gas saturation map. In part (1) we develop a method to incorporate frequency dependence into the original methodology from Knight et al. (1998) by introducing the concepts of measurement scale and model scale.
As the procedure calls for subdividing the model reservoir, we reference the measurement scale to be the scale at which measured heterogeneity data is available (typically core or well log). The scale we wish to model is then a larger one, typically the seismic. The frequency decreases as we go from core to seismic, and each frequency can be characterized by a diffusion length, which has an inverse relationship with frequency. We employ the diffusion length to constrain which parts of the subdivided reservoir will be observed as uniform or patchy. Thus, for a single reservoir, we can determine the expected relationships between velocity and saturation at any given frequency. In this section we also briefly investigate the sensitivity of the CPET workflow to spatial information, and provide a means for incorporating a variogram into the workflow. In part (2) we apply the workflow outlined in part (1) to several laboratory measurements from Cadoret (1992). Velocity versus saturation was available for each core at three different ultrasonic frequencies, along with appropriate heterogeneity data. We were able to reproduce much of the observed laboratory behavior using the workflow outlined in part (1), and came away with a few particularly significant observations. First and foremost, the porosity distribution plays a major role in the curvature of the velocity versus saturation profiles. A bimodal distribution creates very different behavior compared with a uniform distribution, all other variables held equal. Realizing this, we can glean important information about the porosity distribution simply from the shape of the curves. Additionally, we find that the permeability plays a role in the transitional shape of these curves, as well as in the expected irreducible water saturation.
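The diffusion-length argument above can be illustrated with a toy classifier: patches much smaller than the diffusion length pressure-equilibrate and look uniform, while larger patches look patchy. The specific functional form of L_d and every parameter value below are assumptions for illustration, not taken from the thesis:

```python
import math

def diffusion_length(perm, k_fluid, phi, viscosity, freq):
    """Pore-pressure diffusion length (m). One common form,
    assumed here: L_d = sqrt(perm * K_fl / (phi * mu * f)),
    which captures the stated inverse relationship with frequency
    (L_d ~ 1/sqrt(f))."""
    return math.sqrt(perm * k_fluid / (phi * viscosity * freq))

def regime(patch_size, l_d):
    """Patches smaller than the diffusion length equilibrate and
    behave as uniform saturation; larger patches behave as patchy."""
    return "uniform" if patch_size < l_d else "patchy"

# Assumed sandstone: 100 mD permeability, brine 2.25 GPa,
# porosity 0.3, water viscosity 1 mPa*s; a 1 cm patch.
l_seismic = diffusion_length(1e-13, 2.25e9, 0.3, 1e-3, 50.0)   # ~0.12 m
l_ultrasonic = diffusion_length(1e-13, 2.25e9, 0.3, 1e-3, 1e6)  # <1 mm
```

Under these assumed values the same 1 cm patch reads as uniform at seismic frequency but patchy at ultrasonic frequency, which is why laboratory and field measurements cannot be compared without a frequency correction.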
We have developed a system, based on the lessons learned from matching the Cadoret (1992) data, that can help one quickly glean several insights about the reservoir in question simply from observing a single velocity-saturation curve. We also note that our workflow does appear to have a limitation, highlighted by this particular application, in that it does not appropriately account for the possibility of squirt flow. Squirt flow can occur when taking ultrasonic measurements, and for one of the Cadoret (1992) samples we were unable to obtain a match at the highest recorded frequency. While this mismatch could be the result of some other experimental procedure or a specific heterogeneity pattern at a very small scale, we suspect that the CPET workflow is unable to capture squirt-flow behavior, as we were able to obtain a match at the two lower frequencies. Thus, we exercise caution in using CPET as the only model when working with very high frequency data, and recommend the possibility of coupling it with a squirt-flow model. Lastly, in part (3) we apply the CPET workflow to the BHP Macedon reservoir data set. The data consist of four angle stacks, which we used to perform both pre-stack and post-stack inversions. The CPET workflow was then used to create an impedance-saturation model at the appropriate frequency for the Macedon reservoir. This CPET model can then be used as a "key" to obtain a saturation map from velocity (and other parameters). This process created a good saturation match at the two provided well locations, with more detailed variation observed in the pre-stack analysis, likely due to the large angle range. The Macedon data set was ideal for this kind of analysis as there were not significant changes in lithology across the reservoir, which can heavily influence observed velocities.
We outline the steps and data required to create these saturation maps, and indicate the importance and effect of the assumptions in the workflow. In addition, an error analysis is conducted to determine the sensitivity of impedance to saturation for the specific case of the Macedon reservoir.
Book
1 online resource.
A series of laboratory experiments was conducted to study the formation of internal boluses through the run-up of periodic internal wave-trains on a uniform slope/shelf topography in a two-layer stratified fluid system. In the experiments, the forcing parameters of the incident waves (wave amplitude and frequency) are varied for a constant slope angle and layer depths. Simultaneous particle image velocimetry (PIV) and planar laser-induced fluorescence (PLIF) measurements are used to calculate high resolution, two-dimensional velocity and density fields. Over the range of wave forcing conditions, four bolus formation types were observed: backward overturning into a coherent bolus, top breaking into a turbulent bolus, top breaking into a turbulent surge, and forward breaking into a turbulent surge. Wave forcing parameters, including a wave Froude number Fr, a wave Reynolds number Re, and a wave steepness parameter ka0, are used to relate initial wave forcing to a dominant bolus formation mechanism. Bolus characteristics, including the bolus propagation speed and turbulent components, are also related to wave forcing. Results indicate that for Fr > 0.20 and ka0 > 0.40, the generated boluses become more turbulent in nature. As wave forcing continues to increase further, boluses are no longer able to form.
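The reported thresholds can be encoded as a minimal classifier. This collapses the four observed formation types into a coarse turbulent/coherent split based only on the two criteria stated above (Fr > 0.20 and ka0 > 0.40), so it is a sketch of the reported trend rather than the full regime map:

```python
def bolus_character(fr, ka0):
    """Coarse classification of internal-bolus character from the
    wave Froude number Fr and wave steepness ka0, per the reported
    thresholds: both Fr > 0.20 and ka0 > 0.40 imply a more
    turbulent bolus; otherwise a more coherent one. The two-way
    split is a simplifying assumption."""
    return "turbulent" if (fr > 0.20 and ka0 > 0.40) else "coherent"
```

For example, a wave with Fr = 0.25 and ka0 = 0.50 would be expected to produce a more turbulent bolus, while Fr = 0.10 and ka0 = 0.30 would favor coherent backward overturning; the abstract also notes that still stronger forcing suppresses bolus formation entirely, which this sketch does not capture.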