Search results

133,836 results

Book
1 online resource.
Are existing RGB-D reconstruction pipelines ready for broad deployment? To answer this question and assist future research, I present a dataset of thorough RGB-D videos of more than 10,000 objects, produced in the wild by paid operators for the purpose of reconstruction. An evaluation of existing RGB-D reconstruction approaches reveals that they often fail on the collected data, in part due to insufficiently robust loop closure detection and global optimization. I thus present a new approach to object and indoor scene reconstruction from RGB-D video. The key idea is to combine geometric registration of fragments with robust global optimization based on line processes. Geometric registration is error-prone due to sensor noise, which leads to aliasing of geometric detail and inability to disambiguate different surfaces in the scene. The presented optimization approach disables erroneous geometric alignments even when they significantly outnumber correct ones. The objective has a least-squares form and is optimized by a high-performance pose graph solver. Experimental results demonstrate that the presented approach substantially increases the accuracy of reconstructed models on challenging real-world sequences.
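As a rough illustration of the line-process idea described above (a toy sketch, not the thesis's implementation), the code below alternates between a weighted least-squares pose update and the closed-form line-process update l = (mu / (mu + r^2))^2 on a tiny 1D pose graph, so that spurious loop-closure constraints are automatically disabled:

```python
import numpy as np

# Toy 1D pose graph: poses x_0..x_4, odometry says each step is +1.0.
# Loop closures: one correct (x_4 - x_0 = 4.0) and two spurious ones,
# mimicking erroneous geometric alignments that outnumber correct ones.
odometry = [(i, i + 1, 1.0) for i in range(4)]
loops = [(0, 4, 4.0), (0, 3, 0.2), (1, 4, 0.1)]  # last two are outliers

mu = 1.0                  # line-process penalty weight
x = np.zeros(5)
l = np.ones(len(loops))   # line-process variables, 1 = trust, 0 = disable

for _ in range(20):
    # Weighted linear least squares for the poses (x_0 fixed at 0).
    A, b = [], []
    for i, j, d in odometry:
        row = np.zeros(5); row[j], row[i] = 1.0, -1.0
        A.append(row); b.append(d)
    for (i, j, d), w in zip(loops, l):
        row = np.zeros(5); row[j], row[i] = np.sqrt(w), -np.sqrt(w)
        A.append(row); b.append(np.sqrt(w) * d)
    A, b = np.array(A)[:, 1:], np.array(b)  # drop x_0 to fix the gauge
    x[1:] = np.linalg.lstsq(A, b, rcond=None)[0]
    # Closed-form line-process update: l = (mu / (mu + r^2))^2.
    r = np.array([x[j] - x[i] - d for i, j, d in loops])
    l = (mu / (mu + r ** 2)) ** 2

print("poses:", np.round(x, 2))          # ~[0, 1, 2, 3, 4]
print("line process:", np.round(l, 2))   # outlier weights collapse toward 0
```

Even with two of the three loop closures wrong, the outlier weights collapse toward zero and the recovered poses stay consistent with the odometry and the one correct closure.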
Collection
Undergraduate Theses, Department of Biology, 2014-2015
Electrocorticography can be used to measure and understand electrical potentials on the brain surface which underlie cognitive function of the cerebral cortex. In order to relate cortical function to structure, the precise locations of the electrodes must be known. However, due to deformation of the brain surface that occurs during craniotomy, the positions of electrodes cannot be determined from alignment of CT with MRI scans, and additional corrective measures become necessary. While current leading methods rely on computing linear projections from the deformed post-operative brain onto the non-deformed pre-operative brain, these methods experience limitations in areas of the brain with more variable curvature. In order to account for variable curvature, the post-operative brain was biomechanically simulated using finite element methods to map and invert brain surface deformation. This approach was tested on 4 patients who were candidates for epilepsy surgery at Stanford Hospital. The algorithm localized electrodes with an average error of 3.70 +/- 0.15 mm (median +/- s.e.), measured as radial arc length in the plane of the cortical surface. While this error is slightly higher than that of competing methods, the standard error is lower, suggesting improved robustness. Additionally, this error is measured in the plane of the brain as opposed to the plane of intraoperative photography, removing spatial distortion induced by 3D projection and providing a more realistic margin of error. Gold-standard electrode positions were determined by manually coregistering intraoperative photography with pre-operative MRI and back-projecting electrodes onto a three-dimensional mesh. This algorithm is entirely automatic, and provides a fast (< 1 minute) processing pipeline for electrode localization and display. These results suggest possible future viability of the algorithm for use in clinical and research settings, where it may improve the spatial precision of electrocorticography in epilepsy surgery and cognitive research.
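The back-projection step mentioned above can be approximated generically. The following sketch uses hypothetical data and simple vertex-level snapping (not the thesis's FEM pipeline) to project 3D electrode coordinates onto a cortical surface mesh:

```python
import numpy as np
from scipy.spatial import cKDTree

def snap_to_mesh(electrodes, mesh_vertices):
    """Project each 3D electrode coordinate onto the nearest vertex of a
    surface mesh. Vertex-level snapping is a coarse approximation of true
    point-to-surface projection, adequate for dense meshes."""
    tree = cKDTree(mesh_vertices)
    dist, idx = tree.query(electrodes)
    return mesh_vertices[idx], dist

# Hypothetical data: a coarse spherical "cortex" and three electrodes.
rng = np.random.default_rng(0)
verts = rng.normal(size=(5000, 3))
verts /= np.linalg.norm(verts, axis=1, keepdims=True)  # unit sphere
elecs = np.array([[1.1, 0.0, 0.0], [0.0, 0.9, 0.1], [0.0, 0.0, -1.2]])

snapped, err = snap_to_mesh(elecs, verts)
print("snapped positions:\n", np.round(snapped, 3))
print("offsets (same units as mesh):", np.round(err, 3))
```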
Collection
Undergraduate Theses, School of Engineering
Imagine being able to build and train a parser over any domain of knowledge, starting from zero training data, over the course of a few hours. In this thesis, my research goal is to develop a new functionality-driven methodology in an attempt to achieve this. My approach uses a domain-general grammar and a domain-specific seed lexicon to generate logical forms paired with canonical utterances that capture the meaning of the logical forms. By construction, the domain-general grammar ensures complete coverage of the desired set of compositional operators. I then use crowdsourcing to paraphrase these canonical utterances into natural utterances. The resulting data is used to train the semantic parser. I further study the role of compositionality in the resulting paraphrases. Finally, I test this methodology on eight domains and show that I can build an adequate semantic parser in just a few hours.
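A minimal sketch of the generation step the abstract describes, with a hypothetical one-rule domain-general grammar and a toy seed lexicon (the real system's rules and domains are not reproduced here):

```python
# A toy version of the generation step: one domain-general rule composes
# domain-specific seed-lexicon entries into (logical form, canonical
# utterance) pairs, which would then be paraphrased via crowdsourcing.
# All lexicon entries below are hypothetical examples.
seed_lexicon = {
    "entities": [("article", "Article"), ("person", "Person")],
    "relations": [("publication_date", "publication date"),
                  ("author", "author")],
    "values": [("2004", "2004"), ("alice", "Alice")],
}

def generate():
    """Enumerate logical forms of the shape (filter TYPE (REL = VALUE))
    paired with canonical utterances from a single domain-general rule."""
    for ent_lf, ent_txt in seed_lexicon["entities"]:
        for rel_lf, rel_txt in seed_lexicon["relations"]:
            for val_lf, val_txt in seed_lexicon["values"]:
                lf = f"(filter {ent_lf} ({rel_lf} = {val_lf}))"
                utt = f"{ent_txt.lower()} whose {rel_txt} is {val_txt}"
                yield lf, utt

for lf, utt in list(generate())[:3]:
    print(f"{lf:55s} -> {utt}")
```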
Collection
Undergraduate and Graduate Theses, Department of Anthropology
This paper focuses on the lawyers and employees of a legal service nonprofit in the Bay Area, analyzing the ways in which the lawyers experience everyday tensions within the nonprofit as a result of their interactions with their clients and with the larger nonprofit and justice systems. Other research has studied the nonprofit and public service sector, with a focus on the difficulties of nonprofit work given our country’s greater focus on and support of private industries. However, less research has been done on law in the nonprofit sector, and I hope to use this study to explore how legal services function in that sector. My research was completed over the summer of 2014, with some work continued during the fall of 2014. The research incorporates formal interviews and informal conversations with the lawyers and other employees of La Institución de los Servicios Legales, a nonprofit in the Bay Area, along with some of my own observations and experiences throughout the summer. This work was funded by a grant from Stanford University and is being used in an honors thesis that I will submit for the Anthropology department at Stanford in May. Through my research, I found that the nonprofit lawyers faced both financial and psychological hardships through their interactions with their clients. The lawyers engaged with their clients by trying to build trust with them, but the legal system imposed a power dynamic between lawyer and client. This dynamic, along with the legal system in general, created psychological tensions for the lawyers. Additionally, the structure of the nonprofit sector meant that financial struggles were also a daily concern for the lawyers, making it harder for them to do their jobs consistently and continuously. Although the lawyers and employees spoke positively about their jobs, they all also expressed anxiety about them, noting financial and psychological stressors as their main concerns. Concerns over supporting themselves and their families financially were most prominent for these employees, particularly for those who lived within the boundaries of the expensive Bay Area city that we were in. I started this project in part because I am interested in practicing law in a nonprofit setting. I think that the clients who visit organizations like La Institución de los Servicios Legales deserve high-quality legal advice and services, and I would like to offer that kind of support at some point in my professional career. However, it is quite apparent, particularly based on my research, that lawyers and employees in this kind of nonprofit, and many other nonprofits around the country, are not receiving the kind of financial and emotional support needed for them to be successful in their jobs. Though this research has not deterred me from seeking a career in the nonprofit field, I hope that it can contribute to further research concerning what can be done for these employees, in order to alleviate their everyday adversities.
Book
1 online resource.
A 20-minute documentary film was created to accelerate the rate at which Nepalese communities retrofit or rebuild their local public school buildings to be life-safe in the event of a major earthquake. It features local Nepalese as role models who have already strengthened their schools, and is based on the theory of communicating actionable risk, diffusion of innovation theory, and social cognitive theory. Public schools in Kathmandu Valley with buildings in need of seismic work were assessed for eligibility in the study. Of these, 16 were selected and matched into 8 pairs based on the seismic condition of the buildings. One school in each pair was randomly assigned to see the intervention film and the other to see an attention placebo control film on an unrelated topic. Pre- and post-intervention observations were recorded from 761 adult participants, using a questionnaire created for this purpose. Comparisons between the two groups of schools were made with the school as the unit of analysis. When compared to the control schools, the schools whose community members saw the retrofit intervention film showed statistically significant increases in: 1) knowledge of specific actions to take in support of earthquake-resistant construction, 2) belief in the feasibility of making buildings earthquake-resistant, 3) willingness to support seismic strengthening of the local school building, and 4) likelihood to recommend to others that they build earthquake-resistant homes. This outcome suggests that employing a film featuring community members who have already taken the desired action increases factors that may accelerate adoption of risk reduction actions by others who are similar to them.
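For readers interested in the matched-pair design, a skeleton of a school-level paired analysis is sketched below; the effect sizes are placeholders, not data from the study:

```python
# Illustrative analysis skeleton for the matched-pair design described
# above: schools are the unit of analysis, and each intervention school is
# compared to its matched control. All numbers below are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_pairs = 8
# School-level mean change (post - pre) on some questionnaire scale.
intervention_change = rng.normal(0.8, 0.3, n_pairs)   # hypothetical
control_change = rng.normal(0.1, 0.3, n_pairs)        # hypothetical

t, p = stats.ttest_rel(intervention_change, control_change)
print(f"paired t = {t:.2f}, p = {p:.4f} (n = {n_pairs} school pairs)")
```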
Book
1 online resource.
In this thesis, an integrated 2×2 CMOS transceiver at 60GHz with energy harvesting capability in the transmitter mode is demonstrated. The two dipole antennas are implemented on-chip and are shared between the receiver and transmitter modes using a T/R switch. The radio supports On-Off-Keying (OOK) modulation and a programmable data rate of 38 to 2450Mb/s at a BER of less than 5×10−4. The power consumption of the transmitter scales with data rate. For a communication distance of 5cm, the transmitter consumes 100μW to 6.3mW for a data rate of 38 to 2450Mb/s, respectively. For a communication distance of 10cm, the transmitter consumes 260μW to 11.9mW for a data rate of 38 to 2450Mb/s, respectively. This yields an energy efficiency of 2.6pJ/b at 5cm and 4.9pJ/b at 10cm. The energy harvesting circuits operate at 2.45GHz with an average efficiency of 33%. The harvesting antenna and its matching components are off-chip. The complete transceiver including the energy harvesting block and on-chip antennas occupies 1.62mm2 in 40nm CMOS, which is the smallest 60GHz radio with on-chip antennas to date. This work is the first implementation of a 60GHz transmitter with scalable data rate and power that has energy harvesting capability.
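The quoted efficiency figures follow directly from transmitter power divided by data rate; a quick check at the peak 2450Mb/s rate:

```python
# Energy per bit is transmitter power divided by data rate.
def energy_per_bit_pJ(power_W, rate_bps):
    return power_W / rate_bps * 1e12

print(energy_per_bit_pJ(6.3e-3, 2450e6))   # ~2.6 pJ/b at 5 cm
print(energy_per_bit_pJ(11.9e-3, 2450e6))  # ~4.9 pJ/b at 10 cm
```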
Collection
Undergraduate Theses, Department of Biology, 2014-2015
β-catenin functions in the contexts of (1) cell development, as a transcription factor for Wnt target genes, and (2) cell-cell adhesion, as an adaptor in cadherin-based adherens junctions. Mutations in β-catenin or in molecules regulating its localization and stability are associated with inappropriately triggered cell growth, weakened cell adhesion, and oncogenesis. Such correlation underscores the need to understand the molecular mechanisms by which β-catenin’s localization, and thereby its function, is regulated. An increasing body of evidence has suggested that the extent to which β-catenin partakes in either function may be influenced by phosphorylation. In particular, tyrosine phosphorylation of β-catenin has been qualitatively shown to modulate the affinities of two key binding partners at adherens junctions: cadherin and α-catenin. This study seeks to quantitatively assess the impact of tyrosine phosphorylation at specific sites on β-catenin’s affinity for cadherin and α-catenin. Through in vitro thermodynamic measurements by isothermal titration calorimetry (ITC) with β-catenin Y-to-E phosphomimics, we show that tyrosine phosphorylation at Y142 and Y654 weakens β-catenin’s affinity for α-catenin and cadherin, respectively. These results help construct a model of β-catenin regulation and may offer future therapeutic approaches for cancer.
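Affinities measured by ITC are often compared through the standard binding free energy ΔG = RT ln Kd; a small sketch with hypothetical dissociation constants (not values from this thesis) shows how a weakened affinity appears as a less negative ΔG:

```python
# Convert dissociation constants to standard binding free energies.
# Kd values below are hypothetical placeholders for illustration only.
import math

R = 8.314    # J / (mol K)
T = 298.15   # K

def dG_kJ_per_mol(Kd_molar):
    # dG = RT ln(Kd), with Kd relative to a 1 M standard state
    return R * T * math.log(Kd_molar) / 1000.0

wt_Kd, phosphomimic_Kd = 1e-9, 1e-7  # hypothetical: weaker binding for Y->E
print(f"WT:           {dG_kJ_per_mol(wt_Kd):6.1f} kJ/mol")
print(f"phosphomimic: {dG_kJ_per_mol(phosphomimic_Kd):6.1f} kJ/mol "
      f"(less negative = weaker)")
```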
Collection
Center for International Security and Cooperation (CISAC) Interschool Honors Program in International Security Studies
Seventy years ago, Germany was a country known more for conquering its neighbors than cooperating with them. But today, Germany is a country known for its economic power, not military might. Some people argue that Germany today is in fact too pacifist, too reluctant to use force even responsibly. With German reunification in 1990, a new foreign policy developed that was closely aligned with the old West German foreign policy. Both were characterized by a domestic culture driven by multilateralism and pacifism. German foreign policy in the 1990s was defined by the prioritization of multilateralism and pacifism; does this domestic culture still guide German foreign policy today? This thesis examines the question of what has governed German foreign policy thus far in the 21st century. In order to answer this question, the thesis develops two frameworks through which to analyze three case studies on the German decision to deploy force abroad – Afghanistan, Iraq, and Libya. The first framework is based on the theory of neorealism. The second framework is based on the theory of domestic politics and culture. This thesis argues that the framework of domestic politics accurately explains 21st century German foreign policy. Neorealism does not. German foreign policy today is governed by domestic politics, which is itself governed by domestic culture. There are two critical aspects of German domestic culture today. First, Germany today prioritizes the principle of pacifism over the principle of multilateralism. Second, Germany today prioritizes pacifism over that of humanitarianism. Overall, thus far in the 21st century Germany seems to be a country that prioritizes the assurance of never-again German militarism over that of never-again German unilateralism or never-again mass international human right abuses.
Seventy years ago, Germany was a country known more for conquering its neighbors than cooperating with them. But today, Germany is a country known for its economic power, not military might. Some people argue that Germany today is in fact too pacifist, too reluctant to use force even responsibly. With German reunification in 1990, a new foreign policy developed that was closely aligned with the old West German foreign policy. Both were characterized by a domestic culture driven by multilateralism and pacifism. German foreign policy in the 1990s was defined by the prioritization of multilateralism and pacifism; does this domestic culture still guide German foreign policy today? This thesis examines the question of what has governed German foreign policy thus far in the 21st century. In order to answer this question, the thesis develops two frameworks through which to analyze three case studies on the German decision to deploy force abroad – Afghanistan, Iraq, and Libya. The first framework is based on the theory of neorealism. The second framework is based on the theory of domestic politics and culture. This thesis argues that the framework of domestic politics accurately explains 21st century German foreign policy. Neorealism does not. German foreign policy today is governed by domestic politics, which is itself governed by domestic culture. There are two critical aspects of German domestic culture today. First, Germany today prioritizes the principle of pacifism over the principle of multilateralism. Second, Germany today prioritizes pacifism over that of humanitarianism. Overall, thus far in the 21st century Germany seems to be a country that prioritizes the assurance of never-again German militarism over that of never-again German unilateralism or never-again mass international human right abuses.
Collection
Undergraduate Theses, Department of Biology, 2014-2015
Introduction: The GABAA receptor (GABAAR) is an attractive, tractable drug discovery target. It remains unclear how native neural circuits of the hippocampus respond to drugs in this highly clinically relevant class. The CA1 region of the hippocampus is crucial for learning, memory and cognition, and is thus a key brain region in which to screen GABAergic compounds that may influence these processes. We developed a novel screen for GABAAR ligands, including general anesthetics, by measuring field inhibitory postsynaptic potentials (fIPSPs) in the CA1 area. While offering many of the advantages of an in vitro preparation, field recordings mirror their in vivo counterparts, and unlike intracellular recordings, are minimally invasive to the neuron, typically remaining stable for many hours. Methods: 24-28-day-old Sprague Dawley rats were anesthetized and decapitated. Brains were dissected and submerged in chilled artificial cerebrospinal fluid (ACSF). 400 μm thick coronal slices were cut and maintained in ACSF bubbled with 95% O2 and 5% CO2. fIPSPs were evoked through a bipolar tungsten stimulating electrode placed in the stratum pyramidale (SP) of the CA1 region and recorded by a microelectrode 300-400 μm away in the SP of CA1. GABAergic fIPSPs were isolated with NMDA and AMPA receptor antagonists (d-APV, NBQX, kynurenic acid). fIPSP dependence on GABAAR was confirmed by blockade with a high dose of the GABAAR antagonist picrotoxin (PTX). GABAergic ligands were applied to slices, and their effects on the magnitude and decay kinetics of the fIPSP were measured. Ligands tested include propofol, isoflurane, midazolam, diazepam, flumazenil, furosemide (FUR), and PTX. Results: Hippocampal GABAergic inhibition can be classified by its duration and sensitivity to allosteric modulators like benzodiazepines (BZPs). We characterized the CA1 fIPSP with compounds known to affect these parameters; a subset of our data is summarized here. FUR, a selective antagonist of GABAA-fast, dose-dependently reduces fIPSP amplitude and prolongs its decay, suggesting that the fIPSP is largely mediated by GABAA-fast synapses. In comparison, PTX, a non-selective GABAAR antagonist, depressed evoked fIPSP amplitude without modifying the fIPSP decay. fIPSPs are also sensitive to BZPs, including midazolam and diazepam, both of which enhanced fIPSP amplitude and prolonged decay time. Flumazenil, a BZP antagonist, blocked these effects. Conclusions: This method for studying synaptic inhibition has major advantages over conventional electrophysiological techniques: 1) it is extracellular, so key intracellular signaling molecules remain intact, 2) it detects changes in both tonic and phasic GABAAR-mediated signaling, and 3) it is more stable and technically easier than whole-cell recording. This fast, minimally invasive, neural-population-based approach affords a unique opportunity to assay multiple lead compounds for anesthetic efficacy in an intact, well-characterized neural circuit with clear relevance to learning, memory and cognition.
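A sketch of the sort of measurement the screen relies on: fitting a single exponential to a synthetic fIPSP decay to read out amplitude and decay time constant (the waveform and parameters are illustrative, not recorded data):

```python
# Extract fIPSP amplitude and decay time constant by fitting a single
# exponential to the falling phase of the evoked response.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 0.2, 2000)        # 200 ms sampled at 10 kHz
true_amp, true_tau = 0.5, 0.030      # mV, s (synthetic ground truth)
trace = (true_amp * np.exp(-t / true_tau)
         + np.random.default_rng(2).normal(0, 0.01, t.size))

def decay(t, amp, tau):
    return amp * np.exp(-t / tau)

(amp, tau), _ = curve_fit(decay, t, trace, p0=(0.4, 0.02))
print(f"amplitude = {amp:.3f} mV, decay tau = {tau*1e3:.1f} ms")
# A drug that prolongs inhibition (e.g., a benzodiazepine) would show up
# as an increase in tau; an antagonist as a drop in amplitude.
```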
Book
1 online resource.
The ever-increasing global demand for energy, coupled with a growing awareness of the deleterious effects of fossil fuels, has led to an explosive growth in the development of renewable energy technologies. Solar photovoltaics and wind turbines are attractive options; however, they are intermittent and localized in nature, which necessitates the use of energy storage devices to achieve energy time-shifting and allow for maximum market penetration of renewables into the grid. Unitized regenerative fuel cells (URFCs), based on H2O-H2-O2 interconversion chemistries, are an interesting class of energy storage devices that could potentially enable such a future. However, commercial URFCs typically utilize expensive platinum group metals as catalysts or operate at extremely high temperatures. The development of an alkaline anion exchange membrane (AEM) coupled with earth-abundant catalysts could enable a pathway towards a low-temperature, cost-effective URFC. In order to demonstrate the viability of the AEM-URFC concept, we first developed catalysts for each of the four reactions involved, namely the hydrogen evolution and oxidation reactions (HER, HOR) and the oxygen reduction and evolution reactions (ORR, OER). We adapted a synthesis procedure from the literature to produce a carbon-supported Ni catalyst that displays high activities for the HER, the HOR and the OER. Also, we modified the synthesis procedure of an O2-bifunctional MnOx catalyst previously developed in our lab such that the catalyst retains its high activity yet can be readily integrated into commercial URFC setups. Next, we integrated the Ni and MnOx catalysts with a commercial AEM into an existing cell setup, and the resultant prototype device obtained round-trip efficiencies of 34-40% at 10 mA/cm2. This first report of a precious-metal-free AEM-URFC opens up new possibilities for enabling cost-effective and widespread deployment of renewable electricity. We pinpointed the degradation of carbon on the O2 electrode as one of the factors leading to a loss of device performance upon repeated cycling. Therefore, we developed a carbon-free, precious-metal-free O2 electrode by electrodepositing MnOx onto a stainless steel substrate followed by high-temperature calcination to achieve the desired MnOx phase. Fundamental electrochemical testing in addition to device testing reveals that this electrode exhibits superior stability compared to a carbon-containing O2 electrode, thereby revealing a pathway towards longer-lasting AEM-URFCs. Next, the knowledge gained from developing the AEM-URFC prototype was extended to polymer electrolyte membrane electrolyzers. Precious-metal Pt is typically used at the cathode to drive the HER; however, molybdenum sulfides have shown promise as an active and stable class of non-precious HER catalysts. Despite recent efforts to develop high-performance molybdenum sulfides, there has been limited work showing the effectiveness of these catalysts in operating devices. Hence, we synthesized three distinct molybdenum sulfides via facile routes specially designed for catalyst-device compatibility and demonstrated that these catalysts might potentially replace Pt as the cathode catalyst in commercial electrolyzers. The AEM-URFC device can serve as a platform for incorporating catalysts that display even better performance for any of the four reactions mentioned above. To that end, we investigated two distinct classes of catalysts, namely heteroatom-doped carbons and Ni-based mixed oxides.
Heteroatom-doped carbon-based catalysts have received enormous attention due to their low costs and high activities for the ORR. However, there has been limited work on their performance for the OER and their compatibility with existing commercial setups. Therefore, we developed an NH3-activated N-doped carbon catalyst via a facile synthesis route and showed that this catalyst displays high performance both in a three-electrode electrochemical cell and in an operating RFC device, thereby demonstrating the feasibility of N-doped carbon replacing Pt and Ir as the O2 catalysts in commercial RFCs. Also, literature reports have shown that substrates play a crucial role in the OER activity of Co and Mn oxides. We looked at the case of NiCeOx, which displays low activity for the OER when supported on a typical glassy carbon substrate. However, we discovered that the presence of a thin film of Au results in a significant boost in the OER performance of NiCeOx. The resultant geometric and specific activities are superior to those of the best catalysts reported in the literature, making Au-supported NiCeOx an interesting potential candidate to drive the OER in an AEM-URFC. Metal alloys typically deliver higher-performance oxygen catalysis compared to their pure metal counterparts. However, metal alloys are typically more complicated, and a fundamental understanding of how these catalysts operate in situ is critical to the development of next-generation catalysts that deliver better performance. As such, we developed a bifunctional MnNiOx catalyst and utilized a newly developed synchrotron technique to simultaneously detect changes in the Mn and Ni electronic structures in situ via X-ray emission spectroscopy. The obtained data highlight the effects of adding a second element on the chemical state and resultant catalytic activity of the original catalyst. In summary, this dissertation discusses a broad spectrum of issues with AEM-URFCs, from fundamental catalysis to actual device operation. This work provides an important step towards a potentially commercial-level, precious-metal-free URFC for cost-effective energy storage to help scale the use of intermittent renewable energy.
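At a fixed current density, a URFC's round-trip efficiency is roughly the ratio of discharge (fuel cell) voltage to charge (electrolyzer) voltage; the voltages below are hypothetical values chosen only to reproduce the reported 34-40% range:

```python
# Round-trip efficiency at fixed current density, as a voltage ratio.
def round_trip_efficiency(v_fuel_cell, v_electrolyzer):
    return v_fuel_cell / v_electrolyzer

print(f"{round_trip_efficiency(0.65, 1.90):.0%}")  # ~34% (hypothetical V)
print(f"{round_trip_efficiency(0.72, 1.80):.0%}")  # ~40% (hypothetical V)
```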
Book
1 online resource.
Light field microscopy is a high-speed computational imaging method which enables reconstruction of a 3-dimensional volume from a light field image. Unlike standard imaging systems, the light field microscope uses a microlens array to capture both spatial and angular information about the incoming light in a single image, from which a volume can be reconstructed computationally. Among volumetric imaging methods in microscopy, light field microscopy is unique in that it allows imaging large volumes, spanning hundreds of microns in depth, at a speed limited only by the camera frame rate. Due to this unique advantage it has recently been adapted to in vivo imaging of neural activity, offering biologists a glimpse into an organism's brain in action. Despite its advantages, the light field microscope still suffers from a major limitation: its lateral spatial resolution is not uniform across depth. Some depths, particularly at the center of the imaged volume, where the microscope is focused, show very low resolution, which hinders its use in applications. This non-uniform resolution stems from the way the light field microscope samples the volume: at the center of the volume, samples are angularly discriminative but spatially redundant; hence, for isotropically absorptive or emissive volumes, they cannot support high resolution reconstructions. We present a method that, for such isotropic volumes, significantly improves the resolution profile of the light field microscope across depth and enables accurate control over it, thereby overcoming the limitations of traditional light field design. The key to our approach is using a technique called wavefront coding to control properties of the point spread function of the microscope. By including phase masks in the optical path we create a wavefront coded light field microscope that samples the 3-dimensional volume more uniformly than the standard light field microscope, solving the low resolution problem at the center of the imaged volume and improving the resolution at its borders. We derive an extended optical model for the wavefront coded light field microscope and propose design guidelines and a performance metric which we use to choose adequate parameters for good phase masks. To validate our approach, we show simulated data and experimental resolution measurements, and demonstrate the wavefront coded light field microscope's utility for biological applications.
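A minimal Fourier-optics sketch of wavefront coding, assuming a generic cubic phase mask and illustrative parameters rather than the masks designed in this work:

```python
# Adding a cubic phase profile to the pupil reshapes the point spread
# function (PSF); parameters below are illustrative only.
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2 <= 1.0).astype(float)     # circular aperture

alpha = 20.0                                   # cubic-mask strength (rad)
cubic_phase = alpha * (X**3 + Y**3)

def psf(pupil_amplitude, phase):
    # Fraunhofer propagation: PSF is the squared magnitude of the
    # Fourier transform of the complex pupil function.
    field = pupil_amplitude * np.exp(1j * phase)
    return np.abs(np.fft.fftshift(np.fft.fft2(field)))**2

psf_standard = psf(pupil, np.zeros_like(X))
psf_coded = psf(pupil, cubic_phase)
# The coded PSF is broader but far less sensitive to defocus, which is
# what lets a wavefront coded light field microscope sample depth more
# uniformly.
print(psf_coded.max() / psf_standard.max())    # Strehl-like ratio < 1
```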
Collection
Center for International Security and Cooperation (CISAC) Interschool Honors Program in International Security Studies
China faces significant water security challenges, many of which will be aggravated by increasing energy demands and climate change. This thesis evaluates the magnitude of the water security threats and the effectiveness of current water policies in China. The analysis demonstrates that China understands the significance of the water scarcity threat but also that its current policies are inadequately managing the country's water resources. As China’s energy sector continues to expand, it will need to balance limited water resources among agriculture, industry, energy, and other needs of the people. All of these complex issues surface in the northwest province of Xinjiang, which has political instability and ethnic conflict, severely limited water resources, an important agricultural industry, and the Tarim shale basin which is attractive for hydraulic fracturing. To manage water resources effectively, China will need to reform the provincial decision-making process, starting with new incentives for provincial leaderships. China must find ways to incentivize provincial leaders to prioritize water management, regulate industry to improve water efficiency and reduce pollution, develop energy strategies that factor in water withdrawals, and alleviate ethnic tensions in Xinjiang that could be exacerbated by water insecurity. Failure to manage water resources efficiently through effective water policies could seriously constrain China's future economic development.
Book
1 online resource.
The enormous size and cost of current state-of-the-art accelerators based upon conventional radio-frequency (RF) technology have spawned great interest in developing new acceleration concepts that are more compact and economical. Micro-fabricated dielectric laser accelerators (DLAs) are an attractive approach, as such structures can support accelerating fields one to two orders of magnitude higher than RF cavity-based accelerators. DLAs use commercial lasers as a power source, which are smaller and less expensive than the RF klystrons that power today's accelerators. In addition, DLAs are fabricated via mass-producible, low-cost, lithographic techniques. However, despite several DLA structures being proposed recently, no successful demonstration of acceleration in these structures had been shown until this work. This thesis reports the first observation of high-gradient (exceeding 300 MeV/m) acceleration of electrons in a DLA. Relativistic (60 MeV) electrons are energy modulated over 563 optical periods of a fused silica grating structure, powered by an 800 nm wavelength mode-locked Ti:Sapphire laser. The observed results are in agreement with analytical models and electrodynamic simulations. By comparison, conventional modern linear accelerators operate at gradients of 10-30 MeV/m, and the first linear RF cavity accelerator was 10 RF periods (1 m long) with a gradient of approximately 1.6 MV/m. Our results set the stage for the development of future multi-staged DLA devices composed of integrated on-chip systems. This would enable compact table-top MeV to GeV scale accelerators for security scanners and medical therapy, university-scale x-ray light sources for biological and materials research, and portable medical imaging devices, and would substantially reduce the size and cost of a future multi-TeV scale collider.
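The quoted numbers imply the interaction length and maximum energy gain directly; a back-of-envelope check:

```python
# 563 optical periods of an 800 nm grating give the interaction length,
# and the maximum energy gain is roughly gradient x length.
periods, wavelength_m = 563, 800e-9
gradient_eV_per_m = 300e6                 # >300 MeV/m reported

length_m = periods * wavelength_m         # ~0.45 mm of structure
max_gain_eV = gradient_eV_per_m * length_m
print(f"interaction length ~ {length_m*1e3:.2f} mm")
print(f"max energy gain   ~ {max_gain_eV/1e3:.0f} keV")
```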
Collection
Undergraduate Theses, School of Engineering
Many species of bats rely on echolocation—a biological sonar system—to carry out essential survival tasks such as hunting and orienting in their nighttime environments. How bats echolocate so successfully despite considerable physical and acoustic obstacles is a fascinating problem in signal processing. Because bats often live in large groups, they encounter substantial acoustic interference from echolocating conspecifics that may impede their own ability to echolocate. In the present study, I examine the theoretical effect of interference from conspecifics and congenerics on the target detection ability of two cohabiting species of vespertilionid bat found in California (Myotis californicus and M. yumanensis). I use the wideband ambiguity function, a tool for analyzing human-made radar and sonar systems, to assess whether it is theoretically possible for a bat to shift its call in frequency to reduce the effect of conspecific interference. I apply the same methods to investigate the effect of interfering calls from M. californicus on the target detection ability of M. yumanensis and vice versa. Results indicate that these two species experience considerable interference from both within- and between-species calls. Incorporating data from the field suggests that a bat’s calls are shaped such that most interference an individual is likely to encounter manifests as a reduction in the bat’s ability to accurately locate a target rather than as an increase in the likelihood that the bat will mistake one object for another—an adaptation that may prevent the bat from becoming disoriented or losing track of an acquired target. Differences in calls between M. californicus and M. yumanensis may reflect adaptations to environmental clutter. Further research should focus on the implications of congeneric interference for the spatial and temporal distribution of these species.
Book
1 online resource.
This dissertation presents novel methods for simulating incompressible and compressible flow on a multitude of Cartesian grids that can rotate and translate in order to decompose the domain into regions with varying spatial resolutions. While there is a wide variety of methods for adaptive discretization, many of them suffer from costly remeshing and domain decomposition when scaled to large problems that require large distributed systems. Block-structured approaches such as Adaptive Mesh Refinement (AMR) and Chimera grid methods have been successful in alleviating these issues by utilizing structured grids patched upon one another. However, typical AMR methods constrain grid patches to be axis-aligned, greatly increasing the number of patches required. With so many small patches, typical AMR methods are often more akin to unstructured grids with respect to parallelization and scalability. Chimera grid methods allow the grid patches to rotate, allowing one to resolve interesting features with far fewer degrees of freedom. Moreover, unlike typical AMR methods, which require the coarse grid lines to match up with the fine grid lines along patch boundaries, Chimera grid methods have no such requirement, allowing the grids to move in order to, for example, follow the motion of moving solids without remeshing. The presented computational framework can be categorized as a Chimera grid method, and new ideas are proposed regarding conservation, linear systems for implicit solves, and alleviating time step restrictions. The incompressible Navier-Stokes equations are discretized on overlapping grids by first performing advection on each grid with first- or second-order accurate semi-Lagrangian schemes extended to Chimera grids in order to alleviate the time step restrictions associated with the small cells introduced by adaptivity. In order to solve for stiff terms such as the pressure or viscous forces implicitly on overlapping grids, local Voronoi meshes are constructed along intergrid boundaries to connect the degrees of freedom across different grids in a contiguous manner, resulting in a symmetric positive-definite system that can be solved via the preconditioned conjugate gradient method. In order to handle free surface flow on overlapping grids, the particle level set method is modified, including devising particle treatment across grid boundaries with disparate cell sizes and designing strategies to deal with locality in the implementation of the level set and fast marching algorithms. The resulting method is highly scalable on distributed parallel architectures with minimal communication costs. The Euler equations for compressible flow are discretized using a semi-implicit formulation that splits the time integration into an explicit step for advection followed by an implicit solve for the pressure. A second-order accurate flux-based scheme is devised to handle advection on each moving Cartesian grid using an effective characteristic velocity that accounts for the grid motion. In order to avoid the stringent time step restriction imposed by small cells, strategies are proposed to allow a fluid velocity CFL number larger than 1. The stringent time step restriction related to the sound speed is alleviated by formulating an implicit linear system to find a pressure consistent with the equation of state, again utilizing the Voronoi mesh to obtain a symmetric positive-definite system.
Since a straightforward application of this technique contains an inherent central differencing which can result in spurious oscillations, a new high-order diffusion term is introduced, similar in spirit to ENO-LLF but solved for implicitly in order to avoid any associated time step restrictions. The method is conservative on each grid, as well as globally conservative on the background grid that contains all other grids. Moreover, a conservative interpolation operator is devised for remapping values in order to keep them consistent across different overlapping grids. Additionally, the method is extended to handle two-way solid-fluid coupling in a monolithic fashion. In solid-fluid coupling problems, the fluid in the thin gap between solids in close proximity is difficult to resolve with fluid grids. Although one might attempt to address this difficulty using an adaptive, body-fitted, or ALE fluid grid, the size of the fluid cells can shrink to zero as the solids collide. The inability to apply pressure forces in a thin lubricated gap tends to make the solids stick together, since collision forces stop interpenetration but vanish when the solids are separating, leaving the fluid pressure forces on the surfaces of the solids unbalanced across the gap region. This problem is addressed by adding fluid pressure and velocity degrees of freedom onto the solids' surfaces, and subsequently using the resulting pressure forces to provide solid-fluid coupling in the thin gap region. These fluid pressure and velocity degrees of freedom readily resolve the tangential flow along the solid surface inside the gap and are two-way coupled to the fluid degrees of freedom on the grids, allowing the fluid to freely flow into and out of the gap region, which again results in a symmetric positive-definite implicit linear system.
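To make the advection step above concrete, here is a minimal first-order semi-Lagrangian update on a single periodic 1D grid. It shows the key stability property the dissertation exploits, namely that the update remains stable for CFL numbers above 1; the full Chimera-grid machinery (overlapping grids, Voronoi intergrid stencils) is of course far more involved.

```python
import numpy as np

def semi_lagrangian_step(q, u, dx, dt):
    """One first-order semi-Lagrangian advection step on a periodic 1D grid.
    For each node, trace the characteristic back a distance u*dt and
    linearly interpolate the advected quantity q at the departure point."""
    n = q.size
    x = np.arange(n) * dx
    x_depart = (x - u * dt) % (n * dx)            # departure points, periodic wrap
    i = np.floor(x_depart / dx).astype(int)
    frac = x_depart / dx - i
    return (1 - frac) * q[i] + frac * q[(i + 1) % n]

# Advect a Gaussian bump at CFL = 2.5; the scheme stays stable (though
# diffusive), unlike an explicit Eulerian upwind step at the same CFL.
n, dx, u = 128, 1.0, 1.0
dt = 2.5 * dx / u
q = np.exp(-0.5 * ((np.arange(n) * dx - 32) / 4.0) ** 2)
for _ in range(20):
    q = semi_lagrangian_step(q, u, dx, dt)
print(q.max())  # peak is smeared but bounded; no blow-up
```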
Book
1 online resource.
This dissertation studies mechanisms that mediate inhibitory and excitatory synaptic transmission in the healthy brain and in animal models of stroke and epilepsy. Stroke is a major cause of disability yet lacks pharmacotherapies for recovery (Donnan et al., 2008). During the repair phase, spontaneous cortical circuit plasticity and reorganization adjacent to the stroke site (peri-infarct) promote functional recovery (Carmichael, 2006; Dijkhuizen et al., 2001; Murphy and Corbett, 2009; Nudo et al., 1996). Elucidating mechanisms that target these endogenous brain repair processes could lead to new therapeutics with a broad treatment window. Inhibiting the post-stroke increase in tonic (extrasynaptic) GABA signaling during the repair phase was reported to enhance functional recovery in mice, suggesting that GABA plays an important role in modulating brain repair (Clarkson et al., 2010). While tonic GABA appears to suppress brain repair after stroke, the role of phasic (synaptic) GABA during the repair phase is unknown. In Chapter 2, we report a post-synaptic increase in phasic GABA signaling within the peri-infarct cortex that is specific to layer 5 pyramidal neurons; we measured increased numbers of alpha-1 receptor subunit-containing GABAergic synapses detected using array tomography, and an associated increase in the efficacy of spontaneous and miniature inhibitory post-synaptic currents in pyramidal neurons. In contrast to the reported effects of tonic inhibition, enhancing phasic GABA signaling in the recovery phase using zolpidem, an alpha-1 subunit positive allosteric modulator (Crestani et al., 2000), improved behavioral recovery. These data identify a novel role for phasic GABA signaling in brain repair, indicate zolpidem's potential to improve recovery, and underscore the necessity of distinguishing the roles of tonic and phasic inhibition in stroke recovery. Temporal lobe epilepsy (TLE) is the most common form of adult seizure disorder and is often associated with drug-refractory epilepsy (Wiebe, 2000). GABAARs are thought to play a key role in the pathophysiology of many types of epilepsy, including TLE, and are the target site of benzodiazepines, commonly used as antiepileptic medications (Noebels et al., 2012a; Gonzalez and Brooks-Kayal, 2011). Previous studies have shown that dentate granule cells display an increased pharmacological response to the central benzodiazepine receptor (CBR) antagonist flumazenil (FLZ) after status epilepticus (SE) in an animal model of temporal lobe epilepsy. It has been previously reported that in slices taken from pilocarpine-induced epileptic rats, hippocampal dentate granule cells demonstrate an FLZ-induced reduction in mIPSC half-width. This is in contrast to control animals, where FLZ application has no effect on dentate granule cell mIPSC kinetics. The mechanism(s) by which FLZ reduces mIPSC half-width in SE tissue is not known, but one hypothesis is that an endogenous compound active at the CBR may be upregulated after SE. However, in Chapter 3 I performed preliminary experiments in animals with SE and did not find the consistent reduction in mIPSC parameters after SE that was previously reported (Leroy et al., 2004). My results demonstrate that FLZ is not acting as a pure negative allosteric modulator (NAM); instead, it may act occasionally as a NAM and predominantly as a positive allosteric modulator (PAM), or NAMs may be expressed after seizures and actually weaken IPSCs.
These experiments suggest that a small subset of dentate granule cells respond to FLZ, with varying response profiles. Synaptic transmission requires a continuous supply of neurotransmitter for release. Although most types of neurons use direct reuptake to recycle released neurotransmitters, evidence indicates that glutamatergic synapses rely predominantly on astrocytes for the generation and recycling of glutamate (Hertz, 1979). Although biochemical studies suggest that excitatory neurons are metabolically coupled with astrocytes to generate the glutamate necessary to maintain glutamatergic neurotransmission, direct electrophysiological evidence is lacking. In fact, a requirement for the cycle has only been demonstrated during epileptiform activity, a disease setting in which glutamate release is greatly increased (Bacci et al., 2002; Otsuki et al., 2005; Tani et al., 2010). The large distance between cell bodies and axon terminals limits the contribution of somatic sources to the pool of glutamate available for synaptic release and predicts that the glutamine-glutamate cycle is synaptically localized. In Chapter 4, we investigated neurotransmitter release from isolated nerve terminals in brain slices by transecting hippocampal Schaffer collaterals and cortical layer I axons. Stimulating with alternating periods of high frequency (20 Hz) and rest (0.2 Hz), we identified an activity-dependent reduction in synaptic efficacy that correlated with reduced glutamate release. This reduction was enhanced by inhibition of astrocytic glutamine synthetase and reversed or prevented by exogenous glutamine. Importantly, this activity dependence was also revealed with an in vivo-derived natural stimulus at both network and cellular levels. These data provide direct electrophysiological evidence that an astrocyte-dependent glutamate-glutamine cycle is required to maintain active neurotransmission at excitatory terminals.
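The mIPSC half-width measurement discussed above reduces to a simple full-width-at-half-amplitude computation. The sketch below applies it to a synthetic bi-exponential mIPSC; all waveform parameters are illustrative, not values from the recordings.

```python
import numpy as np

def half_width_ms(t_ms, current_pa):
    """Full width at half of the peak amplitude of a negative-going mIPSC."""
    amp = current_pa.min()                      # inward current: most negative point
    idx = np.flatnonzero(current_pa <= amp / 2.0)
    return t_ms[idx[-1]] - t_ms[idx[0]]

# Synthetic mIPSC: bi-exponential with 1 ms rise and 8 ms decay (illustrative).
t = np.arange(0, 50, 0.05)                      # ms, 20 kHz sampling
tau_rise, tau_decay, peak_pa = 1.0, 8.0, -40.0
wave = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
wave = peak_pa * wave / np.abs(wave).max()      # scale so the peak is -40 pA

print(f"half-width: {half_width_ms(t, wave):.2f} ms")
```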
Book
1 online resource.
Dynamic state-space models are useful for describing data in various fields, including robotics. An important problem that may be solved using dynamic state-space models is the estimation of underlying state processes from given observations. When the models are non-linear and the noise is non-Gaussian, the problem cannot be solved analytically; thus, particle filters, also known as sequential Monte Carlo methods, tend to be employed. However, because particle filters are based on sequential importance sampling, the problem arises of how to select the importance density function. Handling unknown parameters in the model presents another significant difficulty in particle filtering. Simultaneous localization and mapping (SLAM) in robotics is one well-known but difficult problem for which particle filters have been used. This dissertation is motivated by SLAM problems and related particle filtering approaches. In this dissertation, we design a new proposal distribution that better approximates the optimal importance function, using a novel way of combining information from observations and state transition dynamics. In the first part of our study, after reviewing representative approaches to SLAM problems, we justify our method of combining information with a series of examples and offer an efficient means of constructing the new proposal distribution. In the second part, we focus on the problems inherent in handling unknown parameters in state-space models. We suggest applying a one-step recursive expectation-maximization (EM) algorithm to learn unknown parameters, and recommend pairing it with the new proposal distribution to form an adaptive particle filter algorithm. Furthermore, we propose a new SLAM filter based on adapting the new adaptive particle filter to SLAM problems. In Chapter 3, we conduct simulation studies on localization and SLAM problems to demonstrate the superior numerical performance of the proposed algorithms.
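For orientation, the sketch below implements a generic bootstrap particle filter (sequential importance sampling with resampling) on a standard nonlinear benchmark model. It uses the state-transition prior as the proposal, which is precisely the naive baseline that an improved proposal distribution like the dissertation's is designed to outperform; the model and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def transition(x, t):
    """Benchmark nonlinear growth model commonly used to test particle filters."""
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * t)

def bootstrap_pf(ys, n=1000, q=10.0, r=1.0):
    """Bootstrap particle filter: propose from the state-transition prior,
    weight by the observation likelihood (y = x^2/20 + noise), and apply
    multinomial resampling to fight weight degeneracy."""
    x = rng.normal(0.0, 1.0, n)
    means = []
    for t, y in enumerate(ys):
        x = transition(x, t) + rng.normal(0, np.sqrt(q), n)   # propose from prior
        w = np.exp(-0.5 * (y - x**2 / 20) ** 2 / r)           # likelihood weights
        w /= w.sum()
        means.append(np.dot(w, x))                            # posterior mean estimate
        x = x[rng.choice(n, n, p=w)]                          # resample
    return np.array(means)

# Simulate the model, then filter the noisy observations.
xt, ys = 0.0, []
for t in range(50):
    xt = transition(xt, t) + rng.normal(0, np.sqrt(10.0))
    ys.append(xt**2 / 20 + rng.normal(0, 1))
print(bootstrap_pf(np.array(ys))[:5])
```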
Book
1 online resource.
Delamination of solar module laminates (encapsulations, backsheets and frontsheets) is the least understood failure mode in the PV industry. It is well known, however, that long-term exposure to environmental stressors, including moisture, temperature and UV light, often leads to laminate degradation and ensuing loss of adhesion. Low levels of mechanical stress (thermal, wind, or handling) can then lead to module debonding, component corrosion and loss of function. Although adhesive failure of module laminates is frequently reported, it has yet to be characterized, understood or quantified. In this dissertation, I present a series of mechanics-based techniques to quantify adhesion in encapsulations, backsheets and frontsheets. Cantilever-beam and cantilever-plate techniques were developed for small specimens and full-size modules. Up to 90% loss of laminate adhesion was measured after small increases of operating temperature (T), relative humidity (RH) and UV light. These metrologies allow the use of an absolute scale (J/m²) to quantify adhesive stability after field or simulated exposures. To estimate module lifetime, the kinetics of debonding of the module laminates were characterized in the presence of environmental stressors. Debonding rates as low as 10⁻⁸ m/s were studied as a function of mechanical stress, T, RH and UV light. The debond growth rates of the laminates increased up to 1000-fold with small increases of T (10°C) and RH (15%). To elucidate the mechanisms of environmental debonding, fracture and bond-rupture kinetics models were developed. In these models, the viscoelastic relaxation and reaction-kinetics processes at the debonding tip are used to predict debond growth. The adhesion metrologies and kinetics models we developed constitute a fundamental basis for developing accelerated aging tests and long-term reliability predictions for solar module materials.
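Bond-rupture kinetics models of the kind mentioned above generally take a thermally activated form. The sketch below evaluates a generic rate law v = v0 * exp((-Ea + gamma*G) / (kB*T)) to illustrate the exponential temperature sensitivity; this is a schematic stand-in, not the dissertation's fitted model, and every parameter value is hypothetical.

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def debond_rate(G, T_kelvin, v0=1e3, Ea=0.9, gamma=0.02):
    """Thermally activated debond growth rate (m/s) driven by the applied
    strain energy release rate G (J/m^2). A generic bond-rupture form,
    NOT the dissertation's fitted model; v0, Ea, gamma are hypothetical."""
    return v0 * np.exp((-Ea + gamma * G) / (K_B * T_kelvin))

# For these illustrative parameters a 10 K rise roughly doubles the rate;
# a larger effective barrier (Ea - gamma*G) gives the steeper, up-to-
# 1000-fold shifts reported above, via the same exponential form.
for T in (298.0, 308.0):
    print(f"T = {T:.0f} K: v = {debond_rate(G=20.0, T_kelvin=T):.2e} m/s")
```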
Book
1 online resource.
The ability to precisely modify the genome with high efficiency in a directed manner depends on generating and exploiting a double-strand break in DNA. The tools available to generate double-strand breaks include the recombinases and resolvases, the phage integrases, the homing endonucleases, zinc finger nucleases, transcription activator-like effector nucleases, and the CRISPR/Cas9 system, the most recently developed tool. These tools allow for in-depth studies of gene function and genome structure, the generation of organisms bearing economically important biosynthetic pathways, and the development of novel gene and cell therapies for addressing previously untreatable diseases. Taking full advantage of each tool and knowing when to use each one requires a thorough understanding of how the tool functions. For most of the genome engineering toolkit, this understanding has been achieved. The CRISPR/Cas9 system, however, has been assumed to function in the same manner as the zinc finger and transcription activator-like effector nucleases despite generating blunt-end double-strand breaks rather than staggered breaks. This difference is important to understanding how Cas9-mediated double-strand breaks are repaired and, thus, how best to exploit these breaks for genome engineering. This thesis illustrates the importance of understanding how each genome engineering tool functions when developing advantageous methods for gene and cell therapies.
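As a toy illustration of the blunt-end cut geometry highlighted above: SpCas9 recognizes an NGG PAM and cleaves three base pairs upstream of it, leaving blunt ends. The sketch below scans one strand of an illustrative sequence for PAM sites and reports the canonical cut positions; it ignores the reverse strand and the 20-nt protospacer match that real target selection requires.

```python
import re

def cas9_blunt_cuts(seq):
    """Find NGG PAM sites on the given strand and return the canonical
    SpCas9 blunt cut position, 3 bp 5' of the PAM. Toy illustration only:
    no reverse strand, no guide-RNA (protospacer) matching."""
    cuts = []
    for m in re.finditer(r"(?=([ACGT]GG))", seq):
        pam_start = m.start()
        if pam_start >= 20:              # room for a full 20-nt protospacer
            cuts.append(pam_start - 3)   # cut lands 3 bp upstream of the PAM
    return cuts

seq = "TTGACCTGAAGCTTGGCATTCCGGTACTGTTGGTAAAGCCACCATGG"  # illustrative only
for cut in cas9_blunt_cuts(seq):
    print(f"...{seq[cut-5:cut]} / {seq[cut:cut+5]}... (blunt cut before index {cut})")
```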
Book
1 online resource.
RNA plays critical roles in fundamental biological processes, including transcription, translation, post-transcriptional regulation of gene expression, and catalysis as enzymes. These critical RNA functions are determined by the structures and dynamics of the RNA molecules. Computational methods can be used to predict the structures and dynamics of RNA; unfortunately, the prediction accuracies of current computational methods are still inferior to those of experiments. In this dissertation, I discuss recent advances I made in improving and developing computational methods to accurately predict RNA structures and dynamics. The dissertation contains three individual research projects. In the first part, I present a protocol for Enumerative Real-space Refinement ASsisted by Electron density under Rosetta (ERRASER). ERRASER combines an RNA structure prediction algorithm with experimental constraints from crystallography to correct pervasive ambiguities in RNA crystal structures. On 24 RNA crystallographic datasets, ERRASER corrects the majority of steric clashes and anomalous backbone geometries, improves the average Rfree by 0.014, resolves functionally important structural discrepancies, and refines low-resolution structures to better match higher resolution structures. In the second part, I present HelixMC, a package for simulating kilobase-length double-stranded DNA and RNA (dsDNA and dsRNA) under external forces and torques, as is typical in single-molecule tweezers experiments. It recovered the experimental bending persistence length of dsRNA within the error of the simulations and accurately predicted that dsRNA's "spring-like" conformation would give a two-fold decrease of stretch modulus relative to dsDNA. In the third part, I developed a framework of Reweighting of Energy-function Collection with Conformational Ensemble Sampling (RECCES) to predict the folding free energies of RNA duplexes. With efficient sampling and reweighting, RECCES allows comprehensive exploration of the prediction power of the Rosetta energy function and provides a powerful platform for testing future improvements to the energy function. In all of the projects above, I leveraged rich datasets from previous experiments to develop novel algorithms whose predictions achieved unprecedented accuracies, validated by independent blind tests. The computational methods I developed can also serve as a solid foundation for future efforts to improve the prediction accuracies of RNA computational algorithms.
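For context on the persistence-length and stretch-modulus quantities above, the sketch below evaluates the standard Marko-Siggia worm-like-chain interpolation formula F(x) = (kBT/Lp) * [1/(4(1 - x/L)^2) - 1/4 + x/L], the usual route from tweezers force-extension data to a bending persistence length; it is a textbook formula, not code from HelixMC, and the Lp value is a textbook dsDNA number.

```python
KBT_PN_NM = 4.11  # thermal energy at room temperature, pN*nm

def wlc_force(x_frac, lp_nm):
    """Marko-Siggia worm-like-chain interpolation: force (pN) required to
    stretch a polymer to a fraction x_frac of its contour length, given
    bending persistence length lp_nm (nm)."""
    return (KBT_PN_NM / lp_nm) * (0.25 / (1 - x_frac) ** 2 - 0.25 + x_frac)

# dsDNA-like persistence length ~50 nm (textbook value; dsRNA bends somewhat
# more stiffly). Force rises steeply as extension approaches contour length.
for frac in (0.5, 0.9, 0.95):
    print(f"x/L = {frac:.2f}: F = {wlc_force(frac, 50.0):.2f} pN")
```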