Search results

128,516 results

Book
1 online resource.
The labeling of specific biological structures with single fluorescent molecules has ushered in a new era of imaging technology: super-resolution optical microscopy, with resolution far beyond the diffraction limit, down to tens of nanometers. With the features of these exquisite tools in mind, this dissertation discusses optical strategies for measuring the three-dimensional (3D) position and orientation of single molecules with nanoscale precision, as well as several super-resolution imaging studies of structures in living cells. The concepts of single-molecule imaging, super-resolution microscopy, the engineering of optical point spread functions (PSFs), and quantitative analysis of single-molecule fluorescence images are introduced. The various computational methods and experimental apparatuses developed during the course of my graduate work are also discussed. Next, a new engineered point spread function, called the Corkscrew PSF, is demonstrated for 3D imaging of point-like emitters; this PSF measures the location of nanoscale objects with 2-6 nm precision in 3D throughout a 3.2-micrometer depth range. Characterization and application of the Double-Helix (DH) PSF for super-resolution imaging of structures within mammalian and bacterial cells is then discussed. The DH-PSF enables 3D single-molecule imaging within living cells with precisions of tens of nanometers throughout a ~2-micrometer depth range. Finally, the impact of single-molecule emission patterns and molecular orientation on optical imaging is treated, with particular emphasis on multiple strategies for improving the accuracy of super-resolution imaging. The DH microscope is shown to be well suited for accurately and simultaneously measuring the 3D position and orientation of single molecules.
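As a point of reference for the precision figures quoted above, a standard result from the localization-microscopy literature (not stated in the abstract itself) is that the photon-limited localization precision of an isolated emitter scales roughly as \sigma \approx s/\sqrt{N}, where s is the standard deviation of the point spread function (on the order of 100 nm for a diffraction-limited spot) and N is the number of detected photons. A few thousand detected photons therefore already correspond to precisions of a few nanometers to a few tens of nanometers, consistent with the numbers reported here.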
Collection
Undergraduate Theses, School of Engineering
RNA splicing is a critical step in manufacturing most human proteins. Regulating the splicing machinery is crucial for normal development, and aberrant splicing can result in diseases such as cancer. Recent studies have uncovered a recurrent mutation of the splicing factor U2AF1 in several human cancers including lung cancer. The lung cancer cell line HCC78 is the only cancer cell line known to harbor this mutation, and it also has a gene fusion involving the ROS1 gene that genetically separates it from other types of lung cancer. This study sets out to both examine splicing effects of the mutated U2AF1 in lung cancer and to clarify the relationship between mutant U2AF1 and the ROS1-fusion, two rare genetic alterations that have been observed to occur together in lung cancers at a significantly higher than expected frequency. Specifically, by genomically editing the HCC78 cell line and repairing the U2AF1 point mutation, an appropriate point of comparison has been created using transcription activator-like effector nucleases (TALENs). Use of TALENs allows for targeting sequence-specific locations in the genome for double-stranded breaks. Taking advantage of endogenous DNA repair machinery, a designed sequence has been inserted into the genome of these HCC78 cells to repair the U2AF1 point mutation.
Collection
Undergraduate Theses, Department of Biology, 2013-2014
Adult stem cells are an important class of cells responsible for the maintenance and regeneration of the body’s many tissues. These cells are under heavy investigation for potential therapeutic roles; however, one of the rate-limiting steps is an understanding of the exact mechanisms and genes that regulate this class of cells. To better understand the genetic program regulating adult stem cells, a forward genetic screen was carried out using a two-stage screening process comprising a Flp/FRT primary screen and an EGUF/hid secondary screen. Out of 3,118 mutant males produced, 1,041 were recovered, of which 412 showed loss of germline clones and one showed overproliferation of germline clones in the primary screen. Of the 412 germ cell loss mutations, 59 passed through a secondary screen for cell lethality (EGUF/hid). Six mutant strains have been partially or completely mapped, uncovering germ cell loss mutations in DNA Replication-Related Element Factor (DREF), Apoptosis Inducing Factor (AIF), Guanylyl Cyclase 32E (Gyc32E) and two other loci, as well as a germ cell overproliferation mutation in Star. Further research on DREF has shown that the allele we uncovered genetically separates DREF’s role in cell division and DNA replication from its role in adult stem cell maintenance. Furthermore, I have uncovered a novel antagonistic interaction between DREF and members of the NuRD complex that is essential for the regulation of germline stem cell maintenance. These results suggest that our understanding of the genetic program of adult stem cells can still be enriched by well-designed, classic genetic screens.
Collection
Undergraduate Theses, Department of Biology, 2013-2014
DNA assembly techniques have developed rapidly, enabling efficient construction of complex constructs that would be prohibitively difficult using traditional restriction-digest based methods. Most of the recent methods for assembling multiple DNA fragments in vitro suffer from high costs, complex set-ups, and diminishing efficiency when used for more than a few DNA segments. Here I present a cycled ligation-based DNA assembly protocol that is simple, cheap, efficient, and powerful. The method employs a thermostable ligase and short Scaffold Oligonucleotide Connectors (SOCs) that are homologous to the ends and beginnings of two adjacent DNA sequences. These SOCs direct an exponential increase in the amount of correctly assembled product during a reaction that cycles between denaturing and annealing/ligating temperatures. Products of early cycles serve as templates for later cycles, allowing the assembly of many sequences in a single reaction. In tests I directed the assembly of twelve inserts, in one reaction, into a transformable plasmid. All the joints were precise, and assembly was scarless in the sense that no nucleotides were added or missing at junctions. I applied cycled ligation assembly to construct chimeric proteins, revealing functional roles for individual domains of the Hedgehog signaling pathway protein PTCH1. Simple, efficient, and low-cost cycled ligation assemblies will facilitate wider use of complex genetic constructs in biomedical research.
Collection
Undergraduate Theses, Department of Physics
Hα-emitting pulsar wind bow shock nebulae are rare and beautiful objects whose study has the potential to provide insights into the nature of relativistic shocks, pulsar emission, and the composition of the ISM. We report the results of a large Hα survey of 100 Fermi pulsars to characterize the distribution of Hα pulsar bow shocks, constituting the largest and most sensitive such survey yet undertaken. By reobserving previously known Balmer shocks, we confirm the excellent sensitivity of our observations and reveal additional Hα structure not previously documented around PSRs J0742-2822 and J2124-3358. Our survey discovered three additional Hα shocks of interesting morphology, around PSRs J1741-2054 (already discussed in a previous publication), J2030+4415, and J1509-5850. Despite our excellent sensitivity, fully 94 of our targets show no convincing evidence of an Hα shock. We develop a novel method to characterize the sensitivity of our imaging for all frames as a function of bow shock angular size, accounting for the essential role played by the characteristic bow shock shape in aiding detections. Combining these measurements with a standard model of the ISM and the expected Hα flux at a bow shock apex, we conclude that the number of confirmed detections around all Fermi pulsars is in reasonable agreement with our model. Our results are inconsistent with models of the ISM that predict a significantly larger fraction of neutral material, and our data may provide a spot-sample constraint on HI.
Collection
Undergraduate Theses, Department of Biology, 2013-2014
Recent work has established disruption of neurogenesis as a key cause of cognitive decline after brain irradiation is used to treat primary and metastatic tumors in children. Although this is widely accepted, little is known about the possibility of restoring normal neural stem cell (NSC) function after such treatment, either through stem cell transplant or by promoting endogenous recovery through enhanced trophic effects. It remains to be discovered whether endogenous quiescent neural stem cells (qNSCs) present in the irradiated brain have the potential to repopulate an injured neurogenic niche or whether resident stem cells are themselves damaged and unable to repopulate the niche. It is also possible that a niche occupied by defective stem cells may simply block undamaged cells from occupying it; in this case, it may be necessary to ablate cells to create space for engraftment. Whereas most prior research has focused solely on anti-mitotic methods that spare rarely dividing quiescent NSCs, this research analyzes effective ablation of neural stem cells, including quiescent NSCs, in the subgranular zone of the dentate gyrus through the use of diphtheria toxin receptor-mediated cell death. My hypothesis was that partial ablation would show that qNSCs could repopulate the affected niche over time. After ablation treatment, the number of nestin+ cells in the dentate gyrus (DG) was drastically reduced and remained so following a two-month recovery period. Accompanying this lack of renewal was a partial ablation of neurogenesis in the olfactory bulb, evaluated by quantification of IdU+/CldU+ cells two months after ablation treatment. The data shown here provide new insight into the response of the neurogenic niche and surviving cells to effective ablation of local NSCs, and will potentially have a great impact on further research into recovery procedures for irradiation-treated children.
Book
1 online resource.
Temperature plays a vital role in shaping species' biology and biogeography, but the response of marine species to changes in temperature is still poorly understood over time scales relevant to climate change. In this thesis, I use a single species as a case study to explore acclimation and adaptation to temperature at the levels of gene sequence, gene expression, and whole-animal physiology. My study species is the European green crab, Carcinus maenas, a globally invasive temperate species that thrives across a wide range of environmental temperatures. By comparing an invasive species across seven populations in its native and invasive range, I was able to explore both short-term (invasive range) and long-term (native range) impacts of environmental temperature. To lay the groundwork for this project, I first review the literature on adaptation in marine invasive species. While quantitative research strongly suggests a role for adaptation, the dearth of integrated genetic-quantitative work severely limits our understanding of this process. The rest of this thesis attempts to fill this gap with empirical research. First, I describe the thermal physiology of green crabs in detail, and find that green crabs have high inherent eurythermality and acclimatory plasticity. Despite this thermal flexibility, I also observed potentially adaptive differentiation among populations, particularly within the native range. Population genetics supported a role for significant local adaptation between populations in the species' native range. I identified a number of specific genes likely involved in long-term adaptation between northern and southern native range populations, suggesting that innate immunity and muscle function may be under selection. These data also suggested a more limited role for ongoing, rapid adaptation in the species' invasive range. Patterns of gene expression integrate neatly with the genetic and physiological data, and I identified two groups of co-expressed genes whose expression appears related to adaptive differences in intraspecific physiology. Taken together, this project provides detailed, integrative evidence for the importance of acclimatory plasticity and both short- and long-term adaptation in success across a wide range of thermal environments in a high gene flow species. Finally, I discuss the broader implications of this work to species persistence in a rapidly changing ocean.
Book
1 online resource.
Organic photovoltaic (OPV) devices using materials compatible with flexible plastic substrates have reached over 10% power conversion efficiency, one of the critical milestones for market penetration. However, organic materials are often mechanically fragile compared to their inorganic counterparts, and devices containing these materials have a higher tendency for adhesive and cohesive failure. Using a thin-film adhesion technique that enables us to precisely measure the energy required to separate adjacent layers, weak interfaces in OPV devices were identified. For example, the interface between P3HT:PCBM and PEDOT:PSS in a polymer solar cell with the inverted device architecture has an adhesion energy of only ~1.5 to 2 J/m². Such poor adhesion between adjacent thin films can contribute to low processing yield and poor long-term reliability. Several strategies to improve the adhesion are proposed and quantified in this work, including chemical and thermal treatments. Thermal annealing before and after electrode deposition can be used to tune interfacial and film parameters such as interface chemistry, bonding, and morphology to improve the adhesion; post-annealing effectively improved the adhesion at the P3HT:PCBM/PEDOT:PSS interface. Using near-edge X-ray absorption fine structure (NEXAFS), we precisely quantified the interfacial composition and P3HT orientation at the delaminated surfaces and correlated the increase in adhesion with changes in the interfacial structure. The structural and chemical reorganizations are correlated with the glass transition and crystallization temperatures of the materials used in the structure, and thus the conclusions can be generalized to other materials systems. Understanding interlayer adhesion and developing strategies to improve the adhesion of OPV materials is essential to improving overall mechanical integrity and yields general guidelines for the design and processing of reliable OPV devices. We also demonstrate how moisture and temperature accelerate debond propagation at mechanical stresses well below those required for critical failure; understanding such debonding kinetics is critical for device reliability and lifetime. When environmental species are introduced, the bulk layers and other interfaces in the OPV structure become more susceptible to debonding. The cohesion of the PEDOT:PSS layer is significantly influenced by moisture along with temperature and mechanical loads. Elucidating the kinetic mechanisms using atomistic bond rupture models supports the conclusion that decohesion is facilitated by a chemical reaction between water molecules from the environment and strained hydrogen bonds. This extensive series of quantitative analyses quantifies the impact of the different environmental species and, most importantly, their synergies, leading to an in-depth understanding of the debonding mechanisms.
Book
x, 230 p. : ill. ; 24 cm.
Green Library, Stacks: PF3199 .T45 2014 (status unknown)
Book
1 online resource.
We present a local optimization method for the design of dispersive and nondispersive dielectric structures for applications in nanophotonics, based on adjoint solutions of Maxwell's equations. The sensitivity of a merit function (e.g., total absorption) is sought with respect to design parameters (e.g., the shape and size of scattering structures). We use adjoint design sensitivity analysis to obtain this sensitivity in a computationally efficient manner. By discretizing Maxwell's equations as a linear FDTD system with matrix elements that vary smoothly with design parameters, the entire numerical system is made differentiable. The derivative of the merit function with respect to all design parameters may be derived using the chain rule and calculated efficiently using a solution to the adjoint FDTD system. Next we formulate the adjoint problem as a partial differential equation and solve it with the finite element method; this more accurate method is applied to metals. An "optimization force" may be calculated for all design parameters, or visualized at points along material interfaces to lend insight into design rules. We apply this method to the design of waveguide mode converters and resonant metallic nano-apertures.
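As a rough illustration of the adjoint sensitivity idea summarized above, the sketch below computes the gradient of a merit function for a generic real-valued linear system A(p) x = b with one forward solve and one adjoint solve. The toy parameterization, merit function, and dense solves are assumptions made for the example; the dissertation's actual method operates on an FDTD or finite-element discretization of Maxwell's equations.

```python
import numpy as np

def adjoint_gradient(A, dA_dp, b, df_dx):
    """Gradient of a merit function f(x) subject to A(p) x = b.

    A      : (n, n) system matrix at the current design p
    dA_dp  : list of (n, n) matrices dA/dp_k, one per design parameter
    b      : (n,) source vector (assumed independent of p here)
    df_dx  : callable returning the (n,) gradient of the merit w.r.t. x
    """
    x = np.linalg.solve(A, b)             # forward solve: one "simulation"
    lam = np.linalg.solve(A.T, df_dx(x))  # single adjoint solve, reused for all parameters
    # Chain rule: dF/dp_k = -lam^T (dA/dp_k) x, since A x = b implies dx/dp_k = -A^{-1} (dA/dp_k) x
    return np.array([-lam @ (dAk @ x) for dAk in dA_dp])

# Toy usage: merit f(x) = 0.5 * ||x - x_target||^2 with two design parameters
rng = np.random.default_rng(0)
n = 5
A0 = rng.standard_normal((n, n))
A1 = rng.standard_normal((n, n))
p = np.array([1.0, 0.3])
A = 5.0 * np.eye(n) + p[0] * A0 + p[1] * A1   # matrix entries vary smoothly with p
b = rng.standard_normal(n)
x_target = rng.standard_normal(n)
grad = adjoint_gradient(A, [A0, A1], b, lambda x: x - x_target)
print(grad)
```

The key property the sketch shares with the method described above is that the cost of the full gradient is essentially independent of the number of design parameters: only one additional (adjoint) solve is required.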
Book
1 online resource.
  • Fundamentals
  • Complete disassembly planning
  • Flexible disassembly planning
  • Resume.
Book
1 online resource.
Radiotherapy is an image-guided intervention, and medical imaging is involved in every key step of the treatment process, ranging from patient staging, simulation, treatment planning, and radiation delivery to patient follow-up. Image-guided radiation therapy (IGRT) is the most sophisticated method of radiation treatment for addressing tumor movement during treatment. IGRT uses advanced imaging technology, such as cone-beam CT (CBCT) acquired with an on-board imager (OBI), to provide high-resolution, three-dimensional images that pinpoint tumor sites, allow patient positioning to be adjusted when necessary, and complete a treatment within the standard treatment time slot. Combined with modern planning and delivery technologies such as intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), IGRT improves the accuracy of tumor localization while reducing the radiation exposure of healthy tissues. For radiotherapy, repeated CBCT scans of the patient are often required, but excessive radiation exposure is directly related to the risk of polymorphism of genes involved in DNA damage and repair. Hence, we first demonstrate CBCT reconstruction algorithms for dose reduction. With a low-dose CBCT protocol, the radiation exposure can be mitigated, but the signal-to-noise ratio (SNR) of the measurement is also lowered. It is shown that a high-quality medical image can be reconstructed from noisy CBCT projections by solving a large-scale optimization problem with suitable preconditioning techniques. In the presence of patient movement, CBCT projections are highly undersampled, and the preconditioner used in the full-scan case is no longer available; instead, a first-order method with linearization is adopted for the reconstruction. To further accelerate convergence, we demonstrate a scaling technique in Fourier space. For inverse planning of IGRT, two issues are introduced: beam direction selection and beamlet-based optimization. We present an iterative framework suited to modern IMRT/VMAT.
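For readers unfamiliar with this style of reconstruction, the sketch below shows the general shape of a first-order iterative scheme for recovering an image from noisy projections. The dense random system matrix, Tikhonov penalty, nonnegativity projection, and fixed step size are stand-ins chosen for illustration; the actual work uses a cone-beam projector, its own regularization and preconditioning, and a Fourier-space scaling to accelerate convergence.

```python
import numpy as np

def reconstruct(A, b, lam=0.1, step=None, iters=200):
    """Projected gradient descent on 0.5*||A x - b||^2 + 0.5*lam*||x||^2.

    A : (m, n) projection matrix (stand-in for the CBCT forward projector)
    b : (m,) measured (noisy) projection data
    """
    m, n = A.shape
    if step is None:
        # A Lipschitz bound on the gradient gives a safe fixed step size
        L = np.linalg.norm(A, 2) ** 2 + lam
        step = 1.0 / L
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - b) + lam * x
        x = np.clip(x - step * grad, 0.0, None)  # projected step: attenuation is nonnegative
    return x

# Toy usage: noisy "projections" of a piecewise-constant object
rng = np.random.default_rng(1)
n, m = 64, 80
x_true = np.zeros(n)
x_true[20:40] = 1.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true + 0.05 * rng.standard_normal(m)
x_rec = reconstruct(A, b)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))  # relative reconstruction error
```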
Book
273 p. : tab. ; 26 cm.
Green Library, Stacks: no call number (in process, unavailable)
Book
1 online resource.
This dissertation explores four American artists working at the intersection of sculpture, architecture, and design in the late 1960s and early 1970s as they shaped the discourse of "habitability," a spatial term developed by the National Aeronautics and Space Administration (NASA). While NASA initially used habitability to describe the physiological suitability of hostile environments (e.g., outer space) for human exploration, this study shows how these artists revised the meaning of this term in order to accommodate a variety of architectural, environmental, psychological, and social interpretations. Over the course of four chapters, I examine the artistic practices and theories of four Los Angeles-based artists—James Turrell, Robert Irwin, and Larry Bell, all members of a 1960s "Light and Space" sculptural movement; and Sheila Levrant de Bretteville, a feminist graphic designer and co-founder of the Woman's Building, an independent feminist art and education center—as they directly engaged with the concept of habitability (and in turn, NASA's research) between 1966 and 1973. I consider these four artists and their responses to habitability—which ranged from their enthusiastic collaboration with social scientists and engineers, to their careful parsing of the philosophical and epistemological foundations of the discourse, to their explicit critique on the basis of difference and gender—by way of their work with two additional figures who simultaneously developed habitability in empirical and aesthetic contexts: Edward C. Wortz (1930-2004), a perceptual psychologist who worked in the aerospace industry at the height of the 1960s "Space Race"; and his wife, Melinda (Farris) Wortz (1940-2002), who wrote extensively about these artists as an art historian, critic, and curator. The first comprehensive examination of the Wortz archives, this study addresses the intersection of art and science during the Cold War. I combine formal analysis of the artwork occasioned by these collaborations (minimal abstract sculpture, installations of projected light, experimental architecture, and radical interior and graphic design) with inquiry into the social history of both postwar American art and scientific research.
Book
1 online resource.
Laughter is a universal human response to emotional stimuli. Though the production mechanism of laughter may seem crude when compared to other modes of vocalization such as speech and singing, the resulting auditory signal is nonetheless expressive. That is, laughter triggered by different social and emotional contexts is characterized by distinctive auditory features that convey the state and attitude of the laughing person. By implementing prototypes for interactive laughter synthesis and conducting crowdsourced experiments on the synthesized laughter stimuli, this dissertation investigates acoustic features of laughter expressions and how they may give rise to emotional meaning. The first part of the dissertation (Chapter 3) provides a new approach to interactive laughter synthesis that prioritizes expressiveness. Our synthesis model, with a reference implementation in the ChucK programming language, offers three levels of representation: the transcription mode requires specifying precise values of all control parameters, the instrument mode allows users to freely trigger and control laughter within the instrument's capacities, and the agent mode semi-automatically generates laughter according to its predefined characteristic tendency. Modified versions of this model have served as a stimulus generator for conducting perception experiments, as well as an instrument for the laptop orchestra. The second part of the dissertation (Chapter 4) describes a series of experiments conducted to understand (1) how acoustic features affect listeners' perception of emotions in synthesized laughter, and (2) the extent to which the observed relationships between features and emotions are laughter-specific. To explore the first question, a few chosen features are varied systematically to measure their impact on the perceived intensity and valence of emotions. To explore the second question, we intentionally eliminate timbral and pitch-contour cues that are essential to our recognition of laughter in order to gauge the extent to which our acoustic features are specific to the domain of laughter. As a related contribution, we describe our attempts to characterize features of the auditory signal that can be used to distinguish laughter from speech (Chapter 5). While the corpus used to conduct this work does not provide annotations about the emotional qualities of laughter, and instead simply labels a given frame as either laughter, filler (such as 'uh', 'like', or 'er'), or garbage (including speech without laughter), this portion of the research nonetheless serves as a starting point for applying our insights from Chapters 3 and 4 to a more practical problem involving laughter classification using real-life data. By focusing on the affective dimensions of laughter, this work complements prior work on laughter synthesis that has primarily emphasized acceptability criteria. Moreover, by collecting listeners' responses to synthesized laughter stimuli, this work attempts to establish a causal link between acoustic features and emotional meaning that is difficult to achieve when using real laughter sounds. The research presented in this dissertation is intended to offer novel tools and a framework for exploring many more unsolved questions about how humans communicate through laughter.
Book
1 online resource.
The present research provides support for Affective Norm Theory (ANT), a new theory proposing that cultural context and situational norms interact to define both what is considered an appropriate affective display, and how observers respond to affective norm violations, or instances where the affect a person displays is inconsistent with both situational norms and observer expectations. A series of studies supports the hypotheses put forward by ANT: that in European American cultural contexts, (H1) observers notice affective deviance and (H2) negatively evaluate individuals who display deviant affect, that (H3) one reason affective displays are so powerful is because observers can use deviant displays to draw inferences about moral values, and that (H4) observers narrow the range of affective expressions they find appropriate in response to a stimulus when they interpret it as having moral content. Discussion focuses on the role of beliefs about the meaning of affective displays, individual difference measures, and the importance of gender and cultural context in defining and understanding affective norms and expectations.
Book
xxvii, 205 pages : 38 color illustrations ; 23 cm.
Green Library, Stacks: PJ6 .A2 V.92 (in process, unavailable)
Book
1 online resource.
Below a critical temperature, three-dimensional bosonic gases form a Bose-Einstein condensate, which exhibits spatial coherence. In two-dimensional (2D) systems, true long-range order is impossible at non-zero temperatures, since long-range fluctuations that increase the entropy and destroy the coherence can easily be excited. However, the Berezinskii-Kosterlitz-Thouless (BKT) theory predicts that such a 2D condensate can exhibit quasi-long-range order, characterized by a power-law decay of the spatial correlation function. This thesis presents the first observation of coherence decay in a 2D exciton-polariton condensate following a power law whose exponent (less than 1/4) behaves as predicted by the theory. Exciton-polaritons are quasi-particles that can be described as the quantum mechanical superposition of an exciton in a quantum well and a photon trapped in a semiconductor cavity. Due to their bosonic properties and small effective mass, they already condense at a temperature of a few kelvin, compared to a few hundred nanokelvin in the atomic case. Exciton-polaritons are created by optical excitation of the sample, and they continuously decay through the leakage of photons out of the sample. These leaking photons preserve the coherence properties of the decaying exciton-polaritons, and their coherence can be determined through interference measurements.
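Stated in the notation conventionally used for BKT physics (the symbols below are standard conventions rather than quotations from the thesis), quasi-long-range order means that the first-order spatial coherence decays algebraically below the transition, g^{(1)}(r) \propto r^{-a_p} with a_p \le 1/4, the exponent growing with temperature and reaching 1/4 at the BKT transition point, whereas above the transition the decay becomes exponential. The interference measurements described above extract g^{(1)}(r) from the fringe contrast of the photons leaking out of the cavity.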
Book
1 online resource.
What is the nature of human inference? How does it work, why does it work that way, and how might we like it to work? I advance a framework for answering these questions in tandem, with a rich interplay between normative and descriptive considerations. Specifically, I explore a view of inference based on the idea of probabilistic sampling, which is supported by behavioral psychological data and appears to be neurally plausible, and which also engenders a philosophically novel and appealing view of subjective probability. I then discuss this view in the context of the Bayesian program in cognitive psychology, proposing a methodology of boundedly rational analysis, which particularly exemplifies the normative/descriptive interplay. By taking resource bounds seriously, we can improve and augment the more standard rational analysis strategy. This helps us focus efforts to understand how minds in fact infer, and in turn allows sharpening normative questions about how minds ought to infer. Against this background I explore the phenomenon of metareasoning, which arises naturally when discussing bounded but representationally sophisticated agents, but which has not been explored in the context of probabilistic approaches to the mind. I propose an analysis of metareasoning in terms of the value of information, and explore the consequences of this view for how we should think about inference. The focus of this dissertation is on implemented (or at least implementable) models of agents, and the role of inference in guiding and supporting intelligent action for real, resource-bounded agents. Consequently, a number of the suggestions and claims made are supported by simulation studies.
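As a concrete anchor for the phrase "value of information," the sketch below computes the textbook expected value of perfect information for a tiny two-action decision problem; the scenario and numbers are invented for illustration and are not taken from the dissertation, which develops its own analysis of metareasoning around this kind of quantity.

```python
import numpy as np

# Expected value of perfect information for a simple two-action, two-state decision.
p_state = np.array([0.7, 0.3])        # current belief over world states
utility = np.array([[1.0, -2.0],      # utility[action, state]
                    [0.0,  0.0]])

# Act now: choose the action with the best expected utility under current beliefs.
eu_now = (utility @ p_state).max()

# Deliberate first: if reasoning revealed the true state, the agent would pick the
# best action per state; the value of that information is the expected gain.
eu_informed = (utility.max(axis=0) * p_state).sum()
voi = eu_informed - eu_now
print(eu_now, eu_informed, voi)       # deliberate only if voi exceeds the cost of thinking
```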
Book
1 online resource.
In this dissertation we discuss three problems characterized by hidden structure or information. The first part of this thesis focuses on extracting subspace structures from data. Subspace clustering is the problem of finding a multi-subspace representation that best fits a collection of points taken from a high-dimensional space. As with most clustering problems, popular techniques for subspace clustering are often difficult to analyze theoretically and/or terminate in local optima of non-convex functions; these problems are only exacerbated in the presence of noise and missing data. We introduce a collection of subspace clustering algorithms, which are tractable and provably robust to various forms of data imperfections. We further illustrate our methods with numerical experiments on a wide variety of data segmentation problems. In the second part of the thesis, we consider the problem of recovering the seemingly hidden phase of an object from intensity-only measurements, a problem which naturally appears in X-ray crystallography and related disciplines. We formulate the problem as a non-convex quadratic program whose global optimum recovers the phase information exactly from a near-minimal number of magnitude-only measurements. To solve this non-convex problem, we develop an iterative algorithm that starts with a careful initialization and then refines this initial estimate by iteratively applying novel update rules. The main contribution is that we show that the sequence of successive iterates provably converges to the global optimum at a geometric rate, so that the proposed scheme is efficient in terms of both computational and data resources. We also show that this approach is stable with respect to noise. In theory, a variation on this scheme leads to a near-linear-time algorithm for a physically realizable model based on coded diffraction patterns. In this part of the thesis we also prove similar results about two other approaches: the first is based on convex optimization, and the second is inspired by the error-reduction algorithm of Gerchberg-Saxton and Fienup. We illustrate the effectiveness of our methods with various experiments on image data. Underlying the analysis of this part of the thesis are insights for the analysis of non-convex optimization schemes that may have implications for computational problems beyond phase retrieval. In the third part of the thesis, we look at two related problems involving coherent and redundant dictionaries. The first problem is the recovery of signals from under-sampled data in the common situation where such signals are not sparse in an orthonormal basis, but in a coherent and redundant dictionary. We focus on a formulation of the problem where one minimizes the $\ell_1$ norm of the coefficients of the representation of the signal in the dictionary subject to the measurement constraints, a.k.a. the synthesis problem. For this formulation we characterize the required number of random measurements in terms of geometric quantities related to the dictionary. Furthermore, we connect this problem to the denoising problem, where instead of under-sampled measurements of the signal we observe a noisy version of it. In this case we characterize the reconstruction error obtained by using the over-complete dictionary for denoising and show that it depends on the same geometric quantities that affect the number of measurements in the synthesis problem.
The second problem concerns sparse recovery with coherent and redundant dictionaries, which appears in a variety of applications such as microscopy, astronomy, tomography, computer vision, radar, and seismology. Our results show that sparse recovery via $\ell_1$ minimization is effective in these dictionaries even though they have maximum pair-wise column coherence very close to 1, i.e., they contain almost identical columns. This holds with the proviso that the sparse coefficients are not too clustered. This general theory, when applied to the special case of low-pass Fourier measurements (a.k.a. super-resolution), allows for less restrictive requirements than the recent literature, with significantly shorter proofs.
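The iterative phase-retrieval scheme described in the second part of this abstract starts from a careful initialization and refines it with simple update rules. The sketch below shows one generic scheme of this kind in Python: a spectral-style initialization followed by gradient refinement of an intensity least-squares objective. The objective, initialization scale, and step-size heuristic are illustrative assumptions, not the dissertation's exact algorithm.

import numpy as np

def phase_retrieval(A, y, n_iter=1000, step=None):
    """Attempt to recover x (up to a global phase) from y_i = |a_i^* x|^2,
    where the rows of A are the a_i^*. Generic illustration only."""
    m, n = A.shape
    # Spectral-style initialization: leading eigenvector of (1/m) sum_i y_i a_i a_i^*,
    # rescaled so that its energy matches the average measured intensity.
    Y = (A.conj().T * y) @ A / m
    _, eigvecs = np.linalg.eigh(Y)
    z = eigvecs[:, -1] * np.sqrt(y.mean())
    if step is None:
        step = 0.1 / max(y.mean(), 1e-12)  # heuristic step size, on the order of 1/||z0||^2
    # Gradient refinement of f(z) = (1/2m) sum_i (|a_i^* z|^2 - y_i)^2.
    for _ in range(n_iter):
        Az = A @ z
        z = z - step * (A.conj().T @ ((np.abs(Az) ** 2 - y) * Az)) / m
    return z

The two design choices worth noting are the initialization, taken as the leading eigenvector of the intensity-weighted covariance of the measurement vectors, and a step size scaled to the energy of that initial estimate; both are standard devices for schemes of this type rather than claims about the dissertation's method.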
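The synthesis formulation discussed in the third part of this abstract, minimizing the $\ell_1$ norm of the dictionary coefficients subject to the measurement constraints, can be written down directly with a convex-modeling tool. The sketch below uses CVXPY with a random overcomplete dictionary as a stand-in for the coherent dictionaries discussed above; all dimensions and the choice of modeling interface are illustrative assumptions, not the dissertation's code.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, p, m, s = 64, 128, 32, 4  # signal dim, dictionary atoms, measurements, sparsity (hypothetical)

# Redundant dictionary (random stand-in for a coherent, redundant dictionary).
D = rng.standard_normal((n, p))
D /= np.linalg.norm(D, axis=0)

# Signal that is sparse in the dictionary, not in an orthonormal basis.
alpha_true = np.zeros(p)
alpha_true[rng.choice(p, size=s, replace=False)] = rng.standard_normal(s)
x_true = D @ alpha_true

# Under-sampled linear measurements b = M x.
M = rng.standard_normal((m, n)) / np.sqrt(m)
b = M @ x_true

# l1-synthesis problem: minimize ||alpha||_1 subject to M D alpha = b.
alpha = cp.Variable(p)
problem = cp.Problem(cp.Minimize(cp.norm1(alpha)), [(M @ D) @ alpha == b])
problem.solve()

x_hat = D @ alpha.value
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

The denoising variant mentioned in the abstract replaces the equality constraint with a data-fidelity bound on M D alpha - b, while keeping the same $\ell_1$ objective on the dictionary coefficients.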