Search results

128,735 results

Book
xvi, 227 p. ; 25 cm.
Green Library
Book
1 online resource.
The labeling of specific biological structures with single fluorescent molecules has ushered in a new era of imaging technology: super-resolution optical microscopy, with resolution far beyond the diffraction limit, down to tens of nanometers. With the features of these exquisite tools in mind, this dissertation discusses optical strategies for measuring the three-dimensional (3D) position and orientation of single molecules with nanoscale precision, as well as several super-resolution imaging studies of structures in living cells. The concepts of single-molecule imaging, super-resolution microscopy, the engineering of optical point spread functions (PSFs), and quantitative analysis of single-molecule fluorescence images are introduced. The various computational methods and experimental apparatuses developed during the course of my graduate work are also discussed. Next, a new engineered point spread function, called the Corkscrew PSF, is presented for 3D imaging of point-like emitters. This PSF has been demonstrated to measure the location of nanoscale objects with 2-6 nm precision in 3D throughout a 3.2-micrometer depth range. Characterization and application of the Double-Helix (DH) PSF for super-resolution imaging of structures within mammalian and bacterial cells is discussed. The DH-PSF enables 3D single-molecule imaging within living cells with precisions of tens of nanometers throughout a ~2-micrometer depth range. Finally, the impact of single-molecule emission patterns and molecular orientation on optical imaging is treated, with particular emphasis on multiple strategies for improving the accuracy of super-resolution imaging. The DH microscope is shown to be well suited for accurately and simultaneously measuring the 3D position and orientation of single molecules.
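A back-of-the-envelope sketch (not taken from the dissertation) of why single-molecule localization beats the diffraction limit: the photon-limited precision of fitting a PSF centroid scales as the PSF width divided by the square root of the detected photon count, the simplified Thompson-Larson-Webb estimate, which ignores background noise and pixelation. The numbers below are purely illustrative.

```python
import numpy as np

# Simplified photon-limited localization precision (Thompson/Larson/Webb),
# ignoring background noise and finite pixel size:
#   sigma_loc ~= s / sqrt(N)
# s: standard deviation of the (roughly Gaussian) PSF, N: detected photons.
def localization_precision(psf_sigma_nm: float, n_photons: int) -> float:
    return psf_sigma_nm / np.sqrt(n_photons)

# A diffraction-limited visible-light PSF (~250 nm FWHM, s ~= 106 nm)
# localized with 5000 detected photons:
print(round(localization_precision(106.0, 5000), 1))  # -> 1.5 (nm)
```

So a few thousand detected photons already push centroid precision to the low-nanometer scale, consistent with the precisions quoted in the abstract.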
Book
x, 342 p. : 79 ill. ; 25 cm.
Green Library
Book
1 online resource.
Traditionally, new adaptive algorithms were developed 'microscopically' by changing the internal structure of least mean squares (LMS), recursive least squares (RLS), and their variants, for example their update equations and optimization criteria. This research attempts to reignite interest in improving adaptive algorithms by considering a different question: if we treat any known adaptive algorithm as a black-box learning agent, what can we do to leverage these little learners to form a more intelligent adaptive algorithm? A framework is developed in this thesis to guide the design process, in which algorithms created from the framework are only allowed to manipulate these little black boxes without hacking into their inner workings. Since it is a block-level (macroscopic) design strategy, the framework is called the 'Macro-Adaptive Framework' (MAF), and algorithms developed from it are called 'Macro-Adaptive Algorithms' (MAAs), hence the name of the thesis. In this thesis, the macro-adaptive framework will be defined. Algorithms satisfying the framework, including new ones developed by the author, will be discussed, analyzed, and compared with existing algorithms, followed by simulation results in adaptive system identification. Since MAF opens the floodgate for aggressive optimization (squeezing more information out of a limited number of samples) that was not previously available, one possible side effect is over-adaptation, which is rarely studied in the adaptive filtering literature. In addition to solutions developed in the thesis, the author conducted original research on this phenomenon, and those results are presented as well.
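As a concrete example of the kind of 'little learner' the thesis treats as a black box, here is a minimal textbook LMS filter performing adaptive system identification. This is standard LMS sketched for illustration, not the thesis's macro-adaptive algorithm; the plant coefficients, step size, and signal lengths are arbitrary.

```python
import numpy as np

# Textbook LMS adaptive filter viewed as a black-box learner: it only sees
# (input, desired) sample pairs and adapts its weights by stochastic gradient.
def lms_identify(x, d, n_taps, mu=0.05):
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]  # [x[n], x[n-1], ..., x[n-k+1]]
        e = d[n] - w @ u                   # a priori estimation error
        w = w + mu * e * u                 # LMS weight update
    return w

# Noise-free system identification: unknown FIR plant h, white input.
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]             # plant output
w = lms_identify(x, d, n_taps=3)
print(np.round(w, 3))                      # converges to h
```

A macro-adaptive scheme in the thesis's sense would only call such a routine (or several of them) as a unit, combining or supervising their outputs, rather than editing the update equation inside the loop.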
Book
1 online resource.
A ubiquitous challenge in modern data and signal acquisition arises from the ever-growing size of the object under study. Hardware and power limitations often preclude sampling with the desired rate and precision, which motivates exploiting signal and/or channel structures to enable reduced-rate sampling while preserving information integrity. This thesis is devoted to understanding the fundamental interplay between underlying signal structures and data acquisition paradigms, as well as to developing efficient and provably effective algorithms for data reconstruction. The main contributions of this thesis are as follows. (1) We investigate the effect of sub-Nyquist sampling upon the capacity of a continuous-time channel. We start by deriving the sub-Nyquist sampled channel capacity under periodic sampling systems that subsume three canonical sampling structures, and then characterize the fundamental upper limit on the capacity achievable by general time-preserving sub-Nyquist sampling methods. Our findings indicate that the optimal sampling structures extract the set of frequencies that exhibits the highest signal-to-noise ratio while suppressing aliasing. In addition, we illuminate an intriguing connection between sampled channels and MIMO channels, as well as a new connection between sampled capacity and MMSE. (2) We study universal sub-Nyquist design, in which the sampler operates independently of instantaneous channel realizations, under a sparse multiband channel model. We evaluate the sampler design based on the capacity loss due to channel-independent sub-Nyquist sampling, and characterize the minimax capacity loss. This fundamental minimax limit can be approached by random sampling in the high-SNR regime, which demonstrates the optimality of random sampling schemes.
(3) We explore the problem of recovering a spectrally sparse signal from a few random time-domain samples, where the underlying frequencies of the signal can assume any continuous values in a unit disk. To address a basis mismatch issue that arises in conventional compressed sensing methods, we develop a novel convex program by exploiting the equivalence between (off-the-grid) spectral sparsity and Hankel low-rank structure. The algorithm exploits sparsity while enforcing physically meaningful constraints. Under mild incoherence conditions, our algorithm allows perfect recovery as soon as the sample complexity exceeds the spectral sparsity level (up to a logarithmic gap). (4) We consider the task of covariance estimation with limited storage and low computational complexity. We focus on a quadratic random measurement scheme in processing data streams and high-frequency signals, which is shown to impose a minimal memory requirement and low computational complexity. Three structural assumptions of covariance matrices, including low rank, Toeplitz low rank, and jointly rank-one and sparse structure, are investigated. We show that a covariance matrix with any of these structures can be universally and faithfully recovered from near-minimal sub-Gaussian quadratic measurements via efficient convex programs for the respective structure. All in all, the central theme of this thesis is on the interplay between economical subsampling schemes and the structures of the object under investigation, from both information-theoretic and algorithmic perspectives.
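The spectral-sparsity/Hankel-low-rank equivalence that contribution (3) builds on can be checked numerically: n equispaced samples of r complex sinusoids, with frequencies anywhere in [0, 1) rather than on a DFT grid, form a Hankel matrix of rank exactly r. A small sketch, with frequencies and amplitudes chosen arbitrarily:

```python
import numpy as np

# x[t] = sum_k a_k * exp(2*pi*i*f_k*t): a spectrally 2-sparse signal whose
# frequencies are continuous-valued (off any discretized DFT grid).
n = 32
freqs = np.array([0.123, 0.387])   # arbitrary off-grid frequencies
amps = np.array([1.0, 0.8])
t = np.arange(n)
x = (amps * np.exp(2j * np.pi * np.outer(t, freqs))).sum(axis=1)

# Hankel matrix H[i, j] = x[i + j]; by the Vandermonde decomposition its
# rank equals the number of underlying frequencies, which is the structure
# a Hankel-low-rank convex relaxation can exploit.
H = np.array([[x[i + j] for j in range(n // 2 + 1)] for i in range(n // 2)])
print(np.linalg.matrix_rank(H))  # -> 2
```

Minimizing a convex surrogate of Hankel rank (the nuclear norm) subject to agreement with the observed entries then recovers the missing samples without ever discretizing the frequency axis, which sidesteps the basis mismatch of on-grid compressed sensing.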
Book
x, 525 p. : ill. ; 24 cm.
Green Library
Collection
Undergraduate Theses, School of Engineering
RNA splicing is a critical step in manufacturing most human proteins. Regulating the splicing machinery is crucial for normal development, and aberrant splicing can result in diseases such as cancer. Recent studies have uncovered a recurrent mutation of the splicing factor U2AF1 in several human cancers including lung cancer. The lung cancer cell line HCC78 is the only cancer cell line known to harbor this mutation, and it also has a gene fusion involving the ROS1 gene that genetically separates it from other types of lung cancer. This study sets out to both examine splicing effects of the mutated U2AF1 in lung cancer and to clarify the relationship between mutant U2AF1 and the ROS1-fusion, two rare genetic alterations that have been observed to occur together in lung cancers at a significantly higher than expected frequency. Specifically, by genomically editing the HCC78 cell line and repairing the U2AF1 point mutation, an appropriate point of comparison has been created using transcription activator-like effector nucleases (TALENs). Use of TALENs allows for targeting sequence-specific locations in the genome for double-stranded breaks. Taking advantage of endogenous DNA repair machinery, a designed sequence has been inserted into the genome of these HCC78 cells to repair the U2AF1 point mutation.
Collection
Undergraduate Theses, Department of Biology, 2013-2014
Adult stem cells are an important class of cells responsible for the maintenance and regeneration of the body’s many tissues. These cells are under heavy investigation for potential therapeutic roles. However, one of the rate-limiting steps is an understanding of the exact mechanisms and genes that regulate this class of cells. To better understand the genetic program regulating adult stem cells, a forward genetic screen was conducted using a two-stage screening process comprising a Flp/FRT primary screen and an EGUF/hid secondary screen. Out of 3,118 mutant males produced, 1,041 were recovered, of which 412 showed loss of germline clones and one showed overproliferation of germline clones in the primary screen. Of the 412 germ cell loss mutations, 59 passed through a secondary screen for cell lethality (EGUF/hid). Six mutant strains have been partially or completely mapped, uncovering germ cell loss mutations in DNA Replication-Related Element Factor (DREF), Apoptosis Inducing Factor (AIF), Guanylyl Cyclase 32E (Gyc32E), and two other loci, as well as a germ cell overproliferation mutation in Star. Further research on DREF has shown that the allele we uncovered genetically separates DREF’s role in cell division and DNA replication from its role in adult stem cell maintenance. Furthermore, I have uncovered a novel antagonistic interaction between DREF and members of the NuRD complex that is essential for the regulation of germline stem cell maintenance. These results suggest that our understanding of the genetic program of adult stem cells can still be enriched by well-designed, classic genetic screens.
Collection
Undergraduate Theses, Department of Biology, 2013-2014
DNA assembly techniques have developed rapidly, enabling efficient construction of complex constructs that would be prohibitively difficult using traditional restriction-digest based methods. Most of the recent methods for assembling multiple DNA fragments in vitro suffer from high costs, complex set-ups, and diminishing efficiency when used for more than a few DNA segments. Here I present a cycled ligation-based DNA assembly protocol that is simple, cheap, efficient, and powerful. The method employs a thermostable ligase and short Scaffold Oligonucleotide Connectors (SOCs) that are homologous to the ends and beginnings of two adjacent DNA sequences. These SOCs direct an exponential increase in the amount of correctly assembled product during a reaction that cycles between denaturing and annealing/ligating temperatures. Products of early cycles serve as templates for later cycles, allowing the assembly of many sequences in a single reaction. In tests I directed the assembly of twelve inserts, in one reaction, into a transformable plasmid. All the joints were precise, and assembly was scarless in the sense that no nucleotides were added or missing at junctions. I applied cycled ligation assembly to construct chimeric proteins, revealing functional roles for individual domains of the Hedgehog signaling pathway protein PTCH1. Simple, efficient, and low-cost cycled ligation assemblies will facilitate wider use of complex genetic constructs in biomedical research.
Collection
Undergraduate Theses, Department of Physics
Hα-emitting pulsar wind bow shock nebulae are rare and beautiful objects whose study has the potential to provide insights into the nature of relativistic shocks, pulsar emission, and the composition of the ISM. We report the results of a large Hα survey of 100 Fermi pulsars to characterize the distribution of Hα pulsar bow shocks, constituting the largest and most sensitive such survey yet undertaken. By reobserving previously known Balmer shocks, we confirm the excellent sensitivity of our observations and reveal additional Hα structure not previously documented around PSRs J0742-2822 and J2124-3358. Our survey discovered three additional Hα shocks of interesting morphology around PSRs J1741-2054 (already discussed in a previous publication), J2030+4415, and J1509-5850. Despite our excellent sensitivity, fully 94 of our targets show no convincing evidence of an Hα shock. We develop a novel method to characterize the sensitivity of our imaging for all frames as a function of bow shock angular size that accounts for the essential role played by the characteristic bow shock shape in aiding detections. Combining these measurements with a standard model of the ISM and the expected Hα flux at a bow shock apex, we conclude that the number of confirmed detections around all Fermi pulsars is in reasonable agreement with our model. Our results are inconsistent with a model that predicts a significantly larger fraction of neutral material in the ISM, and our data may provide a spot-sample constraint on HI.
Collection
Undergraduate Theses, Department of Biology, 2013-2014
Recent work has established disruption of neurogenesis as a key cause of cognitive decline after brain irradiation is used to treat primary and metastatic tumors in children. Although this is widely accepted, little is known about the possibilities for restoring normal neural stem cell (NSC) function after such treatment, either through stem cell transplantation or by promoting endogenous recovery through enhanced trophic effects. It remains unknown whether endogenous quiescent neural stem cells (qNSCs) present in the irradiated brain have the potential to repopulate an injured neurogenic niche, or whether resident stem cells are themselves damaged and unable to repopulate the niche. It is also possible that a niche occupied by defective stem cells may simply block undamaged cells from occupying it; in this case, it may be necessary to ablate cells to create space for engraftment. Whereas most prior research has focused solely on anti-mitotic methods that spare rarely dividing quiescent NSCs, this research analyzes effective ablation of neural stem cells, including quiescent NSCs, in the subgranular zone of the dentate gyrus through diphtheria toxin receptor-mediated cell death. My hypothesis was that partial ablation would show that qNSCs could repopulate the affected niche over time. After ablation treatment, the number of nestin+ cells in the dentate gyrus (DG) was drastically reduced, and remained so following a two-month recovery period. Accompanying this lack of renewal was a partial ablation of neurogenesis in the olfactory bulb, evaluated by quantification of IdU+/CldU+ cells two months after ablation treatment. The data shown here provide new insight into the response of the neurogenic niche and surviving cells to effective ablation of local NSCs, and will potentially have a great impact on further research regarding recovery procedures for irradiation-treated children.
Book
1 online resource.
A C-arm-based cone-beam computed tomography (CBCT) scanner with a digital flat-panel detector represents a promising imaging system for evaluating static 3D joint positions and orientations and cartilage bone stress in vivo under weight-bearing conditions. The C-arm system provides high-resolution (150 μm isotropic) 3D volume images (i.e., a stack of slices) with superior bone contrast, highly flexible trajectories for image acquisition, and short image acquisition times. With the use of contrast agents in CBCT imaging, accurate visualization of soft tissue structures, including the meniscus and articular cartilage, is possible. In the first part of this dissertation, new technologies that enable weight-bearing imaging using a C-arm CT system are described. First, reproducibility of the system trajectory in the new horizontal gantry geometry (i.e., with the axis of rotation perpendicular to the floor) is verified, and the image quality of static objects imaged using the new geometry is presented. Second, the impact of the detector's limited dynamic range on image quality is discussed, and a solution to overcome the limitation is provided. Finally, during in vivo imaging of a human subject in weight-bearing positions, involuntary and non-reproducible motion is significant over the 10-20 s scan; this motion renders the 3D images non-diagnostic. Three new approaches to correct for knee motion have been developed, and the final image quality of in vivo images of volunteers is presented. In the second part of this dissertation, the use of C-arm CT as a diagnostic tool for joint disorders such as patellofemoral pain syndrome and knee osteoarthritis is described. A workflow for image acquisition and analysis for successful measurement of 3D patellofemoral tracking and of in vivo time-dependent cartilage deformation under full weight-bearing conditions is demonstrated.
Book
1 online resource.
Temperature plays a vital role in shaping species' biology and biogeography, but the response of marine species to changes in temperature is still poorly understood over time scales relevant to climate change. In this thesis, I use a single species as a case study to explore acclimation and adaptation to temperature at the levels of gene sequence, gene expression, and whole-animal physiology. My study species is the European green crab, Carcinus maenas, a globally invasive temperate species that thrives across a wide range of environmental temperatures. By comparing an invasive species across seven populations in its native and invasive range, I was able to explore both short-term (invasive range) and long-term (native range) impacts of environmental temperature. To lay the groundwork for this project, I first review the literature on adaptation in marine invasive species. While quantitative research strongly suggests a role for adaptation, the dearth of integrated genetic-quantitative work severely limits our understanding of this process. The rest of this thesis attempts to fill this gap with empirical research. First, I describe the thermal physiology of green crabs in detail, and find that green crabs have high inherent eurythermality and acclimatory plasticity. Despite this thermal flexibility, I also observed potentially adaptive differentiation among populations, particularly within the native range. Population genetics supported a role for significant local adaptation between populations in the species' native range. I identified a number of specific genes likely involved in long-term adaptation between northern and southern native range populations, suggesting that innate immunity and muscle function may be under selection. These data also suggested a more limited role for ongoing, rapid adaptation in the species' invasive range. 
Patterns of gene expression integrate neatly with the genetic and physiological data, and I identified two groups of co-expressed genes whose expression appears related to adaptive differences in intraspecific physiology. Taken together, this project provides detailed, integrative evidence for the importance of acclimatory plasticity, and of both short- and long-term adaptation, to this high-gene-flow species' success across a wide range of thermal environments. Finally, I discuss the broader implications of this work for species persistence in a rapidly changing ocean.
Book
1 online resource.
Organic photovoltaic (OPV) devices using materials compatible with flexible plastic substrates have reached over 10% power conversion efficiency, one of the critical milestones for market penetration. However, organic materials are often mechanically fragile compared to their inorganic counterparts, and devices containing these materials have a higher tendency toward adhesive and cohesive failure. Using a thin-film adhesion technique that enables us to precisely measure the energy required to separate adjacent layers, weak interfaces in OPV devices were identified. For example, the interface of P3HT:PCBM and PEDOT:PSS in a polymer solar cell with the inverted device architecture has an adhesion value of only ~1.5 to 2 J/m². Such poor adhesion between adjacent thin films can contribute to low processing yield and poor long-term reliability. Several strategies to improve adhesion are proposed and quantified in this work, including chemical and thermal treatments. Pre- and post-electrode-deposition thermal annealing can be used to tune interfacial and film parameters, such as interface chemistry, bonding, and morphology, to improve adhesion. Post-annealing effectively improved the adhesion at the P3HT:PCBM/PEDOT:PSS interface. Using near-edge X-ray absorption fine structure (NEXAFS), we precisely quantified the interfacial composition and P3HT orientation at the delaminated surfaces and correlated the increase in adhesion with changes in the interfacial structure. The structural and chemical reorganizations are correlated with the glass transition and crystallization temperatures of the materials used in the structure, so the conclusions can be generalized to other materials systems. Understanding interlayer adhesion and developing strategies to improve the adhesion of OPV materials is essential to improving overall mechanical integrity and yields general guidelines for the design and processing of reliable OPV devices. 
We also demonstrate how moisture and temperature accelerate debond propagation at mechanical stresses well below those required for critical failure. Understanding such debonding kinetics is critical for device reliability and lifetime. When environmental species are introduced, the bulk layers and other interfaces in the OPV structure become more susceptible to debonding. The cohesion of the PEDOT:PSS layer is significantly influenced by moisture along with temperature and mechanical loads. Elucidating the kinetic mechanisms using atomistic bond-rupture models supports the conclusion that decohesion is facilitated by a chemical reaction between water molecules from the environment and strained hydrogen bonds. This extensive series of quantitative analyses establishes the impact of the different environmental species and, most importantly, their synergies, leading to an in-depth understanding of the debonding mechanisms.
Book
x, 230 p. : ill. ; 24 cm.
Green Library
Status of items at Green Library
Green Library Status
Stacks Find it
PF3199 .T45 2014 Unknown
Book
1 online resource.
We present a local optimization method for design of dispersive and nondispersive dielectric structures for applications in nanophotonics, based on adjoint solutions of Maxwell's equations. The sensitivity of a merit function (e.g., total absorption) is sought with respect to design parameters (e.g., shape and size of scattering structures). We use adjoint design sensitivity analysis to obtain this sensitivity in a computationally efficient manner. By discretizing Maxwell's equations as a linear FDTD system with matrix elements that vary smoothly with design parameters, the entire numerical system is made differentiable. The derivative of the merit function with respect to all design parameters may be derived using the chain rule, and calculated efficiently using a solution to the adjoint FDTD system. Next we formulate the adjoint problem as a partial differential equation and solve it with the finite element method. This more accurate method is applied to metals. An "optimization force" may be calculated for all design parameters, or visualized at points along material interfaces to lend insight into design rules. We apply this method to the design of waveguide mode converters and resonant metallic nano-apertures.
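The adjoint trick described in this abstract — one extra linear solve yields the gradient with respect to all design parameters at once — can be sketched on a generic differentiable linear system. This is a minimal illustration under stated assumptions, not the dissertation's FDTD implementation; the matrices `A0`, `dA`, and the merit function `f(x) = c·x` are hypothetical stand-ins:

```python
import numpy as np

# Adjoint sensitivity sketch for a linear system A(p) x = b whose matrix
# varies smoothly with parameters p (a stand-in for the differentiable
# FDTD system in the text). Merit function: f = c^T x.
rng = np.random.default_rng(0)
n, n_params = 5, 3
A0 = rng.standard_normal((n, n)) + 20 * np.eye(n)            # well-conditioned base
dA = [rng.standard_normal((n, n)) for _ in range(n_params)]  # dA/dp_k, assumed smooth
b = rng.standard_normal(n)
c = rng.standard_normal(n)
p = 0.1 * rng.standard_normal(n_params)

def assemble(p):
    return A0 + sum(pk * dAk for pk, dAk in zip(p, dA))

A = assemble(p)
x = np.linalg.solve(A, b)       # one forward solve
lam = np.linalg.solve(A.T, c)   # ONE adjoint solve, shared by all parameters
# Chain rule: since A x = b, dx/dp_k = -A^{-1} (dA_k) x, so
# df/dp_k = c^T dx/dp_k = -lam^T (dA_k) x
grad = np.array([-lam @ (dAk @ x) for dAk in dA])

# Sanity check against finite differences (one solve per parameter)
eps = 1e-6
fd = np.array([(c @ np.linalg.solve(assemble(p + eps * np.eye(n_params)[k]), b)
                - c @ x) / eps for k in range(n_params)])
assert np.allclose(grad, fd, rtol=1e-3, atol=1e-6)
```

The point of the construction is the cost asymmetry: the finite-difference check needs one solve per parameter, while the adjoint gradient needs only the single solve with `A.T`, which is what makes shape optimization over many parameters tractable.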
Book
1 online resource.
  • Fundamentals
  • Complete disassembly planning
  • Flexible disassembly planning
  • Resume.
Book
1 online resource.
Radiotherapy is an image-guided intervention, and medical imaging is involved in every key step of the treatment process, from patient staging, simulation, treatment planning, and radiation delivery to patient follow-up. Image-guided radiation therapy (IGRT) is the most sophisticated method of radiation treatment for addressing tumor movement during treatment. IGRT uses advanced imaging technology, such as cone-beam CT (CBCT) with an on-board imager (OBI), to provide high-resolution, three-dimensional images to pinpoint tumor sites, adjust patient positioning when necessary, and complete a treatment within the standard treatment time slot. Combined with modern planning and delivery technologies such as intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), IGRT improves the accuracy of tumor localization while reducing the radiation exposure of healthy tissues. Radiotherapy often requires repeated CBCT scans of the patient, but excessive radiation exposure is directly related to risk associated with polymorphisms of genes involved in DNA damage and repair. Hence, we first demonstrate algorithms for CBCT dose reduction. With a low-dose CBCT protocol, the radiation exposure can be mitigated, but the signal-to-noise ratio (SNR) of the measurement is also lowered. It is shown that a high-quality medical image can be reconstructed from noisy CBCT projections by solving a large-scale optimization problem with suitable preconditioning techniques. In the presence of patient movement, CBCT projections are highly undersampled, and the preconditioner used in the full-scan case is no longer available. Instead, a first-order method with linearization is adopted for the reconstruction. To further accelerate convergence, we demonstrate a scaling technique in Fourier space. For inverse planning of IGRT, two issues are introduced: beam direction selection and beamlet-based optimization. 
We present an iterative framework suited to modern IMRT/VMAT.
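The reconstruction strategy summarized in this abstract — cast reconstruction as a large-scale optimization and accelerate a first-order method with a preconditioner — can be sketched on a toy least-squares problem. This is an illustrative sketch only: the dense matrix `A`, the Jacobi (diagonal) preconditioner, and the problem sizes are assumptions, not the dissertation's CBCT system or its Fourier-space scaling:

```python
import numpy as np

# Toy preconditioned first-order reconstruction:
#   minimize 0.5 * ||A x - y||^2
# where A is a hypothetical forward (projection) operator and y the
# noiseless measurements, so the true image is exactly recoverable.
rng = np.random.default_rng(1)
m, n = 60, 20
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
y = A @ x_true

# Jacobi preconditioner: the inverse diagonal of the Hessian A^T A.
# It rescales the gradient so all coordinates converge at similar rates.
M_inv = 1.0 / np.diag(A.T @ A)

x = np.zeros(n)
for _ in range(1000):
    g = A.T @ (A @ x - y)   # gradient of the least-squares objective
    x -= 0.3 * M_inv * g    # preconditioned gradient step

assert np.linalg.norm(x - x_true) < 1e-6
```

Plain gradient descent on the same problem would need a step size governed by the largest Hessian eigenvalue and converge at the rate of the worst-conditioned coordinate; the diagonal rescaling narrows that eigenvalue spread, which is the same motivation as the full-scan preconditioning discussed in the abstract.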
Book
273 p. : tab. ; 26 cm.
Green Library
Status of items at Green Library
Green Library Status
Stacks Find it
(no call number) Unavailable In process Request
Book
1 online resource.
This dissertation explores four American artists working at the intersection of sculpture, architecture, and design in the late 1960s and early 1970s as they shaped the discourse of "habitability," a spatial term developed by the National Aeronautics and Space Administration (NASA). While NASA initially used habitability to describe the physiological suitability of hostile environments (e.g., outer space) for human exploration, this study shows how these artists revised the meaning of this term in order to accommodate a variety of architectural, environmental, psychological, and social interpretations. Over the course of four chapters, I examine the artistic practices and theories of four Los Angeles-based artists—James Turrell, Robert Irwin, and Larry Bell, all members of a 1960s "Light and Space" sculptural movement; and Sheila Levrant de Bretteville, a feminist graphic designer and co-founder of the Woman's Building, an independent feminist art and education center—as they directly engaged with the concept of habitability (and in turn, NASA's research) between 1966 and 1973. I consider these four artists and their responses to habitability—which ranged from their enthusiastic collaboration with social scientists and engineers, to their careful parsing of the philosophical and epistemological foundations of the discourse, to their explicit critique on the basis of difference and gender—by way of their work with two additional figures who simultaneously developed habitability in empirical and aesthetic contexts: Edward C. Wortz (1930–2004), a perceptual psychologist who worked in the aerospace industry at the height of the 1960s "Space Race"; and his wife, Melinda (Farris) Wortz (1940–2002), who wrote extensively about these artists as an art historian, critic, and curator. The first comprehensive examination of the Wortz archives, this study addresses the intersection of art and science during the Cold War. 
I combine formal analysis of the artwork occasioned by these collaborations (minimal abstract sculpture, installations of projected light, experimental architecture, and radical interior and graphic design) with inquiry into the social history of both postwar American art and scientific research.