Search results

128,871 results

Book
1 online resource.
Due to its intermittent nature, large-scale adoption of solar energy requires new technological advancements to efficiently store and distribute energy. The photoelectrochemical (PEC) splitting of water is a promising way to capture solar energy and store it in the form of chemical bonds. We look at leveraging the advantages of atomic layer deposition (ALD), a technique well known in the microelectronics industry, to address some of the most pressing issues in PEC water splitting. In particular, the focus of our studies is the development of catalysts to drive the oxygen evolution reaction (OER), a reaction typically associated with high overpotentials and sluggish kinetics. We first investigate known active transition metal oxide catalysts, exploring how to enhance their activity with higher surface area and through electronic effects. We create highly active electrocatalysts of both MnOx and NiOx, and discuss some of the advantages and limitations of using ALD to deposit these films. Next, we focus on using ALD to manage charge transport limitations in semiconducting oxide thin films. We demonstrate the sensitivity of semiconducting thin films to film thickness using ALD TiO2 as a model material. We then show how ALD can be used to explore new semiconducting oxide catalysts, focusing on a Ti-Mn oxide system. We also discuss the integration of these catalysts into PEC devices, with an emphasis on the role of stability, oxidation, and surface area in enhancing the OER activity for photoanodes.
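For readers unfamiliar with the term, the overpotential mentioned above is conventionally defined relative to the thermodynamic potential of water oxidation; the relation below is standard electrochemistry, not a quantity taken from this thesis:

```latex
% OER overpotential relative to the thermodynamic water-oxidation potential (vs. RHE)
\[
  \eta_{\mathrm{OER}} \;=\; E_{\mathrm{applied}} \;-\; E^{0}_{\mathrm{O_2/H_2O}},
  \qquad E^{0}_{\mathrm{O_2/H_2O}} = 1.23\ \mathrm{V\ vs.\ RHE}
\]
```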
Book
1 online resource.
Emergency evacuation (egress) is an important issue in the safety design of buildings. Studies of catastrophic incidents have highlighted the need to consider occupants' behaviors for a better understanding of evacuation patterns. Although egress outcomes are influenced by human and social factors, quantifying these factors in design codes and standards is difficult because occupants' characteristics and emergency scenarios vary widely. As an alternative, computational egress simulation tools have been used to evaluate egress designs. However, most current simulation tools oversimplify the behavioral aspects of evacuees. This thesis describes a flexible computational framework that incorporates human and social behaviors in simulations to aid occupant-centric egress design. Based on the analysis of literature in social science and disaster studies, the design requirements of SAFEgress (Social Agents For Egress), an agent-based simulation framework, are derived. In SAFEgress, the agent's decision-making process, the representation of the egress environment and the occupants, and the algorithms that emulate human capabilities in perception and navigation are carefully designed to simulate group dynamics and social interactions. A series of validation tests has been conducted to verify the capability of the framework to model a wide range of behaviors. Case studies of a museum and a stadium show that considering group navigation could cause additional bottlenecks on egress routes, thus prolonging evacuation. On the other hand, by strategically arranging stewards to control crowd flow, evacuation time can be significantly reduced. SAFEgress provides a means to systematically evaluate the effects of human and social factors on egress performance in buildings and facilities. Using the simulation results, facility managers and designers can develop occupant-centric solutions to crowd problems by addressing different scenarios and unique occupants' characteristics. Furthermore, the framework could be applied to support research in social science to investigate the collective behaviors of crowds in a built environment.
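To make the agent-based framing above concrete, the following is a minimal, hypothetical sketch in Python (it is not the SAFEgress implementation; every class, parameter, and value is invented) of evacuees who move toward an exit while being pulled toward their group's centroid:

```python
import random
import math

class Agent:
    """Toy evacuee: moves toward the exit, but is pulled toward its group's centroid."""
    def __init__(self, x, y, group_id, speed=1.2, cohesion=0.3):
        self.x, self.y = x, y
        self.group_id = group_id
        self.speed = speed          # meters per time step (made-up value)
        self.cohesion = cohesion    # weight of the group-attraction term

    def step(self, exit_xy, group_centroid):
        # Direction toward the exit
        ex, ey = exit_xy[0] - self.x, exit_xy[1] - self.y
        # Direction toward the group's centroid (a crude stand-in for group behavior)
        gx, gy = group_centroid[0] - self.x, group_centroid[1] - self.y
        dx = (1 - self.cohesion) * ex + self.cohesion * gx
        dy = (1 - self.cohesion) * ey + self.cohesion * gy
        norm = math.hypot(dx, dy) or 1.0
        self.x += self.speed * dx / norm
        self.y += self.speed * dy / norm

def simulate(agents, exit_xy=(0.0, 0.0), exit_radius=1.0, max_steps=500):
    """Advance all agents until everyone is within exit_radius of the exit."""
    for t in range(max_steps):
        groups = {}
        for a in agents:
            groups.setdefault(a.group_id, []).append(a)
        centroids = {g: (sum(a.x for a in m) / len(m), sum(a.y for a in m) / len(m))
                     for g, m in groups.items()}
        remaining = [a for a in agents
                     if math.hypot(a.x - exit_xy[0], a.y - exit_xy[1]) > exit_radius]
        if not remaining:
            return t
        for a in remaining:
            a.step(exit_xy, centroids[a.group_id])
    return max_steps

if __name__ == "__main__":
    random.seed(0)
    crowd = [Agent(random.uniform(5, 40), random.uniform(5, 40), group_id=i % 5)
             for i in range(50)]
    print("evacuation steps:", simulate(crowd))
```

Raising the cohesion weight in a sketch like this qualitatively reproduces the group-induced slowdown the abstract describes, which is the point of modeling group navigation explicitly.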
Book
1 online resource.
As power generation becomes more distributed and electricity markets become more deregulated throughout the world, utilities have been trying to find ways to match consumption with generation. Recently, advanced metering infrastructure (AMI), including smart meters, has been widely deployed; smart meters make two-way communication possible between the meter and the central system. They provide concrete information about customer electricity consumption and create a unique opportunity to investigate and understand customer consumption behavior much better than before. Integrating the needs of utilities and the potential benefits to customers, the ultimate target of the studies in this dissertation is building a data-driven demand management system based on smart meter data analytics. Thus, we develop scalable data analytics methodologies using data mining and machine learning techniques. The methodologies developed in this dissertation include learning customer consumption patterns, segmenting customers by relevant features, selecting suitable customers for various energy programs, and implementing a data analytics system.
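As a hedged illustration of the load-pattern learning described above (not the dissertation's actual methodology; the data, cluster count, and normalization are made up), daily consumption profiles can be clustered to segment customers by the shape of their usage:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical smart-meter data: one row per customer, 24 hourly readings (kWh)
rng = np.random.default_rng(0)
morning_peak = rng.normal(1.0, 0.2, (100, 24)) + np.exp(-((np.arange(24) - 8) ** 2) / 8)
evening_peak = rng.normal(1.0, 0.2, (100, 24)) + np.exp(-((np.arange(24) - 19) ** 2) / 8)
profiles = np.vstack([morning_peak, evening_peak])

# Normalize each customer's profile so clustering reflects shape, not magnitude
shapes = profiles / profiles.sum(axis=1, keepdims=True)

# Segment customers by consumption pattern
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(shapes)
for k in range(2):
    members = shapes[kmeans.labels_ == k]
    print(f"cluster {k}: {len(members)} customers, peak hour {members.mean(axis=0).argmax()}")
```

Segments recovered this way (morning-peaking versus evening-peaking customers in this toy) are the kind of feature a utility could use when selecting customers for demand-response programs.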
Book
xix, 323 p. : ill. (partly col.) ; 24 cm.
Green Library, Stacks
(no call number): Unavailable, On order
Book
1 online resource.
The labeling of specific biological structures with single fluorescent molecules has ushered in a new era of imaging technology: super-resolution optical microscopy with resolution far beyond the diffraction limit down to some tens of nm. With the features of these exquisite tools in mind, this Dissertation discusses optical strategies for measuring the three-dimensional (3D) position and orientation of single molecules with nanoscale precision and several super-resolution imaging studies of structures in living cells. The concepts of single-molecule imaging, super-resolution microscopy, the engineering of optical point spread functions (PSFs), and quantitative analysis of single-molecule fluorescence images are introduced. The various computational methods and experimental apparatuses developed during the course of my graduate work are also discussed. Next, a new engineered point spread function, called the Corkscrew PSF, is shown for 3D imaging of point-like emitters. This PSF has been demonstrated to measure the location of nanoscale objects with 2-6 nm precision in 3D throughout a 3.2-micrometer depth range. Characterization and application of the Double-Helix (DH) PSF for super-resolution imaging of structures within mammalian and bacterial cells is discussed. The DH-PSF enables 3D single-molecule imaging within living cells with precisions of tens of nanometers throughout a ~2-micrometer depth range. Finally, the impact of single-molecule emission patterns and molecular orientation on optical imaging is treated, with particular emphasis on multiple strategies for improving the accuracy of super-resolution imaging. The DH microscope is shown to be well-suited for accurately and simultaneously measuring the 3D position and orientation of single molecules.
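For readers unfamiliar with how nanometer-scale precision emerges from diffraction-limited spots, the sketch below fits a plain 2D Gaussian to a simulated camera image; it is a generic localization example, not the Double-Helix PSF analysis used in the dissertation, and all parameter values are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_2d(coords, x0, y0, sigma, amplitude, offset):
    x, y = coords
    return offset + amplitude * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

# Simulate a diffraction-limited spot on a 15x15 pixel camera region (100 nm pixels)
rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:15, 0:15]
true_x, true_y = 7.3, 6.8                       # true emitter position (pixels)
image = gaussian_2d((xx, yy), true_x, true_y, sigma=1.3, amplitude=200, offset=10)
image = rng.poisson(image)                      # shot noise

# Fit the noisy spot; the fitted center localizes the emitter far below the pixel size
p0 = (7.0, 7.0, 1.5, image.max(), image.min())
popt, _ = curve_fit(gaussian_2d, (xx.ravel(), yy.ravel()), image.ravel(), p0=p0)
print(f"localized at x={popt[0]*100:.1f} nm, y={popt[1]*100:.1f} nm "
      f"(truth {true_x*100:.0f}, {true_y*100:.0f})")
```

Engineered PSFs such as the Corkscrew and Double-Helix extend this idea by encoding the axial (z) position in the shape of the spot, which is what makes the 3D precisions quoted above possible.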
Book
1 online resource.
The "brittle-ductile transition" is an interval of Earth's crust where the primary mechanism of rock deformation gradually changes from brittle fracturing to viscous flow with increasing depth. Fault-related deformation within this transitional zone significantly influences earthquake rupture nucleation and propagation, yet remains poorly understood due to uncertainty in the constitutive equations that govern deformation by simultaneous brittle and viscous mechanisms. This dissertation provides a new perspective on the topic by integrating detailed field observations, microstructural analysis, and mechanics-based numerical modeling of geologic structures. The Bear Creek field area (central Sierra Nevada, CA) contains abundant left-lateral strike-slip faults in glacially polished outcrops of Lake Edison granodiorite (88±1 Ma) that were active under brittle-ductile conditions. Secondary structures near fault tips include splay fractures in extensional regions and an S-C mylonitic foliation in contractional regions. Microstructural observations (including electron backscatter diffraction analysis), titanium-in-quartz analysis, and thermal modeling of pluton intrusion and cooling indicate that the temperature during early faulting and mylonitization was 400-500°C. The faults remained active until 79 Ma as the pluton cooled to 250-300°C, as evidenced by cataclastic overprinting of the mylonitic foliation and the presence of lower greenschist minerals within faults. Kinematic and mechanical models of the Seven Gables outcrop provide insight into the constitutive behavior of fault-related deformation under brittle-ductile conditions. The outcrop contains a 4 cm-thick leucocratic dike that is offset 42 cm across a 10 cm wide contractional step between two left-lateral strike-slip faults measuring 1.1 m and 2.2 m in length. Within the step, the dike is stretched and rotated about a non-vertical axis, and a mylonitic foliation develops in the dike and surrounding granodiorite. The geometry and kinematic model for this outcrop serve as a basis for a 2D mechanics-based finite element model (FEM). The FEM tests five potential constitutive equations for brittle-ductile deformation: Von Mises elastoplasticity, Drucker-Prager elastoplasticity, power law creep, two-layer elastoviscoplasticity, and coupled elastoviscoplasticity. Models with plastic yield criteria based on the Mises equivalent stress are most successful in reproducing the outcrop deformation. Frictional plastic yield criteria (i.e., Drucker-Prager) and power-law creep are incapable of reproducing the outcrop deformation and can be excluded from further consideration. In addition, the effect of distributed inelastic deformation on fault slip and slip transfer through fault steps is investigated using field observations, microstructural analysis, and FEM results. Distributed plastic shear strain (i.e., mylonitization) near fault tips effectively lengthens faults, allowing for greater maximum slip and greater slip gradients near fault tips. Furthermore, distributed plastic shear strain facilitates slip transfer between echelon fault segments, particularly across contractional steps where plastic shear strain is greatest. However, fault segments separated by contractional steps also have significantly reduced slip in the step-bounding portions of the faults, because shear offset is accommodated by distributed shearing within the step. 
Thus, off-fault distributed inelastic deformation significantly impacts fault behavior within the brittle-ductile transition.
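For context, the "Mises equivalent stress" on which the successful yield criteria above are built is the standard von Mises measure of deviatoric stress; the expression below is textbook continuum mechanics, not a formula quoted from the dissertation:

```latex
% von Mises criterion: yield when the equivalent stress reaches the yield strength
\[
  \sigma_{\mathrm{eq}} \;=\; \sqrt{\tfrac{3}{2}\, s_{ij} s_{ij}} \;\le\; \sigma_Y,
  \qquad s_{ij} \;=\; \sigma_{ij} - \tfrac{1}{3}\,\sigma_{kk}\,\delta_{ij}
\]
```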
The "brittle-ductile transition" is an interval of Earth's crust where the primary mechanism of rock deformation gradually changes from brittle fracturing to viscous flow with increasing depth. Fault-related deformation within this transitional zone significantly influences earthquake rupture nucleation and propagation, yet remains poorly understood due to uncertainty in the constitutive equations that govern deformation by simultaneous brittle and viscous mechanisms. This dissertation provides a new perspective on the topic by integrating detailed field observations, microstructural analysis, and mechanics-based numerical modeling of geologic structures. The Bear Creek field area (central Sierra Nevada, CA) contains abundant left-lateral strike-slip faults in glacially polished outcrops of Lake Edison granodiorite (88±1 Ma) that were active under brittle-ductile conditions. Secondary structures near fault tips include splay fractures in extensional regions and an S-C mylonitic foliation in contractional regions. Microstructural observations (including electron backscatter diffraction analysis), titanium-in-quartz analysis, and thermal modeling of pluton intrusion and cooling indicate that the temperature during early faulting and mylonitization was 400-500°C. The faults remained active until 79 Ma as the pluton cooled to 250-300°C, as evidenced by cataclastic overprinting of the mylonitic foliation and the presence of lower greenschist minerals within faults. Kinematic and mechanical models of the Seven Gables outcrop provide insight into the constitutive behavior of fault-related deformation under brittle-ductile conditions. The outcrop contains a 4 cm-thick leucocratic dike that is offset 42 cm across a 10 cm wide contractional step between two left-lateral strike-slip faults measuring 1.1 m and 2.2 m in length. Within the step, the dike is stretched and rotated about a non-vertical axis, and a mylonitic foliation develops in the dike and surrounding granodiorite. The geometry and kinematic model for this outcrop serve as a basis for a 2D mechanics-based finite element model (FEM). The FEM tests five potential constitutive equations for brittle-ductile deformation: Von Mises elastoplasticity, Drucker-Prager elastoplasticity, power law creep, two-layer elastoviscoplasticity, and coupled elastoviscoplasticity. Models with plastic yield criteria based on the Mises equivalent stress are most successful in reproducing the outcrop deformation. Frictional plastic yield criteria (i.e., Drucker-Prager) and power-law creep are incapable of reproducing the outcrop deformation and can be excluded from further consideration. In addition, the effect of distributed inelastic deformation on fault slip and slip transfer through fault steps is investigated using field observations, microstructural analysis, and FEM results. Distributed plastic shear strain (i.e., mylonitization) near fault tips effectively lengthens faults, allowing for greater maximum slip and greater slip gradients near fault tips. Furthermore, distributed plastic shear strain facilitates slip transfer between echelon fault segments, particularly across contractional steps where plastic shear strain is greatest. However, fault segments separated by contractional steps also have significantly reduced slip in the step-bounding portions of the faults, because shear offset is accommodated by distributed shearing within the step. 
Thus, off-fault distributed inelastic deformation significantly impacts fault behavior within the brittle-ductile transition.
Book
1 online resource.
The privacy regulations and policies for healthcare and other industries are often complex, and the design and implementation of effective compliance systems call for precise analysis of the policies and understanding of their consequences. Similarly, the Web as a complex platform for sophisticated distributed applications has multifaceted security requirements. Its security specifications may involve unstated and unverified assumptions about other components of the web, and may introduce new vulnerabilities and break security invariants assumed by web applications. This work describes two formalization frameworks, one for privacy policies and one for web security policies, and shows that abstract yet informed models of privacy and security policies are amenable to automation, can detect policy violations, and can support useful evaluation of alternate designs. Formalization of applicable privacy policies that regulate business processes can facilitate policy compliance in certain computer applications. In this study, a stratified fragment of Datalog with limited use of negation is used to formalize a portion of the US Health Insurance Portability and Accountability Act (HIPAA). Each communication of Protected Health Information (PHI) is modeled as a mathematical tuple, the privacy policies are modeled as logic rules, and rule combination and conflict resolution are discussed. Federal and State policy makers have called for both education to increase stakeholder understanding of complex policies and improved systems that impose policy restrictions on the access and transmission of Electronic Health Information (EHI). Building on the work of formalizing privacy laws as logic programs, the existence of a representative finite model that exhibits precisely how the policy applies in all the cases it governs is proved for policies that conform to a certain acyclicity pattern evident in HIPAA. This representative finite model could facilitate education and training on policy compliance, and support policy development and debugging. To address the need for secure transmission of usable EHI, a formalization of policy expressed as a logic program is used to automatically generate a form of access control policy used for Attribute-Based Encryption (ABE). This approach, testable using the representative finite model, makes it possible to share policy-encrypted data on untrusted cloud servers, or send strategically encrypted data across potentially insecure networks. As part of the study, a prototype is built to secure Health Information Exchanges (HIEs) using automatically generated ABE policies, and its performance is measured. For web security policies, a formal model of web security based on an abstraction of the web platform is described and partially implemented in the Alloy modeling language. Three distinct threat models are identified that can be used to analyze web applications, ranging from a web attacker who controls malicious web sites and clients, to stronger attackers who can control the network or leverage sites designed to display user-supplied content. Two broadly applicable security goals are proposed, and certain security mechanisms are studied in detail. In the case studies, a SAT-based model-checking tool is used to find both previously known vulnerabilities and new vulnerabilities.
The case study of a Kerberos-based single sign-on system illustrates the differences between a secure network protocol using custom client software and a similar but vulnerable web protocol that uses cookies, redirects, and embedded links instead.
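To illustrate the style of formalization described above, the toy below models each PHI disclosure as a tuple and policy clauses as predicates over that tuple; it is a hypothetical Python sketch, not the stratified-Datalog encoding used in the work, and the rules and role table are invented:

```python
from collections import namedtuple

# A communication of Protected Health Information, modeled as a tuple
Disclosure = namedtuple("Disclosure", "sender recipient subject purpose authorization")

# Hypothetical role assignments for the principals in the example
ROLES = {"dr_jones": "provider", "acme_marketing": "marketer", "mercy_hospital": "covered_entity"}

def permitted_for_treatment(d):
    """Toy positive rule: disclosure to a provider for treatment is permitted."""
    return d.purpose == "treatment" and ROLES.get(d.recipient) == "provider"

def forbidden_marketing(d):
    """Toy negative rule: marketing use requires the subject's own authorization."""
    return d.purpose == "marketing" and d.authorization != d.subject

def compliant(d):
    # Negative rules override positive ones: a simple conflict-resolution policy
    return permitted_for_treatment(d) and not forbidden_marketing(d)

d1 = Disclosure("mercy_hospital", "dr_jones", "patient_a", "treatment", None)
d2 = Disclosure("mercy_hospital", "acme_marketing", "patient_a", "marketing", None)
print(compliant(d1), compliant(d2))  # True False
```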
Book
1 online resource.
One of the biggest challenges in genetics today is to identify genetic variation that influences human phenotypes and to elucidate the mechanisms by which these variants act. This thesis provides insights into technological biases in detecting diverse genetic variation, explores how regulatory variation differs across divergent human populations, and assesses the genetic architecture and evolutionary adaptation of pigmentation variability in one of the oldest modern human populations. The chapters of this thesis apply population genetic principles to provide a better mechanistic understanding of the forces that shape phenotypic diversity across humans. Chapter 2 evaluates current array design strategies for genotype imputation across differing platforms, methodologies, and 1000 Genomes populations. Chapter 3 assesses transcriptome variation across 7 populations from the Human Genome Diversity Project (HGDP) in lymphoblastoid cell lines. Chapter 4 explores the genetic architecture and evolutionary history of lightened skin pigmentation in the southern African ǂKhomani San and identifies novel loci associated with pigmentation. Together, these chapters highlight the importance of diverse human population studies both for methodological and technical development and for building a broader mechanistic understanding of phenotypic variability.
Book
xxi, 259 p. ; 24 cm.
Law Library (Crown)
(no call number): Unavailable, In process
Book
1 online resource.
In statistical learning and modeling, when the dimension of the data is higher than the number of samples, the estimation accuracy can be very poor and thus the model can be hard to interpret. One approach to reducing the dimension is based on the assumption that many variables are a nuisance and redundant; consequently, many variable selection methods have been proposed to identify only the most important variables. Another approach to reducing the dimension is factor extraction, which is based on the assumption that the high-dimensional data can be approximately projected on a lower-dimensional space. Factor extraction, such as principal component analysis (PCA) and canonical correlation analysis (CCA), provides a good interpretation of the data, but the dimension of the reduced space (the number of underlying features) is typically not easy to estimate. In the context of regression analysis, where we want to fit a linear model y_t = B^T x_t + e_t given n observations x_i ∈ R^p and y_i ∈ R^q for i = 1, ..., n, several important variable selection methods, e.g. lasso-type shrinkage, forward stepwise selection and backward elimination, have been well studied for dimension reduction. However, there are not many theoretical results for these methods for multivariate regression models with stochastic regressors (also called 'stochastic regression models') in the literature. In this dissertation, we present an efficient algorithm for solving high-dimensional multivariate linear stochastic regression. The motivation comes from modeling and prediction for multivariate time series models in macroeconomics and for linear MIMO (multiple-input and multiple-output) stochastic systems in control theory. By extending the 'orthogonal greedy algorithm' and 'high-dimensional information criterion' for 'weakly sparse' models in Ing and Lai (2011), we can choose a subset of x_t and reduce the dimension p to o(n). We can then perform reduced-rank regression of y_t on this reduced set of regressors and introduce an information criterion to choose the number of factors and estimate the factors. We provide theoretical results for our algorithm. We carry out simulation studies and an econometric data analysis to evaluate the algorithm.
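A hedged sketch of the two-stage idea described above, on synthetic data: it substitutes plain greedy forward selection and a truncated-SVD reduced-rank fit for the authors' OGA/HDIC procedure, so it illustrates the shape of the algorithm rather than reproducing it:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q, rank = 200, 500, 5, 2

# Sparse, low-rank coefficient matrix B (p x q): only 6 active predictors, rank 2
B = np.zeros((p, q))
B[:6] = rng.normal(size=(6, rank)) @ rng.normal(size=(rank, q))
X = rng.normal(size=(n, p))
Y = X @ B + 0.1 * rng.normal(size=(n, q))

# Stage 1: greedy forward selection of regressors by residual correlation
selected, residual = [], Y.copy()
for _ in range(10):
    scores = np.linalg.norm(X.T @ residual, axis=1)
    scores[selected] = -np.inf                      # do not reselect
    selected.append(int(np.argmax(scores)))
    Xs = X[:, selected]
    coef, *_ = np.linalg.lstsq(Xs, Y, rcond=None)
    residual = Y - Xs @ coef

# Stage 2: reduced-rank fit on the selected subset via truncated SVD of the OLS fitted values
Xs = X[:, selected]
coef, *_ = np.linalg.lstsq(Xs, Y, rcond=None)
U, s, Vt = np.linalg.svd(Xs @ coef, full_matrices=False)
fit_rr = U[:, :rank] * s[:rank] @ Vt[:rank]
print("selected regressors (the active first 6 should appear):", sorted(selected))
print("relative fit error:", np.linalg.norm(Y - fit_rr) / np.linalg.norm(Y))
```

In the dissertation's setting the subset size and the rank would be chosen by information criteria rather than fixed in advance as they are in this toy.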
Book
283 p. : fig., tab.
SAL3 (off-campus storage)
HB991 .A3 H83 2015: Available (In process)
Book
1 online resource.
The brain implements recognition systems with incredible competence. Our perceptual systems recognize an object from various perspectives as it transforms through space and time. A key property of effective recognition is invariance to changes in the input. In fact, invariant representations focus on high-level information and neglect irrelevant changes, facilitating effective recognition. It is desirable for computational simulations to capture invariant properties. However, quantifying and designing invariance is difficult, because the input signals to a perceptual system are high dimensional, and the number of input variations, conceived in terms of separate dimensions of variation such as position, rotation, and scale, can be exponentially large. Natural invariance resides in a subspace of this exponential space, one that, I argue, can be more effectively captured through learning than through design. To capture perceptual invariance, I take the approach of modeling through deep neural networks. These models are classic AI algorithms. A deep neural network characteristically composes simple features from lower layers into more complex representations in higher layers. Going up the hierarchy, the network forms high-level representations which capture various forms of invariance found in natural images. Within this framework, I present three applications. First, I investigate the position-preserving invariance properties of a classical architecture, the convolutional neural network. Indeed, with convolutional networks, I show results surpassing the previous state-of-the-art performance in detecting the location of objects in images. In such models, however, translational invariance is designed, limiting their ability to capture the full invariance structure of real inputs. To learn invariance without design, I exploit unsupervised learning from videos using the 'slowness' principle. Concretely, the unsupervised learning algorithm discovers invariance arising from transformations such as rotation, out-of-plane changes, or warping from motions in video. When quantitatively measured, the learned invariant features are more robust than ones that are hand-crafted. Using such invariant features, recognition in still images is consistently improved. Finally, I explore the development of invariant representations of number through learning from unlabeled examples in a generic neural network. By learning from examples of 'visual numbers', this network forms number representations invariant to object size. With these representations, I illustrate novel simulations for cognitive processes of the 'Approximate Number Sense'. Concretely, I correlate deep-network simulations with the sensitivity of discrimination across a range of numbers. These simulations capture properties of human number representation, focusing on approximate invariance to other stimulus factors.
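As an illustration of the 'slowness' principle referenced above, the sketch below runs linear Slow Feature Analysis on made-up data rather than the deep-network training used in the dissertation; the slowest directions of a whitened signal are recovered in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "video": slowly varying latents observed through a fixed mixing plus fast noise
T, d, k = 2000, 20, 2
slow = np.cumsum(rng.normal(scale=0.01, size=(T, k)), axis=0)            # slow latents
frames = slow @ rng.normal(size=(k, d)) + 0.5 * rng.normal(size=(T, d))  # observations
frames -= frames.mean(axis=0)

# Slowness principle, linear case: whiten, then keep the directions whose outputs
# change least between consecutive frames (smallest temporal-difference variance)
cov = frames.T @ frames / T
evals, evecs = np.linalg.eigh(cov)
whiten = evecs @ np.diag(evals ** -0.5) @ evecs.T
Z = frames @ whiten
dZ = np.diff(Z, axis=0)
dcov = dZ.T @ dZ / (T - 1)
w_evals, w_evecs = np.linalg.eigh(dcov)
slow_features = Z @ w_evecs[:, :k]   # the k slowest features

speed = np.var(np.diff(slow_features, axis=0), axis=0) / np.var(slow_features, axis=0)
print("temporal-variance ratio of learned features (small = slow):", speed.round(3))
```

The deep-network version trades this closed-form solution for a slowness objective optimized by gradient descent over video, which is what lets it capture nonlinear transformations such as out-of-plane rotation.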
Book
428 p. : 239 ill. (chiefly col.) ; 29 cm.
Art & Architecture Library
Status of items at Art & Architecture Library
Art & Architecture Library Status
Stacks
(no call number) Unavailable On order Request
Book
231 p. ; 23 cm.
Stanford University Libraries
(no call number): Unavailable, On order
Book
xii, 484 p. : ill. ; 24 cm.
Green Library, Stacks
PA3 .B45 V.225: Unknown
Book
1 online resource.
Traditionally, new adaptive algorithms were developed 'microscopically' by changing the internal structure of least mean squares (LMS), Recursive Least Squares (RLS) and their variants, such as their update equations and optimization criteria. This research attempts to reignite interest in improving adaptive algorithms by considering a different question: by treating any known adaptive algorithm as a black-box learning agent, what can we do to leverage these little learners to form a more intelligent adaptive algorithm? A framework is developed in this thesis to guide the design process, in which algorithms created from the framework are only allowed to manipulate these little black boxes without hacking into their inner workings. Since it is a block-level (macroscopic) design strategy, the framework is called the 'Macro-Adaptive Framework' (MAF) and algorithms developed from the framework are called 'Macro-Adaptive Algorithms' (MAA), hence the name of the thesis. In this thesis, the macro-adaptive framework (MAF) will be defined. Algorithms satisfying the framework, including new ones developed by the author, will be discussed, analyzed, and compared with other existing algorithms, followed by simulation results in adaptive system identification. Since MAF opens the floodgates for aggressive optimization (squeezing more information out of a limited number of samples) that was not previously available, one possible side effect is over-adaptation, which is rarely studied in the adaptive filtering literature. In addition to solutions developed in the thesis, the author conducted original research on this phenomenon, and those results are presented in the thesis as well.
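To make the black-box framing above concrete, the sketch below treats two LMS filters as opaque learners and blends their outputs with an adapted convex mixing weight, touching only their inputs and outputs. This is an illustrative convex-combination scheme, not the MAF algorithms developed in the thesis, and all parameters are invented:

```python
import numpy as np

class LMS:
    """A black-box adaptive filter: we only feed it inputs and read its output."""
    def __init__(self, taps, mu):
        self.w = np.zeros(taps)
        self.mu = mu
    def update(self, x, d):
        y = self.w @ x
        self.w += self.mu * (d - y) * x
        return y

rng = np.random.default_rng(0)
taps, N = 8, 5000
h = rng.normal(size=taps)                      # unknown system to identify
x = rng.normal(size=N + taps)

fast, slow = LMS(taps, mu=0.05), LMS(taps, mu=0.005)
lam, mu_lam = 0.5, 0.5                         # mixing weight and its step size
errors = []
for n in range(N):
    xn = x[n:n + taps][::-1].copy()
    d = h @ xn + 0.01 * rng.normal()
    y1, y2 = fast.update(xn, d), slow.update(xn, d)
    y = lam * y1 + (1 - lam) * y2              # macro-level combination of black boxes
    e = d - y
    lam = np.clip(lam + mu_lam * e * (y1 - y2), 0.0, 1.0)  # gradient step on e^2 w.r.t. lam
    errors.append(e ** 2)
print("mean squared error, last 500 samples:", np.mean(errors[-500:]))
```

The combiner here never inspects the filters' weights or step sizes, which is the sense in which the design operates at the block (macro) level.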
Book
1 online resource.
Adherent cell functions can be altered by mechanical stimuli through cytoskeleton remodeling and cell-cell junction disruption. Thus, a better understanding of the mechanical response of adherent cells is crucial to the design of pharmacological therapies for cancers and skin blistering diseases. However, a lack of reliable tools to apply mechanical stimuli and probe the cellular response has limited research on the effects of varying strains on adherent cells. Therefore, I develop systems to probe cellular mechanics using microfabrication technology with soft materials specifically designed to exert controlled strain on adherent cells and measure their mechanical response.
Book
1 online resource.
In this thesis, we introduce and apply several new approaches to represent complex geological models in terms of a relatively small number of parameters. These concise representations are then used for history matching, in which the geological parameters characterizing the system are varied in order to generate models that provide predictions in agreement with observed production data. We first introduce a parameterization procedure based on principal component analysis (PCA). Unlike standard PCA-based methods, in which the high-dimensional model is constructed from a (small) set of parameters by simply performing a multiplication using the basis matrix, in this method the mapping is formulated as an optimization problem. This enables the inclusion of bound constraints and regularization, which are shown to be useful for capturing highly-connected geological features and non-Gaussian property distributions. The approach, referred to as optimization-based PCA (O-PCA), is applied here for binary-facies, three-facies and bimodal systems, including nonstationary models. These different types of geological scenarios are represented by varying the form and parameters in the O-PCA regularization term. The O-PCA procedure is applied both to generate new (random) realizations and for data assimilation, though our emphasis in this work is on its use for gradient-based history matching. The gradient of the O-PCA mapping, which is required for gradient-based history matching methods, is determined analytically or semi-analytically, depending on the form of the regularization term. O-PCA is implemented within a Bayesian history-matching framework to provide both the maximum a posteriori (MAP) estimate and to generate multiple history-matched models for uncertainty assessment. For the latter, O-PCA is combined with the randomized maximum likelihood (RML) method. The O-PCA method is shown to perform well for history matching problems, and to provide models that honor hard data, retain the large-scale connectivity features of the geological system, match historical production data and, when used with RML, provide an estimate of prediction uncertainty. O-PCA is also used to directly parameterize upscaled models. In this case the geological parameters represented in O-PCA are directional transmissibilities. Use of this representation should enable history matching to be performed using upscaled models, which could lead to significant computational savings. Finally, we consider kernel PCA (KPCA) methods, which entail the application of PCA in a high-dimensional `feature space.' In order to map the model back to input (physical) space, KPCA requires the solution of the so-called pre-image problem. The pre-image determination is, in general, a challenging optimization problem. A robust pre-image method, which is based on approaches from the machine learning community, is proposed for KPCA representations of complex geological systems. Consistent with the treatment used for O-PCA, we also introduce a regularized version of KPCA, referred to as R-KPCA. The impact of the pre-image method and regularization is demonstrated by generating new (random) realizations and through flow assessments. R-KPCA is also applied for history matching (MAP estimate) and is shown to provide accurate models. Our overall R-KPCA procedure may offer some slight benefit over O-PCA, but the relative simplicity of O-PCA is a key advantage.
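A minimal sketch of the kind of optimization-based reconstruction described above, on synthetic one-dimensional 'facies' data; the bound constraints and the quadratic penalty used here are generic stand-ins, not the authors' O-PCA regularization terms:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic "prior realizations": 200 binary-facies models on a 1D grid of 50 cells
n_real, n_cells, n_pc = 200, 50, 10
centers = rng.integers(10, 40, size=n_real)
realizations = np.array([(np.abs(np.arange(n_cells) - c) < 8).astype(float) for c in centers])

mean = realizations.mean(axis=0)
U, s, Vt = np.linalg.svd(realizations - mean, full_matrices=False)
basis = Vt[:n_pc].T * s[:n_pc] / np.sqrt(n_real)   # PCA basis (n_cells x n_pc)

xi = rng.normal(size=n_pc)                          # low-dimensional parameters

# Standard PCA mapping: a plain matrix multiply (cell values drift outside [0, 1])
m_pca = mean + basis @ xi

# Optimization-based mapping: stay close to the PCA model, subject to bounds and a
# quadratic penalty that pushes cell values toward the binary end-members 0 and 1
def objective(m):
    return np.sum((m - (mean + basis @ xi)) ** 2) + 0.5 * np.sum(m * (1 - m))

res = minimize(objective, x0=np.clip(m_pca, 0, 1), bounds=[(0, 1)] * n_cells)
print("PCA range:", m_pca.min().round(2), m_pca.max().round(2))
print("O-PCA-style range:", res.x.min().round(2), res.x.max().round(2))
```

For gradient-based history matching, as the abstract notes, the useful extra ingredient is that the gradient of this mapping with respect to the low-dimensional parameters can be obtained analytically or semi-analytically.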
Book
251 p. : ill. ; 24 cm.
Green Library, Stacks
(no call number): Unavailable, In process
Book
1 online resource.
Basophils are rare hematopoietic cells that are found, under certain pathological conditions, throughout the tissues of the body: in the skin in atopic or contact dermatitis; in the lungs in allergic asthma or infection; in the liver, spleen and lungs during Nippostrongylus brasiliensis (N.b.) infection. The three projects described herein identified a new aspect of basophil phenotype, helped to clarify a pathway of basophil development, and developed a new mouse model for analyzing mast cell and basophil functions in vivo. In the first study, we showed that when tested at baseline (i.e., without any form of stimulation) mouse basophils express high levels of E-cadherin transcript and protein. E-cadherin protein was detected at the highest levels in bone marrow basophils, but also was present on blood and spleen basophils. In cultures of mouse bone marrow cells, E-cadherin expression on basophils increased over time, peaking at ~days 3-5, and then decreased by day 11 of culture to levels below those in freshly isolated bone marrow basophils. No baseline expression of E-cadherin protein was detected in human blood or bone marrow basophils. Thus, we have identified the expression of E-cadherin, first identified in epithelial cells, by mouse but not human basophils. In the second study, we found that Runx1P1N/P1N mice, which are deficient in the transcription factor distal promoter-derived Runx1 (P1-Runx1), have a marked reduction in the numbers of basophils in the bone marrow, spleen and blood. In contrast, Runx1P1N/P1N mice have normal numbers of mast cells as well as the other granulocytes, i.e., neutrophils and eosinophils. Runx1P1N/P1N mice fail to develop a basophil-dependent immune response, but respond normally when tested for IgE- and mast cell-dependent reactions. These results demonstrate that Runx1P1N/P1N mice exhibit markedly impaired development of basophils, but not mast cells. However, we found that infection with the nematode parasite Strongyloides venezuelensis, or injections of IL-3, can induce modest expansions of the very small populations of basophils in Runx1P1N/P1N mice. Finally, we found that Runx1P1N/P1N mice have normal numbers of granulocyte progenitor cells that can give rise to all granulocytes, but exhibit a > 95% reduction in basophil progenitors (BaPs). These observations indicate that P1-Runx1 is critical for a stage of basophil development between SN-Flk2+/- cells that have granulocyte progenitor potential and BaPs. In the third project, we generated C57BL/6-Cpa3-Cre; Mcl-1fl/fl mice and showed that they are severely deficient in mast cells and also have a marked deficiency in basophils, whereas the numbers of the many other hematopoietic cell populations examined exhibit little or no change. Moreover, Cpa3-Cre; Mcl-1fl/fl mice exhibit marked reductions in the tissue swelling and leukocyte infiltration associated with either mast cell- and IgE-dependent passive cutaneous anaphylaxis or a basophil- and IgE-dependent model of chronic allergic inflammation of the skin, and they cannot develop IgE-dependent passive systemic anaphylaxis. Our findings support the conclusion that the intracellular anti-apoptotic factor, Myeloid cell leukemia-1 (Mcl-1), is required for normal mast cell and basophil development/survival in vivo in mice, and also show that Cpa3-Cre; Mcl-1fl/fl mice represent a useful model for analyses of roles of mast cells and basophils in health and disease.
Taken together, these studies further our understanding and encourage future studies of basophil distribution and function at steady state and during pathological responses.