Search results

128,796 results

Book
325 p. ; 24 cm.
Green Library, Stacks: (no call number), Unavailable (On order)
Book
1 online resource.
Emergency evacuation (egress) is an important issue in the safety design of buildings. Studies of catastrophic incidents have highlighted the need to consider occupants' behaviors for a better understanding of evacuation patterns. Although egress outcomes are influenced by human and social factors, quantifying these factors in design codes and standards is difficult because occupants' characteristics and emergency scenarios vary widely. As an alternative, computational egress simulation tools have been used to evaluate egress designs. However, most current simulation tools oversimplify the behavioral aspects of evacuees. This thesis describes a flexible computational framework that incorporates human and social behaviors in simulations to aid occupant-centric egress design. Based on an analysis of the literature in social science and disaster studies, the design requirements of SAFEgress (Social Agents For Egress), an agent-based simulation framework, are derived. In SAFEgress, the agent's decision-making process, the representation of the egress environment and the occupants, and the algorithms that emulate human capabilities in perception and navigation are carefully designed to simulate group dynamics and social interactions. A series of validation tests has been conducted to verify the capability of the framework to model a wide range of behaviors. Case studies of a museum and a stadium show that group navigation can cause additional bottlenecks on egress routes, thus prolonging evacuation. On the other hand, by strategically positioning stewards to control crowd flow, evacuation time can be significantly reduced. SAFEgress provides a means to systematically evaluate the effects of human and social factors on egress performance in buildings and facilities. Using the simulation results, facility managers and designers can develop occupant-centric solutions to crowd problems by addressing different scenarios and unique occupant characteristics. Furthermore, the framework could be applied to support research in social science investigating the collective behaviors of crowds in a built environment.
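To make the agent-based framing concrete, the sketch below shows one hypothetical decision step in which an agent weighs its own nearest exit against its group's preferred exit. The class, exit map, and weighting rule are illustrative assumptions for this listing, not the SAFEgress implementation.

```python
# Minimal agent-based egress sketch (hypothetical names and rules, not SAFEgress).
# Each agent blends individual preference with its group's majority choice.
import math
import random
from collections import Counter

EXITS = {"A": (0.0, 0.0), "B": (50.0, 0.0)}   # assumed exit locations

class Agent:
    def __init__(self, x, y, group_id):
        self.x, self.y = x, y
        self.group_id = group_id

    def nearest_exit(self):
        return min(EXITS, key=lambda e: math.dist((self.x, self.y), EXITS[e]))

def choose_exits(agents, social_weight=0.6):
    """With probability social_weight an agent follows its group's majority exit."""
    prefs = {id(a): a.nearest_exit() for a in agents}
    by_group = {}
    for a in agents:
        by_group.setdefault(a.group_id, []).append(prefs[id(a)])
    decisions = {}
    for a in agents:
        group_choice = Counter(by_group[a.group_id]).most_common(1)[0][0]
        decisions[id(a)] = group_choice if random.random() < social_weight else prefs[id(a)]
    return decisions

agents = [Agent(random.uniform(0, 50), random.uniform(0, 20), g)
          for g in range(5) for _ in range(4)]
print(Counter(choose_exits(agents).values()))   # how many agents head to each exit
```

In a full simulation this choice step would be repeated every time step alongside perception, navigation, and congestion updates.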
Book
1 online resource.
The labeling of specific biological structures with single fluorescent molecules has ushered in a new era of imaging technology: super-resolution optical microscopy with resolution far beyond the diffraction limit down to some tens of nm. With the features of these exquisite tools in mind, this Dissertation discusses optical strategies for measuring the three-dimensional (3D) position and orientation of single molecules with nanoscale precision and several super-resolution imaging studies of structures in living cells. The concepts of single-molecule imaging, super-resolution microscopy, the engineering of optical point spread functions (PSFs), and quantitative analysis of single-molecule fluorescence images are introduced. The various computational methods and experimental apparatuses developed during the course of my graduate work are also discussed. Next, a new engineered point spread function, called the Corkscrew PSF, is shown for 3D imaging of point-like emitters. This PSF has been demonstrated to measure the location of nanoscale objects with 2-6 nm precision in 3D throughout a 3.2-micrometer depth range. Characterization and application of the Double-Helix (DH) PSF for super-resolution imaging of structures within mammalian and bacterial cells is discussed. The DH-PSF enables 3D single-molecule imaging within living cells with precisions of tens of nanometers throughout a ~2-micrometer depth range. Finally, the impact of single-molecule emission patterns and molecular orientation on optical imaging is treated, with particular emphasis on multiple strategies for improving the accuracy of super-resolution imaging. The DH microscope is shown to be well-suited for accurately and simultaneously measuring the 3D position and orientation of single molecules.
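As a rough illustration of the localization idea underlying these measurements, the sketch below fits a 2-D Gaussian to a simulated single-molecule image; the Corkscrew and DH-PSF analyses fit engineered, non-Gaussian PSFs, and the photon counts and pixel sizes here are assumptions.

```python
# Illustrative single-emitter localization by least-squares fit of a 2-D Gaussian
# to a simulated, Poisson-noisy camera image (not the Corkscrew/DH-PSF pipeline).
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, x0, y0, sigma, amp, offset):
    x, y = coords
    return offset + amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:21, 0:21].astype(float)
truth = (10.3, 9.7, 1.5, 200.0, 5.0)            # x0, y0, sigma (px), peak signal, background
image = rng.poisson(gauss2d((xx, yy), *truth)).astype(float)

p0 = (10.0, 10.0, 2.0, image.max(), image.min())
popt, _ = curve_fit(gauss2d, (xx.ravel(), yy.ravel()), image.ravel(), p0=p0)
print("estimated (x0, y0):", popt[0], popt[1])   # typically within a few hundredths of a pixel
```

The attainable precision scales roughly as the PSF width divided by the square root of the number of detected photons, which is why the nanometer-scale precisions quoted above require bright emitters.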
Book
1 online resource.
The privacy regulations and policies for healthcare and other industries are often complex, and the design and implementation of effective compliance systems call for precise analysis of the policies and understanding of their consequences. Similarly, the Web as a complex platform for sophisticated distributed applications has multifaceted security requirements. Its security specifications may involve unstated and unverified assumptions about other components of the web, and may introduce new vulnerabilities and break security invariants assumed by web applications. This work describes two formalization frameworks, one for privacy policies and one for web security policies, and shows that abstract yet informed models of such policies are amenable to automation, can detect policy violations, and can support useful evaluation of alternative designs. Formalization of applicable privacy policies that regulate business processes can facilitate policy compliance in certain computer applications. In this study, a stratified fragment of Datalog with limited use of negation is used to formalize a portion of the US Health Insurance Portability and Accountability Act (HIPAA). Each communication of Protected Health Information (PHI) is modeled as a mathematical tuple, the privacy policies are modeled as logic rules, and rule combination and conflict resolution are discussed. Federal and State policy makers have called for both education to increase stakeholder understanding of complex policies and improved systems that impose policy restrictions on the access and transmission of Electronic Health Information (EHI). Building on the work of formalizing privacy laws as logic programs, the existence of a representative finite model that exhibits precisely how the policy applies in all the cases it governs is proved for policies that conform to a certain acyclicity pattern evident in HIPAA. This representative finite model could facilitate education and training on policy compliance, and support policy development and debugging. To address the need for secure transmission of usable EHI, a formalization of policy expressed as a logic program is used to automatically generate a form of access control policy used for Attribute-Based Encryption (ABE). This approach, testable using the representative finite model, makes it possible to share policy-encrypted data on untrusted cloud servers, or send strategically encrypted data across potentially insecure networks. As part of the study, a prototype is built to secure Health Information Exchanges (HIEs) using automatically generated ABE policies, and its performance is measured. For web security policies, a formal model of web security based on an abstraction of the web platform is described and partially implemented in the Alloy modeling language. Three distinct threat models are identified that can be used to analyze web applications, ranging from a web attacker who controls malicious web sites and clients, to stronger attackers who can control the network or leverage sites designed to display user-supplied content. Two broadly applicable security goals are proposed, and certain security mechanisms are studied in detail. In the case studies, a SAT-based model-checking tool is used to find both previously known vulnerabilities and new vulnerabilities.
The case study of a Kerberos-based single sign-on system illustrates the differences between a secure network protocol using custom client software and a similar but vulnerable web protocol that uses cookies, redirects, and embedded links instead.
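To illustrate the modeling style described above (each disclosure a tuple, each policy clause a rule), here is a toy Python sketch with hypothetical predicates; the dissertation's actual encoding uses a stratified fragment of Datalog and far richer HIPAA clauses.

```python
# Toy "disclosure as tuple, policy as rules" sketch (hypothetical clauses,
# not the dissertation's HIPAA formalization or a Datalog engine).
from typing import NamedTuple

class Disclosure(NamedTuple):
    sender: str
    recipient: str
    about: str        # the patient the PHI concerns
    purpose: str
    consented: bool

def permitted_for_treatment(d: Disclosure) -> bool:
    # cf. a permissive clause: providers may share PHI for treatment
    return d.purpose == "treatment"

def forbidden_marketing_without_consent(d: Disclosure) -> bool:
    # cf. a restrictive clause: marketing uses require patient authorization
    return d.purpose == "marketing" and not d.consented

def compliant(d: Disclosure) -> bool:
    permits = [permitted_for_treatment]
    denials = [forbidden_marketing_without_consent]
    return any(p(d) for p in permits) and not any(f(d) for f in denials)

print(compliant(Disclosure("clinicA", "clinicB", "alice", "treatment", False)))  # True
print(compliant(Disclosure("clinicA", "adsCo", "alice", "marketing", False)))    # False
```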
Book
1 online resource.
One of the biggest challenges in genetics today is to identify genetic variation that influences human phenotypes, and to elucidate the mechanisms by which these variants act. This thesis provides insights into technological biases in detecting diverse genetic variation, explores how regulatory variation differs across divergent human populations, and assesses the genetic architecture and evolutionary adaptation of pigmentation variability in one of the oldest modern human populations. The chapters of this thesis apply population genetic principles to provide a better mechanistic understanding of the forces that shape phenotypic diversity across humans. Chapter 2 evaluates current array design strategies for genotype imputation across differing platforms, methodologies, and 1000 Genomes populations. Chapter 3 assesses transcriptome variation in lymphoblastoid cell lines across 7 populations from the Human Genome Diversity Project (HGDP). Chapter 4 explores the genetic architecture and evolutionary history of lightened skin pigmentation in the southern African ‡Khomani San and identifies novel loci associated with pigmentation. Together, these chapters highlight the importance of diverse human population studies both for methodological and technical development and for building a broader mechanistic understanding of phenotypic variability.
Book
xxi, 259 p. ; 24 cm.
Law Library (Crown): (no call number), Unavailable (In process)
Book
1 online resource.
In statistical learning and modeling, when the dimension of the data is higher than the number of samples, the estimation accuracy can be very poor and thus the model can be hard to interpret. One approach to reducing the dimension is based on the assumption that many variables are a nuisance and redundant; consequently, many variable selection methods have been proposed to identify only the most important variables. Another approach to reducing the dimension is factor extraction, which is based on the assumption that the high-dimensional data can be approximately projected onto a lower-dimensional space. Factor extraction, such as principal component analysis (PCA) and canonical correlation analysis (CCA), provides a good interpretation of the data, but the dimension of the reduced space (the number of underlying features) is typically not easy to estimate. In the context of regression analysis, where we want to fit a linear model y_t = B^T x_t + e_t given n observations x_i ∈ R^p and y_i ∈ R^q for i = 1, ..., n, several important variable selection methods, e.g. lasso-type shrinkage, forward stepwise selection, and backward elimination, have been well studied for dimension reduction. However, few theoretical results for these methods are found in the literature for multivariate regression models with stochastic regressors (also called 'stochastic regression models'). In this dissertation, we present an efficient algorithm for solving high-dimensional multivariate linear stochastic regression. The motivation comes from modeling and prediction for multivariate time series models in macroeconomics and for linear MIMO (multiple-input and multiple-output) stochastic systems in control theory. By extending the 'orthogonal greedy algorithm' and 'high-dimensional information criterion' for 'weakly sparse' models in Ing and Lai (2011), we can choose a subset of x_t and reduce the dimension p to o(n). We can then perform reduced-rank regression of y_t on this reduced set of regressors and introduce an information criterion to choose the number of factors and estimate the factors. We provide theoretical results for our algorithm. We carry out simulation studies and an econometric data analysis to evaluate the algorithm.
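As a rough illustration of the two-stage idea (greedy screening of regressors followed by reduced-rank regression), here is a simplified NumPy sketch; the stopping rule, rank choice, and diagnostics are ad hoc assumptions, not the thesis algorithm or its information criteria.

```python
# Hedged sketch: orthogonal-greedy (OMP-style) variable screening, then
# reduced-rank regression on the selected regressors.
import numpy as np

def greedy_select(X, Y, k):
    """Iteratively add the column of X most correlated with the current residual."""
    selected, residual = [], Y.copy()
    for _ in range(k):
        scores = np.linalg.norm(X.T @ residual, axis=1)
        scores[selected] = -np.inf
        selected.append(int(np.argmax(scores)))
        Xs = X[:, selected]
        B, *_ = np.linalg.lstsq(Xs, Y, rcond=None)
        residual = Y - Xs @ B
    return selected

def reduced_rank_fit(Xs, Y, rank):
    """Project the least-squares fit onto its leading singular response directions."""
    B, *_ = np.linalg.lstsq(Xs, Y, rcond=None)
    _, _, Vt = np.linalg.svd(Xs @ B, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]
    return B @ P

rng = np.random.default_rng(1)
n, p, q = 200, 500, 5
X = rng.standard_normal((n, p))
B_true = np.zeros((p, q)); B_true[:3] = rng.standard_normal((3, q))
Y = X @ B_true + 0.1 * rng.standard_normal((n, q))

cols = greedy_select(X, Y, k=10)
B_hat = reduced_rank_fit(X[:, cols], Y, rank=3)
print(sorted(cols)[:5], np.linalg.norm(X[:, cols] @ B_hat - X @ B_true) / np.sqrt(n))
```

In the thesis the number of selected regressors and the rank are chosen by information criteria rather than fixed in advance as they are here.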
Book
283 p. : fig., tab. ; 21 cm.
Stanford University Libraries: (no call number), Unavailable (On order)
Book
xii, 530 p. : 31 ill. ; 24 cm.
Green Library, Stacks: (no call number), Unavailable (On order)
Book
1 online resource.
The brain implements recognition systems with incredible competence. Our perceptual systems recognize an object from various perspectives as it transforms through space and time. A key property of effective recognition is invariance to changes in the input. In fact, invariant representations focus on high-level information and neglect irrelevant changes, facilitating effective recognition. It is desirable for computational simulations to capture invariant properties. However, quantifying and designing invariance is difficult, because the input signals to a perceptual system are high dimensional, and the number of input variations, conceived in terms of separate dimensions of variation such as position, rotation, and scale, can be exponentially large. Natural invariance resides in a subspace of this exponential space, one that, I argue, can be more effectively captured through learning than through design. To capture perceptual invariance, I take the approach of modeling through deep neural networks. These models are classic AI algorithms. A deep neural network characteristically composes simple features from lower layers into more complex representations in higher layers. Going up the hierarchy, the network forms high-level representations which capture various forms of invariance found in natural images. Within this framework, I present three applications. First, I investigate the position-preserving invariance properties of a classical architecture, the convolutional neural network. With convolutional networks, I show results surpassing the previous state-of-the-art performance in detecting the location of objects in images. In such models, however, translational invariance is built in by design, limiting their ability to capture the full invariance structure of real inputs. To learn invariance without design, I exploit unsupervised learning from videos using the 'slowness' principle. Concretely, the unsupervised learning algorithm discovers invariance arising from transformations such as rotation, out-of-plane changes, or warping from motion in video. When quantitatively measured, the learned invariant features are more robust than hand-crafted ones. Using such invariant features, recognition in still images is consistently improved. Finally, I explore the development of invariant representations of number through learning from unlabeled examples in a generic neural network. By learning from examples of 'visual numbers', this network forms number representations invariant to object size. With these representations, I illustrate novel simulations of cognitive processes underlying the 'Approximate Number Sense'. Concretely, I correlate deep-network simulations with the sensitivity of discrimination across a range of numbers. These simulations capture properties of human number representation, focusing on approximate invariance to other stimulus factors.
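The 'slowness' principle mentioned above can be illustrated with a minimal linear slow-feature-analysis recipe: find directions whose outputs change least between consecutive frames. This is a generic sketch under assumed toy data, not the thesis model.

```python
# Linear "slowness" sketch: whiten the signal, then keep the directions whose
# temporal derivative has the smallest variance (slow-feature-analysis flavor).
import numpy as np

def slow_features(X, n_features=1):
    """X: (T, d) signal over time; returns the n_features slowest projections."""
    X = X - X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    Z = X @ (evecs / np.sqrt(evals))          # whitening rules out the trivial constant solution
    dcov = np.cov(np.diff(Z, axis=0), rowvar=False)
    dvals, dvecs = np.linalg.eigh(dcov)
    return Z @ dvecs[:, :n_features]          # smallest eigenvalues = slowest directions

# Toy input: a slow sinusoid hidden in two channels, plus three fast-noise channels.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
slow = np.sin(t)
X = np.column_stack([slow + 0.1 * rng.standard_normal(t.size) for _ in range(2)]
                    + [rng.standard_normal(t.size) for _ in range(3)])
print(slow_features(X, 1).shape)              # (2000, 1); this feature tracks the sinusoid
```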
Book
xii, 484 p. : ill. ; 24 cm.
Green Library, Stacks: PA3 .B45 V.225, Unavailable (In process)
Book
1 online resource.
Traditionally, new adaptive algorithms were developed 'microscopically' by changing the internal structure of least mean squares (LMS), recursive least squares (RLS), and their variants, such as their update equations and optimization criteria. This research attempts to reignite interest in improving adaptive algorithms by considering a different question: by treating any known adaptive algorithm as a black-box learning agent, what can we do to leverage these little learners to form a more intelligent adaptive algorithm? A framework is developed in this thesis to guide the design process, in which algorithms created from the framework are only allowed to manipulate these little black boxes without hacking into their inner workings. Since it is a block-level (macroscopic) design strategy, the framework is called the 'Macro-Adaptive Framework' (MAF) and algorithms developed from it are called 'Macro-Adaptive Algorithms' (MAA), hence the name of the thesis. In this thesis, the macro-adaptive framework (MAF) is defined. Algorithms satisfying the framework, including new ones developed by the author, are discussed, analyzed, and compared with other existing algorithms, followed by simulation results in adaptive system identification. Because MAF opens the floodgate for aggressive optimization (squeezing more information out of a limited number of samples) that was not previously available, one possible side effect is over-adaptation, which is rarely studied in the adaptive filtering literature. In addition to solutions developed in the thesis, the author conducted original research on this phenomenon, and those results are presented as well.
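One concrete, well-known instance of treating adaptive filters as black boxes is the convex combination of two LMS filters with different step sizes: each filter adapts on its own, and only their outputs are mixed. The sketch below shows that generic scheme as a stand-in for, not a reproduction of, the macro-adaptive algorithms developed in the thesis.

```python
# Generic black-box combination of two LMS learners (illustrative, not the MAA).
import numpy as np

class LMS:
    def __init__(self, order, mu):
        self.w = np.zeros(order)
        self.mu = mu
    def step(self, x, d):
        y = self.w @ x                 # a priori output
        self.w += self.mu * (d - y) * x
        return y

rng = np.random.default_rng(0)
order, T = 8, 5000
w_true = rng.standard_normal(order)
x_sig = rng.standard_normal(T + order)

fast, slow = LMS(order, mu=0.05), LMS(order, mu=0.005)
a, mu_a = 0.0, 1.0                     # mixing logit and its step size
for t in range(T):
    x = x_sig[t:t + order]
    d = w_true @ x + 0.01 * rng.standard_normal()
    y1, y2 = fast.step(x, d), slow.step(x, d)
    lam = 1.0 / (1.0 + np.exp(-a))     # convex mixing weight in (0, 1)
    y = lam * y1 + (1 - lam) * y2
    a += mu_a * (d - y) * (y1 - y2) * lam * (1 - lam)

print("final weight on the fast filter:", round(1.0 / (1.0 + np.exp(-a)), 3))
```

The combiner touches only the filters' outputs, which is the block-level constraint the framework described above imposes.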
Book
1 online resource.
Adherent cell functions can be altered by mechanical stimuli through cytoskeleton remodeling and cell-cell junction disruption. Thus, a better understanding of the mechanical response of adherent cells is crucial to the design of pharmacological therapies for cancers and skin blistering diseases. However, a lack of reliable tools to apply mechanical stimuli and probe the cellular response has limited research on the effects of varying strains on adherent cells. Therefore, I develop systems to probe cellular mechanics using microfabrication technology with soft materials specifically designed to exert controlled strain on adherent cells and probe their mechanical response.
Book
1 online resource.
In this thesis, we introduce and apply several new approaches to represent complex geological models in terms of a relatively small number of parameters. These concise representations are then used for history matching, in which the geological parameters characterizing the system are varied in order to generate models that provide predictions in agreement with observed production data. We first introduce a parameterization procedure based on principal component analysis (PCA). Unlike standard PCA-based methods, in which the high-dimensional model is constructed from a (small) set of parameters by simply performing a multiplication using the basis matrix, in this method the mapping is formulated as an optimization problem. This enables the inclusion of bound constraints and regularization, which are shown to be useful for capturing highly-connected geological features and non-Gaussian property distributions. The approach, referred to as optimization-based PCA (O-PCA), is applied here for binary-facies, three-facies and bimodal systems, including nonstationary models. These different types of geological scenarios are represented by varying the form and parameters in the O-PCA regularization term. The O-PCA procedure is applied both to generate new (random) realizations and for data assimilation, though our emphasis in this work is on its use for gradient-based history matching. The gradient of the O-PCA mapping, which is required for gradient-based history matching methods, is determined analytically or semi-analytically, depending on the form of the regularization term. O-PCA is implemented within a Bayesian history-matching framework to provide both the maximum a posteriori (MAP) estimate and to generate multiple history-matched models for uncertainty assessment. For the latter, O-PCA is combined with the randomized maximum likelihood (RML) method. The O-PCA method is shown to perform well for history matching problems, and to provide models that honor hard data, retain the large-scale connectivity features of the geological system, match historical production data and, when used with RML, provide an estimate of prediction uncertainty. O-PCA is also used to directly parameterize upscaled models. In this case the geological parameters represented in O-PCA are directional transmissibilities. Use of this representation should enable history matching to be performed using upscaled models, which could lead to significant computational savings. Finally, we consider kernel PCA (KPCA) methods, which entail the application of PCA in a high-dimensional `feature space.' In order to map the model back to input (physical) space, KPCA requires the solution of the so-called pre-image problem. The pre-image determination is, in general, a challenging optimization problem. A robust pre-image method, which is based on approaches from the machine learning community, is proposed for KPCA representations of complex geological systems. Consistent with the treatment used for O-PCA, we also introduce a regularized version of KPCA, referred to as R-KPCA. The impact of the pre-image method and regularization is demonstrated by generating new (random) realizations and through flow assessments. R-KPCA is also applied for history matching (MAP estimate) and is shown to provide accurate models. Our overall R-KPCA procedure may offer some slight benefit over O-PCA, but the relative simplicity of O-PCA is a key advantage.
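As a rough illustration of the optimization-based mapping idea for a binary-facies field: instead of taking m = m_mean + Phi xi directly, solve a small bound-constrained problem whose regularizer pushes cell values toward 0 or 1. The regularization form, weight, and synthetic ensemble below are assumptions for illustration, not the exact O-PCA formulation or its geological models.

```python
# Hedged sketch of an optimization-based PCA mapping with bounds and a
# bimodality-promoting regularizer (illustrative stand-in for O-PCA).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_cells, n_real, n_comp = 400, 60, 10

# PCA basis from an ensemble of synthetic binary realizations.
ensemble = (rng.random((n_real, n_cells)) < 0.3).astype(float)
m_mean = ensemble.mean(axis=0)
_, s, Vt = np.linalg.svd(ensemble - m_mean, full_matrices=False)
Phi = (Vt[:n_comp].T * s[:n_comp]) / np.sqrt(n_real - 1)     # n_cells x n_comp

def opca_map(xi, gamma=0.5):
    target = m_mean + Phi @ xi                               # plain PCA reconstruction
    obj = lambda m: np.sum((m - target) ** 2) + gamma * np.sum(m * (1 - m))
    grad = lambda m: 2 * (m - target) + gamma * (1 - 2 * m)
    res = minimize(obj, x0=np.clip(target, 0, 1), jac=grad,
                   bounds=[(0.0, 1.0)] * n_cells, method="L-BFGS-B")
    return res.x

m = opca_map(rng.standard_normal(n_comp))
print(m.min(), m.max())   # bounded in [0, 1], with values pushed away from 0.5
```

Because the mapping is defined by an optimization problem, its gradient with respect to xi (needed for gradient-based history matching) follows from the optimality conditions, as the abstract above notes.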
Book
251 p. : ill. ; 24 cm.
Green Library, Stacks: (no call number), Unavailable (On order)
Book
507 p. : some ill. ; 25 cm.
Green Library, Stacks: (no call number), Unavailable (On order)
Book
ix, 310 p. ; 25 cm.
Stanford University Libraries: (no call number), Unavailable (On order)
Book
1 online resource.
A ubiquitous challenge in modern data and signal acquisition arises from the ever-growing size of the object under study. Hardware and power limitations often preclude sampling with the desired rate and precision, which motivates the exploitation of signal and/or channel structures in order to enable reduced-rate sampling while preserving information integrity. This thesis is devoted to understanding the fundamental interplay between the underlying signal structures and the data acquisition paradigms, as well as developing efficient and provably effective algorithms for data reconstruction. The main contributions of this thesis are as follows. (1) We investigate the effect of sub-Nyquist sampling upon the capacity of a continuous-time channel. We start by deriving the sub-Nyquist sampled channel capacity under periodic sampling systems that subsume three canonical sampling structures, and then characterize the fundamental upper limit on the capacity achievable by general time-preserving sub-Nyquist sampling methods. Our findings indicate that the optimal sampling structures extract the set of frequencies that exhibits the highest signal-to-noise ratio and is alias-suppressing. In addition, we illuminate an intriguing connection between sampled channels and MIMO channels, as well as a new connection between sampled capacity and MMSE. (2) We study the universal sub-Nyquist design when the sampler is designed to operate independently of instantaneous channel realizations, under a sparse multiband channel model. We evaluate the sampler design based on the capacity loss due to channel-independent sub-Nyquist sampling, and characterize the minimax capacity loss. This fundamental minimax limit can be approached by random sampling in the high-SNR regime, which demonstrates the optimality of random sampling schemes. (3) We explore the problem of recovering a spectrally sparse signal from a few random time-domain samples, where the underlying frequencies of the signal can assume any continuous values in a unit disk. To address a basis mismatch issue that arises in conventional compressed sensing methods, we develop a novel convex program by exploiting the equivalence between (off-the-grid) spectral sparsity and Hankel low-rank structure. The algorithm exploits sparsity while enforcing physically meaningful constraints. Under mild incoherence conditions, our algorithm allows perfect recovery as soon as the sample complexity exceeds the spectral sparsity level (up to a logarithmic gap). (4) We consider the task of covariance estimation with limited storage and low computational complexity. We focus on a quadratic random measurement scheme for processing data streams and high-frequency signals, which is shown to impose a minimal memory requirement and low computational complexity. Three structural assumptions on covariance matrices, namely low rank, Toeplitz low rank, and jointly rank-one and sparse structure, are investigated. We show that a covariance matrix with any of these structures can be universally and faithfully recovered from near-minimal sub-Gaussian quadratic measurements via efficient convex programs for the respective structure. All in all, the central theme of this thesis is the interplay between economical subsampling schemes and the structures of the object under investigation, from both information-theoretic and algorithmic perspectives.
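The equivalence between spectral sparsity and Hankel low rank invoked in contribution (3) can be checked numerically in a few lines; the frequencies and matrix sizes below are arbitrary choices, and the convex recovery program itself is not implemented here.

```python
# Numerical check: a signal with r continuous-valued frequencies yields a
# Hankel matrix of rank r, the structure the convex recovery program exploits.
import numpy as np

def hankel_from_signal(x, k):
    """Arrange x[0..n-1] into a k x (n - k + 1) Hankel matrix."""
    n = len(x)
    return np.array([x[i:i + n - k + 1] for i in range(k)])

n, k = 64, 32
freqs = np.array([0.1234, 0.3071, 0.52])     # off-grid normalized frequencies
amps = np.array([1.0, 0.8, 0.5])
t = np.arange(n)
x = sum(a * np.exp(2j * np.pi * f * t) for a, f in zip(amps, freqs))

H = hankel_from_signal(x, k)
print("Hankel rank:", np.linalg.matrix_rank(H))   # 3 = number of frequencies
```

Recovery from a few random time samples then amounts to completing this low-rank Hankel matrix subject to the observed entries, e.g. via nuclear-norm minimization.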
Book
1 online resource (xv, 154 pages) : illustrations (some color).
Book
1 online resource.
Magnetic resonance-based velocity (MRV) and concentration (MRC) measurements were used to obtain the time-averaged, three-dimensional, three-component velocity and scalar concentration fields in a double passage vane cascade representative of a high pressure turbine vane from a gas turbine engine. The understanding and prediction of the highly three-dimensional flow and heat transfer in modern gas turbine engines is a problem that has not been solved over many years of turbomachinery research. Turbine vanes and blades are both internally and externally cooled to withstand the hot gas environment. The external film cooling is generally fed by discrete holes on the vane surface, except at the trailing edge, which is cooled by slots cut into the pressure side of the vane. Hot streaks from the combustor and cool streaks from the vane film cooling impose strong inlet temperature variations on the turbine blades, which can lead to local hot or cold spots, high thermal stresses, and fatigue failures. Furthermore, the complex three-dimensional flows around the vane may act to concentrate cool or hot fluid exiting the vane row. Experiments were performed to show the validity of applying the scalar transport analogy to the study of turbulent thermal energy transport using turbulent passive scalar transport studies. These experiments were conducted in a three-dimensional mixing layer in the wake of a blunt splitter plate built into two identical test sections. One test section was magnetic resonance-compatible and used water as the working fluid; the other was adapted for high subsonic Mach number air flows and allowed physical access for a thermocouple probe to take temperature profiles. In the water-based MRV/MRC experiments, the mainstream flow was water and the secondary flow was a copper sulfate solution. In the air experiments, the main flow was room temperature air and the secondary flow was heated. The energy separation effect due to coherent vortex structures in the compressible flow experiments affected the measured temperature profile because of the small difference in stagnation temperature between the two flows. This effect is expected to be negligible in the high temperature difference flows found in real engine conditions, and it is easily corrected in the temperature profiles extracted from this experiment. The agreement between the corrected temperature and the concentration data was found to be excellent, validating the application of MRC for quantitative measurement of thermal transport in turbomachinery components via the scalar transport analogy. The MRV/MRC experimental technique was applied to the study of turbulent dispersion of coolant injected through trailing edge cooling slots, with a focus on dispersion in the vane wake. A new high concentration MRC technique was developed to provide accurate measurements in the far wake of the turbine vane. Three-component velocity data showed the development of the passage vortex, a key element of the vane secondary flows. This mean flow structure is the dominant mechanism for turbulent mixing near the cascade endwalls. However, strong variations in coolant concentration remained in the wake downstream of the center span region. Asymmetric dispersion in this region indicated that longitudinal vortices shed from the coolant injection structures played a dominant role in the wake spreading.
A separate experiment was performed to evaluate the behavior of the dispersion of combustor hot streaks in the turbine vane cascade. The velocity and concentration distributions were evaluated using the MRV/MRC experimental technique. Streamtubes and concentration isosurfaces reveal that the streaks spread slowly as they pass through the cascade. This suggests that turbulence suppression by strong acceleration plays a significant role in maintaining the streaks. It is important to note that coherent hot streaks still exist at the exit of the test section in the far wake of the vane. The concluding message from these experiments is that the temperature distribution of the gases impacting the blades downstream of the turbine vanes remains significantly non-uniform and that accurate prediction of the temperature distribution downstream of the vanes is critical for advanced turbine cooling design.
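The scalar transport analogy that underpins the MRC measurements can be stated simply: the normalized mean temperature (T - T_main)/(T_secondary - T_main) should match the normalized mean concentration c/c0 of the injected tracer. The sketch below illustrates that comparison on synthetic profiles; the names and values are assumptions, not the experimental data.

```python
# Synthetic illustration of the scalar transport analogy check: normalized mean
# temperature vs. normalized mean concentration across a mixing layer.
import numpy as np

rng = np.random.default_rng(0)
y = np.linspace(-1, 1, 41)                       # coordinate across the mixing layer
T_main, T_sec = 300.0, 330.0                     # main-stream and heated secondary temps (K)
c0 = 1.0                                         # injected tracer concentration

shape = 0.5 * (1 + np.tanh(3 * y))               # shared mean mixing profile
T = T_main + shape * (T_sec - T_main) + 0.2 * rng.standard_normal(y.size)
c = c0 * shape + 0.005 * rng.standard_normal(y.size)

theta = (T - T_main) / (T_sec - T_main)          # normalized temperature
print("max |theta - c/c0|:", float(np.max(np.abs(theta - c / c0))))
```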