- Book
- xxv, 323 pages : illustrations (some color), portraits ; 24 cm.
Many important problems involve decision making under uncertainty -- that is, choosing actions based on often imperfect observations, with unknown outcomes. Designers of automated decision support systems must take into account the various sources of uncertainty while balancing the multiple objectives of the system. This book provides an introduction to the challenges of decision making under uncertainty from a computational perspective. It presents both the theory behind decision making models and algorithms and a collection of example applications that range from speech recognition to aircraft collision avoidance. Focusing on two methods for designing decision agents, planning and reinforcement learning, the book covers probabilistic models, introducing Bayesian networks as a graphical model that captures probabilistic relationships between variables; utility theory as a framework for understanding optimal decision making under uncertainty; Markov decision processes as a method for modeling sequential problems; model uncertainty; state uncertainty; and cooperative decision making involving multiple interacting agents. A series of applications shows how the theoretical concepts can be applied to systems for attribute-based person search, speech applications, collision avoidance, and unmanned aircraft persistent surveillance. Decision Making Under Uncertainty unifies research from different communities using consistent notation, and is accessible to students and researchers across engineering disciplines who have some prior exposure to probability theory and calculus. It can be used as a text for advanced undergraduate and graduate students in fields including computer science, aerospace and electrical engineering, and management science. It will also be a valuable professional reference for researchers in a variety of disciplines.
(source: Nielsen Book Data)9780262029254 20160618
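The Markov decision processes the book covers can be illustrated with a minimal value-iteration sketch. The two-state problem, rewards, and discount factor below are invented for illustration and are not taken from the text.

```python
# Minimal value iteration for a hypothetical two-state, two-action MDP.
# T[s][a] is a list of (next_state, probability); R[s][a] is the reward.
T = {0: {0: [(0, 0.9), (1, 0.1)], 1: [(1, 1.0)]},
     1: {0: [(0, 0.5), (1, 0.5)], 1: [(1, 1.0)]}}
R = {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 2.0}}
gamma = 0.9  # discount factor

def backup(V, s, a):
    # Bellman backup: immediate reward plus discounted expected future value
    return R[s][a] + gamma * sum(p * V[sp] for sp, p in T[s][a])

V = {0: 0.0, 1: 0.0}
for _ in range(200):  # iterate the Bellman backup to numerical convergence
    V = {s: max(backup(V, s, a) for a in (0, 1)) for s in T}

policy = {s: max((0, 1), key=lambda a: backup(V, s, a)) for s in T}
```

With these toy numbers the greedy policy takes action 1 in both states, and the values converge to V(1) = 2/(1 - 0.9) = 20 and V(0) = 1 + 0.9 · 20 = 19.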
Engineering Library (Terman), eReserve
| Engineering Library (Terman) | Status |
| --- | --- |
| On reserve: ask at circulation desk | |
| TJ217.5 .K63 2015 | Unknown (2-hour loan) |

| eReserve | Status |
| --- | --- |
| Instructor's copy | |
| (no call number) | Unknown |
AA-228-01, CS-238-01
- Course
- AA-228-01 -- Decision Making under Uncertainty
- Instructor(s)
- Kochenderfer, Mykel John
- Course
- CS-238-01 -- Decision Making under Uncertainty
- Instructor(s)
- Kochenderfer, Mykel John
2. Exploiting shared structures in large GPS trajectory datasets under uncertainty [electronic resource] [2017] Online
- Book
- 1 online resource.
As GPS-enabled devices become ubiquitous, large collections of GPS trajectories have been used for mobility pattern mining and intelligent transportation applications, such as suggesting routes and predicting traffic jams. Compared to traditional data sources, such as traffic sensors, cameras, and surveys, trajectory data from moving vehicles have the advantage of being dynamic, cheap, and highly available. Thus they allow true data-driven solutions to many problems that are traditionally solved using modeling and simulation approaches. On the other hand, the majority of available trajectory data contain a large amount of uncertainty, due to GPS noise, low sampling rates, and missing data. Such uncertainty can significantly degrade the effectiveness of using large trajectory data in practice. Conventional methods for reducing uncertainty in trajectory data, such as map matching and trajectory interpolation, tend to process each trajectory independently, without considering the shared structures in large trajectory data that can improve the processing of individual trajectories. In this dissertation, we present three algorithms that exploit shared structures in large trajectory collections to reduce noise and sample sparsity, and to improve trajectory-based travel time prediction under the difficult scenario of having far fewer GPS-tracking units than normally required in previous studies. These works make use of different kinds of shared structures, such as popular routes on a road map, trajectory clusters that represent unique traffic flows across trajectory junctions, and recurring traffic patterns over a small neighborhood. They also bring novel insights on how to extract robust knowledge from uncertain data, and how to effectively incorporate learned knowledge into individual trajectory processing tasks.
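The conventional per-trajectory processing that the dissertation contrasts with can be sketched as naive linear interpolation of one sparse track in isolation. The helper below is hypothetical, not one of the dissertation's algorithms:

```python
def interpolate_track(points, t):
    """Linearly interpolate a position at time t from a sparse GPS track.
    points: list of (timestamp, lat, lon) tuples sorted by timestamp.
    Processes a single trajectory in isolation -- exactly the limitation
    the dissertation addresses by exploiting structure shared across
    many trajectories."""
    for (t0, lat0, lon0), (t1, lat1, lon1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return (lat0 + w * (lat1 - lat0), lon0 + w * (lon1 - lon0))
    raise ValueError("t is outside the track's time span")
```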
- Book
- 1 online resource.
The underlying theme of this dissertation is the utilization of new orbits to improve the Global Navigation Satellite System (GNSS) we rely upon in nearly all facets of modern life. This includes new orbits for safety-critical augmentation systems as well as for navigation core-constellations. Today, safety-oriented augmentation is placed in geosynchronous orbit while navigation core-constellations such as GPS are placed in medium Earth orbit. This lack of diversity leads to certain pitfalls. Augmentation systems are limited in platforms on which they can piggyback and do not reach users at high latitude. Navigation systems have limited geometric diversity as well as faint signals, leaving them vulnerable to interference and limited in urban and indoor environments. The orbital diversity introduced here improves the service, reliability, and functionality of GNSS. In the present work, new orbital representations are developed to enable orbits such as medium, inclined geosynchronous, and highly elliptical for augmentation with a tenfold improvement in orbit description accuracy compared to service today. This, along with constellation and frequency diversity on the horizon, is shown to enable safety-critical services for both aviation and maritime operations in the entire northern hemisphere. This is of great importance in the Arctic, where commerce and traffic are on the rise due to decreasing sea ice. The addition of these orbit classes also increases SBAS visibility in urban canyons where safety-critical systems like autonomous automobiles are beginning operation. For navigation, we propose leveraging the wealth of low Earth orbiting (LEO) satellites coming in the near future. This unprecedented space infrastructure is planned by the likes of OneWeb, SpaceX, and Boeing to deliver broadband Internet globally.
These LEO constellations offer a threefold improvement in geometry compared to navigation constellations today, relaxing constraints on other aspects while still maintaining the positioning performance of GPS. Closer to Earth, LEO offers less path loss than MEO, improving received signal strength roughly 1,000-fold (30 dB). This improves resilience to interference and aids substantially in urban and indoor environments.
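The signal-strength claim can be checked with free-space path-loss arithmetic, which scales with distance squared. The altitudes below are assumed, illustrative values (GPS MEO at roughly 20,200 km, a broadband LEO shell at roughly 550 km), not figures from the dissertation:

```python
import math

# Free-space path loss scales with distance squared, so the received-power
# advantage of LEO over MEO in decibels is 20 * log10(d_MEO / d_LEO).
d_meo_km = 20200.0  # assumed GPS MEO altitude
d_leo_km = 550.0    # assumed broadband-LEO altitude

advantage_db = 20 * math.log10(d_meo_km / d_leo_km)   # roughly 30 dB
advantage_linear = 10 ** (advantage_db / 10)          # roughly 1,000-fold
```

Under these assumptions the geometry alone accounts for the roughly 30 dB (1,000-fold) advantage quoted above.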
4. Aggressive terrain following for motion-constrained vehicles in uncertain environments [electronic resource] [2016] Online
- Book
- 1 online resource.
Scientific missions requiring underwater imagery or sample collection have often used highly maneuverable, remotely operated vehicles in order to operate close to the terrain. However, these vehicles come with limited operational range and high cost of operation. Long-range autonomous underwater vehicles (AUVs) can provide access to remote, potentially dangerous sites at a greatly reduced operational cost, but come with the added challenges of limited maneuverability and onboard computation. Recent advances in Terrain-Relative Navigation allow for high-precision map-relative localization. This in turn enables the use of prior map information for path planning purposes. This thesis presents a new approach for planning aggressive terrain-following trajectories that can provide improved imaging coverage while keeping the vehicle safe in an uncertain environment. Two methods are implemented for planning trajectories, incorporating constraints to both satisfy the dynamics of the vehicle and to maintain a safe minimum standoff distance in an uncertain environment. The first, using Model Predictive Control, allows for direct application of known vehicle dynamics, but does not provide real-time performance. The second, using geometric spline-based trajectories, must approximate the dynamics of the vehicle, but can provide trajectories to the vehicle in real time. This allows current measurements of the terrain to be combined with prior map information to provide the most up-to-date terrain information. This thesis also introduces a design tool utilizing the trajectory planning approach that can be used by vehicle designers to understand the terrain imaging capabilities of a particular vehicle configuration. Results of trajectories planned over regions of Monterey Bay are presented, and preliminary field trial results are shown for the vehicle tracking commanded trajectories. 
The design tool is demonstrated for challenging terrain in Monterey Bay, evaluating the performance of the current vehicle and identifying the work required before it can perform imaging missions.
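As a rough illustration of geometric terrain following under a dynamics-style constraint, the sketch below computes an altitude profile that maintains a minimum standoff while limiting the per-step climb. It is far simpler than the spline planner described in the thesis, and all names and numbers are hypothetical:

```python
def terrain_following_profile(terrain, standoff, max_climb):
    """Compute a per-step altitude profile over a terrain height profile.
    Start from terrain + standoff, then sweep backward so the vehicle
    begins climbing early enough that no step exceeds max_climb."""
    alt = [h + standoff for h in terrain]
    for i in range(len(alt) - 2, -1, -1):
        # raise the earlier point if the next one is unreachable in one climb
        alt[i] = max(alt[i], alt[i + 1] - max_climb)
    return alt
```

The backward sweep only ever raises altitudes, so the standoff constraint is never violated; descents are left unconstrained in this sketch.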
5. Trajectory planning and control for an autonomous race vehicle [electronic resource] [2016] Online
- Book
- 1 online resource.
Autonomous vehicle technologies offer the potential to dramatically reduce the number of traffic accidents that occur every year, not only saving numerous lives but also mitigating the costly economic and social impact of automobile-related accidents. The premise behind this dissertation is that autonomous cars of the near future can only achieve this ambitious goal by acquiring the capability to successfully maneuver in friction-limited situations. With automobile racing as an inspiration, this dissertation presents and experimentally validates three vital components for driving at the limits of tire friction. The first contribution is a feedback-feedforward steering algorithm that enables an autonomous vehicle to accurately follow a specified trajectory at the friction limits while preserving robust stability margins. The second contribution is a trajectory generation algorithm that leverages the computational speed of convex optimization to rapidly generate both a longitudinal speed profile and lateral curvature profile for the autonomous vehicle to follow. The final contribution is a set of iterative learning control and search algorithms that enable autonomous vehicles to drive more effectively by learning from previous driving maneuvers. These contributions enable an autonomous Audi TTS test vehicle to drive around a race circuit at a level of performance comparable to a professional human driver. The dissertation concludes with a discussion of how the algorithms presented can be translated into automotive safety systems in the near future.
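The general structure of a feedback-feedforward steering law can be sketched as a kinematic curvature feedforward term plus proportional feedback on tracking errors. The gains, wheelbase, and form below are illustrative assumptions, not the dissertation's controller:

```python
def steering_command(curvature, lateral_error, heading_error,
                     wheelbase=2.6, k_e=0.5, k_psi=1.2):
    """Sketch of a feedback-feedforward steering law (radians).
    curvature: path curvature at the reference point (1/m)
    lateral_error: signed offset from the path (m, positive = left)
    heading_error: signed heading offset from the path tangent (rad)
    All gains are hypothetical tuning values."""
    feedforward = wheelbase * curvature  # kinematic steer that tracks the path
    feedback = -k_e * lateral_error - k_psi * heading_error  # corrects drift
    return feedforward + feedback
```

On the path with zero error, only the feedforward term acts; any drift to the left (positive error) produces a corrective steer to the right.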
- Book
- 1 online resource.
This thesis presents techniques to enable high-fidelity uncertainty quantification and high-fidelity optimization under uncertainty. The techniques developed herein are applied to maximize the Annual Energy Production (AEP) of a wind farm by optimizing the positions of the wind turbines. The AEP is the expected power produced by the wind farm over a period of one year, and the wind conditions (e.g., wind direction and wind speed) for the year are described with empirically determined probability distributions. To compute the AEP of the wind farm, a wake model is used to simulate the power for various sets of input conditions (e.g., wind direction and wind speed). We use polynomial chaos (PC), an uncertainty quantification method, to construct a polynomial approximation of the power from these sets of simulations, or samples. We explore both regression and quadrature approaches to compute the PC coefficients. PC based on regression is significantly more efficient than the rectangle rule (the method currently used in practice to compute the expected power): PC based on regression achieves the same accuracy as the rectangle rule with only one-tenth of the required simulations, and, for the same number of samples, its estimates are five times more accurate. We propose a multi-fidelity method built on top of polynomial chaos to further improve the efficiency of computing the AEP. There exist multiple wake models of varying fidelity and cost for computing the power (and hence the AEP). Here, we choose the Floris and Jensen models as our high- and low-fidelity models, respectively. Both models are engineering models that can be evaluated in less than 1 second. The multi-fidelity method creates an approximation to the high-fidelity model and its statistics (such as the AEP) using a polynomial chaos expansion that combines a polynomial expansion of a low-fidelity model with a polynomial expansion of a correction function.
The correction function is constructed from the differences between the high-fidelity and low-fidelity simulation results. The multi-fidelity method can estimate the high-fidelity AEP to the same accuracy with only one-half to one-fifth of the high-fidelity model evaluations, depending on the layout of the wind farm. Combining the reduction in the number of simulations obtained from using PC and the multi-fidelity method, we have reduced by more than an order of magnitude the number of simulations required to accurately compute the AEP, thus enabling the use of more expensive, higher-fidelity models in wind farm optimization. Once we can compute the AEP efficiently, we consider the optimization-under-uncertainty problem of maximizing the AEP of a wind farm by changing its layout subject to geometric constraints: wind turbines must stay within a given area and maintain a minimum separation between them. We extend polynomial chaos to obtain the gradient of the statistics (AEP) from the gradients of the power at the simulation samples. With the gradient of the AEP, we can make use of a gradient-based optimizer to efficiently maximize the AEP. The optimization problem has many local maxima that are nearly equivalent. To compare the methods (polynomial chaos, rectangle rule), we perform a large suite of optimizations with different initial turbine locations and with different samples and numbers of samples to compute the AEP. The optimizations with PC based on regression result in optimized layouts that produce the same AEP as the optimized layouts found with the rectangle rule but using only one-third of the samples. Furthermore, for the same number of samples, the AEP of the optimal layouts found with PC is 1% higher than the AEP of the layouts found with the rectangle rule. A 1% increase in the AEP for a modern large wind farm can increase its annual revenue by $2 million.
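The comparison between PC regression and the rectangle rule can be sketched in one dimension: fit a Legendre expansion to a handful of samples of a toy power curve, read the expectation off the first coefficient, and compare against a midpoint (rectangle) rule with the same sample budget. The power curve and sample counts below are invented for illustration, not the thesis's wake models:

```python
import numpy as np

rng = np.random.default_rng(0)

def power(x):
    # toy "wake model": power versus a normalized wind variable x in [-1, 1]
    return 2.0 + 0.5 * x + 0.8 * x**2

# Polynomial-chaos regression: least-squares fit of a Legendre expansion.
x_pc = rng.uniform(-1.0, 1.0, 12)
coeffs = np.polynomial.legendre.legfit(x_pc, power(x_pc), deg=3)
# For a uniform input, every Legendre mode except P0 averages to zero,
# so the expected power is simply the first coefficient.
mean_pc = coeffs[0]

# Rectangle (midpoint) rule with the same budget of 12 model evaluations.
x_rect = np.linspace(-1.0, 1.0, 12, endpoint=False) + 1.0 / 12
mean_rect = power(x_rect).mean()

exact = 2.0 + 0.8 / 3  # analytic E[power] = 2 + 0.8 * E[x^2], E[x^2] = 1/3
```

Here the regression recovers the expectation essentially exactly, because the toy curve is itself a low-order polynomial, while the rectangle rule carries a small discretization bias.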
7. Advances in flight safety analysis for commercial space transportation [electronic resource] [2016] Online
- Book
- 1 online resource.
The diversity in the kinds of vehicles that are appearing in the commercial space transportation sector raises questions regarding the applicability of the licensing procedures and methodologies that are in place to protect public safety. These licensing procedures are designed to limit risks to public safety in case of a space vehicle explosion. Concerns arise because the methods currently used are derived from expendable launch vehicles (ELVs) developed during the Space Shuttle era, and thus they might not be fully applicable to future vehicles, which include new types of ELVs, suborbital vehicles, reusable launch vehicles (RLVs), and a number of hybrid configurations. This dissertation presents a safety analysis tool, called the Range Safety Assessment Tool (RSAT), that quantifies the risks to people on the ground due to a space vehicle explosion or breakup. This type of problem is characterized by the complexity and uncertainty in the physical modeling. RSAT has been used to analyze both launch and reentry scenarios and can be applied to many possible vehicle configurations. The Space Shuttle Columbia accident was modeled with RSAT, and the results were compared with simulations performed by the Columbia Accident Investigation Board (CAIB). A methodology to perform sensitivity and optimization studies is also presented. This methodology leverages previous work done in active subspaces and Gaussian process regression to generate surrogate models. The proposed sensitivity and optimization methodologies were used to analyze a commercial ELV. The results show that the methodology can handle a large number of stochastic inputs and identify opportunities to decrease risk.
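A crude Monte Carlo version of the ground-risk computation such a tool performs might look like the following. The dispersion and population numbers are toy assumptions for illustration, not RSAT's models:

```python
import random

random.seed(1)

def impact_point():
    # toy downrange/crossrange impact dispersion (km) after a breakup
    return random.gauss(50.0, 10.0), random.gauss(0.0, 3.0)

def casualty(downrange, crossrange):
    # hypothetical populated strip 60-70 km downrange, 10 km wide
    return 1.0 if 60.0 <= downrange <= 70.0 and abs(crossrange) < 5.0 else 0.0

# Estimate expected casualties by averaging over sampled impact points.
n = 100_000
expected_casualties = sum(casualty(*impact_point()) for _ in range(n)) / n
```

Real analyses replace these toy pieces with vehicle breakup models, debris lists, and population-density maps, but the expectation-by-sampling structure is the same.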
- Book
- 1 online resource.
The majority of midair collisions involve general aviation aircraft, and these accidents tend to occur in the vicinity of airports. This work proposes a concept for an autonomous air traffic control system for non-towered airports. The system is envisioned to be advisory in nature and would rely on observations from a ground-based surveillance system to issue alerts over the common traffic advisory frequency. The behavior of aircraft in the airport pattern is modeled as a hidden Markov model (HMM) whose parameters are learned from real-world radar observations. To determine the optimal advisories that reduce the risk of collision, the problem is formulated as a partially observable semi-Markov decision process (POSMDP). To address the computational complexity of solving the problem, different approximation methods, including exponential sojourn times, phase-type distributions, online algorithms, and particle filters for belief estimation, are investigated. Simulation results are presented for both nominal and learned airport models.
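The HMM-based traffic model above rests on recursive belief updating over hidden states. A minimal sketch of that forward filter, with a toy two-state pattern and invented transition and observation probabilities (not the learned airport model), might look like:

```python
def hmm_forward(transition, emission, initial, observations):
    """Forward algorithm: filtered belief over hidden states after each observation.

    transition[i][j] = P(next=j | current=i); emission[i][o] = P(obs=o | state=i).
    """
    belief = initial[:]
    for obs in observations:
        # Predict: propagate the belief through the transition model.
        n = len(belief)
        predicted = [sum(belief[i] * transition[i][j] for i in range(n))
                     for j in range(n)]
        # Update: weight by observation likelihood and renormalise.
        weighted = [predicted[j] * emission[j][obs] for j in range(n)]
        total = sum(weighted)
        belief = [w / total for w in weighted]
    return belief

# Toy two-state pattern: 0 = "downwind leg", 1 = "final approach".
T = [[0.8, 0.2],
     [0.1, 0.9]]
# Two radar-derived observation symbols: 0 = "level", 1 = "descending".
E = [[0.9, 0.1],
     [0.3, 0.7]]
belief = hmm_forward(T, E, [0.5, 0.5], [1, 1, 1])
```

Three consecutive "descending" observations drive the belief toward the final-approach state, illustrating how radar-derived observation sequences sharpen the estimate of where an aircraft is in the pattern.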
9. Combining uncertainty and sensitivity using multi-fidelity probabilistic aerodynamic databases for aircraft maneuvers [electronic resource] [2016] Online
- Book
- 1 online resource.
Balancing cost and accuracy is a fundamental trade-off throughout the engineering design process. More accurate results typically take more time or resources to generate. To most efficiently use the resources available, it is critical to understand where the increased accuracy is needed. This thesis covers two major areas of research: developing a framework to quantify the uncertainty over the domain of interest, and calculating the sensitivity of some defined performance metric to that uncertainty. Probabilistic aerodynamic databases are functions that store a distribution of possible aerodynamic coefficients over the entire flight envelope. Built with Gaussian processes, an additive-correction hierarchical model is applied to combine multiple fidelity levels. Each fidelity level carries an error term, provided by a subject matter expert (SME), assigned to each individual sample. Deterministic instances of the database respecting physics and SME uncertainties are created through Monte Carlo sampling to provide inputs to currently used industry analysis tools. By repeatedly running the trajectory analysis with different samples, distributions of the performance metrics are created. In the event these distributions of potential solutions exhibit too large an uncertainty on the quantity of interest, an adjoint-based sensitivity method was developed to guide further analyses. Extended from optimal control theory, sensitivities of the objective function with respect to each aerodynamic coefficient at each time step in the trajectory can be calculated for approximately the same cost as solving the forward trajectory problem. Multiple indicator functions combining the uncertainty and sensitivity were proposed. On a cannonball example where the drag coefficient was uncertain, these indicator functions were compared against an exhaustive search that adds a single analysis at each point in the domain.
The best-performing indicator was then applied to the National Aeronautics and Space Administration (NASA) Common Research Model (CRM). Both the cannonball and NASA CRM cases were studied through an adaptive sampling methodology. The cannonball adaptive sampling, guided by the uncertainty-sensitivity indicator functions, was three to four times better than uncertainty-only indicators and one to two orders of magnitude better than non-adaptive sampling when maximizing accuracy for a fixed cost. When minimizing cost for a tolerable accuracy requirement, the uncertainty-sensitivity adaptive sampling reduced the cost by a factor of two compared to the uncertainty-only sampling. In the CRM case, only one indicator was used. Using 10 percent of the computation budget, a 50 percent increase in accuracy was seen compared to sampling over the entire maneuver domain.
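The additive-correction idea, a low-fidelity prediction plus a Gaussian process fit to the low/high-fidelity discrepancy, can be sketched in one dimension. The functions, kernel length scale, and sample counts below are invented for illustration and are not the thesis's aerodynamic databases:

```python
import math

def rbf(a, b, length=0.5):
    """Squared-exponential (RBF) kernel between two scalar inputs."""
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_mean(x_train, y_train, x_query, noise=1e-8):
    """GP posterior mean at query points (zero-mean prior, RBF kernel)."""
    K = [[rbf(xi, xj) + (noise if i == j else 0.0)
          for j, xj in enumerate(x_train)] for i, xi in enumerate(x_train)]
    alpha = solve(K, y_train)
    return [sum(rbf(xq, xi) * alpha[i] for i, xi in enumerate(x_train))
            for xq in x_query]

# Low fidelity: cheap model; high fidelity: "truth", affordable at a few points.
lo = lambda x: math.sin(2 * math.pi * x)
hi = lambda x: math.sin(2 * math.pi * x) + 0.3 * x   # truth = lo + smooth discrepancy

x_hi = [i / 5 for i in range(6)]                     # six expensive samples on [0, 1]
delta = [hi(x) - lo(x) for x in x_hi]                # observed discrepancy

xq = [i / 100 for i in range(101)]
pred = [lo(x) + m for x, m in zip(xq, gp_mean(x_hi, delta, xq))]
err = max(abs(p - hi(x)) for p, x in zip(pred, xq))
```

Because the discrepancy is smoother than the underlying function, a handful of expensive samples corrects the cheap model across the whole domain; here `err` is far below the 0.3 worst-case error of the low-fidelity model alone.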
10. Compact envelopes [electronic resource] : an efficient and provably safe approach to air and space traffic integration [2016] Online
- Book
- 1 online resource.
Traditional methods for safely integrating space launch and reentry traffic into the National Airspace System (NAS) use hazard areas (e.g., Special Use Airspaces, Temporary Flight Restrictions) that restrict aircraft from larger areas, for longer times, than necessary. This can result in significant disruptions to air traffic through increased fuel burn, flight time, and flight delays. This thesis proposes a new class of hazard area, named compact envelopes, that guarantees a quantifiable level of risk while dramatically reducing the disruption to commercial air traffic. These compact envelopes are dynamic in time, are described by contours as a function of altitude, and can be constructed for any launch or reentry vehicle, for both orbital and suborbital operations. The generation of these compact envelopes incorporates a probabilistic risk analysis of off-nominal vehicle operations and leverages expected improvements in air traffic management procedures from NextGen; one key assumption is that airborne aircraft can safely react to any off-nominal event given sufficient advance warning. A probabilistic analysis of the disruption to the NAS caused by traditional hazard areas and compact envelopes during space vehicle operations is presented. Quantities of interest include increased flight time, fuel burn, and total distance flown for aircraft that must be rerouted around these hazard areas. The use of compact envelopes for ensuring aircraft safety is shown to produce a near-complete elimination of airspace disruption on average and a dramatic reduction in the worst-case disruptions compared to traditional hazard area methods. Notably, the practical implementation of the compact envelope concept requires only improvements to air traffic control infrastructure already envisioned to be part of NextGen by the year 2020.
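A compact envelope is, at heart, a contour containing a specified fraction of off-nominal impact probability. A minimal sketch of that idea, using an empirical quantile of Monte Carlo impact ranges at one altitude slice (the isotropic dispersion and all numbers below are invented, not the thesis's vehicle models):

```python
import math
import random

def envelope_radius(samples, containment=0.999):
    """Radius of the smallest origin-centred circle containing the requested
    fraction of sampled impact points (an empirical quantile of impact range)."""
    radii = sorted(math.hypot(x, y) for x, y in samples)
    idx = min(len(radii) - 1, int(containment * len(radii)))
    return radii[idx]

rng = random.Random(1)
# Hypothetical off-nominal breakup: isotropic Gaussian dispersion, 1 km sigma.
pts = [(rng.gauss(0, 1000), rng.gauss(0, 1000)) for _ in range(50_000)]
r999 = envelope_radius(pts, 0.999)
```

Repeating this per altitude slice, and per time step as the trajectory evolves, yields time-varying altitude-dependent contours in the spirit of the envelopes described above, rather than one large static restriction.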
11. Deep exploration via randomized value functions [electronic resource] [2016] Online
- Book
- 1 online resource.
The "Big Data" revolution is spawning systems designed to make decisions from data. Statistics and machine learning have made great strides in prediction and estimation from any fixed dataset. However, if you want to learn to take actions where your choices can affect both the underlying system and the data you observe, you need reinforcement learning. Reinforcement learning builds upon learning from datasets, but also addresses the issues of partial feedback and long-term consequences. In a reinforcement learning problem, the decisions you make may affect the data you get, and even alter the underlying system for future timesteps. Statistically efficient reinforcement learning requires "deep exploration", or the ability to plan to learn. Previous approaches to deep exploration have not been computationally tractable beyond small-scale problems. For this reason, most practical implementations use statistically inefficient methods for exploration such as epsilon-greedy dithering, which can lead to exponentially slower learning. In this dissertation we present an alternative approach to deep exploration through the use of randomized value functions. Our work is inspired by the Thompson sampling heuristic for multi-armed bandits, which suggests, at a high level, to "randomly select a policy according to the probability that it is optimal". We provide insight into why this algorithm can be simultaneously more statistically efficient and more computationally efficient than existing approaches. We leverage these insights to establish several state-of-the-art theoretical results and performance guarantees. Importantly, and unlike previous approaches to deep exploration, this approach also scales gracefully to complex domains with generalization. We complement our analysis with extensive empirical experiments; these include several didactic examples as well as a recommendation system, Tetris, and Atari 2600 games.
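The Thompson sampling heuristic the dissertation builds on is easiest to see in the multi-armed bandit setting: maintain a posterior per arm, draw one sample from each posterior, and act greedily with respect to the draws. A minimal Bernoulli-bandit sketch (toy arm probabilities, not taken from the dissertation):

```python
import random

def thompson_bandit(true_probs, n_rounds, seed=0):
    """Thompson sampling for Bernoulli bandits: keep a Beta posterior per arm,
    sample one value from each posterior, and pull the arm with the largest draw."""
    rng = random.Random(seed)
    wins = [1] * len(true_probs)     # Beta(1, 1) uniform priors
    losses = [1] * len(true_probs)
    pulls = [0] * len(true_probs)
    for _ in range(n_rounds):
        draws = [rng.betavariate(wins[a], losses[a])
                 for a in range(len(true_probs))]
        arm = draws.index(max(draws))
        reward = rng.random() < true_probs[arm]   # Bernoulli reward
        wins[arm] += reward
        losses[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

pulls = thompson_bandit([0.2, 0.5, 0.8], n_rounds=2000)
```

The posterior sampling itself drives exploration: arms with uncertain posteriors occasionally produce the largest draw, so no epsilon-greedy dithering is needed, and pulls concentrate on the best arm as its posterior sharpens. Randomized value functions carry the same idea to sequential problems by randomizing the value estimate rather than per-arm beliefs.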
12. Fluid lensing & [and] applications to remote sensing of aquatic environments [electronic resource] [2016] Online
- Book
- 1 online resource.
The optical interaction of light with fluids and aquatic surfaces is a complex phenomenon. The effect is readily observable above 71% of Earth's surface in aquatic systems, in particular, shallow marine reef environments. As visible light interacts with aquatic surface waves, time-dependent nonlinear optical aberrations appear, forming caustic bands of light on the seafloor, and producing refractive lensing that magnifies and demagnifies underwater objects. This research extensively explores this phenomenon in the context of ocean waves, or the ocean wave fluid lensing phenomenon, and develops and validates a novel high-resolution aquatic remote sensing technique for imaging through ocean waves called the General Fluid Lensing Algorithm. Surface wave distortion and optical absorption of light pose a significant challenge for remote sensing of underwater environments, compounding an already profound lack of knowledge about our planet's most commonplace, biologically diverse, and life-sustaining ecosystems. At present, no remote sensing technologies can robustly image underwater objects at the cm-scale or finer due to surface wave distortion and the strong attenuation of light in the water column, in stark contrast to modern terrestrial remote sensing capabilities. As a consequence, our ability to accurately assess the status and health of shallow marine ecosystems, such as coral and stromatolite reefs, is severely impaired. As ocean acidification, global warming, sea level rise, and habitat destruction increasingly impact these ecosystems, there is an urgent need for the development of remote sensing technologies that can image underwater environments at the cm-scale, or 'reef-scale', characteristic of typical reef accretion rates of ~1cm per year. 
The ocean wave fluid lensing phenomenon is modeled in a full-physical controlled computational environment, the Fluid Lensing Test Pool, to study the complex relationships between light and ocean waves and uncover patterns in caustic behavior and refractive lensing. Combined with an ocean wave fluid lensing theory, observed patterns are used to develop the General Fluid Lensing Algorithm. The General Fluid Lensing Algorithm not only enables robust imaging of underwater objects through refractive distortions from surface waves at sub-cm-scales, but also exploits surface waves as magnifying optical lensing elements, or fluid lensing lenslets, to enhance the spatial resolution and signal-to-noise properties of remote sensing instruments. The algorithm introduces a fluid distortion characterization methodology, caustic bathymetry concepts, a Fluid Lensing Lenslet Homography technique, and a 3D Airborne Fluid Lensing Algorithm as novel approaches for characterizing the aquatic surface wave field, modeling bathymetry using caustic phenomena, and robust high-resolution aquatic remote sensing. Results from the Fluid Lensing Test Pool reveal previously unquantified depth-dependent caustic behavior including caustic focusing and the formation of caustic cells. Caustic focusing shows that, in the case of the Test Pool, the intensity of a caustic band at a depth of 2.5m can exceed the above-surface ambient intensity at 0m depth, despite two absorptive optical path lengths in the fluid. The Test Pool further incorporates a number of multispectral resolution test targets to validate the General Fluid Lensing Algorithm, which are used to quantitatively evaluate the algorithm's ability to robustly image underwater objects. 2D Fluid Lensing results demonstrate multispectral imaging of test targets in depths up to 4.5m at a resolution of at least 0.25cm versus a raw fluid-distorted frame with a resolution less than 25cm.
Enhanced signal-to-noise ratio gains of over 4dB are also measured in comparison to a perfectly flat fluid surface scenario with less than one second of simulated remotely-sensed image data. These results show the application of the General Fluid Lensing Algorithm to addressing the surface wave distortion and optical absorption challenges posed by aquatic remote sensing. In addition to the theoretical and algorithmic components, this thesis demonstrates the 3D Airborne Fluid Lensing Algorithm from unmanned aerial vehicles (UAVs, or drones) in real-world aquatic systems at depths up to 10m. Airborne Fluid Lensing campaigns were conducted over the coral reefs of Ofu Island, American Samoa (2013) and the stromatolite reefs of Shark Bay, Western Australia (2014). Fluid Lensing datasets reveal these reefs with unprecedented resolution, providing the first validated cm-scale 3D image of a reef acquired from above the ocean surface, without wave distortion, in the span of a few flight hours over areas as large as 15km². These 3D data distinguish coral, fish, and invertebrates in American Samoa, and reveal previously undocumented, morphologically distinct stromatolite structures in Shark Bay. The data represent the highest-resolution remotely-sensed 3D multispectral image of an underwater environment to date, acquired from above a nonplanar fluid interface. Airborne field campaign results, along with validation results from the simulated Fluid Lensing Test Pool, suggest the General Fluid Lensing Algorithm presents a promising advance in aquatic remote sensing technology for large-scale 3D surveys of shallow aquatic habitats, offering robust imaging at sub-cm-scale spatial resolutions, fine temporal sampling on the order of seconds, and enhanced signal-to-noise properties.
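The refractive lensing at the heart of the fluid lensing phenomenon follows from Snell's law at a sloped air-water interface. A heavily simplified 2D sketch (single vertical ray, sinusoidal surface with invented amplitude and wavelength, surface elevation neglected relative to depth; not the Fluid Lensing Test Pool physics):

```python
import math

N_AIR, N_WATER = 1.0, 1.333

def seafloor_hit(x0, depth, amplitude=0.05, wavelength=2.0):
    """Horizontal seafloor position hit by a vertical downward ray that enters
    a sinusoidal surface eta(x) = amplitude * sin(2*pi*x / wavelength) at x0."""
    slope = amplitude * (2 * math.pi / wavelength) \
        * math.cos(2 * math.pi * x0 / wavelength)
    # Angle of incidence relative to the local surface normal.
    theta_i = math.atan(slope)
    # Snell's law at the air-water interface.
    theta_t = math.asin(N_AIR / N_WATER * math.sin(theta_i))
    # The refracted ray is bent toward the normal; its deviation from vertical
    # in the world frame is the difference of the two angles.
    deviation = theta_i - theta_t
    return x0 + depth * math.tan(deviation)
```

Sweeping `x0` across a wavelength shows neighboring rays converging where the surface curvature focuses them, the mechanism behind the caustic bands on the seafloor, while the same slope-dependent displacement magnifies or demagnifies underwater objects.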
13. Multi-rotor aircraft collision avoidance using partially observable Markov decision processes [electronic resource] [2016] Online
- Book
- 1 online resource.
This dissertation presents an extension of the ACAS X collision avoidance algorithm to multi-rotor aircraft capable of using speed changes to avoid close encounters with neighboring aircraft. The ACAS X family of algorithms currently uses either turns or vertical maneuvers to avoid collision. I present a formulation of the algorithm in two dimensions that uses horizontal accelerations for resolution maneuvers and propose a set of optimization metrics that directly specify aircraft behavior in terms of separation from other aircraft and deviation from the desired trajectory. The maneuver strategy is optimized with respect to a partially observable Markov decision process model using dynamic programming. The parameters of the model strongly influence the performance tradeoff between metrics such as alert rate and safety. Finding the parameters that provide the appropriate tradeoff was aided by a Gaussian process-based surrogate model. Sets of algorithm parameters were generated that provide a tradeoff between the two goals. These parameter sets allow a user of the collision avoidance algorithm to select a desired separation distance appropriate for their application that also minimizes trajectory deviations. Three additional collision avoidance algorithms were developed for comparison with the partially observable Markov decision process formulation. The first comparison algorithm is based on a potential field method. The second is an adaptation of a tactical conflict detection and resolution algorithm that uses candidate trajectory predictions to determine a preferred resolution. The third is based on receding-horizon model predictive control. The four algorithms are evaluated under a common set of assumptions, simulation capabilities, and metrics. A batch simulation system generates individual trajectory and aggregate metrics related to each algorithm's performance, allowing direct comparison of the benefits and drawbacks of each approach.
The first encounter model of hobbyist unmanned aircraft trajectories is presented and used to generate trajectories that have more realistic intruder accelerations than prior methods of simulating such aircraft. All algorithms are shown to have the flexibility to provide different tradeoffs between separation from an intruder and the trajectory deviation necessary to achieve that separation. The ACAS X extension algorithm delivers maximum deviation performance equivalent to the best alternative, the model predictive control algorithm, with only slightly smaller separations, and it does so with less than half of the required velocity change. Proposed extensions of this algorithm may allow it to surpass the others both in terms of collision avoidance performance and suitability for real-world deployment and certification. This research shows that it is feasible to formulate the collision avoidance problem for multi-rotor aircraft as a partially observable Markov decision process and that its performance across multiple metrics can equal, and even surpass, alternative approaches.
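The dynamic-programming optimization underlying the maneuver strategy can be illustrated on a tiny fully observed surrogate. The toy state space, dynamics, and costs below are invented (the actual ACAS X formulation is a partially observable model with far richer dynamics):

```python
def value_iteration(n_states, actions, transition, reward, gamma=0.95, tol=1e-8):
    """Generic value iteration:
    V(s) = max_a [ R(s, a) + gamma * sum_s' T(s, a, s') * V(s') ]."""
    V = [0.0] * n_states
    while True:
        V_new = [max(reward(s, a)
                     + gamma * sum(p * V[s2] for s2, p in transition(s, a))
                     for a in actions)
                 for s in range(n_states)]
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new
        V = V_new

# Toy 1-D encounter: states are relative separations 0..4; 0 is a collision.
# Actions: 0 = hold course, 1 = maneuver (costly, but tends to open the gap).
def T(s, a):
    if s == 0:
        return [(0, 1.0)]                        # collision state is absorbing
    if a == 1:
        return [(min(s + 1, 4), 0.9), (s, 0.1)]  # maneuver usually opens the gap
    return [(max(s - 1, 0), 0.6), (s, 0.4)]      # holding lets the intruder close

def R(s, a):
    return (-100.0 if s == 0 else 0.0) - (1.0 if a == 1 else 0.0)

V = value_iteration(5, [0, 1], T, R)
```

The optimal values trade the per-step maneuver cost against the large collision penalty, mirroring in miniature the alert-rate-versus-safety trade that the parameter optimization described above navigates.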
14. Optimal planning with rare catastrophic events [electronic resource] [2016] Online
- Book
- 1 online resource.
Although rare catastrophic events, such as mid-air collisions between aircraft, occur infrequently, their impact is significant. Understanding and mitigating the risk of such events require estimating the likelihood of the events and planning proper actions for avoiding them. Estimation and planning problems are often approached using sampling-based methods. These methods use models for simulation and take into account events of interest. If the problems involve rare catastrophic events, these methods converge slowly and produce high-variance estimates. Moreover, in planning, the rare occurrence of the events prevents sampling-based methods from exploring and exploiting the best actions. This thesis presents methods for addressing these challenges by efficiently estimating the likelihood of rare catastrophic events and making decisions under uncertainty to minimize the risk of the events while achieving mission objectives. The methods are presented with three real-world applications. First, the thesis explores the use of rare event simulation techniques in aircraft collision risk estimation. The cross-entropy method with weight limits and variable selection is applied for variance reduction. Second, the multilevel splitting method, which is a variance reduction technique, is incorporated into decision-theoretic single-shot decision problems. The resulting method is applied to wildfire surveillance using an unmanned aircraft. Lastly, the thesis proposes new approaches for exploration in sequential decision problems and applies a variance reduction technique. It presents a rerouting problem involving unmanned aircraft in GPS-denied environments. Empirical studies demonstrate significant improvements in performance when using the proposed methods.
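The cross-entropy method mentioned in this abstract can be sketched on a textbook rare-event problem: estimating p = P(X > 4) for X ~ N(0, 1), where crude Monte Carlo would need on the order of 1/p samples. The sketch below first tilts the sampling mean toward the rare region with CE iterations, then forms an importance-sampling estimate; the thresholds and sample sizes are illustrative choices, not the thesis's settings (which add weight limits and variable selection).

```python
import numpy as np

rng = np.random.default_rng(0)
level, n, rho = 4.0, 10_000, 0.1   # rare-event level, samples/iter, elite fraction

mu = 0.0                            # mean of the sampling distribution (std fixed at 1)
gamma_t = -np.inf
while gamma_t < level:
    x = rng.normal(mu, 1.0, n)
    # Raise an intermediate level to the elite quantile, capped at the target.
    gamma_t = min(level, np.quantile(x, 1.0 - rho))
    # CE update: maximum-likelihood mean of the elite samples.
    mu = x[x >= gamma_t].mean()

# Final importance-sampling estimate: sample from N(mu, 1), reweight to N(0, 1).
x = rng.normal(mu, 1.0, n)
w = np.exp(-0.5 * x**2) / np.exp(-0.5 * (x - mu) ** 2)  # density ratio
p_hat = np.mean(w * (x > level))
```

With the tilted distribution centered near the rare region, most samples hit the event and the reweighted estimate has far lower variance than crude Monte Carlo at the same sample count (the true value here is about 3.2e-5).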
15. Machine learning for model uncertainties in turbulence models and Monte Carlo integral approximation [electronic resource] [2015] Online
- Book
- 1 online resource.
While computational fluid dynamics (CFD) is playing an ever-increasing role in the design process, physical experiments are still required for final verification. There is a demand for certification through simulation, but there is a gap in predictive quality. Reynolds-averaged Navier-Stokes flow simulations have known deficiencies, especially for high-Reynolds-number flows with turbulent transition and separation, and higher fidelity Large Eddy Simulations (LES) and Direct Numerical Simulations (DNS) are not generally affordable. Quantification and reduction of uncertainty in simulation results is necessary, and yet it is rare for error bounds to be returned by a simulation, and progress towards more accurate turbulent closures in RANS models seems to have stalled. Today, however, the community is better equipped than ever to address this challenge. The rise in data science has driven the creation of tools and techniques to analyze and synthesize massive data sets. Most importantly, the data needed for statistical inference is available; computational budgets allow for RANS calculations on a number of input conditions and design settings, LES advances to increasingly complex geometries, and DNS continues to expand its Reynolds-number range. This dissertation harnesses data-driven approaches to address issues of uncertainty in predictive tools. First, the dissertation explores creating accurate models from data by replicating the behavior of a known model. Computational data is collected from the Spalart-Allmaras turbulence model, a neural network algorithm is trained on this data, and the learned model is re-embedded within a CFD flow solver. The robustness and accuracy of this procedure are explored as influenced by loss function choice, feature selection, and training data. Next, the dissertation considers model uncertainty in low-fidelity models.
High-fidelity data from DNS of combustion (using finite-rate chemistry) are used to augment the low-fidelity flamelet progress variable-based RANS approach (FPVA). Supervised learning approaches are used to construct two error models, one for the local inaccuracies in the model and a second addressing the spatial correlation of these errors. These uncertainty models are combined to estimate the uncertainty in the FPVA model. Finally, a methodology is presented for quantifying the effects of input uncertainty on an output variable of interest. This is done by constructing an approximate model of the system using available data samples, and then using this as a control variate to reduce the squared estimation error in the output. Results are presented that demonstrate improved accuracy for a wide range of problem dimensions, function types, and sampling types. Taken together, these approaches indicate the potential of data-driven techniques to identify and reduce uncertainties in complex flow simulations.
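The control-variate idea described in this abstract can be sketched as follows: a cheap approximate model g(x) with known mean is used to cancel most of the variance in a Monte Carlo estimate of E[f(x)]. Here f and g are hypothetical stand-ins for an expensive simulation and its data-driven surrogate.

```python
import numpy as np

rng = np.random.default_rng(1)

f = lambda x: np.exp(x)      # "expensive" quantity of interest, x ~ U(0, 1)
g = lambda x: 1.0 + x        # cheap surrogate (first-order model of f)
g_mean = 1.5                 # E[g(x)] is known exactly for the surrogate

x = rng.uniform(0.0, 1.0, 2_000)
fx, gx = f(x), g(x)

# Estimated optimal coefficient: beta = Cov(f, g) / Var(g).
beta = np.cov(fx, gx)[0, 1] / np.var(gx, ddof=1)

est_plain = fx.mean()                                # crude Monte Carlo
est_cv = fx.mean() - beta * (gx.mean() - g_mean)     # control-variate estimate
```

Because f and g are strongly correlated here, the control-variate estimate of E[e^x] = e - 1 is far tighter than the crude average at the same sample count, which is exactly the squared-error reduction the abstract describes.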
Special Collections
| Special Collections | Status |
| --- | --- |
| University Archives | Request on-site access |
| 3781 2015 T | In-library use |
16. Surrogate modeling and active subspaces for efficient optimization of supersonic aircraft [electronic resource] [2015] Online
- Book
- 1 online resource.
This dissertation presents approaches for surrogate-based optimization of supersonic vehicles analyzed with high-fidelity flow simulations. It integrates several developments in surrogate modeling to enable a robust regression procedure in the presence of sparse data and inaccurate gradients. A series of hyperparameter constraints is developed that encourages the learning process to generate a physically representative fit of the data. It identifies the existence of subspaces based on linear combinations of inputs, called "active subspaces," that reasonably model the behavior of objectives within aerospace design problems such as lift coefficient, drag coefficient, and an equivalent area functional. Coherent physical features were found across several design problems for both two- and three-dimensional geometries. This dissertation further proposes an approach for adaptive refinement by conditioning the traditional expected improvement sampling criterion to avoid exploration of the design bounds. To begin work on applying active subspaces to optimization, inverse maps were developed to enable the linking of separate active subspaces for objectives and constraints, enabling surrogate-based optimization in high dimensions. Several design problems are explored, and it is shown that surrogate-based optimization in active subspaces could enable the optimization of problems otherwise intractable via gradient-based optimization alone.
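The active-subspace construction this abstract relies on can be sketched in a few lines: the eigenvectors of the averaged outer product of gradients, C = E[∇f ∇fᵀ], reveal the input directions along which the objective actually varies. The test function below is a hypothetical stand-in for an aerodynamic objective such as a drag coefficient.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 10                                   # nominal input dimension
w = np.array([3.0, 1.0] + [0.0] * (d - 2))

f = lambda x: np.sin(w @ x)              # varies only along the direction w
grad_f = lambda x: np.cos(w @ x) * w     # analytic gradient

# Monte Carlo estimate of C = E[grad f . grad f^T] over the input domain.
X = rng.uniform(-1.0, 1.0, (500, d))
C = np.mean([np.outer(grad_f(x), grad_f(x)) for x in X], axis=0)

eigvals, eigvecs = np.linalg.eigh(C)     # eigh returns ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

# A large gap after the first eigenvalue indicates a 1-D active subspace,
# whose direction u1 recovers w up to sign.
u1 = eigvecs[:, 0]
```

In a real design problem the gradients would come from an adjoint flow solver rather than an analytic formula, and the surrogate would then be fit over the low-dimensional coordinates uᵀx instead of the full input space.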
Special Collections
| Special Collections | Status |
| --- | --- |
| University Archives | Request on-site access |
| 3781 2015 L | In-library use |
17. Theory and applications of sparsity for radar sensing of ionospheric plasma [electronic resource] [2014] Online
- Book
- 1 online resource.
In order to enable flexible high-resolution measurements of ionospheric plasma phenomena, a sparsity-based radar waveform inversion technique is formulated and found to eliminate processing artifacts caused by the standard matched filter approach. Taking direction from the theory of compressed sensing, sparsity of the radar target scene is employed as prior knowledge to successfully perform the inversion. The result is cleaner data that limits self-interference of range-spread targets and enables differentiation in crowded and variable environments. Though the approach has been applied to ionospheric radar, it is generally applicable and especially relevant for radar target scenes with multiple or distributed scatterers. As a basis for the technique, a discrete radar model that captures signal sparsity in a delay-frequency dictionary is developed. This model is shown to have a strong connection to existing methods, resulting in an intuitive interpretation of the inversion technique as an iterative thresholding matched filter. An explicit formulation of the discrete model's representation of arbitrary distributed scatterers is derived, and it shows that sparsity is reasonably preserved in the discrete representation. Building on top of the model, waveform inversion is implemented using modern convex optimization techniques tailored for efficient computation and quick convergence. Finally, the real-world flexibility and effectiveness of the inversion technique are demonstrated by the elimination of filtering artifacts from meteor observations made with a variety of standard radar waveforms.
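The "iterative thresholding matched filter" interpretation in this abstract can be sketched with ISTA applied to y = A s + noise, where the columns of A are delayed copies of the transmitted code and s is a sparse target scene. For brevity this sketch uses a delay-only dictionary (the thesis uses a delay-frequency dictionary), and the code, scene, and parameters are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
code = rng.choice([-1.0, 1.0], 13)        # transmitted binary phase code
n = 60                                     # number of delay bins

# Dictionary: column k is the code placed at delay k.
A = np.zeros((n + len(code) - 1, n))
for k in range(n):
    A[k:k + len(code), k] = code

# Sparse scene: three targets, two closely spaced, one weak.
s_true = np.zeros(n)
s_true[[12, 14, 40]] = [1.0, 0.8, 0.5]
y = A @ s_true + 0.05 * rng.standard_normal(A.shape[0])

# ISTA: each step is a matched-filter correlation of the residual
# followed by soft thresholding, which promotes a sparse scene.
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
lam = 0.5                                  # sparsity weight (illustrative)
s = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ s - y)               # matched filter applied to residual
    z = s - grad / L
    s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
```

Unlike a single matched-filter pass, whose code sidelobes smear the two closely spaced targets together, the iterated thresholding recovers a scene that is nonzero essentially only at the true delays.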
Special Collections
| Special Collections | Status |
| --- | --- |
| University Archives | Request on-site access |
| 3781 2014 V | In-library use |