Online 1. Efficient sequential reliability-based design optimization with adaptive kriging inverse reliability analysis [2018]
 Fenrich, Richard Walter, author.
 [Stanford, California] : [Stanford University], 2018.
 Description
 Book — 1 online resource.
 Summary

In this thesis, new methods for reliability-based design optimization (RBDO) are presented. The Adaptive Kriging Inverse Reliability Analysis (AKIRA) algorithm and a multi-fidelity sequential RBDO algorithm are introduced and demonstrated on a complex multidisciplinary supersonic nozzle design problem. AKIRA demonstrates competitive performance relative to other reliability analysis algorithms while also benefiting from the solution of the inverse reliability analysis problem during RBDO. The proposed sequential RBDO algorithm mitigates the cost of solving the RBDO problem by decoupling the optimization and reliability analyses, thereby reducing its solution to a series of deterministic optimizations. The method is motivated by anchored decomposition, has guaranteed convergence inherited from trust-region methods, and is shown in certain cases to be a generalization of existing sequential RBDO methods. It also derives enhanced efficiency by incorporating lower-fidelity models when available. The final demonstration of the proposed algorithms on an industrial-type problem, the supersonic nozzle, shows that the solution of RBDO problems for complex, realistic engineering applications is well within reach.
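The decoupling idea described in the abstract can be illustrated with a minimal sketch. This is not the thesis's algorithm; it is a hypothetical toy problem (minimize d^2 subject to P[d - X >= 0] meeting a reliability target, X standard normal) in which the inverse reliability analysis has a closed form, so the alternation between a deterministic optimization and a reliability update is easy to see.

```python
# Hypothetical sketch of a sequential (decoupled) RBDO loop.
# Toy problem: minimize d**2 subject to g(d, X) = d - X >= 0 with
# X ~ N(0, 1); a target reliability index beta_t = 2 makes the
# equivalent deterministic constraint d - beta_t >= 0.

def reliability_shift(d, beta_t=2.0):
    # "Inverse reliability analysis": for g = d - X the most probable
    # failure point at index beta_t is x* = beta_t, so the shift that
    # converts the probabilistic constraint to a deterministic one
    # is simply beta_t here (closed form for this toy problem).
    return beta_t

def deterministic_opt(shift):
    # Minimize d**2 subject to d - shift >= 0 (closed form).
    return max(0.0, shift)

d, shift = 0.0, 0.0
for _ in range(5):                  # sequential RBDO iterations
    d = deterministic_opt(shift)    # cheap deterministic solve
    shift = reliability_shift(d)    # reliability-analysis update
```

The loop converges to d = 2.0, the design that meets the reliability target; in the actual method the shift would come from an adaptive-kriging inverse reliability analysis rather than a closed form.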
Also available online.
Special Collections (University Archives), 3781 2018 F: Unavailable; in process
Online 2. A heuristic for varying design parametrization applied to a multidisciplinary rotorcraft problem [electronic resource] [2018]
 Sinsay, Jeffrey Daniel.
 2018.
 Description
 Book — 1 online resource.
 Summary

At the aircraft conceptual design stage, the potential design space is extremely large, and the selection of the right set of design variables can be critical to finding a design optimum of practical value. This selection problem is particularly challenging for the human designer when the space is not well understood. This dissertation describes a heuristic approach to parametrization discovery that improves the parametrization automatically as design and optimization proceed. The heuristic applies important principles of evolutionary theory, including mutation, competition, and selection, in searching for a better parametrization. The principle of parsimony, in conjunction with variation in mutation probability, is shown to be important in ensuring efficient use of the degrees of freedom in an optimization problem. Together these prevent the parametrization size from tending toward infinity over successive generations while still finding better objective function values. Starting from simple geometry-matching problems with well-defined objectives, and working up to complex multidisciplinary rotorcraft vehicle design problems, the heuristic is shown to discover parametrizations that lead to improved objective function values compared to what can be achieved with the initial parametrization. A set of necessary methods is also developed to enable higher-fidelity analysis of rotorcraft designs earlier in the design process, when coupled with the parametrization discovery heuristic.
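The interplay of mutation, selection, and parsimony pressure can be sketched in a few lines. This is an illustrative stand-in, not the dissertation's heuristic: the "parametrization" is reduced to its size, the objective is a synthetic function that rewards more degrees of freedom, and a parsimony penalty keeps the size from growing without bound.

```python
import random

# Illustrative sketch: evolve the *size* of a parametrization.
# The synthetic objective improves with more degrees of freedom,
# but a parsimony penalty stops unbounded growth (lower is better).
random.seed(0)

def fitness(n_params):
    objective = 1.0 / (1 + n_params)   # more DOFs help the fit
    parsimony = 0.01 * n_params        # ...but each one is penalized
    return objective + parsimony

def mutate(n_params):
    # Mutation: add or remove one design variable.
    return max(1, n_params + random.choice([-1, +1]))

best = 1
for _ in range(200):                   # generations
    child = mutate(best)
    if fitness(child) < fitness(best): # competition and selection
        best = child
```

For this penalty the optimum size is 9; the greedy mutate-and-select loop finds it and, thanks to the parsimony term, stays there instead of drifting toward ever-larger parametrizations.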
Also available online.
Special Collections (University Archives), 3781 2018 S: In-library use; request onsite access
Online 3. Modeling of turbulent mixing and combustion at transcritical conditions [electronic resource] [2018]
 Ma, Peter C.
 2018.
 Description
 Book — 1 online resource.
 Summary

The simulation of transcritical real-fluid effects is crucial for many engineering applications, such as fuel injection and combustion in internal-combustion engines, rocket motors, and gas turbines. In these systems, the liquid fuel is injected into the ambient gas at a pressure that exceeds its critical value, and the fuel jet will be heated to a supercritical temperature before combustion takes place. At elevated pressures, the mixture properties exhibit liquid-like densities and gas-like diffusivities, and the surface tension and enthalpy of vaporization approach zero. In this thesis, algorithms and modeling tools are developed for the prediction of supercritical and transcritical mixing and combustion. A diffuse-interface method is developed for simulating turbulent flows at transcritical conditions. Real-fluid thermodynamics is described efficiently using a cubic equation of state. Spurious pressure oscillations associated with fully conservative (FC) formulations are addressed by a double-flux model. An entropy-stable scheme that combines high-order non-dissipative and low-order dissipative finite-volume schemes is proposed to preserve the physical realizability of numerical solutions across large density gradients. The resulting algorithms are applied to a series of test cases to demonstrate their capability in simulations of problems relevant to multi-species transcritical turbulent flows. The developed quasi-conservative (QC) scheme is subsequently analyzed alongside the traditional FC scheme for multi-species mixing problems. Through numerical analysis, it is shown that mixing processes for isobaric systems follow the limiting cases of adiabatic and isochoric mixing models for FC and QC schemes, respectively, which is confirmed by several numerical test cases. An extension to the classical flamelet/progress-variable approach is developed for transcritical combustion simulations.
The novelty of the proposed approach lies in its ability to account for pressure and temperature variations from the baseline tabulated values in a thermodynamically consistent fashion. Application cases relevant to rocket combustors are performed to demonstrate the capability of the proposed approach in multidimensional transcritical combustion simulations. Finally, a finite-rate chemistry model is employed in conjunction with the developed diffuse-interface method for the prediction of diesel-fuel injection and autoignition processes. Simulations of an ECN-relevant diesel-fuel injector are performed for both inert and reacting cases at multiple operating points. The performance of the presented numerical framework is demonstrated through comparisons with available experimental data.
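The cubic equation of state mentioned above can be made concrete with a minimal pressure evaluation. The abstract does not say which cubic form is used; the sketch below assumes the standard Peng-Robinson form with nitrogen critical properties (Tc = 126.2 K, pc = 3.3958 MPa, acentric factor 0.0372) purely for illustration.

```python
import math

# Minimal Peng-Robinson pressure evaluation for a pure fluid.
# Critical constants for nitrogen are assumed for illustration;
# v is molar volume in m^3/mol, T in K, p in Pa.
R = 8.314462618  # J/(mol K)

def pr_pressure(T, v, Tc=126.2, pc=3.3958e6, omega=0.0372):
    a = 0.45724 * R**2 * Tc**2 / pc
    b = 0.07780 * R * Tc / pc
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1 + kappa * (1 - math.sqrt(T / Tc)))**2
    # Repulsive term minus attractive term (the cubic structure):
    return R * T / (v - b) - a * alpha / (v**2 + 2*b*v - b**2)

# At low density the cubic EOS recovers the ideal-gas limit:
p_ideal = R * 300.0 / 1.0            # v = 1 m^3/mol
p_pr = pr_pressure(300.0, 1.0)
```

At high (transcritical) densities the two terms differ strongly, which is exactly the regime where the thesis's double-flux and entropy-stable treatments are needed to keep the coupled solver well behaved.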
Also available online.
Special Collections (University Archives), 3781 2018 M: In-library use; request onsite access
Online 4. Radiation absorption by inertial particles in a turbulent square duct flow [electronic resource] [2018]
 Banko, Andrew James.
 2018.
 Description
 Book — 1 online resource.
 Summary

Particle-based solar receivers pose an interesting engineering challenge because of the coupled interactions between particle motion in a turbulent flow, radiation transmission and absorption through a random medium, and convective heat transfer between the gas and solid phases. In a particle solar receiver, the absorbing wall of a conventional receiver is replaced with a transparent window, and small particles dispersed within the working fluid absorb the radiation. This can increase the surface area available for heat transfer to the fluid, reduce radiation losses, and improve system efficiency. However, nearly all studies on particle solar receivers ignore the effect of particle clustering on radiation absorption and the gas temperature rise. In this work, the radiation absorption by preferentially concentrated particles in a turbulent square duct flow was studied experimentally. The turbulent flow of air was laden with small nickel particles and exposed to monochromatic infrared radiation over a streamwise length of several duct widths. Measurements were made of the particle-phase statistics, mean and fluctuating radiation transmission, and mean and fluctuating gas temperature rise. Simplified heat transfer and radiation transmission models were also developed to understand the basic physical principles and to provide comparisons to the experimental results.
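The baseline transmission model that clustering perturbs can be sketched with the Beer-Lambert law for a uniform particle cloud. All numbers below are illustrative, not values from the experiments.

```python
import math

# Sketch: mean transmission through a *uniform* particle-laden medium
# follows Beer-Lambert, I/I0 = exp(-kappa * L), with extinction
# coefficient kappa = n_p * Q * pi * r**2 (number density, extinction
# efficiency, geometric cross-section). Illustrative values only.
def transmission(n_p, r, L, Q=1.0):
    kappa = n_p * Q * math.pi * r**2   # 1/m
    return math.exp(-kappa * L)

tau = transmission(n_p=1e10, r=5e-6, L=0.1)   # ~92% transmitted
```

Preferential concentration breaks the uniformity assumption: clustered particles shadow one another, so the measured mean transmission exceeds this uniform-cloud estimate, which is the effect the thesis quantifies.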
Also available online.
Special Collections (University Archives), 3781 2018 B: In-library use; request onsite access
Online 5. The role of elasticity in particle-fluid interactions and its effect on suspension rheology [electronic resource] [2018]
 Yang, Mengfei.
 2018.
 Description
 Book — 1 online resource.
 Summary

We leverage numerical methods to study two problems that require a detailed understanding of particle-fluid interactions and, in particular, the effect of elasticity in these interactions. The first project concerns the bulk shear properties (viscosity, first and second normal stress difference coefficients) of non-Brownian rigid-sphere suspensions in highly elastic fluids. We use numerical tools to compute the stress contributions due to the interaction of the particles with the elastic fluid. This work was motivated by 1) interesting shear-thickening behavior observed in experimental measurements of suspensions in highly elastic fluids that could not be explained by our understanding of suspension rheology, and 2) conflicting results in theoretical attempts at deriving even the first-order correction to the suspension bulk stress due to fluid elasticity. Thus there is a great opportunity to use high-performance computing to bridge the gap between theory and empirical observations. First, we investigate the viscometric functions of dilute suspensions for a wide range of Weissenberg numbers. The Weissenberg number is the shear rate non-dimensionalized by the fluid relaxation time and compares elastic forces to viscous forces in the flow. We show that two extra stress contributions come from the addition of rigid particles to the nonlinear elastic fluid: 1) the contribution directly from the particles as they resist deformation, leading to an increase in the internal stress (known as the stresslet); and 2) the contribution from the fluid as it deforms around the particles, leading to extra stresses in the fluid phase (known as the particle-induced fluid stress). In the Wi << 1 regime, we resolve previous discrepancies in the O(Wi) theory for the bulk stress of a dilute suspension by correctly calculating the two stress contributions from the particles.
We also numerically determine the Wi scaling for the particle contribution to the viscometric functions of a dilute suspension in an Oldroyd-B fluid (a model that represents polymer fluids as dumbbells suspended in a Newtonian solvent) and aid in the development of a theory that gives the first correction to the suspension viscosity due to fluid elasticity. We show that in weak flows, all the viscometric functions shear-thicken. At moderate to high Wi, this shear-thickening behavior remains prominent for the viscosity and first normal stress coefficient, though in the latter, the behavior can be non-monotonic if the suspending fluid is slightly shear-thinning. We also explore the microstructural origins of the particle-induced fluid stress, which is the dominant contribution to the shear-thickening behavior. We determine the scalings of the magnitude and the "volume of interest" for the particle-induced fluid stress to understand the overall suspension behavior. Furthermore, we analyze the flow type in the regions of significant particle-induced fluid stress and find that the stretch of polymers in strain-dominated flow within closed streamlines around the particles generates the large stresses that contribute to the thickening behavior. Thus, understanding the properties of the suspending fluid in extensional deformation is important for predicting the shear rheology of the suspension. We give experimental evidence that quantitative differences between simulation results and experimental data can be explained by the shortcomings of existing closed-form constitutive equations in adequately describing both the shear and extensional rheology of dilute polymer solutions. Finally, we study, via simulation and experiments, non-dilute suspensions in Boger fluids to elucidate the effect of particle-particle hydrodynamic interactions on the stress contributions.
In the numerical study, we use an immersed boundary method to simulate an ensemble of particles as a function of time until they achieve steady average bulk properties. The simulations include fully resolved particle-scale hydrodynamics and fluid stresses. They show that for low-volume-fraction, non-dilute suspensions, the shear-thickening of the viscosity can be fully determined by considering a single particle's interactions with the suspending fluid. In fact, we show that the viscosity for suspensions up to a volume fraction of about 0.25 can be characterized by a shift factor that determines the zero-shear viscosity and a master curve that describes the viscosity thickening as a function of the suspension shear stress. This "master curve" for the thickening of the shear viscosity is not only demonstrated in the simulations but also shown to be consistent with all available experimental data, including our own. We also show that the first normal stress difference coefficient can be described similarly by a shift factor and a master curve. The second project is a study of particle separation via continuous flow through microfluidic devices. Within the last decade, a plethora of such microfluidic devices have been developed, largely based on observation and intuition. This is particularly true in the development of vector chromatography, where particles separate out laterally in two dimensions, at vanishingly small Reynolds number for non-Brownian particles. This phenomenon has its origins in the irreversible forces at work in the device, since Stokes-flow reversibility typically prohibits their function otherwise. We present Boundary Element Method simulations of the vector separation of non-Brownian particles with different sizes and elasticities in Stokes flow through channels whose lower surface is composed of slanted cavities.
The simulations are designed to elucidate the physical principles behind the separation as well as to provide design criteria for devices for separating particles in a given size/flexibility range. We first show that we can obtain quantitative agreement with the experimental separation data. We then vary the geometric parameters of the simulated devices to demonstrate the sensitivity of the separation efficiency to those parameters, thus making design predictions as to which devices are appropriate for separating particles in different size, shape, and deformability ranges.
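Two of the dimensionless quantities the abstract leans on can be stated concretely. The sketch below shows the Weissenberg number (relaxation time times shear rate) and, as the zero-elasticity baseline that the elastic corrections modify, the classical Einstein dilute-suspension viscosity; the numerical values are arbitrary illustrations.

```python
# Two reference quantities for the suspension-rheology discussion.
# Wi = lambda * gamma_dot compares elastic to viscous forces;
# the Einstein result eta = eta_s * (1 + 2.5 * phi) is the dilute,
# Newtonian (Wi -> 0) baseline viscosity. Values are illustrative.

def weissenberg(relax_time, shear_rate):
    return relax_time * shear_rate

def einstein_viscosity(eta_solvent, phi):
    # Valid only for dilute suspensions of rigid spheres, phi << 1.
    return eta_solvent * (1 + 2.5 * phi)

Wi = weissenberg(0.1, 5.0)          # lambda = 0.1 s, gamma_dot = 5 1/s
eta = einstein_viscosity(1.0, 0.05) # 5% volume fraction
```

The thesis's O(Wi) theory and master-curve results describe how elasticity and finite concentration push the measured viscometric functions away from this Wi = 0, dilute baseline.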
Also available online.
Special Collections (University Archives), 3781 2018 Y: In-library use; request onsite access
 Horwitz, Jeremy Aaron Kolker, author.
 [Stanford, California] : [Stanford University], 2018.
 Description
 Book — 1 online resource.
 Summary

This work is concerned with the computation of particle-laden flows, primarily in the dilute regime, using the point-particle approach. The point-particle method solves continuum transport equations for the fluid phase coupled to Lagrangian equations for each particle. The coupling is accomplished by modelled source terms (momentum, energy, mass) which function as surrogates for boundary conditions on particle surfaces. This dissertation begins by surveying some of the widely used source terms, especially drag models, and the conditions under which these drag models are applicable. The incorporation of a drag model in a numerical simulation in general requires knowledge of the undisturbed fluid velocity at the location of the particle, that is, the fluid velocity the particle would experience at every point along its trajectory in the absence of the particle. Because this quantity is not directly available in coupled simulations of particle-fluid interaction, and because the undisturbed fluid velocity is difficult to interpret physically, most modellers neglect to model it and instead incorporate a disturbed fluid velocity at the particle location, found by interpolating the fluid velocity to the particle location using standard interpolation schemes. In this dissertation, we will show that the difference between the disturbed and undisturbed fluid velocity can be large, that the difference scales with the ratio of the particle size to the grid spacing, and that estimating the undisturbed fluid velocity is necessary for successful verification of coupled point-particle methods. We develop a scheme motivated by Stokesian symmetries which estimates the undisturbed fluid velocity by correlating this quantity to an enhancement in fluid curvature created by point-particles. The scheme is found to predict particle settling velocity well at low and finite Reynolds numbers, while standard schemes used in the literature greatly overpredict particle settling velocity.
By examining the total particle-plus-fluid energy equation, we find that accurate estimation of the undisturbed fluid velocity implies a correspondence principle, namely the correct prediction of the dissipation rate consistent with the drag model chosen. We then explore the consequences of a verifiable point-particle method in forced and decaying homogeneous isotropic turbulence. In the former, the incorporation of the undisturbed fluid velocity prediction results in enhanced clustering of particles, especially at smaller separations, compared with standard schemes. This is related to a broadening in the acceleration probability density function when the undisturbed fluid velocity is used to calculate the drag force. In decaying turbulence, for several fluid and particle statistics, it is found that standard point-particle approaches do not converge with grid refinement, while incorporation of our proposed correction for the undisturbed fluid velocity can result in grid-insensitive results for lower-order moments. Examination of higher-order moments reveals grid dependence for all point-particle implementations, which suggests that not all practical questions surrounding particle-laden flows are answerable with the point-particle method. In the next section, the point-particle method is directly compared against non-dimensionally identical simulations of resolved particles in decaying turbulence. We find that under certain conditions, specification of an appropriate drag model that accounts for finite particle Reynolds number and accurate computation of the undisturbed fluid velocity are necessary for successful validation of the point-particle method. Under these conditions, good agreement is found both for integral quantities, such as the fluid dissipation rate, and for particle acceleration probability density functions.
Interestingly, under the same conditions, it is found that using a less suitable drag model in which the undisturbed fluid velocity is accounted for can yield better point-particle predictions than using a more appropriate drag model without accounting for the undisturbed fluid velocity. Having spent a large portion of this work examining momentum/energy coupling in the absence of heat transfer, we move toward the examination of problems where particles can move and exchange internal energy with the fluid. The heat transfer sources, analogous to the drag models discussed previously, depend on the undisturbed fluid temperature at the particle location. We develop a scheme to estimate the undisturbed temperature by correlating it to the measured disturbed curvature in the temperature field created by a point-particle. We then perform verification of the proposed procedure for a settling particle subject to radiation under low and finite heating conditions. The proposed correction for the undisturbed temperature, combined with the previous method for estimating the undisturbed fluid velocity, significantly reduces the error in settling velocity and terminal temperature compared with standard point-particle schemes. Finally, we discuss some outstanding questions in particle-laden flows and how the current methodologies can be extended. One such extension concerns the calculation of undisturbed quantities on anisotropic grids, which are often used in the neighborhood of walls. We outline a general approach to this problem using the method of discrete Green's functions.
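The kind of drag closure the abstract surveys can be illustrated with a standard finite-Reynolds-number model. The sketch assumes the widely used Schiller-Naumann correction to Stokes drag and iterates it to a terminal settling velocity in quiescent fluid (so the undisturbed velocity is trivially zero); the particle and fluid properties are illustrative, not the thesis's cases.

```python
# Sketch of a point-particle drag closure: Stokes drag with the
# Schiller-Naumann finite-Re correction, iterated to the terminal
# settling velocity in still air. All property values illustrative.
rho_p, d_p = 2500.0, 50e-6     # particle density (kg/m^3), diameter (m)
rho_f, mu = 1.2, 1.8e-5        # air density (kg/m^3), viscosity (Pa s)
g = 9.81

tau_p = rho_p * d_p**2 / (18 * mu)   # Stokes response time (s)

v = tau_p * g                         # Stokes terminal-velocity guess
for _ in range(50):                   # fixed-point iteration
    Re = rho_f * v * d_p / mu         # particle Reynolds number
    f = 1 + 0.15 * Re**0.687          # Schiller-Naumann correction
    v = tau_p * g / f                 # corrected terminal velocity
```

In a coupled simulation, the velocity fed into `Re` and the drag force must be the undisturbed fluid velocity at the particle; using the self-disturbed interpolated velocity instead is exactly the error the dissertation shows leads to overpredicted settling velocities.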
Also available online.
Special Collections (University Archives), 3781 2018 H: Unavailable; in process
Online 7. Computational modeling of wall-bounded particle-laden turbulent flows [electronic resource] [2017]
 Abdehkakha, Hoora.
 2017.
 Description
 Book — 1 online resource.
 Summary

The transport of particles in wall-bounded turbulent flows is relevant in many industrial, environmental, and biological applications. The physical scenario is particularly rich, involving a complex force balance (drag, lift, gravity) that induces particle motion, complex interactions between particles and walls, particle collisions, and two-way momentum transfer between the fluid and the particles. Many studies in the literature have used both experimental techniques and computational tools to investigate this problem; however, many questions on the relative importance of the various physical effects remain, especially in realistic configurations under turbulent conditions, such as in ducts with non-square cross-sections. The objective of this thesis is to develop computational capabilities to study the coupling between flow turbulence and particle dynamics over a range of Reynolds numbers. The detailed analysis carried out in square and squircle cross-section ducts sheds light on the particle force balance, the impact of collisions on particle concentration near walls, and the interplay between particle collisions and turbophoresis. Particle preferential accumulation is known to play a major role in the efficiency of particle-based systems. However, the role of turbophoresis, and the resulting increased concentration near solid walls, is not well understood. A unique feature of turbulent flows in non-circular ducts, namely the secondary flows of Prandtl's second kind, enhances the transport of momentum, vorticity, and energy from the core of the duct to the corners and creates a distortion in the velocity contours. We performed detailed computational studies to investigate the effect of secondary motion on the flow and particle distribution. Secondary flow transfers particles toward the corners, leading to a significant reduction of particle concentration in the core and an even higher preferential concentration of particles in the vicinity of the walls.
Different cases are presented in which the sharp corners of the duct are gradually smoothed (transitioning from a square to a squircle and, finally, to a circle) to illustrate the importance of the secondary flow on the particle distribution. Particle-particle collisions deeply alter the particle concentration in the near-wall region and provide a mechanism to reduce turbophoresis even in cases when the overall particle loading is very low. Simulations have been carried out in a range of flow conditions and configurations, illustrating that, in spite of the relative change of near-wall particle concentration, the wall deposition measured experimentally is well reproduced numerically. In addition to using a brute-force collision algorithm, we have developed an efficient and scalable stochastic collision model. This strategy introduces fictitious particles that statistically represent the likelihood of finding collision partners in the vicinity of a given particle. An important improvement of our approach, compared to others available in the literature, is the introduction of a computational procedure to estimate the velocity fluctuations of the fictitious particle rather than requiring user-specified parameters. The simulations using this stochastic collision model compare well with the brute-force approach and lead to considerably more efficient overall computations.
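The core of a stochastic collision model, as opposed to brute-force pair checking, can be sketched in kinetic-theory form. This is a generic illustration, not the thesis's model: each particle is tested against a fictitious partner with probability p = n * sigma * |dv| * dt (local number density, collision cross-section, relative speed, time step), and all numbers are illustrative.

```python
import random

# Generic sketch of stochastic collision detection: rather than
# checking all O(N^2) particle pairs, each particle collides with a
# fictitious partner with probability p = n * sigma * dv * dt.
# Illustrative values: n in 1/m^3, sigma = pi*d^2 in m^2, dv in m/s.
random.seed(2)
n, sigma, dv, dt = 1e9, 7.9e-9, 0.5, 1e-4

p = min(1.0, n * sigma * dv * dt)      # per-step collision probability
hits = sum(random.random() < p for _ in range(100000))
```

The modeling burden then shifts to estimating the fictitious partner's velocity fluctuation, which is where the thesis replaces user-specified parameters with a computed estimate.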
Also available online.
Special Collections (University Archives), 3781 2017 A: In-library use; request onsite access
Online 8. Fast linear algebra algorithms and applications to computational flow physics [electronic resource] [2017]
 Description
 Book — 1 online resource.
 Summary

This is an interdisciplinary study of fast linear algebra algorithms and high-performance computing methods with applications in flow physics. Fast linear algebra algorithms are the essence of most high-performance scientific calculations. In this thesis we study various novel fast linear algebra techniques, including the adaptive fast multipole method and fast sparse linear solvers using low-rank approximation and extended sparsification. We also discuss numerical and computational methods developed for high-fidelity simulation of heated particle-laden flows, followed by a review of the new physics discovered. We show that heated particles can modify spectral properties of the background turbulence. The effect of particle preferential concentration on particle-to-gas heat transfer is studied. In addition, we use the developed computational physics framework to benchmark our proposed novel sparse matrix linear solver.
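The idea underlying the fast multipole method can be shown in one dimension. This is a toy illustration, not the thesis's algorithm: the far-field influence of a cluster of sources under a 1/(x - y) kernel is compressed into a single aggregated source at the cluster's weighted centroid, which is what turns an O(N*M) direct sum into near-linear work for well-separated clusters.

```python
# Toy sketch of the fast-multipole idea (monopole approximation):
# replace a cluster of sources by one aggregated source when the
# target is well separated from the cluster. Kernel: q / (x_t - x_s).
sources = [(0.0, 1.0), (0.1, 2.0), (0.2, 1.5)]   # (position, strength)
target = 10.0                                     # well separated

exact = sum(q / (target - x) for x, q in sources)

Q = sum(q for _, q in sources)            # total strength
xc = sum(x * q for x, q in sources) / Q   # strength-weighted centroid
approx = Q / (target - xc)                # one evaluation, not three
```

Higher-order multipole terms shrink the error further, and a tree of such clusters gives the full O(N) method; low-rank solvers exploit the same observation, since the well-separated block of the kernel matrix is numerically low rank.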
Also available online.
Special Collections (University Archives), 3781 2017 P: In-library use; request onsite access
Online 9. Fourier collocation methods for unsteady flows [electronic resource] [2017]
 Naik, Kedar R.
 2017.
 Description
 Book — 1 online resource.
 Summary

Fully resolved CFD simulations of unsteady aerodynamics are still too expensive to be deployed during the engineering design process. Most aerodynamic-design studies, however, only require knowledge of the steady-state flow field, not the transient behavior that precedes it. Collocation methods in time, such as the time-spectral method and the harmonic-balance method, obviate the need to model transients by directly solving for the steady state, offering significant cost savings. They do so by using Fourier and Fourier-like basis functions to represent the flow field at a handful of points in time. The harmonic-balance method assumes the underlying spectrum is tonal, i.e., dominated by a finite set of known frequencies. This dissertation presents the discovery of "inadmissible frequency sets," which cause the harmonic-balance method to fail unconditionally. A mathematically grounded strategy for avoiding such inadmissible sets is also proffered. Selecting harmonic-balance time instances using this new approach is shown to eliminate corrupted solutions and allows the method to admit all possible frequency sets. This dissertation will also address the use of collocation methods in time to model flow fields where the oscillatory character of the steady state is unknown a priori. Specifically, a new algorithm that allows the harmonic-balance method to be used with unknown frequency content will be discussed. In addition, two new algorithms will be presented that allow the time-spectral method, which is used to model periodic flows, to be used in cases where the underlying periodicity is either unknown or naturally occurring due to stable limit-cycle oscillations.
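The Fourier-collocation machinery behind the time-spectral method can be demonstrated in a few lines. The sketch uses the classical spectral differentiation matrix for an odd number N of equispaced points on one period (as in standard spectral-methods references); for a resolved tone it recovers the time derivative at the collocation points to machine precision, which is exactly the property these methods exploit to bypass transients.

```python
import math

# Fourier (time-spectral) collocation sketch: N odd equispaced
# samples on [0, 2*pi); the spectral differentiation matrix D
# differentiates resolved harmonics exactly at the sample points.
N = 5
h = 2 * math.pi / N
t = [j * h for j in range(N)]

def D(j, k):
    # Classical odd-N Fourier differentiation matrix entry.
    if j == k:
        return 0.0
    return 0.5 * (-1)**(j - k) / math.sin((j - k) * h / 2)

u = [math.sin(tj) for tj in t]               # sampled periodic signal
du = [sum(D(j, k) * u[k] for k in range(N))  # spectral time derivative
      for j in range(N)]
```

With just five time instances the derivative of sin(t) matches cos(t) to round-off; the harmonic-balance method generalizes this to a prescribed set of frequencies, which is where the inadmissible-set issue studied in the thesis arises.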
Also available online.
Special Collections (University Archives), 3781 2017 N: In-library use; request onsite access
Online 10. Geometric volume-of-fluid framework for simulating two-phase flows on unstructured meshes [electronic resource] [2017]
 Ivey, Christopher Blake.
 2017.
 Description
 Book — 1 online resource.
 Summary

Two-phase flows appear frequently in nature and industry. An important example, which impacts the efficiency of combustion engines, is the atomization of an injected liquid jet into an evaporating spray. Accurately simulating two-phase flows in complex engineering applications poses several challenges for numerical modeling. Material properties are discontinuous and can vary greatly between the two phases; for example, the density ratio of air-water flows at atmospheric conditions is approximately 1:1000. The curvature of the phase interface generates a singular surface tension force, which is only active at the interface. The curvature is a higher-order term that requires second derivatives of the interface position, which are susceptible to numerical error amplification. The accurate tracking of the phases in applications that generate a breadth of interfacial length scales, such as breaking waves and jet atomization, requires schemes that prevent the dissipation of small-scale features. Discretizations on unstructured grids are necessary to simulate two-phase flows that involve complex engineering devices, such as fuel injectors in a diesel combustor. The present work addresses these challenges using a geometric volume-of-fluid framework. In the present volume-of-fluid method, the interface evolution is implicitly tracked using the fraction of the liquid volume within each cell. The interface is represented by a series of discontinuous planes, reconstructed from the local liquid volume fraction. As such, the approach naturally handles large changes in the interfacial topology. The piecewise-planar representation of the interface, combined with tools from computational geometry, facilitates exact numerical integration over the phases on unstructured meshes.
The focus of this dissertation is to discuss the novel developments of the unstructured two-phase flow solver: an accurate and convergent approach for calculating the interface normals and curvatures, a discretely conservative and bounded volume-of-fluid advection method, a two-phase fractional-step method that can handle singular surface tension forces and large density ratios, and a non-convex polyhedral library to perform the geometric operations required by these developments. The novel components of the framework are assessed using canonical static, kinematic, and dynamic test cases on various unstructured meshes to demonstrate their cost, robustness, and accuracy. Finally, the relevance of the method to engineering applications is established through a simulation of the atomization of diesel fuel from a Bosch injector.
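The conservative, bounded geometric advection the dissertation develops can be reduced to a one-dimensional cartoon. This is not the thesis's scheme: in 1D the "reconstructed interface" is just a point inside a cell, and the face flux is the liquid contained in the region u*dt swept through the face, assuming (for illustration) liquid packed toward the upwind side of each cell.

```python
# 1D cartoon of geometric volume-of-fluid advection: each cell holds
# a liquid volume fraction; the face flux is the liquid inside the
# swept region u*dt at the downwind side of the donor cell.
dx, u, dt = 1.0, 0.5, 0.4           # CFL = u*dt/dx = 0.2
alpha = [1.0, 1.0, 0.6, 0.0, 0.0]   # fractions; interface in cell 2

def face_flux(a_donor):
    swept = u * dt / dx
    # Liquid assumed packed toward the upwind (left) side of the cell,
    # so only the part of it extending into the swept region fluxes out.
    liquid_in_swept = max(0.0, a_donor - (1.0 - swept))
    return liquid_in_swept * dx

# fluxes[i] is the flux through the left face of cell i (inlet dry).
fluxes = [0.0] + [face_flux(a) for a in alpha]
new = [alpha[i] + (fluxes[i] - fluxes[i + 1]) / dx
       for i in range(len(alpha))]
```

Because the fluxes are geometric volumes, total liquid is conserved discretely and the updated fractions stay in [0, 1]; the thesis's contribution is achieving the same two properties with piecewise-planar reconstructions on arbitrary unstructured polyhedral meshes.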
Also available online.
Special Collections (University Archives), 3781 2017 I: In-library use; request onsite access
Online 11. Modeling radiation transport in turbulent particle-laden media [electronic resource] [2017]
 Frankel, Ari.
 2017.
 Description
 Book — 1 online resource.
 Summary

Particle-based solar receivers are promising devices for efficient renewable energy systems. In these systems, an array of mirrors focuses sunlight onto a falling curtain of particles that absorbs the light. The heated particles are stored for later energy extraction. In this work I consider a design concept in which the particles and air are in a co-flowing configuration; as the particles are heated, they conduct the energy to the surrounding air, which may be used directly in a power plant. The formulation of the governing equations to encompass the full physics of the problem is presented. The impact of turbulence on the opacity of the particle cloud is analyzed. Using results from direct numerical simulations of particle-laden turbulence and ray tracing, this work demonstrates that turbulence can substantially decrease the opacity of a particle cloud. Homogenizing the particles into a concentration field recovers an acceptable representation of radiation transport, with the caveat that over-refining the grid can lead to numerical artifacts. The particle homogenization technique is then applied to a novel simulation of a particle-laden turbulent duct flow exposed to high-intensity radiation, paralleling ongoing experiments on a lab-scale solar receiver. These simulations provide design guidelines by examining the thermal efficiency and flow physics in particle-based solar receivers. The choice of radiation model used to capture the heat transfer in the simulations can have a substantial impact on the computed temperature profiles. Initial comparisons between the computations and experiments at higher Reynolds number are also discussed. Finally, the application of the multi-fidelity Monte Carlo method for uncertainty quantification in radiation transport is shown, along with a method for evaluating effective variance reduction techniques and its use for treating the uncertainty in the particle radiative properties.
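The multi-fidelity Monte Carlo idea invoked at the end of the abstract is, at its core, a control-variate estimator. The sketch below uses synthetic stand-in models, not the thesis's radiation solvers: many cheap low-fidelity samples correct the mean of a few expensive high-fidelity samples, reducing estimator variance when the two models are correlated.

```python
import random

# Multi-fidelity (control-variate) Monte Carlo sketch with synthetic
# models: hi(x) is the "expensive" model, lo(x) a correlated cheap
# surrogate; the control-variate coefficient is fixed at 1 for clarity.
random.seed(1)

def hi(x):  return x * x             # expensive model, E[hi] = 1
def lo(x):  return x * x + 0.1 * x   # cheap correlated surrogate

xs_hi = [random.gauss(0, 1) for _ in range(100)]     # few hi samples
xs_lo = [random.gauss(0, 1) for _ in range(10000)]   # many lo samples

mean_lo_many = sum(lo(x) for x in xs_lo) / len(xs_lo)
mean_hi      = sum(hi(x) for x in xs_hi) / len(xs_hi)
mean_lo_few  = sum(lo(x) for x in xs_hi) / len(xs_hi)

# Control-variate estimator of E[hi]:
estimate = mean_hi + (mean_lo_many - mean_lo_few)
```

Because the hi and lo terms share the same few samples, their difference has much smaller variance than `mean_hi` alone; choosing an optimal coefficient and sample allocation across fidelities is what the full multi-fidelity method formalizes.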
 Also online at

Special Collections
Special Collections  Status 

University Archives  Request on-site access
3781 2017 F  In-library use
Online 12. Numerical simulation of viscoelastic particulate flows using the immersed boundary method [electronic resource] [2017]
 Krishnan, Sreenath.
 2017.
 Description
 Book — 1 online resource.
 Summary

There are no comprehensive simulation-based tools for engineering the flows of viscoelastic fluid-particle suspensions in fully three-dimensional geometries. In many engineering applications, such as the oil and gas industry and 3D printing, the need for such a tool is immense. This work describes the development of a high-performance computational approach that targets the three-dimensional, time-dependent flows of viscoelastic suspensions for a variety of rheological models. The simulation tool is based on an immersed boundary (IB) algorithm, which is a simple, scalable, and cost-effective approach to simulating flows around complex, moving, and deforming bodies without requiring the generation of a computational grid that conforms to the fluid flow boundaries at every time instant. Instead, the approach uses a background mesh that covers the domain of interest without the moving bodies and accounts for their effect by modifying the mathematical formulation of the problem, resulting in a two-way coupled simulation in which the flow is resolved at the scale of the particle. Typically in IB methods, the computational grids are chosen to be Cartesian for simplicity. Cartesian grids cannot, however, efficiently represent the complex geometries often encountered in engineering applications. With the objective of developing a highly flexible tool, an unstructured mesh framework is combined with an immersed boundary based viscoelastic solver for moving bodies, in a finite volume setting. This strategy has not been presented before and represents the primary highlight of the work. The generality of the resulting computational tools enables us to span a variety of relevant geometrical configurations and a broad range of rheological models; this, in turn, allows us to establish detailed explanations for various phenomena, with the long-term potential of designing fluid suspensions for different applications.
In this approach, the conservation of mass and momentum equations, which include both Newtonian and non-Newtonian stresses, are solved over the entire domain, including the region occupied by the particles. It is assumed that this region is filled with a fluid whose density equals the particle density. The particle is defined on a separate mesh that is free to move over the underlying grid. The motion of the material inside the particle is constrained to be a rigid body motion by adding a rigidity constraint body force to the momentum equation. We also correct the non-Newtonian stress field to satisfy the isotropy condition inside the particle. Since the grid sizes are such that the lubrication forces are not resolved, we employ a collision model to treat particle-particle and particle-wall interactions. The development of the numerical algorithm and the measures taken to enable efficient parallelization and transfer of information between the underlying fluid grid and the particle mesh are discussed. A number of flows simulated using this method are presented to assess the accuracy and correctness of the algorithm. The ability of the tool to capture the underlying physics and mechanisms of fluid-particle interaction is highlighted in a few examples. Finally, the solver is applied to carry out two large-scale simulations: particulate flow in an asymmetric T-junction and sedimentation of suspensions in a Taylor-Couette cell under the action of orthogonal shear.
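The rigidity constraint described above can be sketched as a projection of the velocity field inside the particle onto the nearest rigid-body motion. The equal-point-mass assumption and the function below are illustrative, not the thesis implementation:

```python
import numpy as np

def rigid_projection(x, u):
    """Project velocities u at points x (both N x 3 arrays) onto the
    closest rigid-body motion U + omega x r, assuming equal point masses.
    A sketch of the rigidity constraint, not the thesis implementation."""
    r = x - x.mean(axis=0)                   # positions about the centroid
    U = u.mean(axis=0)                       # translational velocity
    L = np.cross(r, u - U).sum(axis=0)       # angular momentum about centroid
    I = (r**2).sum() * np.eye(3) - r.T @ r   # inertia tensor (unit masses)
    omega = np.linalg.solve(I, L)            # angular velocity
    return U + np.cross(omega, r)            # constrained rigid velocity field

# The IB forcing inside the particle would then be of the form
# f = (rigid_projection(x, u) - u) / dt, driving u toward rigid motion.
```

A velocity field that is already rigid is left unchanged by this projection, which is the defining property of the constraint.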
 Also online at

Special Collections
Special Collections  Status 

University Archives  Request on-site access
3781 2017 K  In-library use
Online 13. Polynomial chaos and multi-fidelity approximations to efficiently compute the annual energy production in wind farm layout optimization [electronic resource] [2017]
 Padrón, Andrés Santiago.
 2017.
 Description
 Book — 1 online resource.
 Summary

This thesis presents techniques to enable high-fidelity uncertainty quantification and high-fidelity optimization under uncertainty. The techniques developed herein are applied to maximize the Annual Energy Production (AEP) of a wind farm by optimizing the position of the wind turbines. The AEP is the expected power produced by the wind farm over a period of one year, and the wind conditions (e.g., wind direction and wind speed) for the year are described with empirically determined probability distributions. To compute the AEP of the wind farm, a wake model is used to simulate the power for various sets of input conditions (e.g., wind direction and wind speed). We use polynomial chaos (PC), an uncertainty quantification method, to construct a polynomial approximation of the power from these sets of simulations or samples. We explore both regression and quadrature approaches to compute the PC coefficients. PC based on regression is significantly more efficient than the rectangle rule (the method currently used in practice to compute the expected power): PC based on regression achieves the same accuracy as the rectangle rule with only one-tenth of the required simulations, and, for the same number of samples, its estimates are five times more accurate. We propose a multi-fidelity method built on top of polynomial chaos to further improve the efficiency of computing the AEP. There exist multiple wake models of varying fidelity and cost to compute the power (and hence the AEP). Here, we choose the Floris and Jensen models as our high- and low-fidelity models, respectively. Both models are engineering models that can be evaluated in less than one second. The multi-fidelity method creates an approximation to the high-fidelity model and its statistics (such as the AEP) and uses a polynomial chaos expansion that is the combination of a polynomial expansion from a low-fidelity model and a polynomial expansion of a correction function.
The correction function is constructed from the differences between the high-fidelity and low-fidelity simulation results. The multi-fidelity method can estimate the high-fidelity AEP to the same accuracy with only one-half to one-fifth of the high-fidelity model evaluations, depending on the layout of the wind farm. Combining the reduction in the number of simulations obtained from using PC and the multi-fidelity method, we have reduced by more than an order of magnitude the number of simulations required to accurately compute the AEP, thus enabling the use of more expensive, higher-fidelity models in wind farm optimization. Once we can compute the AEP efficiently, we consider the optimization-under-uncertainty problem of maximizing the AEP of a wind farm by changing its layout subject to geometric constraints: wind turbines must stay within a given area and maintain a minimum separation between them. We extend polynomial chaos to obtain the gradient of the statistics (AEP) from the gradients of the power at the simulation samples. With the gradient of the AEP, we can make use of a gradient-based optimizer to efficiently maximize the AEP. The optimization problem has many local maxima that are nearly equivalent. To compare the optimizations between methods (polynomial chaos, rectangle rule), we perform a large suite of optimizations with different initial turbine locations and with different samples and numbers of samples to compute the AEP. The optimizations with PC based on regression result in optimized layouts that produce the same AEP as the optimized layouts found with the rectangle rule but using only one-third of the samples. Furthermore, for the same number of samples, the AEP of the optimal layouts found with PC is 1% higher than the AEP of the layouts found with the rectangle rule. A 1% increase in the AEP for a modern large wind farm can increase its annual revenue by $2 million.
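The regression-based polynomial chaos workflow for expected power can be sketched in a few lines: fit a polynomial surrogate to power samples, then integrate it against the input density. The toy power curve, the uniform wind-direction distribution, and the polynomial degree below are assumptions standing in for the wake models and empirical distributions used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy power curve vs. wind direction -- an assumed stand-in for a
# wake-model evaluation, not the Floris or Jensen models.
def power(theta):
    return 3.0 + np.cos(theta) + 0.5 * np.sin(2.0 * theta)

# Assume a uniform wind-direction distribution on [0, 2*pi) for simplicity.
theta = rng.uniform(0.0, 2.0 * np.pi, 40)
xs = theta / np.pi - 1.0                   # map the samples to [-1, 1]
coeffs = np.polynomial.legendre.legfit(xs, power(theta), deg=8)

# Expected power: integrate the surrogate against the uniform density 1/2
# using Gauss-Legendre quadrature (exact for the polynomial surrogate).
nodes, weights = np.polynomial.legendre.leggauss(32)
vals = np.polynomial.legendre.legval(nodes, coeffs)
expected_power = 0.5 * (weights * vals).sum()
# The analytic expectation of this toy power curve is exactly 3.0.
```

The rectangle rule would instead average power evaluations on an even grid of directions; the surrogate-based estimate typically reaches comparable accuracy with far fewer samples, which is the efficiency gain the thesis quantifies.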
 Also online at

Special Collections
Special Collections  Status 

University Archives  Request on-site access
3781 2017 P  In-library use
 Campos, Alejandro.
 2016.
 Description
 Book — 1 online resource.
 Summary

Structure-based modeling, introduced by Kassinos and Reynolds (1995), uses one-point tensors to provide a more detailed description of the turbulent fluctuations. This thesis describes further developments to two structure-based models, namely the Algebraic Structure-Based Model (ASBM), which belongs to the RANS category, and the Interacting Particle Representation Model (IPRM), which can be interpreted as a PDF model. The ASBM is an engineering model of turbulence for wall-bounded flows. A new variant of the model has been formulated, which exhibits (1) a segregated near-wall correction that leads to a new paradigm for model development and comparison, (2) a set of fully explicit equations that replaces the original formulation and reveals the highly nonlinear nature of the ASBM, and (3) a new coupling with transport equations that improves the accuracy of the model. The original and newer variants of the ASBM are then applied in the simulation of separated flows, so as to obtain a comprehensive assessment of their predictive capabilities. This is followed by a thorough study of the ability of the model to provide well-converged solutions. The IPRM is a stochastic structure-based model of homogeneous turbulence. This thesis documents a new formulation based on an Eulerian reference frame that replaces the original Lagrangian framework and thus avoids the slow convergence and bias of statistical estimators. The derivation of the Eulerian formulation, its solution through radial basis functions, and a comparison against the original solution methods are reported in detail. Taken together, the work performed on both models advances the applicability and understanding of structure-based modeling for turbulent flows.
 Also online at

Special Collections
Special Collections  Status 

University Archives  Request on-site access
3781 2016 C  In-library use
 Kseib, Nicolas S.
 2016.
 Description
 Book — 1 online resource.
 Summary

As computing capabilities grow and the amount of experimental and numerical data increases, computational strategies can be designed to automatically test and assess different modeling assumptions. We introduce a general data-driven statistical framework that bridges the gap between (numerical or laboratory) experimentation, physical modeling, and uncertainty quantification. The framework enables the study of uncertainties and bias in physical models estimated from data. We differentiate between two types of modeling uncertainty and bias: the first due to physical errors in the models, and the second due to noise introduced by the data-acquisition process. We also present different procedures to build models under different noise assumptions and propose a metric to quantify the quality of the data-driven estimations. The framework is tested in the context of combustion science and chemical kinetics, driven by empirical data and simple chemistry models. Why reaction rates? A rigorous application of the statistical framework, combined with recently measured kinetic rate data, will allow us to propose new modeling strategies for chemical reaction rates, their associated uncertainties, and how these uncertainties propagate into relevant combustion problems. This thesis also shows that the current state of the art in reporting kinetic uncertainties relevant to predictive problems in combustion science is incomplete and focuses only on describing the experimental variability. We propose a technique to report uncertainties in a manner useful for scientists interested in studying the predictive capabilities of their numerical simulations in which chemical reaction rates are input parameters. Applications include hydrogen chemistry, explosion limits, and uncertainties in the initial mixture compositions of gaseous mixtures.
To represent actual experiments as closely as possible in our models, we review the process of inferring reaction rates from shock tube devices. Shock tubes are one of the most popular devices used to measure kinetic rates. We closely examine the uncertainties of measurements inside a shock tube: (1) those due to the presence of non-ideal phenomena in the real device (departures from the ideal operation sequence), (2) incomplete knowledge (unknown parameters needed to model the operation of shock tubes), and (3) sensor uncertainties. This framework can be extended to complex predictive problems relevant to turbulence, turbulent combustion, and safety-related applications (e.g., nuclear waste treatment and detonations), and to more complicated reaction rates and larger chemical mechanisms as both raw experimental signals and processed reaction rates become more accessible.
 Also online at

Special Collections
Special Collections  Status 

University Archives  Request on-site access
3781 2016 K  In-library use
Online 16. Direct numerical simulation of electroconvective chaos near an ion-selective membrane [electronic resource] [2016]
 Druzgalski, Clara.
 2016.
 Description
 Book — 1 online resource.
 Summary

Electrokinetic transport plays an essential role in mature industrial applications and emerging technologies such as electrochemical devices, microfluidic chips, and the electrodialysis used for water purification and chemical production. These systems use ion-selective surfaces and applied electric fields to manipulate aqueous electrolytes. In this dissertation we investigate a model system comprising an ion-selective surface and a liquid electrolyte subject to an applied electric field. The applied field creates steep gradients in concentration and charge density, which lead to an electrohydrodynamic instability referred to as electroconvection. At voltages of O(1) V, this instability can lead to chaotic dynamics. We investigate the onset of the instability and transport in the chaotic regime by formulating a specialized parallel numerical algorithm to solve the coupled Poisson-Nernst-Planck and Navier-Stokes equations. We developed a direct numerical simulation code called EKaos that can simulate chaotic electrokinetic phenomena in three dimensions with high resolution. The EKaos code was developed using numerical algorithms designed to efficiently solve the governing equations on parallel platforms. The equations are spatially discretized using a second-order central finite difference scheme on a structured, staggered mesh. Time integration is performed with a specialized iterative procedure that uses physical and analytical insights to develop discrete operators that are easily invertible and converge quickly to second-order temporal accuracy. EKaos efficiently solves 2D and 3D systems using non-dissipative algorithms that capture the high-wavenumber physics critical to accurately simulating chaotic phenomena.
2D and 3D simulations from EKaos reveal interesting similarities between electroconvective chaos and turbulent flows, such as energy spectra with a wide range of spatiotemporal scales, vortices that interact with each other in an irregular manner, and substantial enhancement of transport and mixing. Quantitative analysis of the statistics shows that although 2D and 3D simulations of electroconvective chaos are qualitatively very similar, inclusion of the third dimension is important for the prediction of mean quantities such as concentration, charge density, and current density. We assess the impact of electroconvection on the mean current density and the appearance of instantaneous high-current-density hotspots on the membrane surface. Finally, we introduce ensemble-averaged equations and discuss the relative importance of closed and unclosed terms for reduced-order modeling.
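The second-order central finite difference discretization mentioned above can be illustrated on the simplest piece of the governing system, a one-dimensional Poisson solve. The grid size, manufactured source, and boundary conditions are illustrative choices, not the EKaos setup:

```python
import numpy as np

# Second-order central-difference solve of a 1D Poisson problem,
# phi'' = -rho with phi(0) = phi(1) = 0 -- a minimal sketch of the spatial
# scheme, not the EKaos discretization of the full governing equations.
n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)             # interior grid points
rho = np.pi**2 * np.sin(np.pi * x)         # manufactured source term

# Tridiagonal Laplacian: (phi[i-1] - 2*phi[i] + phi[i+1]) / h^2.
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
phi = np.linalg.solve(A, -rho)

# The exact solution is sin(pi*x); the error converges as O(h^2).
max_err = np.abs(phi - np.sin(np.pi * x)).max()
```

Doubling the number of grid points reduces `max_err` by roughly a factor of four, which is the second-order convergence the scheme is designed for.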
 Also online at

Special Collections
Special Collections  Status 

University Archives  Request on-site access
3781 2016 D  In-library use
Online 17. Polynomial and rational approximation techniques for non-intrusive uncertainty quantification [electronic resource] [2016]
 Ghili, Saman.
 2016.
 Description
 Book — 1 online resource.
 Summary

With the ever-increasing power of computers and the advent of parallel computing in recent decades, scientists and engineers are relying more and more on numerical simulations in their studies of complex physical systems. Given the central role that simulations play in engineering design and decision making, it is crucial to assign confidence to their outputs. One important factor that leads to uncertainty in the output quantity of interest of a physical system is uncertainty in the inputs, such as material properties, manufacturing details, and initial and boundary conditions. Our goal in uncertainty quantification (UQ) is to quantify the effects of these input uncertainties on the output quantity of interest. In other words, we are trying to describe the behavior of this output variable as a function of the input uncertainties. In intrusive UQ strategies, we solve the governing equations of the physical system in terms of both the physical and the uncertain variables. Solving these equations is usually significantly more challenging than solving the original (deterministic) equations, and often requires writing new code that is substantially different from the deterministic solver. In non-intrusive UQ, on the other hand, we run the deterministic code for various values of the input parameters and use the outputs of these simulations to construct an approximation of the behavior of the output quantity of interest as a function of the input uncertainties. Although non-intrusive methods come in many flavors, they are all based on some variation of the following fundamental problem in approximation theory: given the values of a function at a set of points in its domain, how can we efficiently and accurately approximate that function? In this dissertation, we study several variants of this problem.
Sometimes we are free to choose the points at which the function is evaluated (i.e., the values of the input parameters for which we run the deterministic simulation). In this scenario, we need to choose the points so that, for a given number of function evaluations, we obtain the best quality of approximation. As an instance of the problem in this setting, we look at a non-intrusive polynomial chaos expansion (PCE) technique, in which we use weighted least squares to construct a multivariate polynomial surrogate. We present a novel optimization-based method for finding the best points for this type of approximation. We are not always free to choose the grid points, however, and sometimes must find the best approximation we can using a fixed set of points that is given to us. For these problems, in univariate settings, we present an efficient and accurate method based on Floater-Hormann rational interpolation. For multivariate settings, we present a generalization of nearest neighbor interpolation based on L1 minimization. This method has convergence properties similar to those of the moving least squares method but, unlike moving least squares, does not come with any tunable parameters. We also look at a hybrid setting, where some of the points are fixed and we are free to choose the rest. Assume that we have found the polynomial interpolant of a function at a set of Chebyshev points and decide that we need a higher-order polynomial interpolant. We present a method for finding the best points to use for the higher-order interpolant, under the constraint that the previous set of points has to be reused.
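As a concrete illustration of the fixed-points univariate setting, here is a minimal sketch of Floater-Hormann barycentric rational interpolation (Floater & Hormann, 2007). The node set, the blending degree d = 3, and the Runge test function are arbitrary choices for the example, not the dissertation's cases:

```python
import numpy as np

def fh_weights(x, d):
    """Barycentric weights for the Floater-Hormann rational interpolant
    with blending degree d on the nodes x (Floater & Hormann, 2007)."""
    n = len(x) - 1
    w = np.zeros(n + 1)
    for k in range(n + 1):
        for i in range(max(k - d, 0), min(k, n - d) + 1):
            prod = 1.0
            for j in range(i, i + d + 1):
                if j != k:
                    prod /= x[k] - x[j]
            w[k] += (-1.0) ** i * prod
    return w

def fh_eval(xe, x, y, w):
    """Evaluate the barycentric rational interpolant at the points xe."""
    diff = xe[:, None] - x[None, :]
    at_node = np.isclose(diff, 0.0)
    diff[at_node] = 1.0                      # placeholder; fixed below
    c = w / diff
    r = (c @ y) / c.sum(axis=1)
    hit = at_node.any(axis=1)
    r[hit] = y[at_node.argmax(axis=1)[hit]]  # exact values at the nodes
    return r

# Runge's function on equispaced nodes, where high-degree polynomial
# interpolation is notoriously ill-behaved.
x = np.linspace(-5.0, 5.0, 21)
y = 1.0 / (1.0 + x**2)
w = fh_weights(x, d=3)
xe = np.linspace(-5.0, 5.0, 401)
r = fh_eval(xe, x, y, w)
```

Unlike the polynomial interpolant on these equispaced nodes, the rational interpolant stays uniformly accurate across the interval, with no tunable parameters beyond the blending degree.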
 Also online at

Special Collections
Special Collections  Status 

University Archives  Request on-site access
3781 2016 G  In-library use
Online 18. Probability distribution methods for nonlinear transport in heterogeneous porous media [electronic resource] [2016]
 Ibrahima, Fayadhoi.
 2016.
 Description
 Book — 1 online resource.
 Summary

Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are crucial to explore possible scenarios and assess risks in subsurface problems. In particular, nonlinear two-phase flow in porous media is essential, yet challenging, in reservoir simulation and hydrology. Adding highly heterogeneous and uncertain input, such as the permeability and porosity fields, transforms the estimation of the flow response into a difficult stochastic problem for which computationally expensive Monte Carlo simulations remain the preferred option. In this thesis, we first propose an alternative approach to evaluate the probability distribution of the (water) saturation for nonlinear transport in strongly heterogeneous porous systems. We build a physics-based, computationally efficient, and numerically accurate method to estimate the one-point probability density and cumulative distribution functions of the saturation. The distribution method draws inspiration from a Lagrangian approach to the stochastic transport problem and expresses the saturation probability density function and cumulative distribution function essentially in terms of a deterministic nonlinear mapping of scalar random fields. In a large class of applications these random fields are smooth and can be estimated at low computational cost (a few Monte Carlo runs), thus making the distribution method attractive. Once the saturation distribution is determined, any one-point statistics thereof can be obtained, especially the saturation average and standard deviation. More importantly, the probability of rare events and saturation quantiles (e.g., P10, P50, and P90) can be efficiently derived from the distribution method. These statistics can then be used for risk assessment, as well as for data assimilation and uncertainty reduction in the prior knowledge of geophysical input distributions.
We provide various examples and comparisons with existing methods to illustrate the performance and applicability of the newly developed method. In the second part of the thesis, we present a procedure to analytically obtain the multi-point cumulative distribution function of the saturation for the stochastic two-phase Buckley-Leverett model with a random total-velocity field. The multi-point distribution function is determined by first deriving a partial differential equation for the saturation raw cumulative distribution function at each point, and then combining these equations into a single partial differential equation for the multi-point raw cumulative distribution function. This latter stochastic partial differential equation, linear in the space-time variables, can be solved in closed form and semi-analytically for spatially one-dimensional problems, or numerically for higher spatial dimensions. Finally, the ensemble average of its solution gives the saturation multi-point cumulative distribution function. We provide numerical results of distribution function profiles in one spatial dimension and for two points. In addition, we use the two-point distribution method to compute the saturation autocovariance function, which is essential for data assimilation. We confirm the validity of the method by comparing covariance results obtained with the multi-point distribution method and Monte Carlo simulations.
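For reference, the deterministic transport law underlying the stochastic model above is the Buckley-Leverett equation. The form below is the standard textbook statement, with a quadratic-relative-permeability fractional flow shown as an illustrative choice; the thesis's exact normalization may differ:

```latex
\frac{\partial S}{\partial t} + \frac{u_t(x)}{\phi}\,\frac{\partial f(S)}{\partial x} = 0,
\qquad
f(S) = \frac{S^2}{S^2 + M\,(1-S)^2},
```

where S is the water saturation, u_t the (random) total velocity, phi the porosity, f the fractional flow function, and M a mobility (viscosity) ratio. The nonlinearity of f is what makes the saturation distribution a nontrivial mapping of the random input fields.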
 Also online at

Special Collections
Special Collections  Status 

University Archives  Request on-site access
3781 2016 I  In-library use
Online 19. Three-dimensional velocity and concentration measurements of turbulent mixing in discrete hole film cooling flows [electronic resource] [2016]
 Ryan, Kevin J.
 2016.
 Description
 Book — 1 online resource.
 Summary

Magnetic Resonance Velocimetry (MRV) and Magnetic Resonance Concentration (MRC) are used to measure the three-dimensional, three-component, time-averaged velocity and scalar concentration fields of ten different discrete hole film cooling configurations. Seven of these configurations feature variations in the mainstream flow, covering changes in streamwise pressure gradient, incoming boundary layer thickness, and injection wall curvature, as well as a baseline case on a flat wall with nominally zero pressure gradient and moderate boundary layer thickness. All configurations use a single film cooling hole with circular cross-section, inclined at 30 degrees and aligned with the streamwise direction of the mainstream flow. The remaining three configurations have nominal mainstream conditions but include modifications to the film cooling hole to introduce three-dimensional complexity: a skewed film cooling hole, injected at a 30-degree angle to the mainstream flow; an array of three film cooling holes that interact with one another; and a shaped hole with a non-circular cross-section that diffuses into an expanded exit. A separate water channel is constructed for each configuration, and each experiment is operated at a nominal blowing ratio of unity. The penetration of the jet of fluid from the film cooling hole into the mainstream flow, measurable in both the velocity and concentration fields, is sensitive to the thickness of the mainstream boundary layer at the point of injection. Evidence of this effect is seen in both the boundary layer and pressure gradient cases, with mainstream acceleration and deceleration due to the pressure gradients causing thinning and thickening of the boundary layer. Mainstream acceleration also strengthens the counter-rotating vortex pair (CVP), the dominant secondary flow feature in discrete hole film cooling flows.
Increasing the strength of the CVP lengthens the tortuous path of the fluid injected from the film cooling hole, but this effect is partially balanced by the stretching effect of the mainstream acceleration. The distinguishing feature of the skewed hole configuration is the development of a single dominant vortex that remains strong throughout the jet region in the mainstream flow. This single vortex preferentially entrains low-concentration fluid from the mainstream and low-velocity, high-turbulence fluid from the boundary layer into one side of the jet region, causing asymmetric mixing and spread of the jet concentration and velocity contours. Mixing of low-concentration fluid under the jet decreases the film cooling performance of the skewed jet compared to the unskewed baseline geometry. The multi-hole experiment, having an array of three holes, is oriented with one central downstream hole and two flanking holes upstream. The upstream holes are offset 2D on either side of the center hole and located 3.07D upstream. Flow downstream of the holes is characterized by a CVP triplet, with one CVP emanating from each hole. The flow between the CVPs is a strong common downflow that brings jet fluid toward the bottom wall. This downward flow produces an increase in film cooling performance for the multi-hole case over the single hole. Superposition of concentration from the upstream and center holes produces a further increase in film cooling performance. The shaped hole differentiates itself from the other nine configurations tested in that the flow out of the hole does not initiate the formation of any strong secondary flows in the mainstream channel. The strong laid-back, fan-shaped expansion of the exit (12 degrees in both the streamwise and lateral directions) reduces the momentum of the fluid exiting the hole, such that its effect on the mainstream flow is negligible.
As such, the jet fluid from the hole remains close to the wall after injection, significantly increasing the film cooling performance relative to non-shaped holes with circular cross-section. Finally, a high-fidelity large eddy simulation (LES) of the skewed hole case is used to evaluate several common models for turbulent scalar mixing. The Gradient Diffusion Hypothesis (GDH), Generalized Gradient Diffusion Hypothesis (GGDH), and Higher Order Generalized Gradient Diffusion Hypothesis (HOGGDH) are compared based on their abilities to capture the correct anisotropy of the turbulent scalar flux vector, as well as the influence of their modeling errors on the final concentration field. While the anisotropic GGDH and HOGGDH show improvements over the isotropic GDH in the near-injection region, further downstream the GDH better captures the concentration distribution at the wall.
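The difference between the isotropic and anisotropic closures compared above can be made concrete in a few lines. The Reynolds stresses, gradient, and model constants below are illustrative numbers, and the GGDH form shown is the standard Daly-Harlow expression rather than the thesis's LES-calibrated data:

```python
import numpy as np

# Reynolds stress tensor <u_i' u_j'> and mean concentration gradient:
# illustrative numbers, not LES data from the thesis.
uu = np.array([[ 2.0, -0.8, 0.0],
               [-0.8,  1.0, 0.0],
               [ 0.0,  0.0, 0.5]])
grad_c = np.array([0.0, 1.0, 0.0])

k = 0.5 * np.trace(uu)              # turbulent kinetic energy
eps = 1.0                           # dissipation rate (assumed)
nu_t = 0.09 * k**2 / eps            # eddy viscosity, k-epsilon style
pr_t, c_s = 0.9, 0.3                # commonly quoted model constants

# GDH: isotropic diffusivity, flux anti-parallel to the mean gradient.
flux_gdh = -(nu_t / pr_t) * grad_c

# GGDH (Daly-Harlow): the Reynolds stress tensor rotates the flux away
# from the gradient direction, capturing anisotropy.
flux_ggdh = -c_s * (k / eps) * (uu @ grad_c)
```

With a wall-normal gradient and nonzero shear stress, the GGDH flux acquires a streamwise component that the GDH cannot represent, which is exactly the anisotropy the comparison in the abstract targets.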
 Also online at

Special Collections
Special Collections  Status 

University Archives  Request on-site access
3781 2016 R  In-library use
Online 20. Analysis and design of shrouded turbines [electronic resource] [2015]
 Aranake, Aniket C.
 2015.
 Description
 Book — 1 online resource.
 Summary

A shrouded wind turbine consists of a rotor that is enclosed within an aerodynamically shaped flow-accelerating device. This work aims to improve the theoretical knowledge of shrouded turbines and to establish a better understanding of the underlying aerodynamics. A cost-effective tool is developed and utilized to improve the design of shrouded turbines. Specifically, the aerodynamics of such a system is investigated using a range of modeling fidelities, and a technique for rapid design is also developed and used to demonstrate considerable gains in power extraction.
 Also online at

Special Collections
Special Collections  Status 

University Archives  Request on-site access
3781 2015 A  In-library use