# %{search_type} search results

## 260,864 catalog results


### 1. Genomic and mechanistic interrogation of novel genes and gene signatures in non-small cell lung cancer [electronic resource][2018]

Book
1 online resource.
Lung cancer is the leading cause of cancer deaths worldwide, with 1.7 million deaths per year. Lung cancer comprises several histological subtypes, with non-small cell lung cancer (NSCLC) making up ~85% of cases. Presently, the most important prognostic factor in NSCLC is the stage of disease. However, the vast majority of patients have locally advanced or metastatic disease at the time of diagnosis. Despite current therapies consisting of chemotherapy, radiation, and surgery, patients with locally advanced NSCLC have heterogeneous outcomes. Recently, the advent of targeted therapies against specific genetic mutations or rearrangements has improved response rates and overall survival, but there remains an unmet need to uncover novel genes involved in this disease. We have applied bioinformatics approaches to better elucidate the role of various genes or gene signatures in non-small cell lung cancer in several capacities: 1) to uncover gene expression signatures to improve risk stratification and outcome prediction in those diagnosed with early-stage NSCLC; 2) to identify de novo genomic fusions and breakpoints from targeted paired-end DNA sequencing data; and 3) to elucidate the function of a novel splicing factor mutation in non-small cell lung cancer. In the first application, we obtained gene expression profiles from 1106 non-squamous NSCLCs for the generation and internal validation of a 9-gene molecular prognostic index (MPI). This was validated on an independent cohort of FFPE tissues from 98 NSCLC patients. As a second application, we demonstrate a practical and robust identification method for DNA rearrangements resulting in gene fusions through the development of a Fusion And Chromosomal Translocation Enumeration and Recovery Algorithm (FACTERA). Some of these recurrent fusions, involving ALK, ROS1, RET, and NTRK1, have been identified in NSCLC, leading to the development and approval of targeted therapies.
Lastly, the third application identifies a novel recurrent splicing factor mutation in non-small cell lung cancer. We have characterized the binding and splicing properties of this splicing factor and the functional changes associated with the mutation. The approaches described here can be applied to uncover other gene or gene expression signatures in NSCLC and can be expanded to other cancer indications.
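The fusion-discovery idea in the second application can be illustrated with a toy sketch. FACTERA itself operates on aligned sequencing data and uses soft-clipped reads to recover exact breakpoints; the minimal version below only counts discordant read pairs whose mates map to different genes. The gene names and the `min_support` threshold are illustrative assumptions, not values from the dissertation.

```python
from collections import Counter

def candidate_fusions(read_pairs, min_support=2):
    """Group discordant read pairs (mates mapping to different genes)
    and report gene pairs with at least min_support supporting reads.

    read_pairs: iterable of (gene_of_read1, gene_of_read2) tuples.
    Returns a dict mapping a sorted gene pair to its read support.
    """
    support = Counter()
    for g1, g2 in read_pairs:
        if g1 != g2:  # discordant: the two mates hit different genes
            support[tuple(sorted((g1, g2)))] += 1
    return {pair: n for pair, n in support.items() if n >= min_support}

# Hypothetical read pairs; EML4-ALK is a known NSCLC fusion.
pairs = [("EML4", "ALK"), ("ALK", "EML4"), ("KIF5B", "RET"),
         ("TP53", "TP53"), ("EML4", "ALK")]
print(candidate_fusions(pairs))
```

A real pipeline would follow this enumeration step with breakpoint refinement against the reference genome; here the sketch stops at candidate gene pairs.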

### 2. The role of pleiotrophin in glioma invasion of the subventricular zone [electronic resource][2018]

Book
1 online resource.
The lateral ventricle subventricular zone (SVZ) is a frequent site of high-grade glioma (HGG) spread, and tumor invasion of the SVZ is a predictor of a worse clinical prognosis. A range of HGG types invade the SVZ stem cell niche, including both adult glioblastoma and pediatric high-grade gliomas such as diffuse intrinsic pontine glioma (DIPG). The cellular and molecular mechanisms mediating this frequent invasion of the SVZ are poorly understood. Here, we demonstrate that neural precursor cell (NPC):glioma cell communication underpins the propensity of glioma to colonize the SVZ. SVZ NPCs secrete chemoattractant signals toward which glioma cells home. Biochemical and proteomic analyses of factors secreted by SVZ NPCs revealed a discrete list of candidate proteins. Necessity and sufficiency testing implicated the neurite outgrowth-promoting factor pleiotrophin, along with three required binding partners (secreted protein acidic and rich in cysteine (SPARC), SPARC-like protein 1, and heat shock protein 90B), as key mediators of this chemoattractant effect. Pleiotrophin protein expression is strongly enriched in the SVZ, and knockdown of pleiotrophin expression starkly reduced glioma invasion of the SVZ in the adult murine brain. Pleiotrophin, in complex with the three binding partners, activated the Rho/Rho kinase pathway in DIPG cells, and inhibition of Rho kinase resulted in decreased DIPG invasion toward SVZ neural precursor cell-secreted factors. These findings demonstrate a pathogenic role for neural precursor cell:glioma interactions and potential therapeutic targets to limit glioma invasion.

### 3. 3D Printing of Bioengineered Functional Cardiac Tissues[2017]Online

Collection
Heart disease is the world’s leading cause of mortality, accounting for over 7.4 million deaths each year. Consequently, there has been immense interest in the medical and scientific community to develop technologies to help regenerate injured hearts. 3D printing of biological materials (i.e., 3D bioprinting) is an innovative technology that may be applied to help us understand and treat cardiac disease. In my thesis work, I utilized 3D bioprinting to create a viable and functional artificial cardiac construct. Human umbilical vein endothelial cells (HUVECs) and human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) were encapsulated in 10% gelatin methacrylate (GelMA) hydrogels and subsequently bioprinted into 3D tissue constructs. Analysis of the printed tissues over time demonstrated cell viability as assessed by their metabolic activity in culture. Cell migration and proliferation were observed in the printed HUVEC tissues 7 days post printing and beating was observed in the hiPSC-CM tissues 3 days post printing. In addition, I demonstrated the endothelialization of acellular printed scaffolds using a perfusion bioreactor system. In the future, I plan to utilize the 3D tissue printing and bioreactor platform for in vitro applications such as personalized drug cardiotoxicity screening and in vivo applications such as generating a cardiac patch for transplantation into an animal model of myocardial infarction.

### 5. "A Class on Whiteness?" Investigating the Need for Anti-Racist Critical Whiteness Studies at the University Level[2017]Online

Collection
Despite the emergence of formal “critical whiteness studies” in academia over the last twenty years, there currently remains a dearth of research on the teaching of whiteness studies. Additionally, there remains a lack of formal engagement with the topics of race and whiteness, especially by white students in universities across the United States. This paper investigates the potential for implementing anti-racist critical whiteness studies at Stanford University, seeking to understand how the field of whiteness studies is taught on college campuses and what Stanford students think about studying race and whiteness. The paper analyzes results from a survey of 200 Stanford students that aimed to collect student reactions to studying race and whiteness, as well as students’ impressions of their own qualification and comfort in discussing race with friends and family. Results indicate a strong desire among the Stanford student body to study race, and provide an argument for Stanford University to establish critical whiteness studies within a framework of anti-racism education. The paper proceeds to analyze eighteen syllabi for classes on whiteness, concluding that while there are many ways to teach about whiteness, there is a distinct and common approach which uses high-level concepts like race as a social construction with intersectional implications and a universalized understanding of whiteness to promote the visibility of whiteness to students in the class. There remain significant tensions around analyzing racism without centering or privileging whiteness. However, taken together, Stanford students’ endorsement of race and whiteness studies and the sample syllabi’s formal structure for teaching about anti-racist critical whiteness provide a case for Stanford University to implement such a curriculum.

### 6. A New Journey to the West: The Shanghai Cooperation Organization and Chinese Foreign Policy[2017]Online

Collection
Center for International Security and Cooperation (CISAC) Interschool Honors Program in International Security Studies
Existing literature has attempted to define the Shanghai Cooperation Organization and determine its role in both Chinese and Russian foreign policy. Unfortunately, misunderstandings of the SCO as a security institution mean that efforts to situate the SCO within foreign policies are mistaken. Some observers have described the SCO as an alliance against the United States and other external forces. Others consider the SCO to be a security management institution balancing Chinese and Russian interests in Central Asia. In reality, the SCO does not fit neatly into any categories. Over the course of its history first as the Shanghai Five mechanism and then as the Shanghai Cooperation Organization, the developing institution has attempted to perform the functions of both an alliance and a security management institution. However, the lack of institutionalization, whether purposeful or due to competition between member-states, has prevented the SCO from developing into either option. At the same time, Russia’s obstruction of Chinese interests through the SCO has led China to pursue the Belt and Road Initiative as an unconstrained exercise of its power. This raises the possibility of an increasingly competitive, zero-sum “New Great Game” over Central Asia between China and Russia. The relative power disparity between China and Russia puts this competition in Beijing’s favor, but the exercise of power unrestricted by legitimate institutions may well provoke opposition from regional partners. A stable regional order that preserves China’s place in the sun will ultimately require Beijing to accept limited constraints and create legitimate institutions.

### 7. A Qualitative Study of Physicians' Various Uses of Biomedical Research[2017]Online

Collection
Graduate School of Education Open Archive
Objective: To investigate the nature of physicians’ use of research evidence in experimental conditions of open access, to inform training and policy. Design: This qualitative study was a component of a larger mixed-methods initiative that provided 336 physicians with relatively complete access to research literature via PubMed and UpToDate for one year via an online portal, with their usage recorded in web logs. Using a semi-structured interview protocol, a subset of 38 physician participants were interviewed about their use of research articles in general and were probed about their reasons for accessing specific articles as identified through their web logs. Transcripts were analyzed using a general inductive approach. Setting: Physician participants were recruited from and registered in the United States (U.S.). Participants: Thirty-eight physicians from 16 U.S. states, engaged in 22 medical specialties, possessing more than one year of experience post-residency training participated. Results: Twenty-six participants attested to the value of consulting research literature within the context of the study by making reference to their roles as clinicians, educators, researchers, learners, administrators, and advocates. The physicians reported previously encountering what they experienced as a prohibitive paywall barrier to the research literature and other frustrations with the nature of information systems, such as the need for passwords.
Conclusions: The findings, against the backdrop of growing open access to biomedical research, indicate that a minority of physicians, at least initially, is likely to seek out and use research, and to do so in a variety of common roles. Physicians’ use of research in these roles has not traditionally been part of their training nor part of the considerations for open access policies. The findings have implications for educational and policy initiatives directed toward increasing the effectiveness of this access to and use of research in improving the quality of health care.

### 8. A second chance at life: arts programs for male juveniles changing lives[2017]Online

Collection
Masters Theses in Journalism, Department of Communication, Stanford University
A story about how arts in rehabilitative programs can change the lives of juveniles.

### 9. Academic Achievement in At-Risk Elementary School Boys: An Evaluation of Charter School Performance[2017]Online

Collection
This paper examines the impact of an elementary charter school that targets low-income, black and Hispanic boys in Florida. Using student-level demographic data, charter school students were matched with traditional public school students that shared the same demographic characteristics and were within close range of the starting test score in third grade. The results showed the following impact on the difference in scores from third to fourth grade: for math, attending the charter school is associated with testing .72 standard deviations lower than students in the traditional public school; for reading, the charter school is associated with testing .19 standard deviations lower on the test. Both of these results are statistically significant. This model can be used in different scenarios in order to understand where certain demographics of students are succeeding, and can thus lead to further causal research to identify and disseminate success tactics and pedagogies throughout the school system.
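The matched-comparison effect size described above can be sketched as follows. The helper name, the sample scores, and the pooled-standard-deviation standardization are illustrative assumptions, not the paper's actual estimator or data.

```python
import statistics

def standardized_gain_gap(charter, public):
    """Difference in mean third-to-fourth-grade score gains between
    matched charter and public school students, in pooled-SD units.

    charter, public: lists of (grade3_score, grade4_score) per student.
    """
    gains_c = [g4 - g3 for g3, g4 in charter]
    gains_p = [g4 - g3 for g3, g4 in public]
    pooled_sd = statistics.stdev(gains_c + gains_p)
    return (statistics.mean(gains_c) - statistics.mean(gains_p)) / pooled_sd

# Hypothetical matched pairs of (grade-3 score, grade-4 score).
charter = [(200, 205), (210, 212), (190, 196)]
public = [(200, 215), (210, 222), (190, 203)]
print(standardized_gain_gap(charter, public))  # negative: charter gains lag
```

A negative value, as in the paper's findings, indicates that charter students' score gains trailed those of their matched public school counterparts.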

### 10. Actively controlled metasurfaces for light manipulation and control [electronic resource][2017]

Book
1 online resource.
Metasurface optics have been developed for a wide variety of optical applications that compete with as well as go beyond the functionality of conventional optics. One upcoming challenge for future metasurfaces is the development of individually addressable optical properties that can be actively changed with controllable input stimuli. In this work, we will explore how electro-mechanics can be used to couple a mechanical degree of freedom to an optical response. We will see how the localized optical resonances within a silicon nanowire placed near a mirror can be leveraged as a building block for engineering an active metasurface. The tuning of the amplitude of scattered light will enable color-tunable active metasurfaces. By designing a structure with a modifiable phase gradient, both light steering and focusing can be controlled by an applied electrical signal. In addition to covering mechanically controlled active metasurfaces, we will also show that it is possible to use passive metasurfaces to gain real-time three dimensional information in a volume and that in certain cases, active metasurface optics are not the only solution to improve optical functionality.
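The light-steering behavior of a phase-gradient metasurface can be illustrated with the standard generalized law of refraction, sin θ_out = sin θ_in + (λ/2π)(dφ/dx). The wavelength, phase step, and element pitch below are assumed example values, not parameters from this work.

```python
import math

def steering_angle_deg(wavelength_m, phase_step_rad, element_pitch_m,
                       theta_in_deg=0.0):
    """Output angle (degrees) for a metasurface imposing a linear phase
    gradient dphi/dx, from the generalized law of refraction."""
    dphi_dx = phase_step_rad / element_pitch_m
    s = math.sin(math.radians(theta_in_deg)) \
        + wavelength_m / (2 * math.pi) * dphi_dx
    if abs(s) > 1:
        return None  # evanescent: no propagating steered beam
    return math.degrees(math.asin(s))

# e.g. 633 nm light, pi/4 phase step between elements 400 nm apart
print(steering_angle_deg(633e-9, math.pi / 4, 400e-9))
```

In an active metasurface of the kind described above, changing the phase step electrically re-points the beam; the `None` branch marks gradients too steep to support a propagating diffracted order.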

### 11. An activity and flow-based construction model for managing on-site work [electronic resource][2017]

Book
1 online resource.
Construction field managers often struggle to keep projects on schedule, resulting in time and cost overruns. Schedule conformance depends on the activities starting and finishing on time. However, activities are often delayed because the flows necessary to start their execution are unavailable. These flows can be classified into seven types: labor, equipment, workspace, materials, precedence, information, and external flows (Koskela 1999). I tracked a total of 5,843 flows in this research, all of which fell into one of these seven categories. Flows released from upstream activities become inputs to downstream activities. Therefore, delays in upstream activities hinder the timely release of flows, which can cause delays in downstream activities depending on those flows. To manage the flows, field managers need to know the flows' source, their status, and their readiness likelihood. Current construction models do not formally represent, measure, and track all the flow types. Hence, field managers lack formal methods for tracking the flows' status and estimating their readiness likelihood. Instead, they rely on their intuition and experience to manage the flows. This dissertation presents an activity and flow-based construction model, called the Activity-Flow Model (AFM). The AFM allows field managers to proactively manage the on-site work by allowing them to formally represent, measure, and track the construction activities and flows. The AFM consists of an ontology that defines the representation of the activities, the flows, and their interactions; the planning and control methods that enable the AFM's implementation on site; and the predictive models that help anticipate variations in downstream activities. The AFM was developed based on literature, field observations, and feedback from field managers. 
The AFM was validated prospectively for a total of 26 weeks through its implementation on three building projects that were in different phases (foundations, core and shell, and finishing), locations (Bogota, Copenhagen, and Lima), and used different planning and control methods (master schedule and weekly planning, Last Planner System, and Location-based Management System). The AFM was able to represent all the activities (1,645) and flows (5,843) in the test projects, track their variations, and quantify their variability. The planning and control methods enabled field managers to proactively manage the projects taking both the activities and flows into account. The predictive models supported by the AFM allowed field managers to anticipate variations in downstream activities and outperformed the predictive models supported by the Resource-constrained Critical Path Method (RCPM) (Fondahl 1961) and Location-based Management System (LBMS) (Kenley and Seppänen 2009) representations. The field managers used the analytics of the activities' and flows' performance record to allocate resources, size buffers, and modify the look-ahead schedule. Hence, the AFM can help field managers improve flow readiness, reduce activity delays, and improve schedule conformance.
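One way to picture a representation of activities and their seven flow types is the following minimal sketch. The class names and fields are hypothetical illustrations, not the AFM ontology itself.

```python
from dataclasses import dataclass, field

# The seven flow types identified in the dissertation (after Koskela 1999).
FLOW_TYPES = {"labor", "equipment", "workspace", "materials",
              "precedence", "information", "external"}

@dataclass
class Flow:
    name: str
    flow_type: str
    ready: bool = False

    def __post_init__(self):
        if self.flow_type not in FLOW_TYPES:
            raise ValueError(f"unknown flow type: {self.flow_type}")

@dataclass
class Activity:
    name: str
    required_flows: list = field(default_factory=list)

    def can_start(self):
        """An activity may start only when every input flow is ready."""
        return all(f.ready for f in self.required_flows)

    def blocking_flows(self):
        """Names of the flows currently delaying this activity."""
        return [f.name for f in self.required_flows if not f.ready]

pour = Activity("pour slab", [Flow("crew A", "labor", ready=True),
                              Flow("rebar delivery", "materials")])
print(pour.blocking_flows())  # the materials flow is not yet ready
```

A field manager's look-ahead check then reduces to querying `blocking_flows()` over upcoming activities, mirroring the proactive tracking the AFM formalizes.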

### 12. Additive manufacturing for musical applications [electronic resource][2017]

Book
1 online resource.
Additive manufacturing (AM) is the automated layer-wise fabrication of 3D objects directly from geometrical computer models. In the music technology lab, AM affords the rapid testing of enclosures and components for both sound capturing and sound producing instruments. These instruments, typically outsourced to manufacturing hubs, are now produced near the site of musical experimentation through desktop fabrication. Parametric modeling and rapid fabrication with AM accelerate the design cycle for the production of instruments for music research. AM alters the site of manufacturing, the duration between the digital sketch and its materialization, and the material constitution of the instruments we might deploy; these shifts have consequences for what gets made and in turn alter artistic practices that integrate such tool-making tools. To illustrate these affordances of AM, I describe a suite of instruments within a taxonomy of usage categories that move from reproduction of known instruments, to the augmentation of found ones and finally a phase of invention. Invention arises not simply from putting such machines in the service of novel ideas but rather from carefully examining the material outcomes of the layering process itself. The term AM is used in place of numerous alternatives to emphasize this anisotropic grain of the printed object. Although the additive method announces a powerful flexibility in the shapes it sheds, its performance in various acoustic and electroacoustic scenarios depends in part on this grain. The practice, therefore, navigates trade-offs between malleable fabrication methods for organizing material and the quality of the resulting forms for organizing sound.

### 13. Addressing the DNA deluge [electronic resource] : algorithmic methods for large-scale genome analysis[2017]

Book
1 online resource.

### 14. Adhesive and cohesive properties of the biofilm matrix [electronic resource][2017]

Book
1 online resource.

### 15. Adic moduli spaces [electronic resource][2017]

Book
1 online resource.
We prove a version of Artin's criteria for representability of moduli functors in the setting of non-archimedean analytic geometry in characteristic zero, and deduce representability of the Picard functor under reasonable hypotheses.

### 16. Advanced techniques for closed-loop reservoir optimization under uncertainty [electronic resource][2017]

Book
1 online resource.
In this work, we introduce and apply several new techniques for oil/gas reservoir optimization under uncertainty. As the first contribution, we develop a general methodology for optimal closed-loop field development (CLFD) under geological uncertainty. CLFD involves three major steps: optimizing the field development plan based on current geological knowledge, drilling new wells and collecting hard (well) data and production data, and updating multiple geological models based on all of the available data. In the optimization step, the number, type, locations and controls for new wells (and future controls for existing wells) are optimized using a hybrid Particle Swarm Optimization -- Mesh Adaptive Direct Search algorithm. The objective in the examples presented is to maximize expected (over multiple realizations) net present value (NPV) of the overall project. History matching is accomplished using an adjoint-gradient-based randomized maximum likelihood (RML) procedure. Different treatments are presented for history matching Gaussian and channelized models. Because the CLFD history matching component is fast relative to the optimization component, we generate a relatively large number of history matched models. Optimization is then performed using a representative subset of these realizations. We introduce a systematic optimization with sample validation (OSV) procedure, in which the number of realizations used for optimization is increased if a validation criterion is not satisfied. The CLFD methodology is applied to two- and three-dimensional example cases. Results show that the use of CLFD increases the NPV for the 'true' (synthetic) model by 10%--70% relative to that achieved by optimizing over a large number of prior realizations. The CLFD framework includes several components, and different approaches for history matching, optimization, model selection and economic evaluation can be applied.
In our second contribution, we address the problem of selecting a subset of representative geological realizations from a large set. Towards this goal, we introduce a general framework, based on clustering, for selecting a representative subset of realizations for use in simulations involving 'new' sets of decision parameters. Prior to clustering, each realization is represented by a low-dimensional feature vector that contains a combination of permeability-based and flow-based quantities. Calculation of flow-based features requires the specification of a (base) flow problem and simulation over the full set of realizations. Permeability information is captured concisely through use of principal component analysis. By computing the difference between the flow response for the subset and the full set, we quantify the performance of various realization-selection methods. The impact of different weightings for flow and permeability information in the cluster-based selection procedure is assessed for a range of examples involving different types of decision parameters. These decision parameters are generated either randomly, in a manner that is consistent with the solutions proposed in global stochastic optimization procedures such as GA and PSO, or through perturbation around a base case, consistent with the solutions considered in pattern search optimization. We find that flow-based clustering is preferable for problems involving new well settings (e.g., time-varying well bottom-hole pressures) or small changes in well configuration, while both permeability-based and flow-based clustering provide similar results for (new) random multiwell configurations. We also investigate the use of efficient tracer-type simulations for obtaining flow-based features, and demonstrate that this treatment performs nearly as well as full-physics simulations for the cases considered.
The various procedures are applied to select realizations for use in production optimization under uncertainty, which greatly accelerates the optimization computations. Optimization performance is shown to be consistent with the realization-selection results for cases involving new decision parameters. In the third contribution, we introduce a methodology for the joint optimization of economic project life and well controls. We present a nested formulation for this joint optimization problem where we maximize NPV, subject to the constraint that the rate of return of operations is greater than the minimum attractive rate of return (MARR) or hurdle rate. The methodology provides the optimal project life and the optimal well controls such that the maximum NPV is obtained at the end of the project life, and the rate of return of the project is essentially equal to MARR. Application of this procedure enables avoiding situations where NPV increases slowly in time, but the benefit relative to the capital employed is extremely low. We demonstrate the successful application of this treatment for production optimization for two- and three-dimensional reservoir models.
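The clustering-based realization selection in the second contribution can be sketched as a medoid-style k-means over feature vectors: cluster the realizations, then keep the realization closest to each centroid. The implementation below is a generic illustration on toy 2-D features, not the dissertation's combined permeability/flow feature pipeline.

```python
import random

def kmeans_select(features, k, iters=20, seed=0):
    """Pick k representative realizations: run plain k-means on the
    feature vectors, then return the index of the realization nearest
    each centroid (a medoid-style selection)."""
    rng = random.Random(seed)
    centroids = rng.sample(features, k)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for f in features:
            clusters[min(range(k), key=lambda c: dist2(f, centroids[c]))].append(f)
        for c, members in enumerate(clusters):
            if members:  # recompute centroid as the cluster mean
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]

    chosen = [min(range(len(features)), key=lambda i: dist2(features[i], centroids[c]))
              for c in range(k)]
    return sorted(set(chosen))

# Two well-separated groups of hypothetical feature vectors:
features = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]]
print(kmeans_select(features, 2))  # one representative from each group
```

In the dissertation's setting the feature vectors would mix PCA-compressed permeability information with flow responses, and the chosen subset would stand in for the full realization ensemble during optimization.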

### 17. Advances in morphological and quantitative musculoskeletal MRI [electronic resource][2017]

Book
1 online resource.
Over 1.5 million knee MRI scans are performed in the US annually for reasons ranging from acute knee injuries to researching diseases such as osteoarthritis. MRI can offer high-resolution imaging with excellent soft-tissue contrast and is often a tool of choice for interrogating various pathologies in the musculoskeletal system. More recently, there has been a large emphasis on utilizing MRI in order to generate imaging-based biomarkers to track spatial and longitudinal changes in tissues. Using such biomarkers has potential for understanding and describing the pathophysiology of complex diseases such as osteoarthritis. Despite the excellent image quality and potential biomarkers of disease activity that MRI can generate, it is still challenging to acquire all these features in rapid MRI protocols. High-resolution sequences are usually needed to evaluate fine structures typically observed in musculoskeletal systems. High signal from tissues is usually needed to accurately characterize quantitative biomarkers. At the same time, there is a growing need for rapid imaging methods in order to maximize patient throughput while minimizing patient discomfort. In MRI however, signal, resolution, and imaging time are properties of scans that are challenging to optimize simultaneously. In this work, I will describe how the already-available double-echo steady-state (DESS) sequence was optimized to generate quantitative morphological and biochemical biomarkers for cartilage and meniscus, in only a 5-minute acquisition. Conventional imaging methods used in large clinical studies typically require around 25 minutes of imaging time to generate such biomarkers. The validity of these biomarkers was compared to time-consuming methods and the reliability was evaluated using repeated acquisitions. Both the accuracy and the precision of this rapid DESS method were high enough for it to be confidently used in clinical studies to track small longitudinal changes.
This thesis also describes how the same DESS sequence can be used for routine clinical knee MRI. The contrasts and the resolution that the DESS sequence offers can be used to diagnose internal knee derangement and its corresponding signs in tissues such as the cartilage, meniscus, tendon, ligaments, bone, and synovium. In addition to the morphological image contrasts, this study also probed the utility of having automatic quantitative T2 measurements available during diagnostic review. The accuracy of the 5-minute DESS method was compared against that of the routine knee MRI protocol, with very promising results for DESS. This thesis also studied the diagnostic accuracy of pairing one sequence from the conventional imaging protocol with DESS, in order to create an abridged two-sequence protocol. Such protocols have great potential for transitioning from 30+ minute clinical protocols to protocols that last 5-7 minutes. While the 5-minute DESS sequence was able to interrogate several musculoskeletal tissues, there are still several tissues, known as short-T2 tissues, that generate very minimal signal using conventional Cartesian MRI sequences. To image such tissues and quantify some of their underlying properties, we developed the Ultrashort Echo Time DESS (UTEDESS) sequence. This method permits imaging with a very high signal to noise ratio. This high signal with UTEDESS can be used to perform morphological and quantitative imaging of the menisci, tendons, and ligaments - all tissues that are very challenging to image with routine sequences. UTEDESS also provides the ability to image with isotropic resolutions in short scan times so that the images can be retrospectively re-sampled in arbitrary planes in order to maximize the diagnostic efficiency of the method.
Overall, the advances described in this thesis have the potential to accelerate the current paradigm of musculoskeletal imaging methods while generating additional data that could be used in both diagnostic and research settings.
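As a simplified illustration of the quantitative T2 mapping mentioned above, a mono-exponential two-echo estimate is shown below. Note that actual DESS T2 estimation uses a steady-state signal model that also depends on flip angle and T1, so this is only the textbook spin-echo relation, not the method used in the thesis.

```python
import math

def t2_two_echo(s1, s2, te1_ms, te2_ms):
    """Mono-exponential T2 from two echo amplitudes:
    S(TE) = S0 * exp(-TE / T2)  =>  T2 = (TE2 - TE1) / ln(S1 / S2)."""
    return (te2_ms - te1_ms) / math.log(s1 / s2)

# Synthetic tissue with T2 = 40 ms sampled at TE = 10 ms and 50 ms.
s0, t2 = 100.0, 40.0
s1 = s0 * math.exp(-10 / t2)
s2 = s0 * math.exp(-50 / t2)
print(t2_two_echo(s1, s2, 10, 50))  # recovers the assumed 40 ms
```

Applied voxel-wise, such an estimate yields the T2 maps that would accompany morphological images during diagnostic review.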

### 18. Advancing diffusion-weighted magnetic resonance imaging methods for neuronal fiber mapping [electronic resource][2017]

Book
1 online resource.
Mapping the complex structural connectivity of the human brain in vivo is essential for understanding healthy brain function and the fundamental basis of many neurological and psychiatric disorders. Diffusion-weighted magnetic resonance imaging (MRI) measures the diffusion pattern of water molecules to infer the underlying tissue microstructure. Coupled with fiber tracking techniques, diffusion-weighted MRI has become a widely utilized non-invasive method for mapping neuronal fiber pathways. Nonetheless, it is challenging to accurately model the diffusion pattern in tissue, while a model-free approach requires a lengthy acquisition. Further, the reconstructed fiber model requires rigorous validation for clinical translation. This dissertation addresses these challenges in the course of three projects. First, a thorough analysis of the effects of q-space truncation and sampling on the water molecule displacement ensemble average propagator (EAP) in the model-free q-space imaging (QSI) framework is performed. This study clarifies guidelines for acquiring and reconstructing Cartesian QSI data such that aliasing is prevented in the EAP and Gibbs ringing is minimized in the estimated fiber orientations. To increase QSI's applicability to different types of data, an intuitive and practical QSI reconstruction framework for obtaining the EAP and fiber orientations from multi-shell q-space samples is proposed. Finally, a retrospective study is conducted to assess the validity and efficacy of diffusion-weighted MRI fiber tracking-based targeting for transcranial MRI-guided focused ultrasound treatment of essential tremor. The studies presented in this dissertation advanced neuronal fiber mapping approaches for diffusion-weighted MRI.
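The q-space sampling guidelines discussed in the first project follow from standard Fourier reciprocity between q-space and the displacement domain. The sketch below computes a 1D Cartesian sampling scheme from a desired displacement field of view and resolution; the function name and units are illustrative, and this omits the dissertation's full truncation analysis.

```python
def qspace_sampling(r_fov_um, r_res_um):
    """1D Cartesian q-space sampling from Fourier reciprocity:
    the q step sets the displacement field of view (dq = 1/FOV_r,
    preventing aliasing of the EAP), and the maximum q sets the
    displacement resolution (q_max = 1/(2*dr), limiting truncation
    and the resulting Gibbs ringing).  q in 1/um, displacements in um.
    Returns (dq, q_max, n_samples)."""
    dq = 1.0 / r_fov_um
    q_max = 1.0 / (2.0 * r_res_um)
    n = round(2 * q_max / dq) + 1  # samples spanning -q_max .. +q_max
    return dq, q_max, n

# e.g. resolve 2 um displacements over a 40 um displacement field of view
print(qspace_sampling(40.0, 2.0))
```

Tightening the displacement resolution grows `q_max` (and hence gradient demands), while enlarging the displacement field of view shrinks `dq` and multiplies the number of required q-space samples, which is why model-free QSI acquisitions are lengthy.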

### 19. Advancing energy and climate planning models [electronic resource] : optimisation methods, variable renewables, and smart grids[2017]

Book
1 online resource.
This dissertation aims to advance the application of mathematical modelling and computing, in particular optimisation methods, to the planning of solutions to energy and climate problems. The work first addresses two applied modelling problems relating to the electricity sector, a sector that is a major global source of greenhouse gas emissions, but also a potential provider of low-carbon energy throughout the global economy. The dissertation then closes with an investigation into the appropriate formulation of the normative models used in planning, focusing on the choice of model detail. At a high level, this work can be summarised as the development of tractable methods to incorporate necessary detail in models, followed by the introduction of a framework to understand when detail is necessary more generally. The first technical portion of this dissertation investigates how to represent intra-annual temporal variability in models of optimum electricity capacity investment. It shows the mechanisms by which inappropriate aggregation of temporal resolution can introduce substantial error into model outputs and the associated economic insight, particularly in systems where variable renewable power sources are cost-competitive and/or policy-supported. For a sample dataset, a scenario-robust aggregation of the hourly (8760-point) resolution is possible with on the order of 10 representative hours when electricity demand is the only source of variability. The inclusion of wind and solar supply variability increases the size of the robust aggregation to on the order of 1000. A similar scale of expansion is shown for representative days and weeks. These concepts, and the underlying methods, can be applied to any such temporal dataset, providing a benchmark that any other aggregation method can aim to emulate. To the author's knowledge, this is the first time that the impact of variable renewable power sources on appropriate temporal representation has been quantified in this way.
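The temporal-aggregation idea above can be made concrete with a toy calculation, a deliberately simple stand-in rather than the dissertation's scenario-robust method: bin a synthetic 8760-hour demand series along its load duration curve into k representative hours, each weighted by the number of hours it stands for. The demand series and k are invented for illustration.

```python
import numpy as np

# Toy representative-hour aggregation of a synthetic 8760-hour demand
# series (illustrative only; not the dissertation's robust method).
rng = np.random.default_rng(0)
t = np.arange(8760)
demand = (60 + 15 * np.sin(2 * np.pi * t / 24)        # daily cycle
             + 10 * np.sin(2 * np.pi * t / 8760)      # seasonal cycle
             + rng.normal(0, 3, 8760))                # noise

k = 10
order = np.argsort(demand)[::-1]            # hours sorted by load (duration curve)
bins = np.array_split(order, k)             # k contiguous slices of sorted hours
reps = np.array([demand[b].mean() for b in bins])    # representative load levels
weights = np.array([len(b) for b in bins])           # hours each level stands for

# By construction the aggregation preserves total (and mean) demand exactly
total_error = (reps * weights).sum() - demand.sum()
```

When demand is the only varying series, a handful of such bins already reproduces aggregate statistics; once wind and solar series are added, each hour becomes multi-dimensional and far more representative periods are needed, consistent with the order-of-1000 result above.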
The next stage of the work considers the potential impact of emerging smart grid technologies, particularly those that enable electricity consumers to shift their electricity demand, automatically and optimally, in response to a price signal. To this end, a model of a competitive electricity market, in which consumers exhibit optimal load-shifting behaviour to maximise utility and producers/suppliers maximise their profit under supply capacity constraints, is formulated and analysed. The associated computationally tractable convex optimisation formulation can be used to inform market design or policy analysis in the context of the increasing availability of the smart grid technologies that enable optimal load shifting. Analytic and numeric treatment of the model allows assessment of the equilibrium value of optimal electricity load shifting, including how that value declines as more electricity consumers adopt the associated technologies. The sensitivity of the value to the flexibility of load is assessed, along with its relationship to the deployment of renewables. Additionally, a formulation of the model based on the Alternating Direction Method of Multipliers (ADMM) is presented; this optimisation method is desirable for its potential to scale to large problems. The applied modelling exercises provide examples for the final portion of the dissertation: a systematic assessment of model formulation, particularly relating to model detail. The normative models used for energy and climate planning explore long-term pathways into uncharted territory. The test of predictive power used in other fields to evaluate model formulation is frequently impossible to apply in this long-term context, nor does it necessarily make sense in a normative setting. This work introduces a conceptual framework that can potentially augment the necessary expert judgement in model formulation.
It is based on the idea that some modelling decisions are testable, including the choice of model detail under certain conditions. The framework uses information-theoretic principles to demonstrate the tradeoff between model detail and model accuracy for a given question, and can specifically aid with representing heterogeneous spatial, temporal, or population characteristics in models. This section of the dissertation represents an early attempt in a domain where limited systematic analysis has been undertaken to date.
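The optimal load-shifting behaviour modelled in the market study can be gestured at with a toy schedule: place a fixed amount of flexible energy into the cheapest hours of a day, subject to a per-hour power cap. For a price-taking consumer facing linear prices this greedy fill is cost-minimising; the hourly prices, energy requirement, and cap below are invented, and the dissertation's actual model is an equilibrium formulation solved by convex optimisation and ADMM, not this heuristic.

```python
import numpy as np

# Toy flexible-load scheduling: fill the cheapest hours first under a
# per-hour cap (illustrative stand-in for an optimal load-shifting
# consumer; all numbers are hypothetical).
prices = np.array([30, 28, 25, 24, 26, 32, 45, 60, 55, 50, 48, 47,
                   46, 44, 43, 45, 52, 65, 70, 62, 50, 42, 36, 32], dtype=float)
energy = 30.0    # total flexible energy to schedule over the day
cap = 4.0        # maximum power drawn in any single hour

x = np.zeros(24)
for h in np.argsort(prices):          # cheapest hours first
    x[h] = min(cap, energy - x.sum())
    if x.sum() >= energy:
        break

shifted_cost = prices @ x
flat_cost = prices @ np.full(24, energy / 24)   # unshifted baseline
```

The gap between `flat_cost` and `shifted_cost` is one consumer's value of shifting; in the equilibrium model, widespread adoption flattens the price signal itself, which is how that value declines as more consumers participate.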

### 20. Advancing the use of crowdsourcing for data-intensive tasks [electronic resource][2017]

Book
1 online resource.
All aspects of industry, scholarship, and society have recently been witnessing a rapid growth in the volume of available data and the demand for data-intensive analytics. While automated techniques built on machine learning algorithms are being applied to many problems, a large class of challenging tasks still requires human intelligence and input. Crowdsourcing is an effective mechanism for addressing problems that are not easily solved by computers alone and require human insight. Furthermore, crowdsourcing often plays a crucial role in the development of machine learning algorithms, by being the primary source of high-quality labeled training data, and by serving as a tool for verifying the output of machine-learned models. The goal of this thesis is to characterize the spectrum of crowdsourcing tasks that are posted on real marketplaces, and to develop new and optimized algorithms for some fundamental classes of crowdsourcing tasks. First, we analyze a dataset comprising over 27 million microtasks performed by over 70,000 workers, issued to a large crowdsourcing marketplace between 2012 and 2016. Based on this dataset, we identify two fundamental classes of crowdsourcing task types that are very popular in the marketplace: (a) {\em filtering and rating}, where worker responses are allowed to be any number in a fixed numerical range $\{1, 2, \ldots, R\}$; (b) {\em counting}, where responses are allowed to be any non-negative integer $\{0, 1, 2, \ldots\}$. Next, we design efficient algorithms for these task types. Specifically: (1) for {\em filtering and rating}, we design an algorithm that discovers a globally optimal maximum-likelihood solution identifying the true answers and worker accuracies; (2) for {\em counting}, we design algorithms to optimally aggregate crowdsourced responses, along with hybrid algorithms that combine crowdsourcing and computer vision techniques to improve the quality and reduce the costs of our inferences even further.
Our findings and algorithms have broad ramifications for how to best use crowdsourcing for collecting or processing large volumes of data at low cost and high accuracy.
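To make the filtering-and-rating aggregation problem concrete, here is a much-simplified sketch for the binary (R = 2) case: alternate between estimating the true answers by a weighted vote and estimating each worker's accuracy from agreement with those answers. The worker accuracies and responses are synthetic, and this alternating heuristic only gestures at the thesis's globally optimal maximum-likelihood algorithm rather than reproducing it.

```python
import numpy as np

# Toy joint estimation of true answers and worker accuracies for binary
# filtering tasks (synthetic data; a simplified stand-in, not the
# thesis's globally optimal algorithm).
rng = np.random.default_rng(1)
truth = rng.integers(0, 2, 50)                      # hidden true answers
acc_true = np.array([0.9, 0.85, 0.6, 0.95, 0.7])    # hidden worker accuracies
# responses[w, i]: worker w answers item i correctly with prob acc_true[w]
responses = np.where(rng.random((5, 50)) < acc_true[:, None], truth, 1 - truth)

est = (responses.mean(axis=0) > 0.5).astype(int)    # start from majority vote
for _ in range(10):                                 # alternate: accuracies <-> answers
    acc = (responses == est).mean(axis=1)           # per-worker agreement rate
    w = np.log(np.clip(acc, 1e-6, 1 - 1e-6) / np.clip(1 - acc, 1e-6, 1))
    score = w @ (2 * responses - 1)                 # log-odds-weighted vote per item
    est = (score > 0).astype(int)
```

The log-odds weights down-weight unreliable workers relative to plain majority voting; for counting tasks, a robust per-item aggregate such as the median of worker counts plays the analogous role.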