Applications of stochastic and optimization models to healthcare research [electronic resource] (2014)
Goh, Joel.
Description
Book — 1 online resource.
 Summary

This dissertation studies how mathematical modeling can be used in conjunction with empirical data to provide insight into health policy and medical decision-making. We consider three specific questions. First, how should drug safety regulators implement a post-marketing drug surveillance system that accounts for multiple adverse events? Second, what is the aggregate contribution of workplace stressors to poor health outcomes and health spending in the U.S.? Third, how should rigorous cost-effectiveness analyses be conducted for medical innovations when data are scarce and unreliable? These important questions have thus far eluded definitive answers because existing data sources and models cannot be directly applied to answer them satisfactorily. We therefore address these questions by developing new data-driven mathematical models that draw on ideas from stochastic analysis and optimization theory.

In Chapter 1, we develop a new method for post-marketing surveillance of a drug, in order to detect any adverse side effects that were not uncovered during pre-approval clinical trials. Because of the recent proliferation of electronic medical records, regulators can now observe person-level data on drug usage and adverse event incidence in a population. Potentially, they can use these data to monitor the drug and flag it as unsafe if excessive adverse side effects are observed. Two key features make this problem challenging. First, the data accumulate over time, which complicates the regulators' decision process. Second, adverse events that occur in the past can affect the risk that other adverse events occur in the future. We propose a drug surveillance method, called QNMEDS, which simultaneously addresses these two issues. QNMEDS is based on the paradigm of sequential hypothesis testing, and it works by continuously monitoring a vector-valued test-statistic process until it crosses a stopping boundary.
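The sequential-testing paradigm can be sketched in its classical scalar form (Wald's Sequential Probability Ratio Test, which the dissertation later uses as a benchmark heuristic; QNMEDS itself monitors a vector-valued statistic against a boundary designed via optimization). The log-likelihood-ratio increments below are hypothetical, purely for illustration:

```python
import math

def wald_thresholds(alpha, beta):
    """Wald's approximate SPRT thresholds for target false-alarm
    probability alpha and missed-detection probability beta."""
    upper = math.log((1 - beta) / alpha)   # cross above: declare unsafe
    lower = math.log(beta / (1 - alpha))   # cross below: declare safe
    return lower, upper

def sequential_test(llr_increments, alpha=0.05, beta=0.10):
    """Accumulate log-likelihood-ratio evidence over time and stop the
    first time the running statistic leaves the continuation region."""
    lower, upper = wald_thresholds(alpha, beta)
    stat = 0.0
    for t, x in enumerate(llr_increments, start=1):
        stat += x
        if stat >= upper:
            return "flag unsafe", t
        if stat <= lower:
            return "declare safe", t
    return "continue monitoring", len(llr_increments)

# Deterministic illustration: steady evidence of 1.0 per period crosses
# the upper threshold log(0.90/0.05) ~ 2.89 at period 3.
decision, stop_time = sequential_test([1.0] * 10)
```

The sequential structure is what distinguishes this setting from a fixed-sample test: the decision time is itself random, which is why the dissertation must jointly control the false alarm rate and the expected detection time.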
Our analysis focuses on prescribing how this boundary should be designed. We use a queueing network to model the occurrence of events in patients, which also allows us to capture the correlations between adverse events. Exact analysis of the model is intractable, so we develop an asymptotic diffusion approximation to characterize the approximate distribution of the test-statistic process. We then use mathematical optimization to design the stopping boundary so as to control the false alarm rate below an exogenously specified value while minimizing the expected detection time. We conduct simulations to demonstrate that QNMEDS works as designed and has advantages over a heuristic based on the classical Sequential Probability Ratio Test.

In Chapter 2, we describe a model-based approach to quantify the relationship between workplace stressors and health outcomes and costs. We considered ten stressors: unemployment, lack of health insurance, exposure to shift work, long working hours, job insecurity, work-family conflict, low job control, high job demands, low social support at work, and low organizational justice. There is widespread empirical evidence that individual stressors are associated with poor health outcomes, but the aggregate health effect of the combination of these stressors is not well understood. Our goal was to estimate the overall contribution of these stressors to (a) annual healthcare spending and (b) annual mortality in the U.S. The central difficulty in deriving these estimates is the absence of a single longitudinal dataset that records workers' exposure to various workplace stressors as well as their health outcomes and spending. We therefore developed a model-based approach to tackle this problem.
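The core aggregation step of such a model-based approach can be illustrated with the standard multi-exposure population attributable fraction formula, PAF = Σᵢ pᵢ(RRᵢ − 1) / (1 + Σᵢ pᵢ(RRᵢ − 1)). This is a simplified sketch: the prevalences and relative risks below are hypothetical, and the formula assumes independent exposures, whereas the dissertation computes optimization-based bounds to account for correlation between exposures:

```python
def attributable_fraction(prevalence, relative_risk):
    """Multi-exposure population attributable fraction,
    PAF = sum_i p_i*(RR_i - 1) / (1 + sum_i p_i*(RR_i - 1)).
    Assumes exposures contribute excess risk independently."""
    excess = sum(p * (rr - 1.0) for p, rr in zip(prevalence, relative_risk))
    return excess / (1.0 + excess)

# Hypothetical inputs: 30% of workers face an exposure with relative
# risk 1.2, and 10% face another with relative risk 1.5.
paf = attributable_fraction([0.30, 0.10], [1.2, 1.5])
# excess risk = 0.3*0.2 + 0.1*0.5 = 0.11, so paf = 0.11/1.11
```

Multiplying such a fraction by status-quo prevalence and incremental cost per outcome is what turns exposure and relative-risk inputs into estimates of attributable deaths and spending.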
The model has four input parameters, which were estimated from separate data sources: (a) the joint distribution of workplace exposures in the U.S., which we estimated from the General Social Survey (GSS); (b) the relative risk of each outcome associated with each exposure, which we estimated from an extensive meta-analysis of the epidemiological literature; (c) the status-quo prevalence of each health outcome; and (d) the incremental cost of each health outcome, both of which were estimated using the Medical Expenditure Panel Survey (MEPS). The model separately derives optimistic and conservative estimates of the effect of multiple workplace exposures on health, and uses an optimization-based approach to calculate upper and lower bounds around each estimate to account for the correlation between exposures. We find that more than 120,000 deaths per year and approximately 5–8% of annual healthcare costs are associated with, and may be attributable to, how U.S. companies manage their workforce. Our results suggest that more attention should be paid to management practices as important contributors to health outcomes and costs in the U.S.

In Chapter 3, we study the problem of assessing the cost-effectiveness of a medical innovation when data are scarce or highly uncertain. Models based on Markov chains are typically used for medical cost-effectiveness analyses. However, if such models are used for innovations, many elements of the chain's transition matrix may be very imprecise due to data scarcity. While sensitivity analyses can be used to assess the effect of a small number of uncertain parameters, they quickly become computationally intractable as the number of uncertainties grows. At present, only ad hoc methods exist for performing such analyses when there are a large number of uncertain parameters.
Our analysis focuses on an abstraction of this problem: how to calculate the best- and worst-case discounted value of a Markov chain over an infinite horizon with respect to a vector of state-wise rewards, when many of its transition elements are only known up to an uncertainty set. We prove the following sharp result: if the uncertainty set has a row-wise property, which is a reasonable assumption for most applied problems, then these values can be tractably computed by iteratively solving certain convex optimization problems. However, in the absence of this row-wise property, evaluating these values is computationally intractable (NP-hard). We apply our method to evaluate the cost-effectiveness of a new screening method for colorectal cancer, annual fecal immunochemical testing (FIT) for persons over the age of 55. Our results suggest that FIT is a highly cost-effective alternative to the current guidelines, which prescribe screening by colonoscopy at 10-year intervals.
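Under the row-wise property, the worst-case value can be computed by an iteration like the following minimal sketch, in which finite candidate sets of transition rows stand in for the convex uncertainty sets of the dissertation (which replace the inner minimum with a convex program). The function name and the toy two-state chain are hypothetical:

```python
def robust_value(rewards, row_candidates, gamma=0.95, tol=1e-10):
    """Worst-case discounted value of a Markov chain whose transition
    rows are each only known to lie in a finite, row-wise uncertainty
    set.  Row-wise structure lets the adversary's choice decompose
    state by state, so this robust value iteration converges just like
    the standard contraction."""
    v = [0.0] * len(rewards)
    while True:
        v_new = [
            r + gamma * min(sum(p * w for p, w in zip(row, v))
                            for row in rows)   # adversary picks the row
            for r, rows in zip(rewards, row_candidates)
        ]
        if max(abs(a - b) for a, b in zip(v_new, v)) < tol:
            return v_new
        v = v_new

# Hypothetical 2-state chain: state 0 pays reward 1 and stays put with
# probability either 0.9 or 0.5 (the adversary chooses each round);
# state 1 is absorbing with reward 0.  Worst case: v0 = 1 + 0.9*0.5*v0,
# so v0 = 1/0.55.
worst = robust_value([1.0, 0.0],
                     [[[0.9, 0.1], [0.5, 0.5]], [[0.0, 1.0]]],
                     gamma=0.9)
```

Running the same iteration with `max` in place of `min` would give the best-case value; the gap between the two bounds is what a cost-effectiveness analysis with scarce data must report.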
Special Collections
University Archives: 3781 2014 G (in-library use; request via Aeon)