Online 1. Data-driven analytics for clinical decision making, healthcare operations management and public health policy [2021]
- Fairley, Michael Charles Zinzan, author.
- [Stanford, California] : [Stanford University], 2021
- Description
- Book — 1 online resource
- Summary
-
Health care costs in the United States exceed $3.5 trillion annually, with between $760 billion and $935 billion considered waste. Data-driven analytics could reduce costs and provide higher quality care to patients by more efficiently allocating limited resources, just as analytics has done in other industries such as logistics, manufacturing and aviation. In this dissertation, I demonstrate three levels at which analytics provide value in health: clinical decision making, healthcare operations management and public health policy. Clinical decision making refers to decisions at the individual patient level: for example, determining which treatment to provide a patient or predicting an individual's risk of disease. Healthcare operations management refers to decisions about the system that delivers care to patients: for example, determining how to organize patient flow through a hospital or schedule procedures. Finally, public health policy refers to decisions about the overall health of a population: for example, determining how to control an infectious disease or distribute limited resources across different diseases
- Also online at
-
- Russell, William Alton, author.
- [Stanford, California] : [Stanford University], 2021
- Description
- Book — 1 online resource
- Summary
-
Donated blood is a critical component of health systems around the world, but its collection and transfusion involve risk for both donors and recipients. Transfusion-transmitted diseases and non-infectious adverse events pose a risk to transfusion recipients, and repeat blood donation can cause or exacerbate iron deficiency among donors. This dissertation describes four decision-analytic modeling projects that inform blood safety policy. In Chapter 2, I integrate epidemiological, health-economic, and biovigilance data to estimate the efficacy and cost-effectiveness of a 2016 policy mandating that all blood donations be screened for Zika virus in the U.S. The analysis uses a novel microsimulation of individual transfusion recipients that captures the relationship between disease exposure risk and the number and type of blood components transfused. In Chapter 3, I develop the first health-economic assessment of whole blood pathogen inactivation. The analysis is for Ghana and improves on prior blood safety assessments for sub-Saharan Africa by considering the likelihood and timing of clinical detection for chronic viral infections. In Chapter 4, I develop an optimization-based framework for identifying the optimal portfolio of blood safety interventions that overcomes some limitations of traditional cost-effectiveness analyses for blood safety. By applying this framework retrospectively to evaluate U.S. policies for Zika and West Nile virus, I show that the optimal policy can vary by geography, season, and year. Chapter 5 focuses on how frequently donors are allowed to give blood. I develop a machine learning-based decision model that tailors the inter-donation interval to each donor's risk of iron-related adverse outcomes, balancing risks to donors against risks to the sufficiency of the blood supply. Together, these model-based analyses introduce novel methods and provide guidance for efficient and effective use of resources for blood safety
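The basic calculation underlying the cost-effectiveness assessments described above can be sketched as an incremental cost-effectiveness ratio (ICER) comparison against a willingness-to-pay threshold. This is a minimal illustration, not the dissertation's microsimulation; all costs, QALY values, and the threshold are invented:

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Invented per-person values: a universal screening policy vs. the status quo
ratio = icer(cost_new=120.0, qaly_new=25.0003, cost_old=80.0, qaly_old=25.0)

threshold = 100_000.0  # a commonly cited U.S. willingness-to-pay per QALY
# With these invented numbers, the policy's ICER exceeds the threshold,
# so it would not be judged cost-effective
assert ratio > threshold
```

In practice, analyses like those in Chapters 2 and 3 derive the cost and QALY inputs from simulation of thousands of individual recipients rather than from point estimates.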
- Also online at
-
Online 3. Assessing China's unconventional carbon pricing system [2020]
- Long, Xianling, author.
- [Stanford, California] : [Stanford University], 2020
- Description
- Book — 1 online resource
- Summary
-
Currently, China is the world's largest emitter of CO2, accounting for about 28 percent of global emissions. At the time of this writing, China has just announced that it aims to have CO2 emissions peak before 2030 and to achieve carbon neutrality before 2060. Achieving this target requires dramatically reducing reliance on fossil fuels. One of China's biggest planned efforts is the nationwide carbon emissions trading system that has been piloted in several provinces since 2013. Once implemented, this system will become the world's largest emissions trading system. This dissertation assesses the economic impacts of China's forthcoming nationwide carbon emissions trading system, which differs from a conventional cap and trade (C&T) system and from a carbon tax, the carbon pricing instruments used elsewhere. Instead, China will employ a tradable performance standard (TPS). A key property of the TPS is its rate-based allowance allocation, which causes the TPS to act as an implicit output subsidy, with significant consequences for cost-effectiveness and distributional impacts. The analysis considers the first phase of the system, which covers only the electricity sector, and the second phase, which covers the electricity, cement and aluminum sectors, offering theoretical analysis and numerical simulations. In Chapter 2, using matched analytical and numerical models, we assess the cost-effectiveness and distributional impacts of China's forthcoming TPS for reducing CO2 emissions from the power sector. In Chapter 3, I extend the single-sector partial equilibrium model employed in Chapter 2 to a multi-sector general equilibrium model to examine the impacts of China's TPS across the whole economy.
A general equilibrium model is also necessary for assessing China's TPS once it is implemented in multiple sectors. In Chapter 4, I examine the impacts of market power on the cost-effectiveness of the TPS and C&T. To the best of my knowledge, this chapter is the first study to focus on the impacts of market power on a rate-based allowance trading system. I consider two types of market power: market power in the carbon allowance market and market power in the electricity market. I show how the two types of market power affect the TPS and C&T differently
- Also online at
-
Online 4. Bayesian structural learning in decision analysis [2020]
- Kharitonov, Daniel, author.
- [Stanford, California] : [Stanford University], 2020
- Description
- Book — 1 online resource
- Summary
-
This work presents a method of using data to facilitate the creation of influence diagrams as part of model generation. The motivating problem is the bottom-up modeling approach of Decision Analysis, which uses expert interviews to define variables and distinctions and to capture the relations between variables that reflect the current state of information about the decision situation. Creating and parameterizing a model defined in this way is a lengthy and error-prone process that poses a significant challenge in real-life applications. The proposed method, based on Bayesian Structural Learning, assumes the presence of some prior data on the problem and bootstraps relevance diagrams from that dataset. This novel approach assumes structured coordination between machine learning methods and human experts, and in return offers extended opportunities in model generation while avoiding the common pitfalls stemming from the difficulty of eliciting conditional probabilities. We demonstrate the workings of this method on synthetic and real-life examples, and describe how to take full advantage of artificial intelligence algorithms -- including unsupervised learning with neural networks -- to facilitate work in the decision modeling phase
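The core idea of learning structure from data can be sketched in miniature: score a candidate graph with an edge against one without it by maximum log-likelihood on the data. This toy two-variable example is a hypothetical stand-in for the Bayesian structural learning the abstract describes, with invented synthetic data:

```python
import math
import random

random.seed(0)

# Synthetic binary data where B genuinely depends on A
data = []
for _ in range(2000):
    a = random.random() < 0.5
    b = random.random() < (0.9 if a else 0.2)
    data.append((a, b))

def loglik_independent(data):
    """Log-likelihood under the structure with no edge: A and B independent."""
    n = len(data)
    pa = sum(a for a, _ in data) / n
    pb = sum(b for _, b in data) / n
    return sum(math.log((pa if a else 1 - pa) * (pb if b else 1 - pb))
               for a, b in data)

def loglik_edge(data):
    """Log-likelihood under the structure A -> B, with P(B | A) fit from data."""
    n = len(data)
    pa = sum(a for a, _ in data) / n
    p_b_given = {}
    for av in (True, False):
        rows = [b for a, b in data if a == av]
        p_b_given[av] = sum(rows) / len(rows)
    total = 0.0
    for a, b in data:
        pb = p_b_given[a] if b else 1 - p_b_given[a]
        total += math.log((pa if a else 1 - pa) * pb)
    return total

# The structure containing the true A -> B edge fits the data far better
assert loglik_edge(data) > loglik_independent(data)
```

Real structural learning adds penalties or priors to avoid overfitting denser graphs, and, as the abstract emphasizes, keeps human experts in the loop to vet the learned relevance diagram.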
- Also online at
-
Online 5. Computing stationary distributions : Perron vectors, random walks, and ride-sharing competition [2020]
- Ahmadinejad, AmirMahdi, author.
- [Stanford, California] : [Stanford University], 2020
- Description
- Book — 1 online resource
- Summary
-
Stationary distributions appear in a wide variety of problems in areas like computer science, mathematics, statistics, and economics. Knowledge of the stationary distribution has a crucial impact on whether we can solve such problems, and on how efficiently we can solve them. In this thesis, I present three of my projects, which share the common theme of dealing with stationary distributions. In the first work, Perron-Frobenius theory in nearly linear time, we give a nearly linear time algorithm for computing the stationary distribution of matrices characterized by the Perron-Frobenius theorem. Through our algorithm, we non-trivially extend the class of matrices known to be solvable in nearly linear time by graph Laplacian solvers. In the second work, high precision small space estimation of random walk probabilities, we take a step toward derandomization of the complexity class RL, or randomized log space, a long-standing open problem in complexity theory. In this context, knowledge of the stationary distribution for directed graphs appears to be a barrier to achieving such a result. While we are not able to resolve the L vs. RL problem, we give a small-space deterministic algorithm to estimate random walk probabilities to high precision in undirected graphs, as well as in directed Eulerian graphs. In the last work, motivated by the rise of ride-sharing platforms, we study a set of important questions that naturally arise in this context. Although we use a different set of technical tools compared to the previous two works, we still deal with characterizing equilibria in stationary systems. One of the main questions we answer is whether competition between two platforms can lead to market failure by pushing drivers out of the market
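For context, the textbook baseline for computing a stationary distribution is power iteration on the transition matrix; the dissertation's nearly-linear-time solvers are far more sophisticated, but the object they compute can be illustrated on a toy two-state Markov chain:

```python
def stationary(P, iters=10_000):
    """Power iteration: repeatedly apply pi <- pi P from a uniform start."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[j] * P[j][i] for j in range(n)) for i in range(n)]
    return pi

# A small row-stochastic transition matrix
P = [[0.9, 0.1],
     [0.5, 0.5]]

pi = stationary(P)
# The exact stationary distribution of this chain is (5/6, 1/6)
assert abs(pi[0] - 5 / 6) < 1e-9 and abs(pi[1] - 1 / 6) < 1e-9
```

Power iteration converges at a rate set by the second eigenvalue, which is exactly the kind of dependence the fast Laplacian-style solvers mentioned above avoid.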
- Also online at
-
Online 6. Decision making for disease treatment : operations research and data analytic modeling [2020]
- Zhong, Huaiyang, author.
- [Stanford, California] : [Stanford University], 2020
- Description
- Book — 1 online resource
- Summary
-
This dissertation focuses on developing and applying methodologies from operations research and data analytics to solve important problems in healthcare decision making. Healthcare decision making is interesting and challenging because of the probabilistic nature of many healthcare decision problems and because of the range of decision makers involved (e.g., individual patients, clinicians, and policy makers). It occurs at two levels: clinical decision making at the individual level and policy decision making at the population level. In clinical decision making, practitioners aim to determine which patient needs what, and when. The decision can be a single diagnosis, such as predicting arthritis from MRI images, or a series of decisions throughout the treatment duration, such as HIV treatment management. In policy decision making, policy makers aim to assess decisions undertaken to achieve specific population-level healthcare goals. Problems in this field range from disease modeling to policy implementation. In Chapter 2, to extend the boundary of current methodologies in clinical decision making, I develop a theoretical sequential decision making framework, the quantile Markov decision process (QMDP), based on the traditional Markov decision process (MDP). The QMDP model optimizes a specific quantile of the cumulative reward instead of its expectation. I provide analytical results characterizing the optimal QMDP value function and present a dynamic programming-based algorithm to solve for the optimal policy. The algorithm also extends to the MDP problem with a conditional value-at-risk (CVaR) objective. Using the QMDP framework, patients' risk attitudes can be incorporated into the decision making process, thereby enabling patient-centered care. I apply the QMDP framework to an HIV treatment initiation problem, where patients aim to balance the potential benefits and risks of treatment.
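The distinction the QMDP framework exploits, optimizing a quantile of cumulative reward rather than its expectation, can be shown with a one-step toy decision. This sketch is not the QMDP algorithm itself; the two hypothetical treatment options and their reward distributions are invented:

```python
def expectation(outcomes):
    """Expected reward of a discrete distribution given as (reward, prob) pairs."""
    return sum(r * p for r, p in outcomes)

def quantile(outcomes, q):
    """The q-quantile of a discrete reward distribution."""
    ordered = sorted(outcomes)
    cum = 0.0
    for reward, prob in ordered:
        cum += prob
        if cum >= q:
            return reward
    return ordered[-1][0]

# Two hypothetical treatment options
risky = [(-50.0, 0.1), (10.0, 0.9)]  # higher mean, but a bad tail outcome
safe  = [(3.0, 1.0)]                 # modest but certain

# A risk-neutral (expectation) objective prefers the risky option...
assert expectation(risky) > expectation(safe)
# ...while a 10th-percentile objective, reflecting a risk-averse patient,
# prefers the safe option
assert quantile(risky, 0.1) < quantile(safe, 0.1)
```

The QMDP extends this idea to sequential decisions, where the quantile of the *cumulative* reward must be optimized by dynamic programming rather than by a single comparison.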
In Chapter 3, to inform public health policy regarding treatment for HIV-infected individuals with clinical depression, I develop a microsimulation model of HIV disease and care in Uganda that captures individuals' depression status and the relationship between depression and HIV behaviors. I consider a strategy of screening for depression and providing antidepressant therapy with fluoxetine at initiation of antiretroviral therapy or at re-initiation (if a patient has dropped out). I use the model to estimate the effectiveness and cost-effectiveness of such strategies, and I show that screening for and treating depression with fluoxetine among people living with HIV in sub-Saharan Africa would be effective in improving HIV treatment outcomes and would be highly cost-effective. In Chapter 4, with the aim of improving policy implementation, I examine the problem of simplifying complex healthcare decision models using metamodeling. Many healthcare decision models, particularly those involving simulation of patient outcomes, are highly complex and may be difficult to use for practical decision making. A metamodel is a simplified version of a more complex model that approximates the relationships between model inputs and outputs, and thus can serve as a surrogate for the more complex model. I develop a framework for metamodeling of simulation models with multivariate outcomes and apply the methodology to simplify a complex simulation model that evaluates strategies for hepatitis C virus (HCV) screening and treatment in correctional settings. Chapter 5 concludes with a discussion of promising areas for further research
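The metamodeling idea, fitting a cheap surrogate to an expensive model's input-output relationship, can be sketched in one dimension. Everything here is invented for illustration: the "complex model" is a stand-in function, and the metamodel is a plain least-squares line rather than the dissertation's multivariate framework:

```python
import math

def complex_model(x):
    # Stand-in for an expensive simulation: a linear trend plus wiggle
    return 3.0 * x + 0.5 * math.sin(10 * x)

# Sample the complex model on a grid of inputs
xs = [i / 100 for i in range(101)]
ys = [complex_model(x) for x in xs]

# Fit the metamodel y ~ a + b*x by ordinary least squares
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
a = my - b * mx

# The surrogate is far cheaper to evaluate and tracks the original closely
err = max(abs(a + b * x - complex_model(x)) for x in xs)
assert abs(b - 3.0) < 0.3 and err < 1.0
```

The trade-off shown here, a small approximation error in exchange for a model simple enough to use in practice, is the same one the chapter evaluates at much larger scale.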
- Also online at
-
Online 7. Developing high-performing business models [2020]
- Tidhar, Ron, author.
- [Stanford, California] : [Stanford University], 2020
- Description
- Book — 1 online resource
- Summary
-
Strategy and innovation scholars point to the rising importance of business models for firm performance. While prior research offers some insight into high-performing business models, it largely focuses on established firms with stable business models, overlooking how entrepreneurial ventures develop and scale novel business models. This dissertation addresses this gap with three closely linked studies. The first is a comprehensive review of prior literature linking business models to firm performance. The second uses a novel theory-building methodology, combining machine learning with multiple cases, to fit revenue models (i.e., value capture) with underlying activities (i.e., value creation) in high-performing business model configurations. The third is a longitudinal multiple-case theory-building study that unpacks how entrepreneurs build scalable business models in nascent markets. Jointly, these studies offer rich theory regarding how entrepreneurs can effectively develop and scale novel business models. Overall, this research contributes to the literature on strategy, innovation, and entrepreneurship, as well as to practice
- Also online at
-
Online 8. Essays on artificial intelligence in personalized markets [2020]
- Arrieta-Ibarra, Imanol, author.
- [Stanford, California] : [Stanford University], 2020
- Description
- Book — 1 online resource
- Summary
-
Personalized markets have become ubiquitous in recent years. Matching platforms such as Uber or Airbnb, social networks like Facebook and Twitter, and online marketplaces such as Amazon all provide individualized experiences to different users based on their unique characteristics and preferences. Analyzing these markets in depth requires a multidisciplinary approach; in particular, it requires a closer relation between Economic Theory and Artificial Intelligence. In this work I present two examples that interlink both disciplines in order to analyze these markets. Chapter 1 presents a novel algorithm, which we call PBDM, that personalizes the BDM mechanism introduced by Becker, DeGroot, and Marschak. The BDM mechanism has recently been used as a treatment assignment mechanism to estimate the treatment effects of policy interventions while simultaneously measuring demand for the intervention. In this work, we develop a personalized extension of the classic BDM mechanism that uses modern machine learning algorithms to predict an individual's willingness to pay. This lowers the cost for experimenters, provides better balance over covariates that are correlated with both the outcome and willingness to pay, and eliminates biases induced by ad-hoc boundaries in the classic BDM algorithm. Chapter 2 covers an exercise to estimate the economic value of data in algorithms, with an application to ride-sharing. We present a novel approach to estimating an upper bound on the economic value of data due to its role in algorithms. Our method does not assume that users have failed to internalize any costs of data production (such as privacy), and we show that the price of data is in great part determined by the power dynamics present in markets. We apply our method to ride-sharing by simulating a market using data from a large ride-sharing platform (Uber).
We estimate that in our scenario, with users having full market power, data would contribute up to 47% of Uber's revenue. This would translate to average payments to drivers of up to approximately $30 per day, solely as compensation for the value of the data they generate as drivers, which corresponds to 20 to 40 percent of an average driver's daily earnings. Most of this cost increase would be absorbed by Uber; however, depending on the conditions of the ride-sharing market, these payments could be passed on to riders through rate increases
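The classic BDM mechanism that Chapter 1 personalizes is simple to state: the participant reports a willingness to pay, a price is drawn at random, and the participant buys only if the stated value meets the price, paying the drawn price. A minimal sketch (the price range and values are invented):

```python
import random

def bdm_round(stated_wtp, price_lo=0.0, price_hi=10.0, rng=random):
    """One round of the Becker-DeGroot-Marschak mechanism.

    Returns (bought, amount_paid). Truth-telling is optimal because the
    stated willingness to pay determines only *whether* a sale happens,
    never the price actually paid.
    """
    price = rng.uniform(price_lo, price_hi)
    bought = stated_wtp >= price
    return bought, (price if bought else 0.0)

random.seed(1)
bought, paid = bdm_round(stated_wtp=6.0)
# A buyer never pays more than their stated willingness to pay
assert paid <= 6.0
```

The PBDM extension described above replaces the fixed price range with bounds predicted per individual by machine learning, which is what removes the "ad-hoc boundaries" of the classic design.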
- Also online at
-
Online 9. Essays on statistical learning and causal inference on panel data [2020]
- Xiong, Ruoxuan, author.
- [Stanford, California] : [Stanford University], 2020
- Description
- Book — 1 online resource
- Summary
-
Panel data, which provides multiple observations on each individual over time, has become widely available and has received growing interest in many domains. For example, in asset pricing, panel data on asset returns over time is central to the study of how financial assets, such as stocks, bonds, and futures, are priced. In public policy, panel data is valuable for estimating and analyzing the effects of economic and social policies. Panel data can improve the power of analyses, uncover dynamic relationships between variables, and generate more accurate predictions for individual outcomes. The growing use of panel data in empirical research has spurred the development of new methodologies. In the first part of this thesis, we present several novel statistical inference methods for large-dimensional panel data with a large number of units and time periods. An effective way to summarize the information in large-dimensional panel data is the factor model, which has been used successfully in asset pricing, recommendation systems, and many other areas. We focus on latent factor models, where the factors are unobserved and estimated from the data. Latent factor models can address the concern of model misspecification, i.e., that we cannot fully observe all covariates that affect the outcome. However, latent factors are hard to interpret because they are usually weighted averages of all units. We propose sparse proximate factors as interpretable substitutes for latent factors. Sparse proximate factors are constructed from a few units with the largest signal-to-noise ratio and can approximate latent factors well while being interpretable. When panel data spans a long time horizon, as with macroeconomic data, it is restrictive to assume that the factor structure is static. We generalize the factor structure to depend on an observed state process; for example, the factor model in stock return data can change with the business cycle.
We provide an estimator for this state-varying factor model and develop its inferential theory. Many studies in the social sciences and healthcare aim to answer questions about causal relationships that go beyond statistical association. Many of these studies rely on observational data when running experiments is infeasible, and observational panel data has received growing attention because it can capture changes within units over time. A fundamental task in estimating causal effects from observational data is estimating counterfactual outcomes, which can be modeled as missing observations. We connect large-dimensional factor modeling with causal inference. Specifically, we provide an estimator for the latent factor model on large-dimensional panel data with missing observations and derive the inferential theory for our estimator, which can be used to test the effect of a treatment at any time as well as general weighted treatments. An alternative approach to studying treatment effects is to run experiments, which is the gold standard in medical and clinical research and has become increasingly popular for testing new products at large technology companies. In the second part of this thesis, we study optimal multi-period experimental design to increase statistical power, a common hurdle in designing experiments. We show that the structure of the multi-period experimental design depends on how long the effects of interventions last. In the presence of pre-experiment data, we can further optimize our treatment designs and hence reduce the number of required samples, lowering the cost of experimentation
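The basic latent-factor estimation step underlying these methods can be illustrated in miniature: with a single latent factor, the loading direction is the leading eigenvector of the sample covariance, recoverable by power iteration. This is a toy sketch with invented loadings and noise, not the thesis's large-dimensional estimators:

```python
import random

random.seed(0)
T, N = 500, 8
loadings = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3]  # true (unknown) loadings

# Simulate a panel: each observation is loading * latent factor + noise
F = [random.gauss(0, 1) for _ in range(T)]
X = [[loadings[i] * F[t] + 0.1 * random.gauss(0, 1) for i in range(N)]
     for t in range(T)]

# Sample covariance (data are mean-zero by construction)
S = [[sum(X[t][i] * X[t][j] for t in range(T)) / T for j in range(N)]
     for i in range(N)]

# Power iteration for the leading eigenvector = estimated loading direction
v = [1.0] * N
for _ in range(200):
    w = [sum(S[i][j] * v[j] for j in range(N)) for i in range(N)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

# The estimate should align closely with the true loading direction
num = sum(a * b for a, b in zip(v, loadings))
den = (sum(a * a for a in v) * sum(b * b for b in loadings)) ** 0.5
assert abs(num / den) > 0.95
```

Note that the recovered eigenvector is dense, a weighted average over all units; the sparse proximate factors proposed above trade a little of this accuracy for interpretability by using only a few high signal-to-noise units.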
- Also online at
-
Online 10. Fake news risk : modeling management decisions to combat disinformation [2020]
- Trammell, Travis Ira, author.
- [Stanford, California] : [Stanford University], 2020
- Description
- Book — 1 online resource
- Summary
-
Strategically using information to affect the views of a population is not a new phenomenon; it dates back to the earliest development of political systems. Some of the earliest examples still in existence today were produced by the Greek writer Herodotus, who penned an inaccurate account of historical events in the Persian empire approximately 500 years before the birth of Jesus Christ. Political propaganda is nearly as old as politics. Similarly, fake news, as a specific tactic of propaganda, is not relegated to the information age; older examples that pre-date the digital revolution are easy to find. However, the speed of distribution and the number of people that can be reached by leveraging the modern information infrastructure are unprecedented. These factors combine to produce increased risk from fake news that must be addressed. The internet has enabled the distribution of vast amounts of information to an incredibly large population virtually instantaneously and at comparatively low cost. While the development of this capability has resulted in enormous economic development and provided great benefit to the world, it has also exposed the same population to increased risk. The rapid distribution of fake news can cause contagion, manipulate markets, spark conflict, or fracture strategic relationships. The scourge of fake news is even more problematic given the current limitations of fact-checking methodologies, which are unable to keep pace with the increased volume of fake news production. Both public and private organizations are struggling in the search for methods to combat fake news. Probabilistic Risk Analysis (PRA) can be leveraged to quantitatively describe the risk associated with fake news. This thesis presents a method for modeling management decisions designed to combat fake news in an online network.
It leverages established infectious disease modeling to describe online virality and implements countermeasure regimes to inform opposition decision making. The model was informed by two online surveys of a representative sample of the U.S. voting population that endeavored to measure the impact of limited but targeted fake news. The results point to both the potential effectiveness and the limitations of fake news leveraged as part of a targeted influence campaign. The survey results also point to the dangers of modified video and audio, known as "deep fakes," in the fake news of the future. Technological improvements, including expertly crafted deep fakes, online microtargeting, smart trolls, and the potential use of artificial intelligence for content production, suggest that fake news will persist as a scourge and could present a viable threat to democratic self-governance
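The infectious-disease analogy used above can be sketched with discrete-time SIR dynamics, treating sharing as infection and debunking as an extra recovery rate. This is a hedged illustration of the general approach, not the thesis's model; all parameters are invented:

```python
def simulate(beta, gamma, check_rate, steps=200, n=1.0, i0=1e-3):
    """Discrete-time SIR for a fake-news item.

    beta       -- sharing (transmission) rate
    gamma      -- natural loss-of-interest (recovery) rate
    check_rate -- extra removal from a fact-checking countermeasure
    Returns the peak fraction of the population actively spreading the item.
    """
    s, i, r = n - i0, i0, 0.0
    peak = i
    for _ in range(steps):
        new_inf = beta * s * i / n           # exposure via sharing
        new_rec = (gamma + check_rate) * i   # loss of interest + debunking
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

# A stronger fact-checking response should lower the peak "infected" share
assert simulate(0.4, 0.1, 0.2) < simulate(0.4, 0.1, 0.0)
```

In a PRA setting, sweeps over `check_rate` and its cost give the curves a decision maker needs to weigh countermeasure spending against the reach of a campaign.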
- Also online at
-
Online 11. Improving healthcare decisions through data-driven methods and models : analysis of policies for personalized medicine [2020]
- Weyant, Christopher (Christopher Favor), author.
- [Stanford, California] : [Stanford University], 2020
- Description
- Book — 1 online resource
- Summary
-
This dissertation develops methods and models for personalized medicine. First, we develop a new modeling framework for personalizing medical treatment decisions and apply it to personalize selection of antipsychotic drugs for patients with schizophrenia. We project that use of this framework can substantially and cost-effectively improve patient health outcomes. Second, we demonstrate potential adverse effects of partial personalization, which we define as personalization based on a subset of patient-specific risks and preferences. We develop a new method for partial personalization and show that it avoids these potential adverse effects. Third, we develop a method for simplifying complex models for personalization and apply it to simplify the model that we developed for personalized selection of antipsychotic drugs. This method allows for determination of the optimal degree of personalization and improves the computational performance and interpretability of the original model. Finally, we illustrate how personalized medicine approaches can be used to evaluate policies for population-level health problems. Using a personalized medicine approach, we project the health impacts of climate-change-induced nutritional deficiencies and optimal mitigation strategies. We conclude with a discussion of directions for further research
- Also online at
-
Online 12. Information-directed sampling for reinforcement learning [2020]
- Lu, Xiuyuan, author.
- [Stanford, California] : [Stanford University], 2020
- Description
- Book — 1 online resource
- Summary
-
Reinforcement learning has enjoyed a resurgence in popularity over the past decade thanks to the ever-increasing availability of computing power. Many success stories of reinforcement learning seem to suggest a potential gateway to creating intelligent agents capable of performing tasks with human-level proficiency. However, many state-of-the-art reinforcement learning algorithms require a tremendous amount of simulated data, which is not practical when data is generated from actual interactions in the real world. Addressing data efficiency will be crucial for making reinforcement learning practical for real-world applications. In this dissertation, we take an information-theoretic approach to reasoning about how an agent should acquire information in an environment to improve decision-making. We generalize the information-directed sampling (IDS) decision rule from the online decision-making literature to reinforcement learning. This decision rule aims to acquire useful information about the environment while taking into consideration the costs of information acquisition. We argue that IDS can demonstrate desirable information-seeking behaviors in a reinforcement learning problem where existing methods fail. We hypothesize that in practical environments that are typically rich in observations, IDS has the potential to significantly improve data efficiency relative to existing exploration schemes. Furthermore, we analyze the expected regret of IDS for three stylized classes of environments: linear bandits, tabular Markov decision processes (MDPs), and factored MDPs. We derive regret bounds that are nearly competitive with the state of the art, demonstrating the promise of our information-theoretic design concept. Lastly, the form of IDS studied in this dissertation should be viewed as an agent design concept rather than a concrete algorithm.
Major work needs to be done to design practical algorithms that preserve the benefits of this conceptual decision rule while being computationally tractable. We highlight some key aspects for designing a practical IDS agent and propose several research directions for addressing each aspect
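At its core, the IDS decision rule trades expected regret against expected information gain by minimizing their ratio. The sketch below restricts IDS to deterministic choice over single actions (the full rule optimizes over randomized action distributions), and all the per-action numbers are invented:

```python
def ids_action(regret, info_gain):
    """Deterministic single-action IDS: argmin of regret^2 / information gain.

    regret[i]    -- expected single-step regret of action i
    info_gain[i] -- expected information gained about the environment by i
    """
    ratios = [(r * r / g) if g > 0 else float('inf')
              for r, g in zip(regret, info_gain)]
    return ratios.index(min(ratios))

# Invented example: three actions in some environment
regret    = [0.1, 0.3, 0.5]
info_gain = [0.02, 0.1, 1.0]

# Action 2 is chosen despite its higher regret, because it is far more
# informative -- the information-seeking behavior described above
assert ids_action(regret, info_gain) == 2
```

Estimating the `regret` and `info_gain` quantities from the agent's posterior is where the real computational difficulty lies, which is why the dissertation frames IDS as a design concept rather than a ready-made algorithm.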
- Also online at
-
Online 13. Online assignment mechanisms with applications in resource allocation [2020]
- Shameli, Seyed Ali, author.
- [Stanford, California] : [Stanford University], 2020
- Description
- Book — 1 online resource
- Summary
-
We study online assignment mechanisms in two contexts: online assignment under fairness constraints and the online Bayesian selection problem under combinatorial constraints. In both cases, we design mechanisms that achieve a constant fraction of the optimum online policy's objective value. One common challenge in obtaining our results is generating a solution that satisfies all the structural constraints imposed by the problem; in doing so, we introduce several novel techniques for achieving and rounding our results
- Also online at
-
Online 14. Online linear programming : algorithm design and analysis [2020]
- Li, Xiaocheng (Researcher in analytics and operations), author.
- [Stanford, California] : [Stanford University], 2020
- Description
- Book — 1 online resource
- Summary
-
We discuss the problem of online linear programming (LP), in which the objective function and constraints are observed sequentially and are not known a priori. We consider (i) a stochastic input model, where the columns of the constraint matrix, along with the corresponding coefficients in the objective function, are generated i.i.d. from an unknown distribution and revealed sequentially over time, and (ii) a random permutation model, where the constraint matrix and the corresponding coefficients arrive in a randomly permuted order. Under the stochastic input model, we first establish convergence properties of the dual optimal solution to a large-scale LP problem and develop an adaptive learning algorithm that improves on previous algorithms' performance by taking into account the past input data as well as decisions already made. Finally, we present a fast algorithm for approximately solving a class of large-scale binary integer LPs. The algorithm is free of matrix multiplication and requires only a single pass over the inputs of the integer LP
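The flavor of dual-price-based online LP under the stochastic input model can be sketched with a single-resource toy: accept an arriving column when its reward beats the shadow cost of the resource it consumes, and adapt the price to the pace of spending. This is a hypothetical simplification, not the dissertation's algorithm; the price-update step size and all distributions are invented:

```python
import random

random.seed(0)
T, budget = 1000, 250.0   # horizon and total resource
price = 0.0               # dual-price estimate, adapted online
remaining = budget
value = 0.0

for t in range(1, T + 1):
    r = random.random()   # i.i.d. reward coefficient of the arriving column
    a = random.random()   # i.i.d. resource consumption of the column
    if remaining >= a and r > price * a:
        # accept: reward exceeds the shadow cost of the resource used
        remaining -= a
        value += r
    # adaptive update: raise the price when spending runs ahead of
    # the budget pace budget*t/T, lower it (down to zero) when behind
    spent = budget - remaining
    price = max(0.0, price + 0.01 * (spent - budget * t / T))

assert 0.0 <= remaining <= budget and value > 0.0
```

Because expected total demand here exceeds the budget, the price rises until it rations acceptances to roughly the budget pace; the adaptive algorithms described above refine this idea by re-solving for the dual price from all past data.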
- Also online at
-
Online 15. Secure by default : a behavioral approach to cyber security [2020]
- Simoiu, Camelia Valentina, author.
- [Stanford, California] : [Stanford University], 2020
- Description
- Book — 1 online resource
- Summary
-
Most computer systems that interface with the internet today presume that users will adopt additional security measures to protect themselves against phishing and malware attacks, and that they are capable of configuring software to obtain optimal security. This assumption is worrying, as prior work has repeatedly shown that not all computer users face similar levels of risk, and at-risk users may not have the resources or know-how to attain optimal levels of security. The first part of this thesis conducts an empirical analysis of the HTTPS configuration of over 4 million websites in order to assess the security posture of the ecosystem, as well as the factors that influence operators' security decisions. We show that while most websites have secure configurations, this is largely due to major cloud providers that supply secure defaults; individually configured servers are more often insecure than not. We show that both server software defaults and online configuration recommendations are frequently insecure, and conclude with lessons for improving the HTTPS ecosystem. Among these is the recommendation that server software should provide optimal security by default, thereby removing the burden of achieving optimal security from users. As technologies to defend against phishing and malware (e.g., two-factor authentication or security keys) often impose an additional financial and usability cost on users, a key question is who should adopt these heightened protections. The second part of the thesis uses computational and survey methods to construct data-driven tools that identify users at risk of (1) malware, with a special focus on ransomware, and (2) e-mail-based phishing and malware. We measure over 287 phishing and malware attacks against Gmail users to identify the factors that place a user at heightened risk of attack.
Secondly, we present a machine learning model that draws on detailed web browsing behavior to predict users at risk of malware infection the following month; lastly, we develop and administer a survey to a representative sample of the U.S. population to first, provide a representative estimate of the prevalence of ransomware attacks within the general population, and second, to develop a proof-of-concept self-assessment of future ransomware risk
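The summary above describes a model that scores users' risk of malware infection from browsing behavior so that heightened protections can be targeted at those most at risk. A minimal sketch of that idea, not the dissertation's actual model — the feature names, weights, and threshold here are illustrative assumptions:

```python
import math

# Hypothetical weights a trained model might assign to browsing-behavior
# features (positive = associated with higher infection risk).
WEIGHTS = {
    "hours_browsing_per_day": 0.30,
    "downloads_per_week": 0.45,
    "visits_to_low_reputation_sites": 0.80,
    "uses_ad_blocker": -0.60,
}
BIAS = -2.0

def risk_probability(features: dict) -> float:
    """Logistic model: map a weighted feature sum to a probability in (0, 1)."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def flag_at_risk(users: dict, threshold: float = 0.5) -> list:
    """Return the IDs of users whose predicted risk exceeds the threshold."""
    return [uid for uid, f in users.items() if risk_probability(f) > threshold]
```

Flagged users could then be nudged toward the costlier protections (e.g., security keys) that the thesis argues should not be imposed on everyone.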
- Also online at
-
Online 16. User community innovation : implications for firm strategy, organizing and performance [2020]
- Bremner, Robert Peter, author.
- [Stanford, California] : [Stanford University], 2020
- Description
- Book — 1 online resource
- Summary
-
Community-based innovation has recently emerged as a focal point of research on strategy and innovation. Scholars have argued that communities have several advantages over firms, such that managers must increasingly consider how best to compete and collaborate with this unique organizational form. Extant empirical work provides significant insight. However, it remains unclear how and when firms benefit from user community-based innovation. Across three tightly-linked papers, this dissertation addresses this gap. The first is a comprehensive review of prior work that studies user communities as they relate to firm strategy and performance. The second paper is a comparative case study of innovation by two civilian drone ventures—one that organized as a community, the other that organized as a firm. Finally, using a novel panel dataset of 2,586 video game development projects, the third paper develops and tests theory on whether firms can improve performance via learning from communities. Together, these papers contribute several interrelated findings to the literature on firm strategy, organization theory and user community innovation, as well as several insights for management practice.
- Also online at
-
Online 17. Bootcamps : a new path for occupational entry [2019]
- Kaynak, Fatma Ece, author.
- [Stanford, California] : [Stanford University], 2019.
- Description
- Book — 1 online resource.
- Summary
-
This dissertation utilizes a unique setting—coding bootcamps—to examine how workers attempt to reskill themselves for growing job areas without traditional organizations serving as the backdrop for their actions. I draw on 80 semi-structured interviews and observations conducted at two bootcamps in Silicon Valley over the course of 17 months of fieldwork. My findings suggest that bootcamps resembled learning collectives where self-learning and learning from peers and near-peers figured more prominently than expert instruction. Under conditions of minimal expert instruction and obstacles to legitimate peripheral participation, I show how aspiring software developers sought out an occupational community in virtual spaces, learned asynchronously from unknown others, developed their practice through mock work among themselves and managed to get hired as a new category of occupational entrant—the bootcamp graduate. This dissertation contributes to our understanding of employability management practices and under-institutionalized learning and socialization processes in contemporary careers.
- Also online at
-
Online 18. Cyber risk management : AI-generated warnings of threats [2019]
- Faber, Isaac Justin, author.
- [Stanford, California] : [Stanford University], 2019.
- Description
- Book — 1 online resource.
- Summary
-
This research presents a warning systems model in which early-stage cyber threat signals are generated using machine learning and artificial intelligence (AI) techniques. In practice, cybersecurity is most often reactive: because it relies on manual human forensics of machine-generated data, security efforts begin only after a loss has taken place. The current security paradigm can be significantly improved. Cyber-threat behaviors can be modeled as a set of discrete, observable steps called a 'kill chain.' Data produced from observing early kill chain steps can support the automation of manual defensive responses before an attack causes losses. However, early AI-based approaches to cybersecurity have been sensitive to exploitation and burdened by excessive false-positive rates, resulting in low adoption and low trust from human experts. To address the problem, this research presents a collaborative decision paradigm with machines making low-impact/high-confidence decisions based on human risk preferences and uncertainty thresholds. Human experts only evaluate signals generated by the AI when decisions exceed these thresholds. This approach unifies core concepts from the disciplines of decision analysis and machine learning by creating a super-agent. An early warning system using these techniques has the potential to avoid more severe downstream consequences by disrupting threats at the beginning of the kill chain.
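The collaborative decision paradigm described above can be sketched as a simple triage rule: the machine acts autonomously only on low-impact, high-confidence signals, and escalates everything else to a human analyst. This is an illustration, not the dissertation's implementation; the `Alert` fields and threshold values are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    kill_chain_step: str   # e.g., "reconnaissance", "delivery", "exploitation"
    confidence: float      # model confidence in the threat signal, in (0, 1)
    impact: float          # estimated cost of acting on the alert, in (0, 1)

# Thresholds encoding the human expert's risk preferences.
CONFIDENCE_FLOOR = 0.9   # act automatically only above this confidence...
IMPACT_CEILING = 0.2     # ...and only below this impact

def triage(alert: Alert) -> str:
    """Automate low-impact/high-confidence decisions; escalate the rest."""
    if alert.confidence >= CONFIDENCE_FLOOR and alert.impact <= IMPACT_CEILING:
        return "auto-block"        # machine disrupts the threat early
    return "escalate-to-human"     # decision exceeds the expert's thresholds
```

Tightening or loosening the two thresholds is how, in this sketch, human risk preferences shape what the machine is allowed to decide on its own.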
- Also online at
-
- Taggart, John Marshall, author.
- [Stanford, California] : [Stanford University], 2019.
- Description
- Book — 1 online resource.
- Summary
-
This dissertation examines the potential for electric vehicles to act as a viable technology pathway to deep decarbonization of the transportation sector. Taking an integrative and multi-disciplinary approach, this research applies theories of technology development and diffusion, social welfare optimization, life cycle analysis, and empirical modeling to better understand the processes and prospects for widespread adoption, as well as the emissions benefits that could accrue from a full market transition. The first chapter develops an endogenous model of market diffusion that incorporates positive feedback effects such as learning-by-doing and network externalities, then uses it to consider what an optimal subsidy policy regime could look like in different representative scenarios. The next chapter goes more in-depth on emissions, presenting the first comparative full life cycle analysis of greenhouse gas emissions for mass market, long-range battery electric vehicles, as compared to internal combustion engine vehicles. Lastly, data from real-world trips are utilized to explore heterogeneity in vehicle efficiency and range under different trip conditions, factors which could have significant implications for the scalability of electric vehicle technology. Broadly, results show very large electric vehicle emissions reductions across all markets, robustness of key performance metrics to local climate conditions, and positive social welfare impacts of subsidies from accelerating early market adoption.
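The comparative emissions analysis described above turns, in its simplest form, on how grid carbon intensity scales a battery electric vehicle's use-phase emissions relative to a gasoline vehicle's. A back-of-the-envelope sketch, not the dissertation's full life cycle analysis — all numeric values here are stated assumptions, chosen only to show the arithmetic:

```python
def ev_gco2_per_km(grid_gco2_per_kwh: float, kwh_per_km: float = 0.18) -> float:
    """Use-phase EV emissions: electricity consumed times grid intensity."""
    return grid_gco2_per_kwh * kwh_per_km

def ice_gco2_per_km(litres_per_100km: float = 7.0,
                    gco2_per_litre: float = 2392.0) -> float:
    """Use-phase gasoline emissions from fuel consumption and combustion."""
    return litres_per_100km / 100.0 * gco2_per_litre
```

Under these assumed figures, the EV's use-phase emissions stay below the gasoline vehicle's even on a fairly carbon-intensive grid, which is consistent with the summary's finding of large EV emissions reductions across markets (a full analysis would also count manufacturing, battery production, and fuel-supply-chain emissions).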
- Also online at
-
Online 20. Equilibria in two-settlement markets : market power, uncertainty, and risk aversion [2019]
- Roumpani, Maria, author.
- [Stanford, California] : [Stanford University], 2019.
- Description
- Book — 1 online resource.
- Summary
-
Forward contracting can deliver benefits in terms of risk hedging, price discovery, and market power mitigation. Aiming to advance the understanding of its role in market design, I examine how the introduction of a forward market changes the strategic and hedging incentives of participants. I challenge two assumptions common in the literature: (i) one-sided market power, the assumption that only sellers exercise market power, and (ii) perfect arbitrage, the equivalence of the forward and (expected) spot price. I provide a framework in which oligopolists face oligopsonists in two sequential markets and allow for a price premium to arise endogenously as the result of hedging and strategic considerations. The introduction of a forward market is found to increase efficiency but does not necessarily do so in a Pareto-improving way. The distribution of potential benefits to sellers and buyers depends on their risk aversion and market power. Policy implications are discussed.
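The perfect-arbitrage assumption the summary challenges can be written compactly; the notation below is mine for illustration, not necessarily the dissertation's:

```latex
% Perfect arbitrage: the forward price f equals the expected spot price p.
f = \mathbb{E}[p]
% Relaxing this assumption lets a premium \pi arise endogenously from the
% hedging and strategic incentives of oligopolists and oligopsonists:
f = \mathbb{E}[p] + \pi
```

In the relaxed formulation, the sign and size of the premium depend on participants' risk aversion and market power, which is what allows the forward market's efficiency gains to fall unevenly on sellers and buyers.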
- Also online at
-