
349 catalog results

Book
1 online resource.
This thesis covers several applications of Operations Research in the domains of finance and healthcare. There are three chapters, each covering a different application. Chapter 1 applies techniques from deep learning to estimate mortgage risk. Feature engineering is nearly eliminated, replaced by models with several thousand parameters that require large amounts of training data. The predictive performance of these models strongly exceeds that of baseline models, especially for predicting prepayments. This increased accuracy, however, comes at the cost of a more opaque model that is harder for a human to interpret than simpler models such as logistic regression, which are currently the industry standard. The work in Chapter 2 explores the design of policies for biometric authentication. It first develops a model for the joint distribution of similarity scores associated with different fingers and irises. In the second step, this model is harnessed to design near-optimal multi-stage authentication policies that are robust to gaming, can be computed in real time, and are personalized for optimal performance. The work shows that a reduction of several orders of magnitude in the error rates is achievable solely by changing the authentication policies, leaving the hardware unchanged. Chapter 3 is motivated by a humanitarian goal in developing countries: reducing child mortality by identifying effective interventions at the planning stage. It takes a descriptive health model (called LiST), which estimates child mortality given coverage of interventions, and embeds it into an optimization engine in order to minimize mortality under a fixed budget. In doing so, it allows LiST to be used in a prescriptive framework, where policymakers can identify the optimal intervention set at a fixed budget as well as recognize the trade-off between mortality reduction and budget allocation. 
We find that a greedy strategy offers near-optimal performance with ease of implementation. The findings also highlight the critical role that optimization plays in mortality reduction.
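The greedy strategy can be sketched as a benefit-per-cost ranking under a budget. A minimal Python sketch, with hypothetical interventions and numbers (not drawn from LiST):

```python
# Hypothetical interventions: (name, unit cost, expected child deaths averted).
# All names and numbers are illustrative, not taken from LiST.
interventions = [
    ("oral rehydration", 10, 40),
    ("vaccination", 25, 80),
    ("bed nets", 15, 45),
    ("nutrition supplements", 30, 50),
]

def greedy_plan(interventions, budget):
    """Select interventions in decreasing benefit-per-cost order until the budget binds."""
    plan, spent, averted = [], 0, 0
    for name, cost, benefit in sorted(interventions, key=lambda x: x[2] / x[1], reverse=True):
        if spent + cost <= budget:
            plan.append(name)
            spent += cost
            averted += benefit
    return plan, spent, averted
```

With a budget of 50, this sketch selects the three highest-ratio interventions; the near-optimality of such greedy rules for this problem is the chapter's empirical finding, not a general guarantee.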
Book
1 online resource.
Better understanding energy, including how to harness and utilise it effectively and what the implications of its use are from the individual to the aggregate level, is an important goal for business, policy making, and research. Growing evidence pointing to the serious global consequences of society's energy usage further increases the importance of this goal. Furthermore, it is increasingly clear that the costs and benefits of energy use are experienced disproportionately across the population. The changing nature of individual decision-making and how the actions of individuals affect an entire economic system also play important roles in our effort to implement effective energy and environmental policy. Both on the demand-side and the supply-side, trends of decentralisation and the increased adoption of more granular technologies are making it increasingly important to understand the role of the individual and their behaviour within the broader energy-economic-environmental system. This dissertation contributes to the intersection of the energy, environmental, and behavioural economics literatures with an emphasis on policy efficacy. It considers questions of efficient retail electricity pricing, externality-correcting policies, the role of behavioural bias in consumer decision making, and the implications of such bias for policy efficacy. Its approach is grounded in neoclassical economics, but it brings the newer field of behavioural economics into the fold where its inclusion strengthens the ability to answer the question at hand. There are two main projects in this dissertation. The first is an in-depth evaluation of the rooftop solar pricing policy of Net Energy Metering (NEM) from both economic efficiency and distributional equity perspectives. Treatment of this topic is approached first theoretically and then complemented with a suite of numerical simulations. 
The second project represents an attempt to codify and unify considerations of behavioural bias in individual decision making and their implications for optimal policy. In this way it showcases behavioural economics as an extension of, rather than a challenge to, neoclassical theory.
Book
1 online resource.
Cardiovascular disease (CVD) is the leading cause of morbidity and mortality in the United States (US). In addition, CVD remains a major cause of health disparities and rising health care costs. Because cardiovascular outcomes depend strongly on multiple patient characteristics that evolve stochastically over time, this dissertation focuses on developing novel mathematical models to capture heterogeneous population characteristics, and to support actionable policy recommendations and informed decision-making in CVD prevention and control. Efficient prevention and control of CVD includes two types of interventions: primary prevention to prevent the onset of disease, such as lifestyle interventions, and secondary prevention with drug treatment to reduce the progression of disease. In Chapter 2, given recent data on the relationship between sodium intake, hypertension, and associated cardiovascular diseases, we examine the impact of population-wide expansion of the National Salt Reduction Initiative (NSRI), in which food producers agree to lower sodium to levels deemed feasible for different foods. We developed and validated a stochastic microsimulation model of hypertension, myocardial infarction (MI) and stroke in the US population. The model follows individual dietary intakes and CVD risk factors of the population stratified by demographic characteristics. We find that expanding the NSRI nationwide is expected to significantly reduce hypertension and hypertension-related CVD morbidity and mortality among the majority of the population, even in the context of compensatory consumer behaviors. But older women in particular may be at risk for excessively low sodium intake. This suggests that careful consideration should be given to how to target such large-scale population-wide policy interventions to minimize adverse effects among the most vulnerable. Large socioeconomic disparities exist in US dietary habits and CVD morbidity and mortality. 
While high fruit and vegetable (F&V) consumption is thought to reduce the risk of chronic diseases, F&V intakes are particularly low among low-socioeconomic groups. Providing financial incentives for F&V purchases among low-income households has been demonstrated to increase F&V consumption. In Chapter 3, we extend the microsimulation model developed in Chapter 2 to evaluate lifetime costs and health outcomes of subsidizing F&V purchases among Supplemental Nutrition Assistance Program (SNAP) participants in the US, and to assess its impact on CVD disparities. In this model, type II diabetes and obesity were included in addition to MI and stroke to capture the complex interrelationship between changes in F&V prices and their effects on health outcomes. We find that nationwide expansion of the F&V subsidy among SNAP participants would be expected to significantly lower incidence of long-term chronic diseases in the US and would be cost-saving under a wide range of scenarios. The benefits would accumulate the most among demographic groups for whom healthcare interventions alone have not been sufficient to reduce large disparities in CVD incidence that have been attributed in part to poor nutrition. Moreover, these benefits would likely persist even if the incentive is imperfectly implemented. As a form of secondary CVD prevention, hypertension management has traditionally been guided by a treat-to-target strategy focusing on achieving specific levels of blood pressure. Because treatment benefits depend on multiple patient characteristics, it has been recommended to personalize blood pressure treatment decisions. In Chapter 4, we develop a Markov decision process (MDP) model for determining the optimal blood pressure treatment for an individual, incorporating a complex variety of individual-level covariates, treatment effect modifiers, and risks and benefits of treatment alternatives. 
We find that the MDP-based treatment approach would improve overall population health compared to current blood pressure treatment guidelines, and it would be cost-saving. In order to improve the usability and interpretability of the MDP model developed in Chapter 4, we use data-analytic techniques in Chapter 5 to approximate the optimal blood pressure treatment policies derived from the MDP model. We create a more easily interpretable treatment planning framework that offers results comparable to the optimal decision rules determined by the MDP. In this dissertation, we present the use of data-driven methods in modeling health policy and healthcare decisions, focusing on CVD prevention and control. The methods we used can be adapted to other diseases and settings to promote informed decision-making.
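As an illustration of the MDP machinery involved, a minimal value-iteration sketch for a toy blood-pressure model; the three states, two actions, transition probabilities, and rewards below are all hypothetical, far simpler than the covariate-rich model of Chapter 4:

```python
import numpy as np

# Toy 3-state, 2-action blood pressure MDP; states are low/elevated/high BP,
# actions are 0 = no treatment, 1 = treat. All probabilities and rewards are
# hypothetical, for illustration only.
P = np.array([
    [[0.8, 0.2, 0.0],    # untreated: BP tends to drift upward
     [0.1, 0.7, 0.2],
     [0.0, 0.2, 0.8]],
    [[0.9, 0.1, 0.0],    # treated: BP tends to drift downward
     [0.3, 0.6, 0.1],
     [0.1, 0.4, 0.5]],
])
r = np.array([[1.00, 0.80, 0.20],   # QALY-like reward per state, untreated
              [0.95, 0.85, 0.50]])  # treated: small disutility of medication
gamma = 0.97                        # discount factor

V = np.zeros(3)
for _ in range(2000):                 # value iteration
    Q = r + gamma * P @ V             # Q[a, s]: value of action a in state s
    V_new = Q.max(axis=0)
    if np.abs(V_new - V).max() < 1e-10:
        break
    V = V_new
policy = Q.argmax(axis=0)             # best action in each state
```

The actual model conditions these quantities on individual covariates; the fixed-point structure of the solution is the same.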
Book
1 online resource.
How do large-scale shocks affect individuals and firms? Answering that question has been a motivating theme in my research agenda, which is taken up here through the lens of foreclosures on home mortgages during the financial crisis. Foreclosures grew by 200-400% across counties, and foreclosures per open mortgage grew by a factor of five between 2006 and 2011. The surge in foreclosures was unprecedented, far above levels observed in the post-war U.S. era. This dissertation examines the causal effects of these foreclosures on the real economy in several dimensions. The first chapter examines how county foreclosures affect local labor market outcomes, such as employment and hiring, and the mechanism behind these declines. Foreclosures are instrumented using quasi-experimental variation in the timing and magnitude of interest rate changes on adjustable rate mortgages, which involve contractually pre-determined changes in monthly mortgage payments based on national interest rates. These interest rate changes generate substantial movement in the frequency of foreclosures, but are plausibly exogenous because of their idiosyncratic and discontinuous nature. I find declines in labor market outcomes operating primarily through a banking-sector channel. The second chapter examines how zipcode foreclosures affect individual well-being and community dynamism. Measures of well-being are obtained through a partnership with Gallup over their U.S. Daily Poll, capturing overall sentiment. Using the same identification strategy, I show that increases in foreclosures are associated with declines in well-being even among individuals who do not experience foreclosure themselves, consistent with spillover effects. The third chapter examines what happened to the individuals who were foreclosed upon and why. How did they fare, and why did they move where they did? 
An important theme in this paper is the importance of local factors—for example, if local factors deteriorate, then individuals are less likely to move to a better subsequent location, reflecting the importance of bargaining power in the job search process. Much more work is required to understand how mortgage markets interact with firm outcomes and the underlying mobility decisions among individuals. I look forward to pursuing further work in these areas in the years that follow.
Book
1 online resource.
Every person has hundreds, if not thousands, of habits that they rarely think about having. Every time a person exercises a habit, they automatically choose a course of action. Every time that choice is made is an opportunity to apply decision analysis. However, normal decision analysis is too costly to use for all of a person's habits. In this dissertation I look at ways that decision analysis can be applied to a person's habits to ensure they are good ones. If they are not, they must be updated to match an ever-changing environment, new information, and preferences that evolve over time. Expanding decision analysis to include the numerous tiny decisions that are made with habits opens up the opportunity for massive gains in value when each tiny improvement is aggregated over many habits and a lifetime of use.
Book
1 online resource.
Over the last several decades, the world has become increasingly reliant on the space domain. With the insertion of new spacecraft in various orbits, the risk of collisions with spacecraft or debris has increased. Currently, several space surveillance systems provide signals of such impending collisions and opportunities for owners and operators to respond to these alerts. The monitoring systems, however, are imperfect, and the current number of sensors of different types, costs, and levels of accuracy may need to be increased to allow better space traffic management. The objective of this dissertation is to estimate and optimize the benefits of such improvements. We first present a general model to evaluate sensor systems based on a Bayesian updating of the prior probability of collision given two types of independent signals: some from more accurate but more expensive sensors, and some from less accurate but cheaper sensors. We use this model to evaluate the risks of collision of space assets given the current monitoring systems. We then assess the risk of losing a satellite constellation, which is where the satellites' values reside. Next, we assess the risk reduction value of adding one or more sensors of either type to the current monitoring systems, based on the classic notion of value of information. For simplicity, we assume a constant risk aversion in rational decisions, both for the sensor system managers and for the satellite operators. We illustrate the model with a fictionalized version of the United States Space Surveillance Network operated by the United States Air Force, and other sensor systems of lesser accuracy. Together, these networks collect optical and radar data, which allow synthesis of observations into a coherent picture of space. The results of this analysis can be used to support the selection of an optimal number of additional space surveillance sensors, either "large" or "small," under resource constraints. 
This choice is sensitive to the risk aversion of both the users of the signals and the managers of the monitoring systems. The method can be extended to other types of monitoring system with similar structures.
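The Bayesian updating step at the core of the model can be sketched as an odds-ratio calculation over independent signals; the detection and false-alarm rates below are hypothetical:

```python
def posterior_collision(prior, signals):
    """Bayesian update of a prior collision probability given independent sensor signals.

    Each signal is a tuple (alarm, p_detect, p_false_alarm): whether the sensor
    alarmed, its detection probability, and its false-alarm probability.
    All rates here are illustrative, not calibrated sensor characteristics.
    """
    odds = prior / (1.0 - prior)
    for alarm, p_d, p_fa in signals:
        # Likelihood ratio of the observed reading under "collision" vs "no collision".
        lr = p_d / p_fa if alarm else (1.0 - p_d) / (1.0 - p_fa)
        odds *= lr
    return odds / (1.0 + odds)
```

For example, `posterior_collision(0.001, [(True, 0.99, 0.01), (True, 0.8, 0.3)])` combines an alarm from an accurate, expensive sensor with one from a cheap, noisy sensor; each alarm multiplies the prior odds by that sensor's likelihood ratio.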
Book
1 online resource.
We study three related applications, in the field of finance, and in particular of multi-period investment management, of convex optimization and model predictive control. First, we look at the classical multi-period trading problem, which consists of trading assets within a certain universe over a sequence of periods in time. We develop a framework for single- and multi-period optimization: the trades in each period are found by solving a convex optimization problem that trades off expected return, risk, transaction cost and holding cost. Second, we look at the classical Kelly gambling problem, which consists of repeatedly allocating wealth among bets so as to maximize the expected growth rate of wealth. We develop a convex constraint that controls the risk of drawdown, i.e., the risk of losing a certain (high) amount of wealth. Third, we look at an optimal execution problem, which consists of buying, or selling, a given quantity of some asset on a limit-order book market. We study the case when the execution is benchmarked to the market volume-weighted average price, and the objective is to minimize the mean-variance of the slippage. In all three cases, we provide extensive numerical simulations (using real-world data, whenever possible), developed as open-source software. In practice, these problems are solved to high accuracy in little time on commodity hardware, thanks to strong theoretical guarantees from modern convex optimization and a rich and growing ecosystem of open-source software.
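The per-period trade-off can be sketched in its simplest, unconstrained form, where dropping transaction and holding costs reduces the convex program to a linear system; all numbers are illustrative:

```python
import numpy as np

# Illustrative 3-asset single-period problem; every number is hypothetical.
mu = np.array([0.05, 0.03, 0.04])             # expected returns
Sigma = np.array([[0.10, 0.02, 0.01],         # return covariance
                  [0.02, 0.08, 0.02],
                  [0.01, 0.02, 0.09]])
gamma = 5.0                                   # risk-aversion weight

# Maximize mu @ w - gamma * w @ Sigma @ w.  The first-order condition
# mu = 2 * gamma * Sigma @ w gives the unconstrained optimum in closed form.
w_star = np.linalg.solve(2.0 * gamma * Sigma, mu)
```

Adding transaction costs, holding costs, and portfolio constraints makes the problem a general convex program solved numerically each period rather than a linear solve; the trade-off structure is unchanged.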
Book
1 online resource.
This research develops a quantitative risk analytic method to assess the risk of deterrence failure given modern nuclear weapon arsenals and policies. The model includes multiple antagonist behaviors, levels of conflict escalation, weapon capabilities and effects, and a spectrum of policies, both for protagonists and antagonists. It is based on infinite-horizon, risk-sensitive Interactive Partially Observable Markov Decision Processes. This model allows multiple agents to identify optimal policies in the management of conflict scenarios given the trade-offs between their political goals and the consequences of various forms of conflict. We develop a set of metrics for deterrence effectiveness based on the probability of specific opponent actions and on the evaluation of different conflict outcomes. The model and analysis capture complex behaviors and escalation dynamics, identify approximately optimal policies in specific conflicts, and can be extended to a large spectrum of possible scenarios. An illustration is presented, based on the analysis of fictitious data for a bilateral conflict scenario between two nuclear-armed, peer states. In this example, we evaluate various nuclear arsenals and stated policies about their use, based on the results of the model, including optimal conflict management and comparison of deterrence-effectiveness metrics. The results can provide valuable insights to policy and decision makers by allowing them to consider a spectrum of consequences involved in various alternatives.
Book
1 online resource.
This research enables decision makers to use time strategically in real-life games. At its core, it is pushing back against the assumption of an obvious decision time that is implicitly made when real-life situations are modeled as turn-based games with no sense of time. When analysts model real-life situations as turn-based, timing-free games, the application of the model's insight is stunted. The analyst has derived an optimal policy for what to do, but not when to do it, and since real-life situations rarely come with turns, a lot of the value can be lost. In every game, a first-person perspective is taken, since the model's main objective is to provide normative advice to a single decision maker. Throughout the game, a focus on the passage of time, which is discretized into time intervals, allows for players to better understand how their actions and choice of timing affect the situation. Players will often have a chance to act now or employ strategic patience, in which case they decide that waiting is their best alternative. If a player waits, the model considers the possibility of the environment changing, of a player learning more about an uncertainty, of a player's alternatives changing with time, and of an opponent taking the initiative. Players also have the ability to affect the timing of their opponents' actions, which is called controlling the tempo of the game. A player can control the tempo by controlling the information available to other players, using hard constraints like deadlines that remove future alternatives, or providing incentives for quick or slow action. The ability to indirectly increase one's own expected utility through an opponent's timing will prove to be very powerful, which is shown in three intentionally diverse case studies. The first case study explores how a coach can use the clock strategically in a basketball game. 
A coach can be strategically patient by instructing his team to shoot the ball later in the shot clock and can control the tempo of the game by having his team foul intentionally. The model uses a team's historical play-by-play data to construct a distribution over possession lengths and outcomes, which allows for specific team matchups to be explored. Here, the Indiana Pacers will be the decision maker and the Chicago Bulls will be the opponent. The model constructs an optimal offensive and defensive policy for the Pacers that could help them win an extra game per NBA season. The second case study explores how an investment banking firm can improve the results of its recruitment process for junior interns by incentivizing quick responses from students. Multiple ways that a firm can control the tempo of the game are explored. Exploding offers prove to be ineffective since the best students will be strategically patient. Offers of decreasing value prove to be more effective since they both incentivize quick action and also do not expire when a student is patient. The specific decision maker explored here is the investment bank JP Morgan, who could see improvements in recruiting by using time strategically. The third case study explores how the United States could have used time strategically in the Cuban Missile Crisis. The work reconstructs from historical primary sources what was known/believed at the time by President Kennedy and his advisors. As the President waits, he learns more about the number of offensive weapons in Cuba and the feasibility of a diplomatic solution, but the Soviet surface-to-surface missiles are more likely to become operational. Both the US and the USSR are also uncertain about the other's willingness to escalate to a nuclear exchange/full-scale war. In this context, the model provides normative advice on both what to do and when to do it despite uncertainty over what the opponent wants, knows, and has. 
The effect of modeling the Soviet Union as less than rational is also discussed. The value of the case study is the framework through which an optimal policy for the United States can be derived; this framework could potentially be applied to a modern-day crisis in which timing plays a key role.
Book
1 online resource.
This dissertation aims to advance the application of mathematical modelling and computing, in particular optimisation methods, to the planning of solutions to energy and climate problems. The work first addresses two applied modelling problems relating to the electricity sector, a sector that is a major global source of greenhouse gas emissions, but also a potential provider of low carbon energy throughout the global economy. The dissertation then closes with an investigation into the appropriate formulation of the normative models used in planning, focusing on the choice of model detail. At a high level, this work can be summarised as the development of tractable methods to incorporate necessary detail in models, followed by the introduction of a framework to understand when detail is necessary more generally. The first technical portion of this dissertation investigates how to represent intra-annual temporal variability in models of optimum electricity capacity investment. The mechanisms are shown by which inappropriate aggregation of temporal resolution can introduce substantial error into model outputs and associated economic insight, particularly in systems where variable renewable power sources are cost competitive and/or policy supported. For a sample dataset, a scenario-robust aggregation of hourly (8760) resolution down to on the order of 10 representative hours is possible when electricity demand is the only source of variability. The inclusion of wind and solar supply variability increases the resolution of the robust aggregation to the order of 1000. A similar scale of expansion is shown for representative days and weeks. These concepts, and underlying methods, can be applied to any such temporal dataset, providing a benchmark that any other aggregation method can aim to emulate. To the author's knowledge, this is the first time that the impact of variable renewable power sources on appropriate temporal representation has been quantified in this way. 
The next stage of the work considers the potential impact of emerging smart grid technologies, particularly those that enable electricity consumers to shift, automatically and optimally, their electricity demand in response to a price signal. In so doing, a model of a competitive electricity market, where consumers exhibit optimal load shifting behaviour to maximise utility and producers/suppliers maximise their profit under supply capacity constraints, is formulated and analysed. The associated computationally tractable convex optimisation formulation can be used to inform market design or policy analysis in the context of increasing availability of the smart grid technologies that enable optimal load shifting. Analytic and numeric assessment of the model allows assessment of the equilibrium value of optimal electricity load shifting, including how the value reduces as more electricity consumers adopt associated technologies. The sensitivity of the value to the flexibility of load is assessed, along with its relationship to the deployment of renewables. Additionally, a formulation of the model based on the Alternating Direction Method of Multipliers is presented. This particular optimisation method is desirable for its potential to scale to large problems. The applied modelling exercises provide examples for the final portion of the dissertation, a systematic assessment of model formulation, particularly relating to model detail. The normative models used for energy and climate planning explore long term pathways into uncharted territory. The test of predictive power used in other fields to evaluate model formulation is frequently not possible to apply in this long term context, nor necessarily makes sense in the normative context. This work introduces a conceptual framework that can potentially augment the necessary expert judgement in model formulation. 
It is based on the idea that some modelling decisions are testable, including the choice of model detail under certain conditions. The framework uses information-theoretic principles to demonstrate the tradeoff between model detail and model accuracy for a given question, and can specifically aid with representing heterogeneous spatial, temporal or population characteristics in models. This section of the dissertation represents an early attempt in a domain where limited systematic analysis has been undertaken to date.
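A minimal example of temporal aggregation of the kind analysed, blocking a load-duration curve into representative hours with weights; the data are synthetic, and (as the dissertation shows) such demand-only aggregations degrade once wind and solar variability enter:

```python
import numpy as np

rng = np.random.default_rng(0)
demand = rng.gamma(5.0, 2.0, size=8760)   # synthetic hourly demand for one year

k = 10
order = np.argsort(demand)[::-1]          # sort hours into a load-duration curve
blocks = np.array_split(order, k)         # k contiguous blocks of the curve
levels = np.array([demand[b].mean() for b in blocks])   # representative demand levels
weights = np.array([len(b) for b in blocks])            # hours carried by each level
```

By construction the weighted representatives preserve annual energy (the mean of the full series), but nothing guarantees they preserve the joint variability of demand, wind, and solar, which is why robust aggregations need far more representative points once those sources are included.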
Book
1 online resource.
This dissertation is divided into two parts, the first of which revolves around rates of convergence to equilibrium for single-server queues (consisting of Chapters 2 and 3) and the second of which revolves around asymptotic variability and large deviations for departure processes in single-server queues (consisting of Chapters 4 and 5). Throughout the thesis, significant emphasis is placed on the Brownian modeling of queues, using reflected Brownian motion (RBM) both as a direct modeling tool and as a vehicle through which to obtain insights into pre-limit queueing behavior.
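The pre-limit queueing behavior that RBM approximates can be simulated directly with the Lindley recursion for G/G/1 waiting times; the parameters below are illustrative (an M/M/1 queue at utilization 0.8):

```python
import numpy as np

# Pre-limit single-server queue that heavy-traffic RBM approximates: simulate
# customer waiting times with the Lindley recursion
#   W_{k+1} = max(0, W_k + S_k - T_{k+1}).
# Parameters are illustrative: exponential service (mean 0.8) and interarrival
# (mean 1.0) times give an M/M/1 queue with utilization 0.8.
rng = np.random.default_rng(1)
n = 200_000
service = rng.exponential(0.8, size=n)        # service times S_k
interarrival = rng.exponential(1.0, size=n)   # interarrival times T_k

W = np.zeros(n)
for k in range(n - 1):
    W[k + 1] = max(0.0, W[k] + service[k] - interarrival[k + 1])
```

For this M/M/1 instance the long-run mean wait is 3.2; as utilization approaches one, the suitably scaled waiting-time process converges to RBM, which is the sense in which the Brownian model captures pre-limit behavior.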
Book
1 online resource.
Prior research on how firms in developing countries undergo internationalization has been limited. However, as more firms have successfully internationalized in recent years, there is a need to better understand why, how, and where they choose to internationalize. This dissertation examines the internationalization of business groups in developing countries and the importance of institutional settings. Based on this examination, this dissertation advances the thesis that business groups have two paths that need to be balanced in order for business groups to thrive. The first path is to enter other institutionally close developing countries by exploiting the firm's existing resources and know-how. The second path is to enter institutionally close developed countries to acquire technological sophistication and improve their organizational learning capabilities through long-term investments. This dual approach enables business groups to survive economic downturns and outperform established industry leaders as seen during the recent financial crisis. In support of this thesis, I incorporate several prior theories on the liability of foreignness and institutional theory; investigate the role of culture in the internationalization of business groups; and analyze two in-depth case studies. This multi-pronged approach provides a framework for scholars to better understand internationalization and its associated strategic motives. This study contributes to the theory of internationalization and to the ongoing research of culture in the field of international business. Lastly, this study makes valuable recommendations to managers and government policymakers on how to optimize the resources and capabilities of business groups to support their domestic markets.
Book
1 online resource.
We introduce an online learning platform that scales collaborative learning. We study the impact of team formation and the team formation process in massive open online classes. We observe that learners prefer team members with a similar location, age range and education level. We also observe that team members in more successful teams have diverse skill sets. We model the team formation process as a cooperative game and prove the existence of stable team allocations. We propose a polynomial-time algorithm that finds a stable team allocation for a certain class of utility functions. We use this algorithm to recommend teams to learners. We show that team recommendations increase the percentage of learners who finish the class.
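One way to make the stability notion concrete is a brute-force core check: given an allocation, search for a blocking coalition in which every member strictly gains. The utility function below (skill diversity minus a crowding penalty) is a hypothetical stand-in, not the class of utilities treated by the polynomial-time algorithm:

```python
from itertools import combinations

def team_utility(team):
    """Illustrative utility: skill diversity minus a crowding penalty (hypothetical form)."""
    skills = set().union(*(m["skills"] for m in team))
    return len(skills) - 0.5 * max(0, len(team) - 3)

def member_payoff(team):
    return team_utility(team) / len(team)   # team utility split equally

def is_stable(allocation, learners, max_team=3):
    """Core-style check: no coalition (up to max_team) in which every member strictly gains."""
    current = {m["name"]: member_payoff(team) for team in allocation for m in team}
    for size in range(1, max_team + 1):
        for coalition in combinations(learners, size):
            if all(member_payoff(coalition) > current[m["name"]] for m in coalition):
                return False   # blocking coalition found
    return True
```

This check is exponential in the coalition size cap; the dissertation's contribution is precisely that, for its class of utilities, a stable allocation can be found (not just verified) in polynomial time.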
Book
1 online resource.
In this dissertation, I investigate executives' attention to competitors and the implications for firm innovation. Using a hand-collected dataset on competition, innovation, and executive experience in the full population of public U.S. enterprise infrastructure software firms, I examine attention to competitors in three empirical papers. In the first paper, I show that attending to competitors that activate opportunity-based rather than threat-based views of competition has a positive relationship with product innovation. In the second paper, I show that executives are more attentive to competitors at the periphery of an industry when the experience of the executive team amplifies (i.e. mirrors) the experience of the CEO. In the third paper, I use social network analysis to show that personal similarities between CEOs can influence competition between firms. As a whole, my dissertation suggests that executives can be strategic in how they think about competition, with tangible benefits for firm performance.
Book
1 online resource.
As electric sector stakeholders make the decision to upgrade traditional power grid architectures by incorporating smart grid technologies and new intelligent components, the benefits of added connectivity must be weighed against the risk of increased exposure to cyberattacks. Therefore, decision makers must ask: how smart is smart enough? This dissertation presents a probabilistic risk analysis (PRA) framework for this problem, involving systems analysis, stochastic modeling, economic analysis, and decision analysis to quantify the overall benefit and risk facing the network and ultimately help decision makers formally assess tradeoffs and set priorities given limited resources. Central to this approach is a new Bayes-adaptive network security model based on a reformulation of the classic "multi-armed bandits" problem, where instead of projects with uncertain probabilities of success, a network defender faces network nodes that can be attacked at uncertain Poisson-distributed rates. This new technique, which by similarity we call "multi-node bandits," takes a dynamic approach to cybersecurity investment, exploring how network defenders can optimally allocate cyber defense teams among nodes in their network. In effect, this strategy involves taking teams that traditionally respond to cyber breaches after they occur, and instead employing them in a proactive manner for defensive and information gathering purposes. We apply this model to a case study of an electric utility considering the degree to which to integrate demand response technology into their smart grid network, jointly identifying both the optimal level of connectivity and the optimal strategy for the sequential allocation of cybersecurity resources. 
Additional analytical and empirical results demonstrate how the model extends to a range of practical network security applications, including sensitivity analysis of organization-specific security factors, settings with dynamic or dependent attack rates, and defense teams that act as imperfect detectors of cyberattacks.
Book
1 online resource.
This thesis provides an in-depth analysis of two major components in the design of loyalty reward programs. First, we discuss the design of coalition loyalty programs - schemes where customers can earn and spend reward points across multiple merchant partners. And second, we conduct a model-based comparison of a standalone loyalty reward program against traditional pricing - we theoretically characterize the conditions under which it is better to run a reward program within a competitive environment. Coalition loyalty programs are agreements between merchants allowing their customers to exchange reward points from one merchant to another at agreed-upon exchange rates. Such exchanges lead to transfers of liabilities between merchant partners, which need to be frequently settled using payments. We first conduct an empirical investigation of existing coalitions, and formulate an analytical model of bargaining for merchant partners to agree upon the exchange rate and payment parameters. We show that our bargaining model produces networks that are close to optimal in terms of social welfare, in addition to cohering with empirical observations. Then, we introduce a novel alternate methodology for settling the transferred liabilities between merchants participating in a coalition. Our model has three interesting properties -- it is decentralized, arbitrage-proof, and fair against market power concentration -- which make it a real alternative to how settlements happen in coalition loyalty programs. Finally, we investigate the design of an optimal reward program for a merchant competing against a traditional pricing merchant, for varying customer populations, where customers measure their utility in rational economic terms. We assume customers are either myopic or strategic, and have a prior loyalty bias toward the reward program merchant, drawn from a known distribution. 
We show that for the reward program to perform better, it is necessary for a minimum fraction of the customer population to be strategic, and the loyalty bias distribution to be within an optimal range. This thesis is a useful read for marketers building promotional schemes within retail, researchers in the field of marketing and behavioral science, and companies investigating the intersection of customer behavior, loyalty, and virtual currencies.
Book
1 online resource.
How can entrepreneurial firms in emerging economies effectively innovate, grow, and achieve high performance? In this dissertation, I conduct a detailed examination of the institutional and relational challenges that entrepreneurial ventures in emerging economies face as well as strategies they may utilize to overcome these challenges. Three distinct papers constitute the core of this cumulative dissertation. In the first paper, I examine how inconsistencies in the institutional environment affect entrepreneurial ventures' ability to innovate and achieve high performance. In the second paper, I examine how entrepreneurial ventures may establish new organizational roles and responsibilities in spite of contradictory roles and hierarchical positions in the social lives of the entrepreneurs that founded these ventures. In the third paper, I examine how the social relationships of venture founders may sometimes give rise to malfeasance -- and how these founders may curb this distinct form of malfeasance. Taken together, I examine the institutional and relational challenges that entrepreneurial ventures in emerging economies face while also presenting insights on how such challenges may be overcome.
Book
1 online resource.
Online crowdsourcing marketplaces provide access to millions of individuals with a range of expertise and experiences. To date, however, most research has focused on microtask platforms, such as Amazon Mechanical Turk. While microtask platforms have enabled non-expert workers to complete goals like text shortening and image labeling, highly complex and interdependent goals, such as web development and design, remain out of reach. Goals of this nature require deep knowledge of the subject matter and cannot be decomposed into independent microtasks for anyone to complete. This thesis shifts away from paid microtask work and introduces diverse expert crowds as a core component of crowdsourcing systems. Specifically, this thesis introduces and evaluates two generalizable approaches for crowdsourcing complex work with experts. The first approach, flash teams, is a framework for dynamically assembling and computationally managing crowdsourced expert teams. The second approach, flash organizations, is a framework for creating rapidly assembled and reconfigurable organizations composed of large groups of expert crowd workers. Both of these approaches for interdependent expert crowd work are manifested in Foundry, which is a computational platform we have built for authoring and managing teams of expert crowd workers. Taken together, this thesis envisions a future of work in which digitally networked teams and organizations dynamically assemble from a globally distributed online workforce and computationally orchestrate their efforts to accomplish complex work.
Book
1 online resource.
Strategy formation, the process by which executives decide on the unique set of activities to create and capture value, can be the difference between a firm capitalizing on a promising new opportunity or wasting substantial resources on the new space only to watch a competitor succeed. Prior research on strategy formation has shown that it can be particularly difficult for executives in entrepreneurial settings due to the high levels of novelty and complexity that executives must tackle. As a result, effective strategy formation processes should combine the benefits of learning from experience and forming integrated understandings, yet prior research has not focused on how to effectively form strategy when both novelty and complexity are an issue. This leaves us unable to identify which processes lead to winning strategies in entrepreneurial settings. Moreover, prior work has been conducted at too high a level to discern executives' roles in forming effective complex systems of activities, given the cognitive limitations they face. This dissertation addresses these gaps with three tightly linked studies. The first study is a literature review and agenda-setting piece that categorizes prior work into a novel framework around the fundamental tension between novelty and complexity. The second study is an inductive case study of six two-sided market ventures as they struggle to find coherent strategies to capture new opportunities. The third study builds on this by leveraging simulation methods to compare the strategy formation process discovered in the case study with other complex problem solving processes and explore the environmental conditions that make each most useful. Together, the studies of this dissertation offer rich theory regarding how executives can form successful strategies for their firms in entrepreneurial settings, paying particularly close attention to how they handle interdependence in their strategic decisions. 
Overall, this research contributes to the literatures on strategy, entrepreneurship, and organizational theory.
Book
1 online resource.
In this work, I conceptualize innovation as a core institution of the world economy and society, and study the factors involved in its expansion across the globe. Specifically, I examine local conditions and global institutional pressures affecting a country's willingness to take part in the innovation economy, its level of expenditure on innovation, and the actual product of those expenditures. I use event-history models and panel regressions to test the aforementioned relationships on a sample of 132 countries from 1996 to 2012. There are four main findings. First, developing nations that are close to the advanced countries (in the core of the global political economy) are likely to follow the norms of the core and initiate actions to be considered as participants of the innovation economy. Second, countries are likely to mimic neighbors in their innovation spending patterns. Third, most developing regions of the world show signs of decoupling (producing significantly fewer innovation patents at the same level of R&D spending than the core). And, fourth, local competition is associated with mimetic isomorphism regarding innovation but also with lower levels of decoupling in the production of patents (by increasing the efficiency of R&D investments). These results suggest that there is imitation of the global core and of neighbors (oftentimes) despite sub-standard levels of efficiency, and that participating in innovation is as important as producing actual innovation.