Catalog search results

4 catalog results

Book
xix, 625 pages : illustrations ; 26 cm
  • Part I. Introduction: 1. The basic framework: potential outcomes, stability, and the assignment mechanism -- 2. A brief history of the potential-outcome approach to causal inference -- 3. A taxonomy of assignment mechanisms
  • Part II. Classical Randomized Experiments: 4. A taxonomy of classical randomized experiments -- 5. Fisher's exact P-values for completely randomized experiments -- 6. Neyman's repeated sampling approach to completely randomized experiments -- 7. Regression methods for completely randomized experiments -- 8. Model-based inference in completely randomized experiments -- 9. Stratified randomized experiments -- 10. Paired randomized experiments -- 11. Case study: an experimental evaluation of a labor-market program
  • Part III. Regular Assignment Mechanisms: Design: 12. Unconfounded treatment assignment -- 13. Estimating the propensity score -- 14. Assessing overlap in covariate distributions -- 15. Design in observational studies: matching to ensure balance in covariate distributions -- 16. Design in observational studies: trimming to ensure balance in covariate distributions
  • Part IV. Regular Assignment Mechanisms: Analysis: 17. Subclassification on the propensity score -- 18. Matching estimators (Card-Krueger data) -- 19. Estimating the variance of estimators under unconfoundedness -- 20. Alternative estimands
  • Part V. Regular Assignment Mechanisms: Supplementary Analyses: 21. Assessing the unconfoundedness assumption -- 22. Sensitivity analysis and bounds
  • Part VI. Regular Assignment Mechanisms with Noncompliance: Analysis: 23. Instrumental-variables analysis of randomized experiments with one-sided noncompliance -- 24. Instrumental-variables analysis of randomized experiments with two-sided noncompliance -- 25. Model-based analyses with instrumental variables
  • Part VII. Conclusion: 26. Conclusions and extensions.
  • (source: Nielsen Book Data)9780521885881 20161213
Most questions in social and biomedical sciences are causal in nature: what would happen to individuals, or to groups, if part of their environment were changed? In this groundbreaking text, two world-renowned experts present statistical methods for studying such questions. This book starts with the notion of potential outcomes, each corresponding to the outcome that would be realized if a subject were exposed to a particular treatment or regime. In this approach, causal effects are comparisons of such potential outcomes. The fundamental problem of causal inference is that we can only observe one of the potential outcomes for a particular subject. The authors discuss how randomized experiments allow us to assess causal effects and then turn to observational studies. They lay out the assumptions needed for causal inference and describe the leading analysis methods, including matching, propensity-score methods, and instrumental variables. Many detailed applications are included, with special focus on practical aspects for the empirical researcher.
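The potential-outcomes framework this description summarizes can be sketched with a short, hypothetical Python simulation (illustrative only, not drawn from the book): each unit carries two potential outcomes, but a randomized assignment reveals only one of them, and the difference in observed group means still recovers the average causal effect.

```python
import random

random.seed(0)

# Hypothetical potential outcomes: Y(0) if untreated, Y(1) if treated.
# The true average causal effect is 2 by construction.
units = [(y0, y0 + 2) for y0 in (random.gauss(10, 1) for _ in range(10000))]

# Randomized assignment: for each unit we observe only ONE potential outcome.
treated, control = [], []
for y0, y1 in units:
    if random.random() < 0.5:
        treated.append(y1)   # Y(0) stays unobserved -- the "fundamental problem"
    else:
        control.append(y0)   # Y(1) stays unobserved

# Difference in observed means estimates the average causal effect.
ate_hat = sum(treated) / len(treated) - sum(control) / len(control)
print(round(ate_hat, 2))  # close to the true effect of 2
```

Because assignment is independent of the potential outcomes, the comparison of observed means is unbiased even though no unit's individual effect is ever observed.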
Green Library, Stanford Libraries
FINANCE-633-01
Book
xix, 625 pages ; 26 cm
Business Library
FINANCE-633-01
Book
xxvii, 1,064 p. : ill. ; 24 cm.
  • Introduction
  • Conditional expectations and related concepts in econometrics
  • Basic asymptotic theory
  • Single-equation linear model and ordinary least squares estimation
  • Instrumental variables estimation of single-equation linear models
  • Additional single-equation topics
  • Estimating systems of equations by ordinary least squares and generalized least squares
  • System estimation by instrumental variables
  • Simultaneous equations models
  • Basic linear unobserved effects panel data models
  • More topics in linear unobserved effects models
  • M-estimation, nonlinear regression, and quantile regression
  • Maximum likelihood methods
  • Generalized method of moments and minimum distance estimation
  • Binary response models
  • Multinomial and ordered response models
  • Corner solution responses
  • Count, fractional, and other nonnegative responses
  • Censored data, sample selection, and attrition
  • Stratified sampling and cluster sampling
  • Estimating average treatment effects
  • Duration analysis.
The second edition of this acclaimed graduate text provides a unified treatment of two methods used in contemporary econometric research: cross section and panel data methods. By focusing on assumptions that can be given behavioral content, the book maintains an appropriate level of rigor while emphasizing intuitive thinking. The analysis covers both linear and nonlinear models, including models with dynamics and/or individual heterogeneity. In addition to general estimation frameworks (particularly method of moments and maximum likelihood), specific linear and nonlinear methods are covered in detail, including probit and logit models and their multivariate extensions, Tobit models, models for count data, censored and missing data schemes, causal (or treatment) effects, and duration analysis.
Econometric Analysis of Cross Section and Panel Data was the first graduate econometrics text to focus on microeconomic data structures, allowing assumptions to be separated into population and sampling assumptions. This second edition has been substantially updated and revised. Improvements include a broader class of models for missing data problems; more detailed treatment of cluster problems, an important topic for empirical researchers; expanded discussion of "generalized instrumental variables" (GIV) estimation; new coverage (based on the author's own recent research) of inverse probability weighting; a more complete framework for estimating treatment effects with panel data; and a firmly established link between econometric approaches to nonlinear panel data and the "generalized estimating equation" literature popular in statistics and other fields. New attention is given to explaining when particular econometric methods can be applied; the goal is not only to tell readers what does work, but also why certain "obvious" procedures do not. The numerous included exercises, both theoretical and computer-based, allow the reader to extend methods covered in the text and discover new insights.
(source: Nielsen Book Data)9780262232586 20160615
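The linear unobserved-effects panel models listed in the contents can be illustrated with a minimal, hypothetical simulation (variable names and numbers are invented, not the book's): the within transformation demeans each individual's data over time, differencing out the unobserved individual effect that would bias pooled OLS.

```python
import random
from collections import defaultdict

random.seed(1)
beta = 1.5  # true slope, chosen for this illustration

# Panel: 200 individuals observed for 5 periods. a_i is an unobserved
# individual effect correlated with x, so pooled OLS would be biased.
data = []
for i in range(200):
    a_i = random.gauss(0, 2)
    for t in range(5):
        x = a_i + random.gauss(0, 1)          # regressor correlated with a_i
        y = beta * x + a_i + random.gauss(0, 1)
        data.append((i, x, y))

# Within (fixed-effects) transformation: demean x and y inside each
# individual, which eliminates a_i; then run OLS on the demeaned data.
groups = defaultdict(list)
for i, x, y in data:
    groups[i].append((x, y))

num = den = 0.0
for obs in groups.values():
    xbar = sum(x for x, _ in obs) / len(obs)
    ybar = sum(y for _, y in obs) / len(obs)
    for x, y in obs:
        num += (x - xbar) * (y - ybar)
        den += (x - xbar) ** 2

beta_fe = num / den
print(round(beta_fe, 2))  # near the true beta of 1.5
```

The same slope estimated by pooled OLS on the raw (x, y) pairs would be pulled away from 1.5 because a_i enters both the regressor and the error.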
Business Library
FINANCE-633-01
Book
xiii, 373 p. : ill. ; 22 cm.
  • List of Figures -- List of Tables -- Preface -- Acknowledgments -- Organization of This Book
  • Part I: Preliminaries: 1. Questions about Questions -- 2. The Experimental Ideal (2.1 The Selection Problem; 2.2 Random Assignment Solves the Selection Problem; 2.3 Regression Analysis of Experiments)
  • Part II: The Core: 3. Making Regression Make Sense (3.1 Regression Fundamentals; 3.2 Regression and Causality; 3.3 Heterogeneity and Nonlinearity; 3.4 Regression Details; 3.5 Appendix: Derivation of the Average Derivative Weighting Function) -- 4. Instrumental Variables in Action: Sometimes You Get What You Need (4.1 IV and Causality; 4.2 Asymptotic 2SLS Inference; 4.3 Two-Sample IV and Split-Sample IV; 4.4 IV with Heterogeneous Potential Outcomes; 4.5 Generalizing LATE; 4.6 IV Details; 4.7 Appendix) -- 5. Parallel Worlds: Fixed Effects, Differences-in-Differences, and Panel Data (5.1 Individual Fixed Effects; 5.2 Differences-in-Differences; 5.3 Fixed Effects versus Lagged Dependent Variables; 5.4 Appendix: More on Fixed Effects and Lagged Dependent Variables)
  • Part III: Extensions: 6. Getting a Little Jumpy: Regression Discontinuity Designs (6.1 Sharp RD; 6.2 Fuzzy RD Is IV) -- 7. Quantile Regression (7.1 The Quantile Regression Model; 7.2 IV Estimation of Quantile Treatment Effects) -- 8. Nonstandard Standard Error Issues (8.1 The Bias of Robust Standard Error Estimates; 8.2 Clustering and Serial Correlation in Panels; 8.3 Appendix: Derivation of the Simple Moulton Factor)
  • Last Words -- Acronyms and Abbreviations -- Empirical Studies Index -- References -- Index
  • (source: Nielsen Book Data)9780691120348 20160528
The core methods in today's econometric toolkit are linear regression for statistical control, instrumental variables methods for the analysis of natural experiments, and differences-in-differences methods that exploit policy changes. In the modern experimentalist paradigm, these techniques address clear causal questions such as: Do smaller classes increase learning? Should wife batterers be arrested? How much does education raise wages? "Mostly Harmless Econometrics" shows how the basic tools of applied econometrics allow the data to speak. In addition to econometric essentials, "Mostly Harmless Econometrics" covers important new extensions - regression-discontinuity designs and quantile regression - as well as how to get standard errors right. Joshua Angrist and Jörn-Steffen Pischke explain why fancier econometric techniques are typically unnecessary and even dangerous. The applied econometric methods emphasized in this book are easy to use and relevant for many areas of contemporary social science. This book features: an irreverent review of econometric essentials; a focus on tools that applied researchers use most; chapters on regression-discontinuity designs, quantile regression, and standard errors; many empirical examples; and a clear and concise resource with wide applications.
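The differences-in-differences method highlighted in this description reduces, in the simplest two-group, two-period case, to one subtraction of subtractions. A minimal sketch (all numbers invented for illustration):

```python
# Hypothetical 2x2 differences-in-differences: compare the before/after
# change in a treated group with the change in a comparison group.
mean_outcome = {
    ("treated", "before"): 10.0,
    ("treated", "after"):  14.0,
    ("control", "before"): 9.0,
    ("control", "after"):  11.0,
}

change_treated = mean_outcome[("treated", "after")] - mean_outcome[("treated", "before")]
change_control = mean_outcome[("control", "after")] - mean_outcome[("control", "before")]

# Under the parallel-trends assumption, the control group's change estimates
# what the treated group's change would have been without the policy.
did = change_treated - change_control
print(did)  # prints 2.0
```

The subtraction of the control-group trend is what distinguishes this estimate from a naive before/after comparison, which here would overstate the effect as 4.0.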
Business Library
FINANCE-633-01