Catalog search results

349 catalog results

Book
xvii, 230 pages : illustrations ; 25 cm.
  • 1. Introduction to Dynamic Models: 1.1 Six Examples of Input/Output Dynamics (1.1.1 Smallpox in Montreal; 1.1.2 Spread of Disease Equations; 1.1.3 Filling a Container; 1.1.4 Head Impact and Brain Acceleration; 1.1.5 Compartment Models and Pharmacokinetics; 1.1.6 Chinese Handwriting; 1.1.7 Where to Go for More Dynamical Systems); 1.2 What This Book Undertakes; 1.3 Mathematical Requirements; 1.4 Overview
  • 2. DE Notation and Types: 2.1 Introduction and Chapter Overview; 2.2 Notation for Dynamical Systems (2.2.1 Dynamical System Variables; 2.2.2 Dynamical System Parameters; 2.2.3 Dynamical System Data Configurations; 2.2.4 Mathematical Background); 2.3 The Architecture of Dynamic Systems; 2.4 Types of Differential Equations (2.4.1 Linear Differential Equations; 2.4.2 Nonlinear Dynamical Systems; 2.4.3 Partial Differential Equations; 2.4.4 Algebraic and Other Equations); 2.5 Data Configurations (2.5.1 Initial and Boundary Value Configurations; 2.5.2 Distributed Data Configurations; 2.5.3 Unobserved or Lightly Observed Variables; 2.5.4 Observational Data and Measurement Models); 2.6 Differential Equation Transformations; 2.7 A Notation Glossary
  • 3. Linear Differential Equations and Systems: 3.1 Introduction and Chapter Overview; 3.2 The First Order Stationary Linear Buffer; 3.3 The Second Order Stationary Linear Equation; 3.4 The mth Order Stationary Linear Buffer; 3.5 Systems of Linear Stationary Equations; 3.6 A Linear System Example: Feedback Control; 3.7 Nonstationary Linear Equations and Systems (3.7.1 The First Order Nonstationary Linear Buffer; 3.7.2 First Order Nonstationary Linear Systems); 3.8 Linear Differential Equations Corresponding to Sets of Functions; 3.9 Green's Functions for Forcing Function Inputs
  • 4. Nonlinear Differential Equations: 4.1 Introduction and Chapter Overview; 4.2 The Soft Landing Modification; 4.3 Existence and Uniqueness Results; 4.4 Higher Order Equations; 4.5 Input/Output Systems; 4.6 Case Studies (4.6.1 Bounded Variation: The Catalytic Equation; 4.6.2 Rate Forcing: The SIR Spread of Disease System; 4.6.3 From Linear to Nonlinear: The FitzHugh-Nagumo Equations; 4.6.4 Nonlinear Mutual Forcing: The Tank Reactor Equations; 4.6.5 Modeling Nylon Production)
  • 5. Numerical Solutions: 5.1 Introduction; 5.2 Euler Methods; 5.3 Runge-Kutta Methods; 5.4 Collocation Methods; 5.5 Numerical Problems (5.5.1 Stiffness; 5.5.2 Discontinuous Inputs; 5.5.3 Constraints and Transformations)
  • 6. Qualitative Behavior: 6.1 Introduction; 6.2 Fixed Points (6.2.1 Stability); 6.3 Global Analysis and Limit Cycles (6.3.1 Use of Conservation Laws; 6.3.2 Bounding Boxes); 6.4 Bifurcations (6.4.1 Transcritical Bifurcations; 6.4.2 Saddle Node Bifurcations; 6.4.3 Pitchfork Bifurcations; 6.4.4 Hopf Bifurcations); 6.5 Some Other Features (6.5.1 Chaos; 6.5.2 Fast-Slow Systems); 6.6 Non-autonomous Systems; 6.7 Commentary
  • 7. Trajectory Matching: 7.1 Introduction; 7.2 Gauss-Newton Minimization (7.2.1 Sensitivity Equations; 7.2.2 Automatic Differentiation); 7.3 Inference; 7.4 Measurements on Multiple Variables (7.4.1 Multivariate Gauss-Newton Method; 7.4.2 Variable Weighting Using Error Variance; 7.4.3 Estimating σ²; 7.4.4 Example: FitzHugh-Nagumo Models; 7.4.5 Practical Problems: Local Minima; 7.4.6 Initial Parameter Values for the Chemostat Data; 7.4.7 Identifiability); 7.5 Bayesian Methods; 7.6 Multiple Shooting and Collocation; 7.7 Fitting Features; 7.8 Applications: Head Impacts
  • 8. Gradient Matching: 8.1 Introduction; 8.2 Smoothing Methods and Basis Expansions; 8.3 Fitting the Derivative (8.3.1 Optimizing Integrated Squared Error (ISSE); 8.3.2 Gradient Matching for the Refinery Data; 8.3.3 Gradient Matching and the Chemostat Data); 8.4 System Mis-specification and Diagnostics (8.4.1 Diagnostic Plots); 8.5 Conducting Inference (8.5.1 Nonparametric Smoothing Variances; 8.5.2 Example: Refinery Data); 8.6 Related Methods and Extensions (8.6.1 Alternative Smoothing Method; 8.6.2 Numerical Discretization Methods; 8.6.3 Unobserved Covariates; 8.6.4 Nonparametric Models; 8.6.5 Sparsity and High Dimensional ODEs); 8.7 Integral Matching; 8.8 Applications: Head Impacts
  • 9. Profiling for Linear Systems: 9.1 Introduction and Chapter Overview; 9.2 Parameter Cascading (9.2.1 Two Classes of Parameters; 9.2.2 Defining Coefficients as Functions of Parameters; 9.2.3 Data/Equation Symmetry; 9.2.4 Inner Optimization Criterion J; 9.2.5 The Least Squares Cascade Coefficient Function; 9.2.6 The Outer Fitting Criterion H); 9.3 Choosing the Smoothing Parameter r; 9.4 Confidence Intervals for Parameters (9.4.1 Simulation Sample Results); 9.5 Multi-Variable Systems; 9.6 Analysis of the Head Impact Data; 9.7 A Feedback Model for Driving Speed (9.7.1 Two-Variable First Order Cruise Control Model; 9.7.2 One-Variable Second Order Cruise Control Model); 9.8 The Dynamics of the Canadian Temperature Data; 9.9 Chinese Handwriting; 9.10 Complexity Bases; 9.11 Software and Computation (9.11.1 Rate Function Specifications; 9.11.2 Model Term Specifications; 9.11.3 Memoization)
  • 10. Nonlinear Profiling: 10.1 Introduction and Chapter Overview; 10.2 Parameter Cascading for Nonlinear Systems (10.2.1 The Setup for Parameter Cascading; 10.2.2 Parameter Cascading Computations; 10.2.3 Some Helpful Tips; 10.2.4 Nonlinear Systems and Other Fitting Criteria); 10.3 Lotka-Volterra; 10.4 Head Impact; 10.5 Compound Model for Blood Ethanol; 10.6 Catalytic Model for Growth; 10.7 Aromate Reactions
  • References. Glossary. Index.
  • (source: Nielsen Book Data) 9781493971886 20171002
This text focuses on the use of smoothing methods for developing and estimating differential equations following recent developments in functional data analysis and building on techniques described in Ramsay and Silverman (2005) Functional Data Analysis. The central concept of a dynamical system as a buffer that translates sudden changes in input into smooth controlled output responses has led to applications of previously analyzed data, opening up entirely new opportunities for dynamical systems. The technical level has been kept low so that those with little or no exposure to differential equations as modeling objects can be brought into this data analysis landscape. There are already many texts on the mathematical properties of ordinary differential equations, or dynamic models, and there is a large literature distributed over many fields on models for real world processes consisting of differential equations. However, a researcher interested in fitting such a model to data, or a statistician interested in the properties of differential equations estimated from data will find rather less to work with. This book fills that gap.
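The buffer behaviour this description centers on is easy to demonstrate. Below is an illustrative sketch, not code from the book: the first-order linear buffer dx/dt = -beta*x(t) + alpha*u(t) is the simplest case the text treats, and the rate constants, step input, and Euler step size here are all invented for the example.

```python
# A first-order linear buffer: a sudden step in the input u(t) is
# translated into a smooth, controlled exponential output response.
# beta, alpha, the step time, and dt are illustrative choices.

def euler_buffer(beta, alpha, u, x0=0.0, dt=0.01, t_end=10.0):
    """Integrate dx/dt = -beta*x + alpha*u(t) with forward Euler steps."""
    x, t, path = x0, 0.0, []
    while t < t_end:
        x += dt * (-beta * x + alpha * u(t))
        t += dt
        path.append(x)
    return path

step = lambda t: 1.0 if t >= 1.0 else 0.0   # abrupt change in the input
path = euler_buffer(beta=2.0, alpha=2.0, u=step)
# the output rises smoothly toward the steady-state gain alpha/beta = 1.0
```

The discontinuity in u produces no discontinuity in x; the buffer's rate constant beta controls how quickly the smooth response approaches its new level.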
Science Library (Li and Ma)
Book
1 online resource.
EBSCOhost Access limited to 1 user
Book
1 online resource.
  • Inference Framework and Method.- Measurement Error and Misclassification: Introduction.- Survival Data with Measurement Error.- Recurrent Event Data with Measurement Error.- Longitudinal Data with Covariate Measurement Error.- Multi-State Models with Error-Prone Data.- Case-Control Studies with Measurement Error or Misclassification.- Analysis with Error in Responses.- Miscellaneous Topics.- Appendix.- References.
  • (source: Nielsen Book Data) 9781493966387 20170925
This monograph on measurement error and misclassification covers a broad range of problems and emphasizes unique features in modeling and analyzing problems arising from medical research and epidemiological studies. Many measurement error and misclassification problems have been addressed in various fields over the years, with a wide spectrum of data, including event history data (such as survival data and recurrent event data), correlated data (such as longitudinal data and clustered data), multi-state event data, and data arising from case-control studies. Statistical Analysis with Measurement Error or Misclassification: Strategy, Method and Application brings together assorted methods in a single text and provides an update of recent developments for a variety of settings. Measurement error effects and strategies for handling mismeasurement in different models are closely examined in combination with applications to specific problems.
Readers with diverse backgrounds and objectives can utilize this text. Familiarity with inference methods (such as likelihood and estimating function theory) or modeling schemes in varying settings (such as survival analysis and longitudinal data analysis) allows a full appreciation of the material, but it is not essential, since each chapter provides basic inference frameworks and background information on an individual topic to ease access to the material. The text is presented in a coherent and self-contained manner and highlights the essence of commonly used modeling and inference methods. It can serve as a reference book for researchers interested in statistical methodology for handling data with measurement error or misclassification; as a textbook for graduate students, especially those majoring in statistics and biostatistics; or as a book for applied statisticians whose interest focuses on the analysis of error-contaminated data.
Grace Y. Yi is Professor of Statistics and University Research Chair at the University of Waterloo. She is the 2010 winner of the CRM-SSC Prize, an honor awarded in recognition of a statistical scientist's professional accomplishments in research during the first 15 years after receiving a doctorate. She is a Fellow of the American Statistical Association and an Elected Member of the International Statistical Institute.
Book
1 online resource.
  • Preface
  • Getting Started
  • Regression Models
  • Generalized Linear Models
  • Multilevel Analysis
  • Principal Components (PCA)
  • Exploratory Factor Analysis (EFA)
  • Confirmatory Factor Analysis (CFA)
  • Structural Equation Models (SEM) with Latent Variables
  • Analysis of Longitudinal Data
  • Multiple Groups
  • Appendix
  • Basic Matrix Algebra and Statistics
  • Testing Normality
  • Computational Notes on Censored Regression
  • Normal Scores
  • Assessment of Fit
  • General Statistical Theory
  • Iteration Algorithms
  • References.
This book traces the theory and methodology of multivariate statistical analysis and shows how it can be conducted in practice using the LISREL computer program. It presents not only the typical uses of LISREL, such as confirmatory factor analysis and structural equation models, but also several other multivariate analysis topics, including regression (univariate, multivariate, censored, logistic, and probit), generalized linear models, multilevel analysis, and principal component analysis. It provides numerous examples from several disciplines, and discusses and interprets the results, illustrated with sections of output from the LISREL program, in the context of the example. The book is intended for master's and PhD students and researchers in the social, behavioral, economic, and many other sciences who require a basic understanding of multivariate statistical theory and methods for their analysis of multivariate data. It can also be used as a textbook on various topics of multivariate statistical analysis.
Book
1 online resource (xvii, 327 pages).
  • Prior Processes.- Inference Based on Complete Data.- Inference Based on Incomplete Data.
  • (source: Nielsen Book Data) 9783319327884 20160912
This book presents a systematic and comprehensive treatment of various prior processes that have been developed over the past four decades for dealing with Bayesian approach to solving selected nonparametric inference problems. This revised edition has been substantially expanded to reflect the current interest in this area. After an overview of different prior processes, it examines the now pre-eminent Dirichlet process and its variants including hierarchical processes, then addresses new processes such as dependent Dirichlet, local Dirichlet, time-varying and spatial processes, all of which exploit the countable mixture representation of the Dirichlet process. It subsequently discusses various neutral to right type processes, including gamma and extended gamma, beta and beta-Stacy processes, and then describes the Chinese Restaurant, Indian Buffet and infinite gamma-Poisson processes, which prove to be very useful in areas such as machine learning, information retrieval and featural modeling. Tailfree and Polya tree and their extensions form a separate chapter, while the last two chapters present the Bayesian solutions to certain estimation problems pertaining to the distribution function and its functional based on complete data as well as right censored data. Because of the conjugacy property of some of these processes, most solutions are presented in closed form. However, the current interest in modeling and treating large-scale and complex data also poses a problem - the posterior distribution, which is essential to Bayesian analysis, is invariably not in a closed form, making it necessary to resort to simulation. Accordingly, the book also introduces several computational procedures, such as the Gibbs sampler, Blocked Gibbs sampler and slice sampling, highlighting essential steps of algorithms while discussing specific models. 
In addition, it features crucial steps of proofs and derivations, explains the relationships between different processes and provides further clarifications to promote a deeper understanding. Lastly, it includes a comprehensive list of references, equipping readers to explore further on their own.
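The Chinese Restaurant process mentioned in this description can be simulated in a few lines. This is an illustrative sketch, not code from the book; the sample size n, concentration alpha, and seed are arbitrary choices:

```python
import random

# Chinese Restaurant Process: customer i joins an occupied table k with
# probability counts[k] / (i + alpha), or starts a new table with
# probability alpha / (i + alpha). n, alpha, seed are illustrative.
def chinese_restaurant(n, alpha, seed=0):
    rng = random.Random(seed)
    counts = []                       # customers seated at each table
    seats = []                        # table index chosen by each customer
    for i in range(n):
        r = rng.uniform(0.0, i + alpha)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:               # join existing table k
                counts[k] += 1
                seats.append(k)
                break
        else:                         # open a new table
            counts.append(1)
            seats.append(len(counts) - 1)
    return seats, counts

seats, counts = chinese_restaurant(n=100, alpha=1.0)
```

The "rich get richer" choice rule is what yields the countable mixture representation of the Dirichlet process that the description refers to: the number of occupied tables grows only logarithmically in n.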
Book
1 online resource.
Book
1 online resource.
  • 1. Preliminaries.- 2. The Linear Hypothesis.- 3. Estimation.- 4. Hypothesis Testing.- 5. Inference Properties.- 6. Testing Several Hypotheses.- 7. Enlarging the Model.- 8. Nonlinear Regression Models.- 9. Multivariate Models.- 10. Large Sample Theory: Constraint-Equation Hypotheses.- 11. Large Sample Theory: Freedom-Equation Hypotheses.- 12. Multinomial Distribution.- Appendix.- Index.
  • (source: Nielsen Book Data) 9783319219295 20160619
This book provides a concise and integrated overview of hypothesis testing in four important subject areas, namely linear and nonlinear models, multivariate analysis, and large sample theory. The approach used is a geometrical one based on the concept of projections and their associated idempotent matrices, thus largely avoiding the need to involve matrix ranks. It is shown that all the hypotheses encountered are either linear or asymptotically linear, and that all the underlying models used are either exactly or asymptotically linear normal models. This equivalence can be used, for example, to extend the concept of orthogonality to other models in the analysis of variance, and to show that the asymptotic equivalence of the likelihood ratio, Wald, and Score (Lagrange Multiplier) hypothesis tests generally applies.
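The geometric approach this description refers to rests on projection matrices being idempotent. The following sketch (my illustration, not the book's; the one-column design and data are invented) builds the hat matrix P = X(X'X)^{-1}X' for a single-column X, checks PP = P, and checks that residuals are orthogonal to the column space:

```python
# Hat matrix for a one-column design: P = x x' / (x'x).
# Fitted values are the orthogonal projection of y onto span(x).
x = [1.0, 2.0, 3.0]                              # single-column design
y = [2.0, 3.0, 5.0]                              # observations

xtx = sum(v * v for v in x)                      # X'X (a scalar here)
P = [[xi * xj / xtx for xj in x] for xi in x]    # 3x3 projection matrix

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

PP = matmul(P, P)
idempotent = all(abs(PP[i][j] - P[i][j]) < 1e-12
                 for i in range(3) for j in range(3))

yhat = [sum(P[i][j] * y[j] for j in range(3)) for i in range(3)]
resid_dot_x = sum((y[i] - yhat[i]) * x[i] for i in range(3))  # ~ 0
```

Idempotence (P applied twice changes nothing) is exactly the geometric fact that projecting an already-projected vector is a no-op, which is what lets the book's development sidestep rank arguments.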
Book
xxvii, 909 p. : ill.
Book
xviii, 353 p. : ill. ; 24 cm.
  • One-Sample Problems.- Preliminaries (Building Blocks).- Graphical Tools.- Smooth Tests.- Methods Based on the Empirical Distribution Function.- Two-Sample and K-Sample Problems.- Preliminaries (Building Blocks).- Graphical Tools.- Some Important Two-Sample Tests.- Smooth Tests.- Methods Based on the Empirical Distribution Function.- Two Final Methods and Some Final Thoughts.
  • (source: Nielsen Book Data) 9780387927091 20160605
  • Provides a self-contained, comprehensive treatment of both one-sample and K-sample goodness-of-fit methods by linking them to a common theory backbone
  • Contains many data examples, including R code and a specific R package for comparing distributions
  • Emphasises informative statistical analysis rather than plain statistical hypothesis testing
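The "Methods Based on the Empirical Distribution Function" entries in the contents refer to statistics that compare an ECDF with a hypothesized CDF. Here is a minimal sketch of the simplest such statistic, the one-sample Kolmogorov-Smirnov distance against a Uniform(0,1) null (my illustration, not the book's code; the samples are fabricated):

```python
# One-sample Kolmogorov-Smirnov statistic against the U(0,1) null:
# the sup-distance between the empirical CDF and F0(x) = x.
def ks_uniform(sample):
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        # the ECDF jumps from i/n to (i+1)/n at x; compare both sides to F0(x) = x
        d = max(d, abs((i + 1) / n - x), abs(x - i / n))
    return d

# a sample spread evenly over [0,1] fits the null; a shifted one does not
good = ks_uniform([i / 100 + 0.005 for i in range(100)])
bad = ks_uniform([0.5 + i / 200 for i in range(100)])
```

A large value of the statistic is evidence against the hypothesized distribution; the book's EDF chapters develop this and its Cramér-von Mises-type relatives with proper reference distributions.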
Book
xviii, 384 p. : ill. ; 25 cm.
  • Introduction.- Matching to control bias from measured covariates.- Addressing bias from covariates that were not measured.
  • (source: Nielsen Book Data) 9781441912121 20160528
Science Library (Li and Ma)
Book
xiv, 259 p.
Book
xv, 297 p. : ill., maps ; 25 cm.
  • Second order spatial models and geostatistics.- Gibbs-Markov random fields on networks.- Spatial point processes.- Simulation of spatial models.- Statistics for spatial models.
  • (source: Nielsen Book Data) 9780387922560 20160528
Science Library (Li and Ma)
Book
1 online resource.
  • Introduction.- Wigner matrices and semicircular law.- Sample covariance matrices and the Marcenko-Pastur law.- Product of two random matrices.- Limits of extreme eigenvalues.- Spectrum separation.- Semicircle law for Hadamard products.- Convergence rates of ESD.- CLT for linear spectral statistics.- Eigenvectors of sample covariance matrices.- Circular law.- Some applications of RMT.
  • (source: Nielsen Book Data) 9781441906601 20160528
Book
1 online resource (xviii, 353 p.) : ill.
  • One-Sample Problems.- Preliminaries (Building Blocks).- Graphical Tools.- Smooth Tests.- Methods Based on the Empirical Distribution Function.- Two-Sample and K-Sample Problems.- Preliminaries (Building Blocks).- Graphical Tools.- Some Important Two-Sample Tests.- Smooth Tests.- Methods Based on the Empirical Distribution Function.- Two Final Methods and Some Final Thoughts.
  • (source: Nielsen Book Data) 9780387927091 20160605
  • Provides a self-contained, comprehensive treatment of both one-sample and K-sample goodness-of-fit methods by linking them to a common theory backbone
  • Contains many data examples, including R code and a specific R package for comparing distributions
  • Emphasises informative statistical analysis rather than plain statistical hypothesis testing
Book
xxii, 745 p. : ill. ; 24 cm.
  • Introduction.- Overview of supervised learning.- Linear methods for regression.- Linear methods for classification.- Basis expansions and regularization.- Kernel smoothing methods.- Model assessment and selection.- Model inference and averaging.- Additive models, trees, and related methods.- Boosting and additive trees.- Neural networks.- Support vector machines and flexible discriminants.- Prototype methods and nearest-neighbors.- Unsupervised learning.
  • (source: Nielsen Book Data) 9780387848570 20160619
During the past decade there has been an explosion in computation and information technology. With it have come vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics, and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It is a valuable resource for statisticians and anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees and boosting---the first comprehensive treatment of this topic in any book. This major new edition features many topics not covered in the original, including graphical models, random forests, ensemble methods, least angle regression & path algorithms for the lasso, non-negative matrix factorization, and spectral clustering. There is also a chapter on methods for "wide" data (p bigger than n), including multiple testing and false discovery rates.
Marine Biology Library (Miller), Science Library (Li and Ma)
Book
xii, 214 p. : ill. ; 25 cm.
  • Nonparametric estimators.- Lower bounds on the minimax risk.- Asymptotic efficiency and adaptation.- Appendix.- References.- Index.
  • (source: Nielsen Book Data) 9780387790510 20160528
This is a concise text developed from lecture notes and ready to be used for a course on the graduate level. The main idea is to introduce the fundamental concepts of the theory while maintaining the exposition suitable for a first approach in the field. Therefore, the results are not always given in the most general form but rather under assumptions that lead to shorter or more elegant proofs. The book has three chapters. Chapter 1 presents basic nonparametric regression and density estimators and analyzes their properties. Chapter 2 is devoted to a detailed treatment of minimax lower bounds. Chapter 3 develops more advanced topics: Pinsker's theorem, oracle inequalities, Stein shrinkage, and sharp minimax adaptivity. This book will be useful for researchers and grad students interested in theoretical aspects of smoothing techniques. Many important and useful results on optimal and adaptive estimation are provided. As one of the leading mathematical statisticians working in nonparametrics, the author is an authority on the subject.
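A minimal sketch of the kind of nonparametric density estimator treated in Chapter 1, assuming a Gaussian kernel and a fixed bandwidth h (both choices mine for illustration, not the book's notation or data):

```python
import math

# Kernel density estimator: fhat(x) = (1/(n*h)) * sum_i K((x - X_i)/h)
# with a Gaussian kernel K. Bandwidth h and the data are illustrative.
def kde(x, data, h=0.3):
    K = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return sum(K((x - xi) / h) for xi in data) / (len(data) * h)

data = [-1.2, -0.9, -1.0, 0.8, 1.1, 1.0]   # two loose clusters
peak = kde(-1.0, data)                      # near a cluster: high density
valley = kde(0.0, data)                     # between clusters: low density
```

The estimator's bias-variance trade-off in the bandwidth h is exactly the kind of question the book's minimax lower bounds and adaptivity results make precise.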
This is a revised and extended version of the French book. The main changes are in Chapter 1, where the former Section 1.3 is removed and the rest of the material is substantially revised. Sections 1.2.4, 1.3, 1.9, and 2.7.3 are new. Each chapter now has bibliographic notes and contains an exercises section. I would like to thank Cristina Butucea, Alexander Goldenshluger, Stephan Huckenmann, Yuri Ingster, Iain Johnstone, Vladimir Koltchinskii, Alexander Korostelev, Oleg Lepski, Karim Lounici, Axel Munk, Boaz Nadler, Alexander Nazin, Philippe Rigollet, Angelika Rohde, and Jon Wellner for their valuable remarks that helped to improve the text. I am grateful to the Centre de Recherche en Economie et Statistique (CREST) and to the Isaac Newton Institute for Mathematical Sciences, which provided an excellent environment for finishing the work on the book. My thanks also go to Vladimir Zaiats for his highly competent translation of the French original into English, and to John Kimmel for being a very supportive and patient editor. Alexandre Tsybakov, Paris, June 2008. Preface to the French Edition: The tradition of considering the problem of statistical estimation as that of estimation of a finite number of parameters goes back to Fisher. However, parametric models provide only an approximation, often imprecise, of the underlying statistical structure. Statistical models that explain the data in a more consistent way are often more complex: unknown elements in these models are, in general, some functions having certain properties of smoothness.
(source: Nielsen Book Data) 9781441927095 20160611
Science Library (Li and Ma)
Book
1 online resource (xii, 214 p.) : ill.
  • Nonparametric estimators.- Lower bounds on the minimax risk.- Asymptotic efficiency and adaptation.- Appendix.- References.- Index.
  • (source: Nielsen Book Data) 9780387790510 20160612
This is a revised and extended version of the French book. The main changes are in Chapter 1, where the former Section 1.3 is removed and the rest of the material is substantially revised. Sections 1.2.4, 1.3, 1.9, and 2.7.3 are new. Each chapter now has bibliographic notes and contains an exercises section. I would like to thank Cristina Butucea, Alexander Goldenshluger, Stephan Huckenmann, Yuri Ingster, Iain Johnstone, Vladimir Koltchinskii, Alexander Korostelev, Oleg Lepski, Karim Lounici, Axel Munk, Boaz Nadler, Alexander Nazin, Philippe Rigollet, Angelika Rohde, and Jon Wellner for their valuable remarks that helped to improve the text. I am grateful to the Centre de Recherche en Economie et Statistique (CREST) and to the Isaac Newton Institute for Mathematical Sciences, which provided an excellent environment for finishing the work on the book. My thanks also go to Vladimir Zaiats for his highly competent translation of the French original into English, and to John Kimmel for being a very supportive and patient editor. Alexandre Tsybakov, Paris, June 2008. Preface to the French Edition: The tradition of considering the problem of statistical estimation as that of estimation of a finite number of parameters goes back to Fisher. However, parametric models provide only an approximation, often imprecise, of the underlying statistical structure. Statistical models that explain the data in a more consistent way are often more complex: unknown elements in these models are, in general, some functions having certain properties of smoothness.
Book
xvi, 373 p. : ill. ; 25 cm.
  • The Monte Carlo method.- Sampling from known distributions.- Pseudorandom number generators.- Variance reduction techniques.- Quasi-Monte Carlo constructions.- Using quasi-Monte Carlo constructions.- Using quasi-Monte Carlo in practice.- Financial applications.- Beyond numerical integration.- Review of algebra.- Error and variance analysis for Halton sequences.- References.- Index.
  • (source: Nielsen Book Data) 9780387781648 20160528
Quasi-Monte Carlo methods have become an increasingly popular alternative to Monte Carlo methods over the last two decades. Their successful implementation on practical problems, especially in finance, has motivated the development of several new research areas within this field to which practitioners and researchers from various disciplines currently contribute. This book presents essential tools for using quasi-Monte Carlo sampling in practice. The first part of the book focuses on issues related to Monte Carlo methods - uniform and non-uniform random number generation, variance reduction techniques - but the material is presented to prepare the readers for the next step, which is to replace the random sampling inherent to Monte Carlo by quasi-random sampling. The second part of the book deals with this next step. Several aspects of quasi-Monte Carlo methods are covered, including constructions, randomizations, the use of ANOVA decompositions, and the concept of effective dimension. The third part of the book is devoted to applications in finance and more advanced statistical tools like Markov chain Monte Carlo and sequential Monte Carlo, with a discussion of their quasi-Monte Carlo counterpart. The prerequisites for reading this book are a basic knowledge of statistics and enough mathematical maturity to follow through the various techniques used throughout the book. This text is aimed at graduate students in statistics, management science, operations research, engineering, and applied mathematics. It should also be useful to practitioners who want to learn more about Monte Carlo and quasi-Monte Carlo methods and researchers interested in an up-to-date guide to these methods. Christiane Lemieux is an Associate Professor and the Associate Chair for Actuarial Science in the Department of Statistics and Actuarial Science at the University of Waterloo in Canada. 
She is an Associate of the Society of Actuaries and was the winner of a 'Young Researcher Award in Information-Based Complexity' in 2004.
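The replacement of random by quasi-random sampling that this description refers to can be seen in a toy integration problem. The sketch below is illustrative only (the integrand, sample size, and the choice of the base-2 van der Corput sequence, i.e. the one-dimensional Halton sequence, are mine):

```python
import random

# Estimate the integral of f(x) = x^2 on [0,1] (true value 1/3) with
# plain Monte Carlo versus the base-2 van der Corput quasi-random sequence.

def van_der_corput(i, base=2):
    """i-th point of the base-b van der Corput sequence (digit reversal)."""
    q, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, rem = divmod(i, base)
        q += rem / denom
    return q

f = lambda x: x * x
n = 4096
qmc = sum(f(van_der_corput(i)) for i in range(1, n + 1)) / n
rng = random.Random(0)
mc = sum(f(rng.random()) for _ in range(n)) / n
# qmc is typically far closer to 1/3 than mc at the same n:
# quasi-random points fill [0,1] evenly rather than clumping by chance
```

The evenness of the point set (low discrepancy) is what drives the near O(1/n) error of quasi-Monte Carlo versus the O(1/sqrt(n)) error of Monte Carlo; the randomizations covered in the book restore an error estimate while keeping that evenness.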
Science Library (Li and Ma)
Book
xv, 781 p. : ill. ; 24 cm.
  • Variability, information, prediction.- Kernel smoothing.- Spline smoothing.- New wave nonparametrics.- Supervised learning: Partition methods.- Alternative nonparametrics.- Computational comparisons.- Unsupervised learning: Clustering.- Learning in high dimensions.- Variable selection.- Multiple testing.
  • (source: Nielsen Book Data) 9780387981345 20160528
This book offers a mathematically rigorous treatment of the basic techniques at the interface of statistics and computer science, specifically data mining and machine learning. Practitioners often use these techniques with little idea of why they work, how they interrelate with other techniques, and what their general properties are. This is a more theoretical book on the same subject as the statistical learning text by Hastie, Tibshirani, and Friedman.
Green Library
