A student's guide to Bayesian statistics [2018]
 Lambert, Ben, author.
 London : SAGE Publications Ltd, 2018.
 Description
 Book — xx, 498 pages : illustrations ; 25 cm
 Summary

 Chapter 1: How to best use this book -- The purpose of this book -- Who is this book for? -- Prerequisites -- Book outline -- Route planner - suggested journeys through Bayesland -- Video -- Problem sets -- Code: R and Stan -- Why don't more people use Bayesian statistics? -- What are the tangible (non-academic) benefits of Bayesian statistics?
 Part I: An introduction to Bayesian inference
 Chapter 2: The subjective worlds of Frequentist and Bayesian statistics -- Bayes' rule - allowing us to go from the effect back to its cause -- The purpose of statistical inference -- The world according to Frequentists -- The world according to Bayesians -- Do parameters actually exist and have a point value? -- Frequentist and Bayesian inference -- Bayesian inference via Bayes' rule -- Implicit versus explicit subjectivity
 Chapter 3: Probability - the nuts and bolts of Bayesian inference -- Probability distributions: helping us explicitly state our ignorance -- Independence -- Central Limit Theorems -- A derivation of Bayes' rule -- The Bayesian inference process from the Bayesian formula
 Part II: Understanding the Bayesian formula
 Chapter 4: Likelihoods -- What is a likelihood? -- Why use 'likelihood' rather than 'probability'? -- What are models and why do we need them? -- How to choose an appropriate likelihood? -- Exchangeability vs random sampling -- Maximum likelihood - a short introduction
 Chapter 5: Priors -- What are priors, and what do they represent? -- The explicit subjectivity of priors -- Combining a prior and likelihood to form a posterior -- Constructing priors -- A strong model is less sensitive to prior choice
 Chapter 6: The devil's in the denominator -- An introduction to the denominator -- The difficulty with the denominator -- How to dispense with the difficulty: Bayesian computation
 Chapter 7: The posterior - the goal of Bayesian inference -- Expressing parameter uncertainty in posteriors -- Bayesian statistics: updating our pre-data uncertainty -- The intuition behind Bayes' rule for inference -- Point parameter estimates -- Intervals of uncertainty -- From posterior to predictions by sampling
 Part III: Analytic Bayesian methods
 Chapter 8: An introduction to distributions for the mathematically uninclined -- The interrelation among distributions -- Sampling distributions for likelihoods -- Prior distributions -- How to choose a likelihood -- Table of common likelihoods, their uses, and reasonable priors -- Distributions of distributions, and mixtures - link to website, and relevance
 Chapter 9: Conjugate priors and their place in Bayesian analysis -- What is a conjugate prior and why are they useful? -- Gamma-Poisson example -- Normal example: giraffe height -- Table of conjugate priors -- The lessons and limits of a conjugate analysis
 Chapter 10: Evaluation of model fit and hypothesis testing -- Posterior predictive checks -- Why do we call it a p value? -- Statistics measuring predictive accuracy: AIC, Deviance, WAIC and LOO-CV -- Marginal likelihoods and Bayes factors -- Choosing one model, or a number? -- Sensitivity analysis
 Chapter 11: Making Bayesian analysis objective? -- The illusion of the 'uninformative' uniform prior -- Jeffreys' priors -- Reference priors -- Empirical Bayes -- A move towards weakly informative priors
 Part IV: A practical guide to doing real-life Bayesian analysis: Computational Bayes
 Chapter 12: Leaving conjugates behind: Markov Chain Monte Carlo -- The difficulty with real-life Bayesian inference -- Discrete approximation to continuous posteriors -- The posterior through quadrature -- Integrating using independent samples: an introduction to Monte Carlo -- Why is independent sampling easier said than done? -- Ideal sampling from a posterior using only the unnormalised posterior -- Moving from independent to dependent sampling -- What's the catch with dependent samplers?
 Chapter 13: Random Walk Metropolis -- Sustainable fishing -- Prospecting for gold -- Defining the Metropolis algorithm -- When does Metropolis work? -- Efficiency of convergence: the importance of choosing the right proposal scale -- Metropolis-Hastings -- Judging convergence -- Effective sample size revisited
 Chapter 14: Gibbs sampling -- Back to prospecting for gold -- Defining the Gibbs algorithm -- Gibbs' earth: the intuition behind the Gibbs algorithm -- The benefits and problems with Gibbs and Random Walk Metropolis -- A change of parameters to speed up exploration
 Chapter 15: Hamiltonian Monte Carlo -- Hamiltonian Monte Carlo as a sledge -- NLP space -- Solving for the sledge motion over NLP space -- How to shove the sledge -- The acceptance probability of HMC -- The complete Hamiltonian Monte Carlo algorithm -- The performance of HMC versus Random Walk Metropolis and Gibbs -- Optimal step length of HMC: introducing the "No-U-Turn Sampler"
 Chapter 16: Stan -- Why Stan, and how to get it -- Getting set up with Stan using RStan -- Our first words in Stan -- Essential Stan reading -- What to do when things go wrong -- How to get further help
 Part V: Hierarchical models and regression
 Chapter 17: Hierarchical models -- The spectrum from fully pooled to heterogeneous -- Non-centered parameterisations in hierarchical models -- Case study: Forecasting the EU referendum result -- The importance of fake data simulation for complex models
 Chapter 18: Linear regression models -- Example: high school test scores in England -- Pooled model -- Interactions -- Heterogeneous coefficient model -- Hierarchical model -- Incorporating LEA-level data
 Chapter 19: Generalised linear models and other animals -- Example: electoral participation in European countries -- Discrete parameter models in Stan.
 (source: Nielsen Book Data)
 Online
 Science Library (Li and Ma), Stacks: QA279.5 .L36 2018 (Status: Unknown)