Bayesian model selection and statistical modeling

Bibliographic Details
Author / Creator: Ando, Tomohiro.
Imprint: Boca Raton, FL : Chapman & Hall/CRC, c2010.
Description: xiv, 286 p. : ill. ; 25 cm.
Language: English
Series: Statistics: textbooks and monographs
Format: Print Book
URL for this record: http://pi.lib.uchicago.edu/1001/cat/bib/8107574
ISBN: 9781439836149 (hardcover : alk. paper)
1439836140 (hardcover : alk. paper)
Notes: Includes bibliographical references and index.
Table of Contents:
  • Preface
  • 1. Introduction
  • 1.1. Statistical models
  • 1.2. Bayesian statistical modeling
  • 1.3. Book organization
  • 2. Introduction to Bayesian analysis
  • 2.1. Probability and Bayes' theorem
  • 2.2. Introduction to Bayesian analysis
  • 2.3. Bayesian inference on statistical models
  • 2.4. Sampling density specification
  • 2.4.1. Probability density specification
  • 2.4.2. Econometrics: Quantifying price elasticity of demand
  • 2.4.3. Financial econometrics: Describing stock market behavior
  • 2.4.4. Bioinformatics: Tumor classification with gene expression data
  • 2.4.5. Psychometrics: Factor analysis model
  • 2.4.6. Marketing: Survival analysis model for quantifying customer lifetime value
  • 2.4.7. Medical science: Nonlinear logistic regression models
  • 2.4.8. Under limited computer resources
  • 2.5. Prior distribution
  • 2.5.1. Diffuse priors
  • 2.5.2. Jeffreys' prior
  • 2.5.3. Conjugate priors
  • 2.5.4. Informative priors
  • 2.5.5. Other priors
  • 2.6. Summarizing the posterior inference
  • 2.6.1. Point estimates
  • 2.6.2. Interval estimates
  • 2.6.3. Densities
  • 2.6.4. Predictive distributions
  • 2.7. Bayesian inference on linear regression models
  • 2.8. Bayesian model selection problems
  • 2.8.1. Example: Subset variable selection problem
  • 2.8.2. Example: Smoothing parameter selection problem
  • 2.8.3. Summary
  • 3. Asymptotic approach for Bayesian inference
  • 3.1. Asymptotic properties of the posterior distribution
  • 3.1.1. Consistency
  • 3.1.2. Asymptotic normality of the posterior mode
  • 3.1.3. Example: Asymptotic normality of the posterior mode of logistic regression
  • 3.2. Bayesian central limit theorem
  • 3.2.1. Bayesian central limit theorem
  • 3.2.2. Example: Poisson distribution with conjugate prior
  • 3.2.3. Example: Confidence intervals
  • 3.3. Laplace method
  • 3.3.1. Laplace method for integrals
  • 3.3.2. Posterior expectation of a function of the parameters
  • 3.3.3. Example: Bernoulli distribution with a uniform prior
  • 3.3.4. Asymptotic approximation of the Bayesian predictive distribution
  • 3.3.5. Laplace method for approximating marginal posterior distribution
  • 4. Computational approach for Bayesian inference
  • 4.1. Monte Carlo integration
  • 4.2. Markov chain Monte Carlo methods for Bayesian inference
  • 4.2.1. Gibbs sampler
  • 4.2.2. Metropolis-Hastings sampler
  • 4.2.3. Convergence check
  • 4.2.4. Example: Gibbs sampling for seemingly unrelated regression model
  • 4.2.5. Example: Gibbs sampling for auto-correlated errors
  • 4.3. Data augmentation
  • 4.3.1. Probit model
  • 4.3.2. Generating random samples from the truncated normal density
  • 4.3.3. Ordered probit model
  • 4.4. Hierarchical modeling
  • 4.4.1. Lasso
  • 4.4.2. Gibbs sampling for Bayesian Lasso
  • 4.5. MCMC studies for the Bayesian inference on various types of models
  • 4.5.1. Volatility time series models
  • 4.5.2. Simultaneous equation model
  • 4.5.3. Quantile regression
  • 4.5.4. Graphical models
  • 4.5.5. Multinomial probit models
  • 4.5.6. Markov switching models
  • 4.6. Noniterative computation methods for Bayesian inference
  • 4.6.1. The direct Monte Carlo
  • 4.6.2. Importance sampling
  • 4.6.3. Rejection sampling
  • 4.6.4. Weighted bootstrap
  • 5. Bayesian approach for model selection
  • 5.1. General framework
  • 5.2. Definition of the Bayes factor
  • 5.2.1. Example: Hypothesis testing 1
  • 5.2.2. Example: Hypothesis testing 2
  • 5.2.3. Example: Poisson models with conjugate priors
  • 5.3. Exact calculation of the marginal likelihood
  • 5.3.1. Example: Binomial model with conjugate prior
  • 5.3.2. Example: Normal regression model with conjugate prior and Zellner's g-prior
  • 5.3.3. Example: Multi-response normal regression model
  • 5.4. Laplace's method and asymptotic approach for computing the marginal likelihood
  • 5.5. Definition of the Bayesian information criterion
  • 5.5.1. Example: Evaluation of the approximation error
  • 5.5.2. Example: Link function selection for binomial regression
  • 5.5.3. Example: Selecting the number of factors in factor analysis model
  • 5.5.4. Example: Survival analysis
  • 5.5.5. Consistency of the Bayesian information criterion
  • 5.6. Definition of the generalized Bayesian information criterion
  • 5.6.1. Example: Nonlinear regression models using basis expansion predictors
  • 5.6.2. Example: Multinomial logistic model with basis expansion predictors
  • 5.7. Bayes factor with improper prior
  • 5.7.1. Intrinsic Bayes factors
  • 5.7.2. Partial Bayes factor and fractional Bayes factor
  • 5.7.3. Posterior Bayes factors
  • 5.7.4. Pseudo Bayes factors based on cross validation
  • 5.7.4.1. Example: Bayesian linear regression model with improper prior
  • 5.8. Expected predictive likelihood approach for Bayesian model selection
  • 5.8.1. Predictive likelihood for model selection
  • 5.8.2. Example: Normal model with conjugate prior
  • 5.8.3. Example: Bayesian spatial modeling
  • 5.9. Other related topics
  • 5.9.1. Bayes factors when model dimension grows
  • 5.9.2. Bayesian p-values
  • 5.9.3. Bayesian sensitivity analysis
  • 5.9.3.1. Example: Sensitivity analysis of Value at Risk
  • 5.9.3.2. Example: Bayesian change point analysis
  • 6. Simulation approach for computing the marginal likelihood
  • 6.1. Laplace-Metropolis approximation
  • 6.1.1. Example: Multinomial probit models
  • 6.2. Gelfand-Dey's approximation and the harmonic mean estimator
  • 6.2.1. Example: Bayesian analysis of the ordered probit model
  • 6.3. Chib's estimator from Gibbs sampling
  • 6.3.1. Example: Seemingly unrelated regression model with informative prior
  • 6.3.1.1. Calculation of the marginal likelihood
  • 6.4. Chib's estimator from MH sampling
  • 6.5. Bridge sampling methods
  • 6.6. The Savage-Dickey density ratio approach
  • 6.6.1. Example: Bayesian linear regression model
  • 6.7. Kernel density approach
  • 6.7.1. Example: Bayesian analysis of the probit model
  • 6.8. Direct computation of the posterior model probabilities
  • 6.8.1. Reversible jump MCMC
  • 6.8.2. Example: Reversible jump MCMC for seemingly unrelated regression model with informative prior
  • 6.8.3. Product space search and metropolized product space search
  • 6.8.4. Bayesian variable selection for large model space
  • 7. Various Bayesian model selection criteria
  • 7.1. Bayesian predictive information criterion
  • 7.1.1. The posterior mean of the log-likelihood and the expected log-likelihood
  • 7.1.2. Bias correction for the posterior mean of the log-likelihood
  • 7.1.3. Definition of the Bayesian predictive information criterion
  • 7.1.4. Example: Bayesian generalized state space modeling
  • 7.2. Deviance information criterion
  • 7.2.1. Example: Hierarchical Bayesian modeling for logistic regression
  • 7.3. A minimum posterior predictive loss approach
  • 7.4. Modified Bayesian information criterion
  • 7.4.1. Example: P-spline regression model with Gaussian noise
  • 7.4.2. Example: P-spline logistic regression
  • 7.5. Generalized information criterion
  • 7.5.1. Example: Heterogeneous error model for the analysis of motorcycle impact data
  • 7.5.2. Example: Microarray data analysis
  • 8. Theoretical development and comparisons
  • 8.1. Derivation of Bayesian information criteria
  • 8.2. Derivation of generalized Bayesian information criteria
  • 8.3. Derivation of Bayesian predictive information criterion
  • 8.3.1. Derivation of BPIC
  • 8.3.2. Further simplification of BPIC
  • 8.4. Derivation of generalized information criterion
  • 8.4.1. Information theoretic approach
  • 8.4.2. Derivation of GIC
  • 8.5. Comparison of various Bayesian model selection criteria
  • 8.5.1. Utility function
  • 8.5.2. Robustness to the improper prior
  • 8.5.3. Computational cost
  • 8.5.4. Estimation methods
  • 8.5.5. Misspecified models
  • 8.5.6. Consistency
  • 9. Bayesian model averaging
  • 9.1. Definition of Bayesian model averaging
  • 9.2. Occam's window method
  • 9.3. Bayesian model averaging for linear regression models
  • 9.4. Other model averaging methods
  • 9.4.1. Model averaging with AIC
  • 9.4.2. Model averaging with predictive likelihood
  • Bibliography
  • Index