Bayesian networks and decision graphs /

Bibliographic Details
Author / Creator: Jensen, Finn V.
Edition: 2nd ed.
Imprint: New York : Springer, c2007.
Description: xvi, 447 p. : ill. ; 25 cm.
Language: English
Series: Information science and statistics
Subject: Bayesian statistical decision theory -- Data processing.
Machine learning.
Neural networks (Computer science)
Decision making.
Format: Print Book
URL for this record: http://pi.lib.uchicago.edu/1001/cat/bib/6828582
Other authors / contributors: Nielsen, Thomas Dyhre.
ISBN: 0387682813; 9780387682815
Notes: Includes bibliographical references (p. [431]-439) and index.
Table of Contents:
  • Preface
  • 1. Prerequisites on Probability Theory
  • 1.1. Two Perspectives on Probability Theory
  • 1.2. Fundamentals of Probability Theory
  • 1.2.1. Conditional Probabilities
  • 1.2.2. Probability Calculus
  • 1.2.3. Conditional Independence
  • 1.3. Probability Calculus for Variables
  • 1.3.1. Calculations with Probability Tables: An Example
  • 1.4. An Algebra of Potentials
  • 1.5. Random Variables
  • 1.5.1. Continuous Distributions
  • 1.6. Exercises
  • Part I. Probabilistic Graphical Models
  • 2. Causal and Bayesian Networks
  • 2.1. Reasoning Under Uncertainty
  • 2.1.1. Car Start Problem
  • 2.1.2. A Causal Perspective on the Car Start Problem
  • 2.2. Causal Networks and d-Separation
  • 2.2.1. d-Separation
  • 2.3. Bayesian Networks
  • 2.3.1. Definition of Bayesian Networks
  • 2.3.2. The Chain Rule for Bayesian Networks
  • 2.3.3. Inserting Evidence
  • 2.3.4. Calculating Probabilities in Practice
  • 2.4. Graphical Models - Formal Languages for Model Specification
  • 2.5. Summary
  • 2.6. Bibliographical Notes
  • 2.7. Exercises
  • 3. Building Models
  • 3.1. Catching the Structure
  • 3.1.1. Milk Test
  • 3.1.2. Cold or Angina?
  • 3.1.3. Insemination
  • 3.1.4. A Simplified Poker Game
  • 3.1.5. Naive Bayes Models
  • 3.1.6. Causality
  • 3.2. Determining the Conditional Probabilities
  • 3.2.1. Milk Test
  • 3.2.2. Stud Farm
  • 3.2.3. Poker Game
  • 3.2.4. Transmission of Symbol Strings
  • 3.2.5. Cold or Angina?
  • 3.2.6. Why Causal Networks?
  • 3.3. Modeling Methods
  • 3.3.1. Undirected Relations
  • 3.3.2. Noisy-Or
  • 3.3.3. Divorcing
  • 3.3.4. Noisy Functional Dependence
  • 3.3.5. Expert Disagreements
  • 3.3.6. Object-Oriented Bayesian Networks
  • 3.3.7. Dynamic Bayesian Networks
  • 3.3.8. How to Deal with Continuous Variables
  • 3.3.9. Interventions
  • 3.4. Special Features
  • 3.4.1. Joint Probability Tables
  • 3.4.2. Most-Probable Explanation
  • 3.4.3. Data Conflict
  • 3.4.4. Sensitivity Analysis
  • 3.5. Summary
  • 3.6. Bibliographical Notes
  • 3.7. Exercises
  • 4. Belief Updating in Bayesian Networks
  • 4.1. Introductory Examples
  • 4.1.1. A Single Marginal
  • 4.1.2. Different Evidence Scenarios
  • 4.1.3. All Marginals
  • 4.2. Graph-Theoretic Representation
  • 4.2.1. Task and Notation
  • 4.2.2. Domain Graphs
  • 4.3. Triangulated Graphs and Join Trees
  • 4.3.1. Join Trees
  • 4.4. Propagation in Junction Trees
  • 4.4.1. Lazy Propagation in Junction Trees
  • 4.5. Exploiting the Information Scenario
  • 4.5.1. Barren Nodes
  • 4.5.2. d-Separation
  • 4.6. Nontriangulated Domain Graphs
  • 4.6.1. Triangulation of Graphs
  • 4.6.2. Triangulation of Dynamic Bayesian Networks
  • 4.7. Exact Propagation with Bounded Space
  • 4.7.1. Recursive Conditioning
  • 4.8. Stochastic Simulation in Bayesian Networks
  • 4.8.1. Probabilistic Logic Sampling
  • 4.8.2. Likelihood Weighting
  • 4.8.3. Gibbs Sampling
  • 4.9. Loopy Belief Propagation
  • 4.10. Summary
  • 4.11. Bibliographical Notes
  • 4.12. Exercises
  • 5. Analysis Tools for Bayesian Networks
  • 5.1. IEJ Trees
  • 5.2. Joint Probabilities and A-Saturated Junction Trees
  • 5.2.1. A-Saturated Junction Trees
  • 5.3. Configuration of Maximal Probability
  • 5.4. Axioms for Propagation in Junction Trees
  • 5.5. Data Conflict
  • 5.5.1. Insemination
  • 5.5.2. The Conflict Measure conf
  • 5.5.3. Conflict or Rare Case
  • 5.5.4. Tracing of Conflicts
  • 5.5.5. Other Approaches to Conflict Detection
  • 5.6. SE Analysis
  • 5.6.1. Example and Definitions
  • 5.6.2. h-Saturated Junction Trees and SE Analysis
  • 5.7. Sensitivity to Parameters
  • 5.7.1. One-Way Sensitivity Analysis
  • 5.7.2. Two-Way Sensitivity Analysis
  • 5.8. Summary
  • 5.9. Bibliographical Notes
  • 5.10. Exercises
  • 6. Parameter Estimation
  • 6.1. Complete Data
  • 6.1.1. Maximum Likelihood Estimation
  • 6.1.2. Bayesian Estimation
  • 6.2. Incomplete Data
  • 6.2.1. Approximate Parameter Estimation: The EM Algorithm
  • 6.2.2. Why We Cannot Perform Exact Parameter Estimation
  • 6.3. Adaptation
  • 6.3.1. Fractional Updating
  • 6.3.2. Fading
  • 6.3.3. Specification of an Initial Sample Size
  • 6.3.4. Example: Strings of Symbols
  • 6.3.5. Adaptation to Structure
  • 6.3.6. Fractional Updating as an Approximation
  • 6.4. Tuning
  • 6.4.1. Example
  • 6.4.2. Determining grad dist(x, y) as a Function of t
  • 6.5. Summary
  • 6.6. Bibliographical Notes
  • 6.7. Exercises
  • 7. Learning the Structure of Bayesian Networks
  • 7.1. Constraint-Based Learning Methods
  • 7.1.1. From Skeleton to DAG
  • 7.1.2. From Independence Tests to Skeleton
  • 7.1.3. Example
  • 7.1.4. Constraint-Based Learning on Data Sets
  • 7.2. Ockham's Razor
  • 7.3. Score-Based Learning
  • 7.3.1. Score Functions
  • 7.3.2. Search Procedures
  • 7.3.3. Chow-Liu Trees
  • 7.3.4. Bayesian Score Functions
  • 7.4. Summary
  • 7.5. Bibliographical Notes
  • 7.6. Exercises
  • 8. Bayesian Networks as Classifiers
  • 8.1. Naive Bayes Classifiers
  • 8.2. Evaluation of Classifiers
  • 8.3. Extensions of Naive Bayes Classifiers
  • 8.4. Classification Trees
  • 8.5. Summary
  • 8.6. Bibliographical Notes
  • 8.7. Exercises
  • Part II. Decision Graphs
  • 9. Graphical Languages for Specification of Decision Problems
  • 9.1. One-Shot Decision Problems
  • 9.1.1. Fold or Call?
  • 9.1.2. Mildew
  • 9.1.3. One Decision in General
  • 9.2. Utilities
  • 9.2.1. Instrumental Rationality
  • 9.3. Decision Trees
  • 9.3.1. A Couple of Examples
  • 9.3.2. Coalesced Decision Trees
  • 9.3.3. Solving Decision Trees
  • 9.4. Influence Diagrams
  • 9.4.1. Extended Poker Model
  • 9.4.2. Definition of Influence Diagrams
  • 9.4.3. Repetitive Decision Problems
  • 9.5. Asymmetric Decision Problems
  • 9.5.1. Different Sources of Asymmetry
  • 9.5.2. Unconstrained Influence Diagrams
  • 9.5.3. Sequential Influence Diagrams
  • 9.6. Decision Problems with Unbounded Time Horizons
  • 9.6.1. Markov Decision Processes
  • 9.6.2. Partially Observable Markov Decision Processes
  • 9.7. Summary
  • 9.8. Bibliographical Notes
  • 9.9. Exercises
  • 10. Solution Methods for Decision Graphs
  • 10.1. Solutions to Influence Diagrams
  • 10.1.1. The Chain Rule for Influence Diagrams
  • 10.1.2. Strategies and Expected Utilities
  • 10.1.3. An Example
  • 10.2. Variable Elimination
  • 10.2.1. Strong Junction Trees
  • 10.2.2. Required Past
  • 10.2.3. Policy Networks
  • 10.3. Node Removal and Arc Reversal
  • 10.3.1. Node Removal
  • 10.3.2. Arc Reversal
  • 10.3.3. An Example
  • 10.4. Solutions to Unconstrained Influence Diagrams
  • 10.4.1. Minimizing the S-DAG
  • 10.4.2. Determining Policies and Step Functions
  • 10.5. Decision Problems Without a Temporal Ordering: Troubleshooting
  • 10.5.1. Action Sequences
  • 10.5.2. A Greedy Approach
  • 10.5.3. Call Service
  • 10.5.4. Questions
  • 10.6. Solutions to Decision Problems with Unbounded Time Horizon
  • 10.6.1. A Basic Solution
  • 10.6.2. Value Iteration
  • 10.6.3. Policy Iteration
  • 10.6.4. Solving Partially Observable Markov Decision Processes
  • 10.7. Limited Memory Influence Diagrams
  • 10.8. Summary
  • 10.9. Bibliographical Notes
  • 10.10. Exercises
  • 11. Methods for Analyzing Decision Problems
  • 11.1. Value of Information
  • 11.1.1. Test for Infected Milk?
  • 11.1.2. Myopic Hypothesis-Driven Data Request
  • 11.1.3. Non-Utility-Based Value Functions
  • 11.2. Finding the Relevant Past and Future of a Decision Problem
  • 11.2.1. Identifying the Required Past
  • 11.3. Sensitivity Analysis
  • 11.3.1. Example
  • 11.3.2. One-Way Sensitivity Analysis in General
  • 11.4. Summary
  • 11.5. Bibliographical Notes
  • 11.6. Exercises
  • List of Notation
  • References
  • Index