LM7 Approximate Inference in BN
Course Outcomes
On completion of the course, the students will be able to:
CO1: Explain probabilistic reasoning using Bayesian inference.
CO2: Apply appropriate probabilistic reasoning techniques for solving uncertainty problems.
CO3: Explain the use of game theory for decision making.
CO4: Explain and apply probabilistic models for various use cases.
CO5: Apply AI techniques for robotics.
INFERENCE IN BAYESIAN NETWORKS
Purpose of inference:
• The basic task for any probabilistic inference system is to compute the posterior
probability distribution for a set of query variables, given some observed event, that
is, some assignment of values to a set of evidence variables.
• Exact inference is sometimes too hard to compute, in which case we can use
approximation techniques based on statistical sampling.
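• For example, in the well-known burglary-alarm network (the standard textbook
illustration, not taken from these slides), a typical query is
P(Burglary | JohnCalls = true, MaryCalls = true): the posterior probability of a
burglary given that both neighbors called.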
APPROXIMATE INFERENCE IN BN
For example, an unbiased coin can be thought of as a random variable Coin with
values <heads,tails> and a prior distribution P(Coin) = <0.5,0.5>.
Sampling from this distribution is exactly like flipping the coin: with probability
0.5 it will return heads, and with probability 0.5 it will return tails.
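A minimal Python sketch of sampling from this prior (the function name and the
sample count are illustrative assumptions, not from the slides):

import random

def sample_coin():
    # Direct sampling from the prior P(Coin) = <0.5, 0.5>
    return "heads" if random.random() < 0.5 else "tails"

# Repeating the process estimates the distribution: the fraction of
# heads converges to 0.5 as the number of samples N grows.
N = 10_000
samples = [sample_coin() for _ in range(N)]
print(samples.count("heads") / N)  # approximately 0.5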
Gibbs Sampling
Gibbs sampling, or a Gibbs sampler, is a Markov chain Monte Carlo (MCMC)
algorithm for obtaining a sequence of observations that are approximately drawn
from a specified multivariate probability distribution when direct sampling is
difficult.
Gibbs sampling is commonly used as a means of statistical inference,
especially Bayesian inference.
It is a randomized algorithm (i.e., an algorithm that makes use of random
numbers), and is an alternative to deterministic algorithms for statistical inference
such as the expectation-maximization (EM) algorithm.
As with other MCMC algorithms, Gibbs sampling generates a Markov chain of
samples, each of which is correlated with nearby samples.
Generally, samples from the beginning of the chain (the burn-in period) may not
accurately represent the desired distribution and are usually discarded.
Gibbs sampling is applicable when the joint distribution is not known explicitly
or is difficult to sample from directly, but the conditional distribution of each
variable is known and is easy (or at least, easier) to sample from.
The Gibbs sampling algorithm generates an instance from the distribution of each
variable in turn, conditional on the current values of the other variables.
It can be shown that the sequence of samples constitutes a Markov chain, and the
stationary distribution of that Markov chain is just the sought-after joint
distribution.
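To make the procedure concrete, here is a minimal Python sketch (the toy target,
function name, and parameters are illustrative assumptions, not from the slides).
The target is a standard bivariate normal with correlation rho, whose two full
conditionals are known normals that are easy to sample from; the sketch also
discards the burn-in period discussed above:

import math
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=1000):
    # Gibbs sampler for a toy target: a standard bivariate normal with
    # correlation rho. The joint enters only through its full conditionals,
    # both of which are known and easy to sample from:
    #     X | Y = y  ~  N(rho * y, 1 - rho^2)
    #     Y | X = x  ~  N(rho * x, 1 - rho^2)
    x, y = 0.0, 0.0                    # arbitrary starting state
    sd = math.sqrt(1.0 - rho * rho)    # std. dev. of each conditional
    chain = []
    for t in range(burn_in + n_samples):
        x = random.gauss(rho * y, sd)  # resample X given current Y
        y = random.gauss(rho * x, sd)  # resample Y given current X
        if t >= burn_in:               # discard the burn-in period
            chain.append((x, y))
    return chain

samples = gibbs_bivariate_normal(rho=0.8, n_samples=5000)
xs, ys = zip(*samples)
# The empirical correlation of the retained samples should be near 0.8,
# since the chain's stationary distribution is the target joint.
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
cov = sum((a - mx) * (b - my) for a, b in samples) / len(samples)
sx = math.sqrt(sum((a - mx) ** 2 for a in xs) / len(xs))
sy = math.sqrt(sum((b - my) ** 2 for b in ys) / len(ys))
print(cov / (sx * sy))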