LM7 Approximate Inference in BN


KGiSL Institute of Technology

(Approved by AICTE, New Delhi; Affiliated to Anna University, Chennai)


Recognized by UGC, Accredited by NBA (IT)
365, KGiSL Campus, Thudiyalur Road, Saravanampatti, Coimbatore – 641035.

Department of Artificial Intelligence & Data Science


Name of the Faculty : Ms. T. Suganya

Subject Name & Code : AD8601 / ARTIFICIAL INTELLIGENCE II

Branch & Department : B.Tech & AI&DS

Year & Semester : III / VI

Academic Year : 2022-23 (EVEN)


UNIT I PROBABILISTIC REASONING I 9
Acting under uncertainty – Bayesian inference – naïve Bayes models – Probabilistic reasoning – Bayesian networks – exact inference in BN – approximate inference in BN – causal networks

UNIT II PROBABILISTIC REASONING II 9


Probabilistic reasoning over time – time and uncertainty – inference in temporal models – Hidden Markov Models – Kalman filters – Dynamic Bayesian networks – Probabilistic programming

UNIT III DECISIONS UNDER UNCERTAINTY 9


Basis of utility theory – utility functions – Multiattribute utility functions – decision networks – value of information – unknown preferences – Sequential decision problems – MDPs – Bandit problems – partially observable MDPs – Multiagent environments – non-cooperative game theory – cooperative game theory – making collective decisions

UNIT IV LEARNING PROBABILISTIC MODELS 9


Statistical learning theory – maximum-likelihood parameter learning – naïve Bayes models – generative and discriminative models – continuous models – Bayesian parameter learning – Bayesian linear regression – learning Bayesian net structures – density estimation – EM algorithm – unsupervised clustering – Gaussian mixture models – learning Bayes net parameters – learning HMMs – learning Bayes net structures with hidden variables

UNIT V REINFORCEMENT LEARNING AND ROBOTICS 9


Learning from rewards – passive reinforcement learning – active reinforcement learning – generalization in reinforcement learning – policy search – inverse reinforcement learning – applications – Robots – robotic perception – planning movements – reinforcement learning in robotics – robotic frameworks – applications of robotics – Philosophy, ethics, and safety of AI – the future of AI
SYLLABUS

UNIT I PROBABILISTIC REASONING I 9

Acting under uncertainty – Bayesian inference – naïve Bayes models


Probabilistic reasoning – Bayesian networks – exact inference in BN
– approximate inference in BN – causal networks.

Course Outcomes

OUTCOMES:
On completion of the course, the students will be able to:
CO1: Explain probabilistic reasoning using Bayesian inference
CO2: Apply appropriate probabilistic reasoning techniques for solving uncertainty problems
CO3: Explain the use of game theory for decision making
CO4: Explain and apply probabilistic models for various use cases
CO5: Apply AI techniques for robotics

INFERENCES IN BAYESIAN NETWORKS

Purpose of inference:

• The basic task for any probabilistic inference system is to compute the posterior probability distribution for a set of query variables, given some observed event, that is, some assignment of values to a set of evidence variables.
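
In standard notation (the symbols are generic, not from the slides): with query variable X, evidence e, and remaining hidden variables Y, the task is to compute

P(X | e) = α Σ_y P(X, e, y),

where α is a normalizing constant. Exact inference evaluates this sum analytically; approximate inference estimates the same quantity from samples.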



APPROXIMATE INFERENCE IN BN

 A method of estimating probabilities in Bayesian networks.

 Also called 'Monte Carlo' algorithms: they provide approximate answers whose accuracy depends on the number of samples generated.

 Two families of algorithms: direct sampling and Markov chain sampling.

 In exact inference, we analytically compute the conditional probability distribution over the variables of interest.

 But sometimes that is too hard to do, in which case we can use approximation techniques based on statistical sampling.
APPROXIMATE INFERENCE IN BN

Why use approximate inference?

Exact inference becomes intractable for large, multiply connected networks.

Variable elimination can have exponential time and space complexity.

Exact inference is strictly harder than NP-complete problems (it is #P-hard).



APPROXIMATE INFERENCE IN BN
 Direct sampling: take samples of events.

 The primitive element in any sampling algorithm is the generation of samples from a known probability distribution.

 For example, an unbiased coin can be thought of as a random variable Coin with values <heads, tails> and a prior distribution P(Coin) = <0.5, 0.5>.

 Sampling from this distribution is exactly like flipping the coin: with probability 0.5 it will return heads, and with probability 0.5 it will return tails.
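
As an illustration (not from the slides), a minimal Python sketch of direct (prior) sampling for a hypothetical two-variable network Cloudy -> Rain, with made-up CPT values:

    import random

    # Hypothetical two-variable network: Cloudy -> Rain.
    # The CPT numbers below are made up for illustration.
    P_CLOUDY = 0.5                      # P(Cloudy = true)
    P_RAIN = {True: 0.8, False: 0.2}    # P(Rain = true | Cloudy)

    def prior_sample():
        # Sample each variable in topological order, parents first.
        cloudy = random.random() < P_CLOUDY
        rain = random.random() < P_RAIN[cloudy]
        return {"Cloudy": cloudy, "Rain": rain}

    # The frequency of an event converges on its probability.
    N = 100_000
    rainy = sum(prior_sample()["Rain"] for _ in range(N))
    print("P(Rain = true) ~", rainy / N)   # ~ 0.5*0.8 + 0.5*0.2 = 0.5

With these numbers the estimate converges to 0.5, the exact marginal P(Rain = true).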



APPROXIMATE INFERENCE IN BN
 We expect the frequency of the samples to converge on the probability of the
event.
 Rejection sampling
 Likelihood Weighting
 Rejection sampling - Used to compute conditional probabilities P(X|e).
 Generate samples as before.
 Reject samples that do not match evidence.
 Estimate by counting the how often event X is in the resulting samples
 Likelihood Weighting-Avoid inefficiency of rejection sampling.
 Fix values for evidence variables and only sample the remaining variable.
 Weight samples with regard to how likely they are
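
A sketch of both refinements on the same hypothetical Cloudy -> Rain network (made-up CPT values as before), estimating P(Cloudy = true | Rain = true); the exact answer with these numbers is 0.8:

    import random

    # Illustrative CPT values, not from the slides.
    P_CLOUDY = 0.5
    P_RAIN = {True: 0.8, False: 0.2}    # P(Rain = true | Cloudy)

    def rejection_sampling(n):
        consistent = hits = 0
        for _ in range(n):
            cloudy = random.random() < P_CLOUDY
            rain = random.random() < P_RAIN[cloudy]
            if rain:                    # keep only samples matching the evidence
                consistent += 1
                hits += cloudy
        return hits / consistent

    def likelihood_weighting(n):
        num = den = 0.0
        for _ in range(n):
            cloudy = random.random() < P_CLOUDY  # sample the non-evidence variable
            w = P_RAIN[cloudy]          # weight = likelihood of evidence Rain = true
            num += w * cloudy
            den += w
        return num / den

    print(rejection_sampling(100_000))     # both ~ 0.8
    print(likelihood_weighting(100_000))

Note the trade-off: rejection sampling throws away every sample inconsistent with the evidence, while likelihood weighting keeps all samples but weights them by the likelihood of the evidence.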

APPROXIMATE INFERENCE IN BN
Markov Chain Sampling

Markov chain Monte Carlo (MCMC) methods comprise a class of algorithms for sampling from a probability distribution.

Generate events by making a random change to the preceding event: instead of generating each sample from scratch, MCMC algorithms generate each sample by making a random change to the preceding sample.

It is therefore helpful to think of an MCMC algorithm as being in a particular current state, specifying a value for every variable, and generating a next state by making random changes to the current state.

Gibbs sampling is a form of MCMC.
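
As a sketch (assuming a hypothetical helper sample_given_others that draws a variable's value conditioned on the current values of all the others), the generic MCMC loop looks like this:

    import random

    # Generic MCMC loop: keep a complete current state (a value for every
    # variable) and generate the next sample by randomly changing one
    # non-evidence variable. sample_given_others is a hypothetical stand-in
    # for the network-specific conditional distributions.
    def mcmc_samples(state, nonevidence_vars, sample_given_others, steps):
        samples = []
        for _ in range(steps):
            var = random.choice(nonevidence_vars)         # variable to change
            state[var] = sample_given_others(var, state)  # random change
            samples.append(dict(state))                   # record a copy
        return samples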
APPROXIMATE INFERENCE IN BN

Gibbs Sampling

 Gibbs sampling (a Gibbs sampler) is an MCMC algorithm for obtaining a sequence of observations approximately drawn from a specified multivariate probability distribution, when direct sampling is difficult.

 This sequence can be used to approximate the joint distribution; to approximate the marginal distribution of one of the variables, or of some subset of the variables; or to compute an integral.

 Typically, some of the variables correspond to observations whose values are known and hence do not need to be sampled.



APPROXIMATE INFERENCE IN BN

Gibbs Sampling

 Gibbs sampling is commonly used as a means of statistical inference, especially Bayesian inference.

 It is a randomized algorithm (i.e., an algorithm that makes use of random numbers) and is an alternative to deterministic algorithms for statistical inference such as the expectation-maximization (EM) algorithm.

 As with other MCMC algorithms, Gibbs sampling generates a Markov chain of samples, each of which is correlated with nearby samples.

 Samples from the beginning of the chain (the burn-in period) may not accurately represent the desired distribution and are usually discarded.



APPROXIMATE INFERENCE IN BN

Gibbs Sampling

 Gibbs sampling is applicable when the joint distribution is not known explicitly or is difficult to sample from directly, but the conditional distribution of each variable is known and is easy (or at least easier) to sample from.

 The Gibbs sampling algorithm generates an instance from the distribution of each variable in turn, conditional on the current values of the other variables (a concrete sketch follows below).

 It can be shown that the sequence of samples constitutes a Markov chain, and that the stationary distribution of that Markov chain is exactly the sought-after joint distribution.
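
To make that concrete, a minimal Gibbs sampler for a hypothetical network Cloudy -> Sprinkler and Cloudy -> Rain (illustrative CPT values), estimating P(Rain = true | Sprinkler = true); the exact answer with these numbers is 0.3:

    import random

    # Illustrative CPT values, not from the slides.
    P_C = 0.5                         # P(Cloudy = true)
    P_S = {True: 0.1, False: 0.5}     # P(Sprinkler = true | Cloudy)
    P_R = {True: 0.8, False: 0.2}     # P(Rain = true | Cloudy)

    def sample_cloudy(sprinkler, rain):
        # P(Cloudy | Sprinkler, Rain) is proportional to
        # P(Cloudy) * P(Sprinkler | Cloudy) * P(Rain | Cloudy).
        def score(c):
            pc = P_C if c else 1 - P_C
            ps = P_S[c] if sprinkler else 1 - P_S[c]
            pr = P_R[c] if rain else 1 - P_R[c]
            return pc * ps * pr
        t, f = score(True), score(False)
        return random.random() < t / (t + f)

    def gibbs(steps=100_000, burn_in=1_000):
        sprinkler = True              # evidence variable: fixed, never resampled
        cloudy = rain = True          # arbitrary initial state
        hits = kept = 0
        for i in range(steps):
            # Resample each non-evidence variable in turn, given the others.
            cloudy = sample_cloudy(sprinkler, rain)
            rain = random.random() < P_R[cloudy]   # Rain depends only on Cloudy
            if i >= burn_in:          # discard correlated early (burn-in) samples
                kept += 1
                hits += rain
        return hits / kept

    print("P(Rain = true | Sprinkler = true) ~", gibbs())   # ~ 0.3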