Bayesian Networks 2 - Forward-Backward - Stanford CS221: AI
Stanford University via YouTube
Overview
Learn about advanced concepts in Bayesian networks and probabilistic inference in this Stanford University lecture from the CS221: AI course. Explore hidden Markov models, lattice representations, and particle filtering techniques. Dive into topics such as beam search, object tracking, and Gibbs sampling. Gain a deeper understanding of forward-backward algorithms and their applications in artificial intelligence through comprehensive explanations and demonstrations.
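The forward-backward algorithm covered in the lecture computes smoothed posteriors P(h_t | e_1..e_N) for a hidden Markov model by combining a forward pass (filtering) with a backward pass. A minimal sketch is below; the variable names and the per-step normalization are illustrative choices, not taken from the lecture.

```python
import numpy as np

def forward_backward(pi, T, E, obs):
    """Forward-backward smoothing for a discrete HMM (illustrative sketch).

    pi:  (K,) initial distribution over hidden states
    T:   (K, K) transition matrix, T[i, j] = P(h_{t+1}=j | h_t=i)
    E:   (K, M) emission matrix, E[i, o] = P(e=o | h=i)
    obs: list of observed symbol indices e_1..e_N
    Returns an (N, K) array of smoothed posteriors P(h_t | e_1..e_N).
    """
    N, K = len(obs), len(pi)
    # Forward pass: F[t, i] proportional to P(h_t = i, e_1..e_t)
    F = np.zeros((N, K))
    F[0] = pi * E[:, obs[0]]
    F[0] /= F[0].sum()
    for t in range(1, N):
        F[t] = (F[t - 1] @ T) * E[:, obs[t]]
        F[t] /= F[t].sum()  # normalize each step for numerical stability
    # Backward pass: B[t, i] proportional to P(e_{t+1}..e_N | h_t = i)
    B = np.ones((N, K))
    for t in range(N - 2, -1, -1):
        B[t] = T @ (E[:, obs[t + 1]] * B[t + 1])
        B[t] /= B[t].sum()
    # Smoothing: combine and renormalize each row into a distribution
    S = F * B
    return S / S.sum(axis=1, keepdims=True)
```

For instance, with a two-state model where state 0 usually emits symbol 0, calling `forward_backward` on the observation sequence `[0, 0, 1]` yields a posterior at t=0 that favors state 0 even after conditioning on the later, conflicting observation.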
Syllabus
Introduction.
Review: Bayesian network.
Review: probabilistic inference.
Hidden Markov model inference.
Lattice representation.
Summary.
Hidden Markov models.
Review: beam search.
Step 1: propose.
Step 2: weight.
Step 3: resample.
Application: object tracking.
Particle filtering demo.
Roadmap.
Gibbs sampling.
Taught by
Stanford Online