This course introduces the foundational principles of artificial intelligence through the lens of reasoning and decision-making under uncertainty.

Students begin by examining how intelligent agents act in uncertain environments, using probability theory, Bayes’ Rule, and independence assumptions to update beliefs: concepts that underpin probabilistic machine learning and data-driven decision-making. The course then explores Bayesian Networks as a structured framework for representing complex dependencies and performing inference, connecting to modern graphical models and causal reasoning.

Building on this, students study probabilistic reasoning over time using temporal models such as Hidden Markov Models, with links to contemporary sequence modeling and state estimation in applications like speech recognition and robotics.

Finally, the course addresses sequential decision-making through Markov Decision Processes, where students learn to compute optimal policies using value iteration, policy iteration, and the Bellman equation, ideas that form the foundation of modern reinforcement learning methods used in systems such as autonomous agents and game-playing AI.
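The belief update via Bayes’ Rule mentioned above can be sketched in a few lines. This is a minimal illustration, not course material: the rain/wet-grass scenario and the probability values are invented for the example.

```python
def bayes_update(prior, likelihood):
    """Bayes' Rule: posterior(s) is proportional to prior(s) * P(evidence | s).

    prior: dict mapping state -> P(state)
    likelihood: dict mapping state -> P(observed evidence | state)
    """
    unnormalized = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(unnormalized.values())  # normalizing constant P(evidence)
    return {s: p / z for s, p in unnormalized.items()}

# Illustrative scenario: is it raining, given that the grass is wet?
prior = {"rain": 0.3, "no_rain": 0.7}
likelihood = {"rain": 0.9, "no_rain": 0.2}  # P(wet grass | state)
posterior = bayes_update(prior, likelihood)
# posterior["rain"] = (0.3 * 0.9) / (0.3 * 0.9 + 0.7 * 0.2) = 0.27 / 0.41
```

Observing evidence that is much more likely under "rain" shifts the belief from 0.3 toward roughly 0.66, which is the kind of belief revision the first unit formalizes.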
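The temporal reasoning with Hidden Markov Models described above boils down to a predict-then-update loop (the forward algorithm for filtering). Below is a minimal sketch; the rain/umbrella states and all probabilities are assumed values chosen for illustration.

```python
def forward_step(belief, transition, emission, obs):
    """One HMM filtering step: predict with the transition model,
    then reweight by the likelihood of the new observation.

    belief: dict state -> P(state | evidence so far)
    transition: dict state -> dict next_state -> P(next | current)
    emission: dict state -> dict observation -> P(obs | state)
    """
    states = list(belief)
    # Predict: push the belief one step forward in time.
    predicted = {s2: sum(belief[s1] * transition[s1][s2] for s1 in states)
                 for s2 in states}
    # Update: weight by the evidence, then normalize.
    unnorm = {s: predicted[s] * emission[s][obs] for s in states}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# Illustrative hidden state: rain; observation: whether an umbrella is seen.
belief = {"rain": 0.5, "no_rain": 0.5}
transition = {"rain": {"rain": 0.7, "no_rain": 0.3},
              "no_rain": {"rain": 0.3, "no_rain": 0.7}}
emission = {"rain": {"umbrella": 0.9, "no_umbrella": 0.1},
            "no_rain": {"umbrella": 0.2, "no_umbrella": 0.8}}
updated = forward_step(belief, transition, emission, "umbrella")
```

Calling `forward_step` once per observation keeps a running posterior over the hidden state, which is the state-estimation pattern used in speech recognition and robot localization.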
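The value iteration method named above repeatedly applies the Bellman optimality update until the value function stops changing. Here is a compact sketch on a tiny two-state MDP; the states, actions, rewards, and discount factor are all hypothetical, chosen so the result is easy to check by hand.

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, tol=1e-6):
    """Bellman optimality update:
    V(s) <- max_a sum_{s'} P(s' | s, a) * [R(s, a, s') + gamma * V(s')]
    Iterates until the largest change across states falls below tol.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (reward(s, a, s2) + gamma * V[s2])
                    for s2, p in transition(s, a).items())
                for a in actions(s)
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Hypothetical MDP: two states; "move" swaps states, "stay" does not.
# Landing in state "B" yields reward 1, so the optimal policy heads to B
# and stays there: V(B) = 1 / (1 - 0.9) = 10, and V(A) = 1 + 0.9 * 10 = 10.
V = value_iteration(
    states=["A", "B"],
    actions=lambda s: ["stay", "move"],
    transition=lambda s, a: {s: 1.0} if a == "stay"
                            else {("B" if s == "A" else "A"): 1.0},
    reward=lambda s, a, s2: 1.0 if s2 == "B" else 0.0,
)
```

Reading an optimal policy off the converged values (pick the action whose backup attains the max) is the policy-extraction step that connects this algorithm to modern reinforcement learning.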