
DataCamp

Deep Reinforcement Learning in Python

via DataCamp

Overview

Learn and use powerful Deep Reinforcement Learning algorithms, including refinement and optimization techniques.

Discover the cutting-edge techniques that empower machines to learn and interact with their environments. You will dive into the world of Deep Reinforcement Learning (DRL) and gain hands-on experience with the most powerful algorithms driving the field forward. You will use PyTorch and the Gymnasium environment to build your own agents.

Master the Fundamentals of Deep Reinforcement Learning



Our journey begins with the foundations of DRL and their relationship to traditional Reinforcement Learning. From there, we swiftly move on to implementing Deep Q-Networks (DQN) in PyTorch, including advanced refinements such as Double DQN and Prioritized Experience Replay to supercharge your models.
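To make the Double DQN refinement concrete, here is a minimal sketch of how its target differs from the vanilla DQN target. The networks, batch size, and dimensions are hypothetical; the point is that Double DQN lets the online network pick the action while the target network evaluates it, which reduces overestimation bias.

```python
import torch

# Hypothetical tiny Q-networks over 8-dim states and 4 actions.
online_net = torch.nn.Linear(8, 4)
target_net = torch.nn.Linear(8, 4)
target_net.load_state_dict(online_net.state_dict())

gamma = 0.99
next_states = torch.randn(32, 8)  # a batch sampled from a replay buffer
rewards = torch.randn(32)
dones = torch.zeros(32)

with torch.no_grad():
    # Vanilla DQN target: max over the target network's own estimates.
    dqn_target = rewards + gamma * (1 - dones) * target_net(next_states).max(dim=1).values

    # Double DQN: online network selects the action, target network evaluates it.
    best_actions = online_net(next_states).argmax(dim=1, keepdim=True)
    ddqn_target = rewards + gamma * (1 - dones) * target_net(next_states).gather(1, best_actions).squeeze(1)
```

Because the evaluated action need not be the target network's own maximizer, the Double DQN target is never larger than the vanilla one on the same batch.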

Take your skills to the next level as you explore policy-based methods. You will learn and implement essential policy-gradient techniques such as REINFORCE and Actor-Critic methods.
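The core of REINFORCE fits in a few lines: maximize expected return by minimizing the negative log-probability of each taken action, weighted by its return. The policy network, episode data, and returns below are hypothetical placeholders, assuming a discrete 2-action task.

```python
import torch

# Hypothetical policy network over 2 actions for 4-dim observations.
policy = torch.nn.Sequential(torch.nn.Linear(4, 2), torch.nn.Softmax(dim=-1))

states = torch.randn(5, 4)                          # one episode's states
actions = torch.tensor([0, 1, 1, 0, 1])             # actions taken
returns = torch.tensor([5.0, 4.0, 3.0, 2.0, 1.0])   # discounted returns G_t

probs = policy(states)
log_probs = torch.log(probs.gather(1, actions.unsqueeze(1)).squeeze(1))

# REINFORCE loss: -sum(G_t * log pi(a_t | s_t)); gradient descent on this
# is gradient ascent on the expected return.
loss = -(returns * log_probs).sum()
loss.backward()
```

Actor-Critic methods such as A2C keep this same loss shape but replace the raw return `G_t` with an advantage estimate from a learned value function, which lowers gradient variance.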

Use Cutting-edge Algorithms



You will encounter powerful DRL algorithms commonly used in the industry today, including Proximal Policy Optimization (PPO). You will gain practical experience with the techniques driving breakthroughs in robotics, game AI, and beyond. Finally, you will learn to optimize your models using Optuna for hyperparameter tuning.
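PPO's key idea, the clipped surrogate objective, can be sketched numerically. The probability ratios and advantages below are made-up values; the sketch shows how clipping the ratio to `[1 - eps, 1 + eps]` and taking the pessimistic minimum keeps policy updates conservative.

```python
import torch

# Hypothetical batch of probability ratios pi_new/pi_old and advantages.
ratios = torch.tensor([0.5, 0.9, 1.1, 1.5])
advantages = torch.tensor([1.0, -1.0, 2.0, 0.5])
eps = 0.2  # PPO clipping range

unclipped = ratios * advantages
clipped = torch.clamp(ratios, 1 - eps, 1 + eps) * advantages

# PPO maximizes the elementwise minimum of the two terms, so the
# objective never rewards moving the ratio far outside the clip range.
objective = torch.min(unclipped, clipped).mean()
```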

By the end of this course, you will have acquired the skills to apply these cutting-edge techniques to real-world problems and harness DRL's full potential!

Syllabus

  • Introduction to Deep Reinforcement Learning
    • Discover how deep reinforcement learning improves upon traditional Reinforcement Learning while studying and implementing your first Deep Q-learning algorithm.
  • Deep Q-learning
    • Dive into Deep Q-learning by implementing the original DQN algorithm, featuring experience replay, epsilon-greedy exploration, and fixed Q-targets. Beyond DQN, you will then explore two fascinating extensions that improve the performance and stability of Deep Q-learning: Double DQN and Prioritized Experience Replay.
  • Introduction to Policy Gradient Methods
    • Learn about the foundational concepts of policy gradient methods found in DRL. You will begin with the policy gradient theorem, which forms the basis for these methods. Then, you will implement the REINFORCE algorithm, a powerful approach to learning policies. The chapter will then guide you through Actor-Critic methods, focusing on the Advantage Actor-Critic (A2C) algorithm, which combines the strengths of both policy gradient and value-based methods to enhance learning efficiency and stability.
  • Proximal Policy Optimization and DRL Tips
    • Explore Proximal Policy Optimization (PPO) for robust DRL performance. Next, you will examine the entropy bonus in PPO, which encourages exploration by preventing premature convergence to deterministic policies. You'll also learn about batch updates in policy gradient methods. Finally, you will learn about hyperparameter optimization with Optuna, a powerful tool for tuning your DRL models.

Taught by

Timothée Carayol

