Overview
Explore a comprehensive analysis of the paper "Regularizing Trajectory Optimization with Denoising Autoencoders" in this informative video. Delve into the challenges of planning with learned world models in reinforcement learning and discover a novel solution that regularizes trajectory optimization using denoising autoencoders. Learn how this approach improves planning accuracy with both gradient-based and gradient-free optimizers, leading to rapid initial learning in popular motor control tasks. Gain insights into the paper's methodology, experiments, and implications for enhancing sample efficiency in model-based reinforcement learning.
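The core idea described above — penalizing plans that drift away from the training distribution of a learned world model — can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the `cost` function, the stand-in `denoise` function (a shrinkage toward the data mean, which is the optimal denoiser only in a simple Gaussian setting), and all constants are hypothetical. The regularizer is the reconstruction penalty ||g(x) − x||², which is small where the denoiser agrees with the input, i.e. near the data distribution.

```python
import numpy as np

def cost(traj):
    # Hypothetical task cost: squared distance of the final state from a goal.
    goal = np.array([1.0, 1.0])
    return np.sum((traj[-1] - goal) ** 2)

def denoise(traj, data_mean, shrink=0.9):
    # Stand-in for a trained denoising autoencoder: for Gaussian data the
    # optimal denoiser shrinks inputs toward the data mean.
    return data_mean + shrink * (traj - data_mean)

def regularized_objective(traj, data_mean, lam=1.0):
    # Reconstruction penalty ||g(x) - x||^2 keeps the plan near the
    # distribution the (toy) denoiser was "trained" on.
    penalty = np.sum((denoise(traj, data_mean) - traj) ** 2)
    return cost(traj) + lam * penalty

def plan(steps=500, lr=0.05, lam=1.0):
    # Gradient-based trajectory optimization on the regularized objective.
    data_mean = np.zeros(2)
    traj = np.random.default_rng(0).normal(size=(5, 2))  # 5-step, 2-D trajectory
    eps = 1e-5
    for _ in range(steps):
        # Central-difference gradient for clarity; autodiff would be used
        # in practice.
        grad = np.zeros_like(traj)
        for idx in np.ndindex(traj.shape):
            bump = np.zeros_like(traj)
            bump[idx] = eps
            grad[idx] = (regularized_objective(traj + bump, data_mean, lam)
                         - regularized_objective(traj - bump, data_mean, lam)) / (2 * eps)
        traj -= lr * grad
    return traj
```

The same penalty can be added to a gradient-free optimizer such as CEM by simply scoring candidate trajectories with `regularized_objective` instead of the raw cost, which is one of the two settings the video discusses.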
Syllabus
Introduction
What is Reinforcement Learning
Exploiting Inaccurate Models
Proposed Approach
Regularization
Denoising Autoencoders
Optimal Denoising Function
Gradient Descent
Experiments
Taught by
Yannic Kilcher