Overview
This lecture explores representation learning for image-based goals in robot learning, focusing on visual goal-conditioned reinforcement learning. Discover how to translate image-based goals into functional latent representations and understand the properties of effective models. Learn why viewing goals and tasks as distributions rather than fixed points can overcome limitations in current methods.

The presentation covers how goal-conditioned RL enables instructing agents through desired outcomes in image space, while highlighting the challenge of determining which image features are truly task-relevant. Explore the application of Variational Autoencoders (VAEs) for learning latent representations that capture essential pose and task-relevant information while filtering out irrelevant details. Understand practical implementation aspects, including computing rewards from distances in latent space and training robust representations. The lecture concludes by connecting these concepts to recent foundation models for robotics and discussing key ingredients for improving learned representations in large models.
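The reward computation described above can be sketched in a few lines. This is a minimal, hypothetical illustration: a fixed random linear projection stands in for the mean output of a trained VAE encoder q(z|x), and the reward is the negative Euclidean distance between the latent codes of the current observation and the goal image. The names, dimensions, and the `encode` placeholder are assumptions for illustration, not the lecture's actual implementation.

```python
import numpy as np

# Hypothetical stand-in for a trained VAE encoder: in practice z would be
# the mean of the VAE's recognition network q(z|x). Here a fixed random
# linear projection maps a flattened image to a latent vector.
rng = np.random.default_rng(0)
LATENT_DIM = 8
IMG_DIM = 64 * 64  # flattened 64x64 grayscale image (assumed size)
W = rng.normal(scale=1.0 / np.sqrt(IMG_DIM), size=(LATENT_DIM, IMG_DIM))

def encode(image_flat):
    """Map an image to a latent code z (placeholder for the VAE mean)."""
    return W @ image_flat

def latent_distance_reward(obs_image, goal_image):
    """Reward = negative Euclidean distance between latent codes, the
    common choice in visual goal-conditioned RL with learned latents."""
    z_obs = encode(obs_image)
    z_goal = encode(goal_image)
    return -float(np.linalg.norm(z_obs - z_goal))

# The reward is maximal (zero) exactly when the observation's latent
# code matches the goal's, and grows more negative as they diverge.
goal = rng.random(IMG_DIM)
r_match = latent_distance_reward(goal, goal)
r_far = latent_distance_reward(rng.random(IMG_DIM), goal)
```

The key design choice this sketch highlights is that distance is measured in the learned latent space rather than pixel space, so visually irrelevant variation that the encoder discards does not affect the reward.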
Syllabus
Robot Learning: Visual Goal-Conditioned Reinforcement Learning
Taught by
Montreal Robotics