
RLHF for Finer Alignment with Gemma 3

Google via YouTube

Overview

This 10-minute presentation from Google explores techniques for Reinforcement Learning from Human Feedback (RLHF) developed specifically for Gemma 3 models. Learn about the challenges of using reward models as proxies for human preferences, including the inevitable problem of reward hacking during prolonged training. Discover the innovative approaches implemented by the Gemma team to mitigate these issues and enable extended training periods, resulting in better alignment of language models for human interaction. Speaker Louis Rouillard explains how these techniques create finer alignment between AI systems and human preferences, ultimately producing more helpful and appropriate AI responses.
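For context on the reward-hacking problem the talk addresses, below is a minimal, generic sketch of the KL-penalized RLHF reward that is widely used to keep a policy from drifting too far from a frozen reference model. This is an illustration of the general technique only, not the specific approach the Gemma team describes in the video; the function name rlhf_advantage and the beta value are hypothetical.

import numpy as np

def rlhf_advantage(reward_model_score, policy_logprobs, reference_logprobs, beta=0.1):
    """Combine a reward-model score with a KL penalty against a reference model.

    reward_model_score: scalar proxy reward for the sampled response.
    policy_logprobs:    log-probs the current policy assigned to each response token.
    reference_logprobs: log-probs the frozen reference model assigned to the same tokens.
    beta:               penalty strength (illustrative value, not from the talk).
    """
    # Monte Carlo estimate of KL(policy || reference) over the sampled tokens:
    # the further the policy drifts from the reference, the larger the deduction.
    kl_estimate = np.sum(policy_logprobs - reference_logprobs)
    return reward_model_score - beta * kl_estimate

# Toy usage: a high proxy reward gets discounted when the policy has drifted
# far from the reference -- the kind of drift that, unchecked over prolonged
# training, lets reward hacking accumulate.
policy_lp = np.array([-0.2, -0.1, -0.3])
reference_lp = np.array([-0.9, -0.8, -1.1])
print(rlhf_advantage(reward_model_score=2.5,
                     policy_logprobs=policy_lp,
                     reference_logprobs=reference_lp))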

Syllabus

RLHF for finer alignment with Gemma 3

Taught by

Google Developers

