Overview
This 10-minute presentation from Google explores techniques for Reinforcement Learning from Human Feedback (RLHF) developed specifically for Gemma 3 models. Learn about the challenges of using reward models as proxies for human preferences, including reward hacking, which inevitably emerges during prolonged training. Discover the approaches the Gemma team implemented to mitigate these issues and enable extended training runs, resulting in language models that are better aligned for human interaction. Speaker Louis Rouillard explains how these techniques achieve finer alignment between AI systems and human preferences, ultimately producing more helpful and appropriate responses.
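The presentation summary above doesn't detail the Gemma team's specific mitigations, but the failure mode it names is easy to sketch. The toy Python below (all rewards and coefficients invented, not Google's method) optimizes a policy against a flawed reward-model proxy: unregularized, the policy "hacks" the proxy at the expense of true preference, while a KL penalty toward a reference policy, a common generic RLHF mitigation, limits the drift.

```python
# Toy illustration of reward hacking in RLHF, and a KL-based mitigation.
# This is NOT the Gemma team's method from the talk; it only sketches the
# generic failure mode: a policy optimized against a learned reward model
# (a proxy for human preference) drifts toward outputs the proxy overrates.
import numpy as np

# Three candidate "responses" the policy can choose between.
true_reward  = np.array([1.0, 0.8, 0.2])   # what humans actually prefer
proxy_reward = np.array([1.0, 0.8, 2.5])   # reward model overrates response 2

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def train(kl_coef, steps=2000, lr=0.1):
    """Gradient ascent on E_p[proxy] - kl_coef * KL(p || uniform reference)."""
    logits = np.zeros(3)                   # start at the uniform reference policy
    for _ in range(steps):
        p = softmax(logits)
        kl = p @ np.log(3.0 * p)           # KL(p || uniform)
        grad = (p * (proxy_reward - p @ proxy_reward)
                - kl_coef * p * (np.log(3.0 * p) - kl))
        logits += lr * grad
    return softmax(logits)

for kl_coef in (0.0, 2.0):
    p = train(kl_coef)
    print(f"kl_coef={kl_coef}: policy={np.round(p, 3)} "
          f"proxy={p @ proxy_reward:.2f} true={p @ true_reward:.2f}")

# With kl_coef=0.0 the policy collapses onto the overrated response:
# proxy reward climbs while true reward falls -- reward hacking.
# With kl_coef=2.0 the KL penalty keeps the policy near the reference,
# trading some proxy reward for much better true reward.
```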
Syllabus
RLHF for finer alignment with Gemma 3
Taught by
Google Developers