Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

How to Make Smaller LLMs R1-Smart

Discover AI via YouTube

Overview

This video lecture explores how to enhance the reasoning capabilities of smaller Large Language Models (LLMs) so they can approach R1-level performance. Over 31 minutes, it presents findings from researchers at UC Berkeley and the Allen Institute for AI, drawn from the paper "Climbing the Ladder of Reasoning: What LLMs Can—and Still Can't—Solve after SFT?" (arXiv:2504.11741v1). Learn about the challenges and methodologies involved in improving reasoning in smaller language models, understand the limitations that persist even after supervised fine-tuning (SFT), and discover practical approaches to boosting LLM performance without massive computational resources. The presentation offers valuable insights for AI researchers and practitioners interested in optimizing smaller language models for complex reasoning tasks.

Syllabus

Make Smaller LLMs R1-Smart (UC Berkeley)

Taught by

Discover AI
