

Weak to Strong Generalization in Random Feature Models

Paul G. Allen School via YouTube

Overview

In this Distinguished Seminar in Optimization &amp; Data, Professor Nathan Srebro of the Toyota Technological Institute at Chicago presents his research on "Weak to Strong Generalization in Random Feature Models." Explore how a strong student model with a large number of random features can outperform a weak teacher model despite being trained only on data labeled by that teacher. Srebro demonstrates that weak-to-strong generalization requires neither complex learners like GPT-4 nor pre-training: it can arise in simple random feature models through mechanisms such as early stopping. The talk also addresses the quantitative limits of the phenomenon. Professor Srebro, recognized for his significant contributions to machine learning, including work on Markov networks, nuclear norm applications, optimization techniques, and fairness measures, shares insights from his joint work with Marko Medvedev, Kaifeng Lyu, Dingli Yu, Sanjeev Arora, and Zhiyuan Li. This hour-long seminar, presented by the Paul G. Allen School, offers valuable perspective on an intriguing machine learning phenomenon with implications for model development and training strategies.
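To make the setup concrete, here is a minimal NumPy sketch of the weak-to-strong pipeline the talk studies: a small random feature model (the weak teacher) labels the training data, and a much larger random feature model (the strong student) is trained by gradient descent on those imperfect labels. This is an illustrative toy, not the construction from the talk; the target function, feature counts, learning rate, and the use of the true test error to mark the early-stopping point are all assumptions chosen for demonstration (in practice one would stop via a held-out set).

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Illustrative ground-truth function the weak teacher only partially captures.
    return np.sin(3 * x).ravel()

def relu_features(x, W, b):
    # Random ReLU features: phi(x) = max(0, xW + b), with fixed random W, b.
    return np.maximum(0.0, x @ W + b)

n_train, n_test, d = 200, 1000, 1
X_train = rng.uniform(-1.0, 1.0, (n_train, d))
X_test = rng.uniform(-1.0, 1.0, (n_test, d))
y_test = target(X_test)

# Weak teacher: a small random feature model fit to the true labels.
k_weak = 4
W_t, b_t = rng.normal(size=(d, k_weak)), rng.normal(size=k_weak)
coef_t, *_ = np.linalg.lstsq(
    relu_features(X_train, W_t, b_t), target(X_train), rcond=None
)
teacher_labels = relu_features(X_train, W_t, b_t) @ coef_t  # imperfect labels
teacher_err = np.mean((relu_features(X_test, W_t, b_t) @ coef_t - y_test) ** 2)

# Strong student: many random features, trained by gradient descent
# ONLY on the teacher's labels -- never on the ground truth.
k_strong = 500
W_s, b_s = rng.normal(size=(d, k_strong)), rng.normal(size=k_strong)
Phi_train = relu_features(X_train, W_s, b_s)
Phi_test = relu_features(X_test, W_s, b_s)

coef_s = np.zeros(k_strong)
lr = 1e-3
best_student_err = np.inf
for step in range(3000):
    residual = Phi_train @ coef_s - teacher_labels
    coef_s -= lr * Phi_train.T @ residual / n_train
    # Track the best test error along the path; early stopping would halt here.
    student_err = np.mean((Phi_test @ coef_s - y_test) ** 2)
    best_student_err = min(best_student_err, student_err)

print(f"teacher test MSE:               {teacher_err:.4f}")
print(f"early-stopped student test MSE: {best_student_err:.4f}")
```

The point of the sketch is the shape of the experiment, not any particular numbers: along the gradient-descent path, the student first fits the signal in the teacher's labels and only later fits the teacher's errors, so the test error along the path can dip below the teacher's before rising again, which is the early-stopping mechanism the talk discusses.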

Syllabus

Distinguished Seminar in Optimization & Data: Nathan Srebro (TTIC)

Taught by

Paul G. Allen School

