
Leveraging Per-Instance Privacy for Machine Unlearning

Google TechTalks via YouTube

Overview

Learn about a principled approach to machine unlearning that uses per-instance privacy analysis to quantify how difficult individual data points are to remove from trained models. This Google TechTalk sharpens the analysis of noisy gradient descent for unlearning by replacing worst-case privacy loss bounds with per-instance privacy losses, leading to better utility-unlearning tradeoffs. These per-instance privacy losses bound the Rényi divergence to a model retrained without the individual data point, and empirical results show that the theoretical predictions hold both for Stochastic Gradient Langevin Dynamics (SGLD) and for standard fine-tuning without explicit noise.

The talk also shows that per-instance privacy losses correlate with existing data-difficulty metrics and can identify groups of data points that are harder to unlearn, and it introduces novel evaluation methods based on loss barriers. Together these results lay the foundation for more efficient, adaptive unlearning strategies tailored to the properties of individual data points.
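To make the core idea concrete, here is a minimal NumPy sketch of the kind of per-instance accounting the talk describes, under simplifying assumptions not taken from the talk itself: a single noisy full-batch gradient step with Gaussian noise, where removing example i shifts the mean of the update by lr * g_i / n. The function names and setup are illustrative, not the speakers' implementation.

```python
import numpy as np

def renyi_gaussian(delta_mu_norm, sigma, alpha=2.0):
    # Renyi divergence of order alpha between N(mu1, sigma^2 I)
    # and N(mu2, sigma^2 I) with ||mu1 - mu2|| = delta_mu_norm:
    #   D_alpha = alpha * ||mu1 - mu2||^2 / (2 * sigma^2)
    return alpha * delta_mu_norm ** 2 / (2 * sigma ** 2)

def per_instance_losses(grads, lr, sigma, alpha=2.0):
    # grads: (n, d) array of per-example gradients at the current iterate.
    # Dropping example i changes the mean of the noisy update by
    # lr * g_i / n, so its per-instance loss scales with ||g_i|| --
    # not with a worst-case clipping norm shared by all examples.
    n = grads.shape[0]
    norms = np.linalg.norm(grads, axis=1)
    return renyi_gaussian(lr * norms / n, sigma, alpha)

def worst_case_loss(grads, lr, sigma, alpha=2.0):
    # The standard worst-case bound applies the largest gradient
    # norm uniformly to every example.
    n = grads.shape[0]
    max_norm = np.linalg.norm(grads, axis=1).max()
    return renyi_gaussian(lr * max_norm / n, sigma, alpha)
```

In this toy model, examples with small gradients get per-instance losses far below the uniform worst-case bound, which is the intuition behind the improved utility-unlearning tradeoffs discussed in the talk.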

Syllabus

Leveraging Per-Instance Privacy for Machine Unlearning

Taught by

Google TechTalks

