Overview
Learn about a novel framework for evaluating machine unlearning methods through this 18-minute conference presentation from USENIX Security '25. Discover RULI (Rectified Unlearning via Likelihood Inference), an approach that addresses critical gaps in assessing inexact unlearning techniques by introducing dual-objective attacks that measure both unlearning efficacy and privacy risk at the level of individual samples. Explore how this research reveals significant vulnerabilities in current state-of-the-art unlearning benchmarks, demonstrating attack success rates higher than existing evaluations report and thereby exposing previously underestimated privacy risks. Understand the limitations of existing evaluation frameworks and the advantages of targeted analysis for identifying vulnerable samples in machine learning models. Examine the game-theoretic foundation underlying RULI and its empirical evaluations, which together provide a rigorous, scalable, and fine-grained methodology for evaluating unlearning techniques. Gain insights into the challenges of machine unlearning, including the trade-off between computational efficiency and privacy guarantees, and how this work lays the groundwork for future unlearning algorithms designed around practical privacy guarantees and robust efficacy measurements.
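To make the "likelihood inference" idea concrete: membership-inference-style evaluations of this kind typically score a sample by comparing how likely its loss is under a "member" loss distribution versus a "non-member" one. The sketch below is an illustrative, generic likelihood-ratio test (in the spirit of shadow-model attacks), not RULI's actual algorithm; the function name, Gaussian modeling, and synthetic loss values are all assumptions for demonstration.

```python
import numpy as np

def likelihood_ratio_score(target_loss, in_losses, out_losses):
    """Illustrative per-sample membership score (NOT RULI's method):
    fit a Gaussian to shadow-model losses for members ('in') and
    non-members ('out'), then return the log-likelihood ratio.
    A positive score suggests the sample behaves like a member."""
    mu_in, sd_in = np.mean(in_losses), np.std(in_losses) + 1e-8
    mu_out, sd_out = np.mean(out_losses), np.std(out_losses) + 1e-8
    # Gaussian log-densities (constant terms cancel in the ratio)
    log_p_in = -0.5 * ((target_loss - mu_in) / sd_in) ** 2 - np.log(sd_in)
    log_p_out = -0.5 * ((target_loss - mu_out) / sd_out) ** 2 - np.log(sd_out)
    return log_p_in - log_p_out

# Synthetic shadow losses: members tend to have lower loss.
rng = np.random.default_rng(0)
in_losses = rng.normal(0.2, 0.05, 100)
out_losses = rng.normal(1.0, 0.20, 100)

print(likelihood_ratio_score(0.25, in_losses, out_losses))  # member-like: positive
print(likelihood_ratio_score(0.95, in_losses, out_losses))  # non-member-like: negative
```

Applied to an "unlearned" model, a sample that still scores as member-like indicates residual memorization, which is the kind of per-sample signal a dual-objective evaluation uses to quantify both unlearning efficacy and privacy leakage.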
Syllabus
USENIX Security '25 - Rectifying Privacy and Efficacy Measurements in Machine Unlearning...
Taught by
USENIX