Overview
Learn about a novel framework for measuring machine unlearning effectiveness in this 13-minute conference presentation from USENIX Security '25. Explore the critical challenges in data privacy and security that make machine unlearning—the process of removing specific data influences from trained models without complete retraining—increasingly important. Discover how existing assessment methods such as Membership Inference Attacks (MIAs) face significant limitations, including prohibitive computational costs and an inability to capture granular changes in approximate unlearning scenarios. Examine the proposed Interpolated Approximate Measurement (IAM) framework, which natively addresses unlearning inference by quantifying sample-level unlearning completeness through interpolating the model's generalization-fitting behavior gap on queried samples. Understand how IAM achieves strong performance in binary inclusion tests for exact unlearning while maintaining high correlation for approximate unlearning, and how it scales to Large Language Models using just one pre-trained shadow model. Gain insights into the theoretical analysis of IAM's scoring mechanism and how the framework maintains computational efficiency. Investigate the application of IAM to recent approximate unlearning algorithms, which reveals general risks of both over-unlearning and under-unlearning, highlighting the critical need for stronger safeguards in approximate unlearning systems.
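The overview stays high level, so the short Python sketch below illustrates one way to read "interpolating the generalization-fitting behavior gap": place the audited model's per-sample confidence between a generalization reference (from a shadow model that never saw the sample) and a fitting reference (a memorized-sample baseline). The function names, the use of softmax confidence as the signal, and the choice of 1.0 as the fitting reference are illustrative assumptions, not the paper's exact scoring function; consult the talk for the actual IAM definition.

```python
# Illustrative sketch of an IAM-style per-sample unlearning-completeness score.
# All names and the specific formula are assumptions for exposition.
import math
from typing import Callable, Sequence

Model = Callable[[Sequence[float]], Sequence[float]]  # maps features to class logits

def true_label_confidence(model: Model, x: Sequence[float], y: int) -> float:
    """Softmax probability the model assigns to the true label y (a common MIA-style signal)."""
    logits = model(x)
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]  # subtract max for numerical stability
    return exps[y] / sum(exps)

def unlearning_completeness(target_model: Model, shadow_model: Model,
                            x: Sequence[float], y: int) -> float:
    """
    Interpolate the audited model's behavior on (x, y) inside the generalization-fitting gap:
      - generalization reference: confidence of a shadow model trained without x
      - fitting reference: confidence of a model that memorized x (approximated here as 1.0)
    Returns a score in [0, 1]: ~1.0 means the sample behaves as if fully unlearned,
    ~0.0 means it still looks memorized.
    """
    gen_conf = true_label_confidence(shadow_model, x, y)   # generalization baseline
    fit_conf = 1.0                                          # assumed memorization baseline
    obs_conf = true_label_confidence(target_model, x, y)   # model under audit

    gap = fit_conf - gen_conf
    if gap <= 1e-8:
        return 1.0  # sample generalizes perfectly anyway; nothing left to unlearn
    completeness = (fit_conf - obs_conf) / gap
    return min(max(completeness, 0.0), 1.0)
```

Because the generalization reference comes from a single pre-trained shadow model rather than many retrained models, a per-sample score of this shape can be evaluated cheaply across a forget set, which is consistent with the scalability to Large Language Models described above.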
Syllabus
USENIX Security '25 - Towards Lifecycle Unlearning Commitment Management: Measuring Sample-level...
Taught by
USENIX