Overview
Learn about a novel framework for measuring machine unlearning effectiveness in this 13-minute conference presentation from USENIX Security '25. Explore the data privacy and security challenges that make machine unlearning (the process of removing specific data influences from trained models without complete retraining) increasingly important.

Discover how existing assessment methods such as Membership Inference Attacks (MIAs) face significant limitations, including prohibitive computational costs and an inability to capture granular changes in approximate unlearning scenarios. Examine the proposed Interpolated Approximate Measurement (IAM) framework, which natively supports unlearning inference by quantifying sample-level unlearning completeness through interpolating the model's generalization-fitting behavior gap on queried samples.

Understand how IAM achieves strong performance in binary inclusion tests for exact unlearning while maintaining high correlation for approximate unlearning, and how it scales to large language models using just one pre-trained shadow model. Gain insight into the theoretical analysis of IAM's scoring mechanism and how the framework preserves efficiency. Investigate the application of IAM to recent approximate unlearning algorithms, which reveals general risks of both over-unlearning and under-unlearning, highlighting the need for stronger safeguards in approximate unlearning systems.
Syllabus
USENIX Security '25 - Towards Lifecycle Unlearning Commitment Management: Measuring Sample-level...
Taught by
USENIX