Overview
Explore patient-level privacy risks in medical AI deployment through this Google TechTalk that investigates how machine learning models in healthcare may pose disparate privacy threats to individual patients. Learn about the limitations of current privacy attack research that measures success rates in aggregate rather than at the individual patient level, and discover new findings on patient-level privacy auditing methodologies. Examine how medical AI models, while promising to improve global access to high-quality diagnostics, create privacy vulnerabilities that disproportionately affect under-represented patient groups. Understand the implications of membership inference attacks and other privacy threats when patients contribute multiple records to training datasets, and gain insights into the evolving landscape of privacy protection in medical machine learning applications.
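To make the distinction between aggregate and patient-level auditing concrete, here is a minimal illustrative sketch of a loss-threshold membership inference audit scored per patient. Everything in it is an assumption for demonstration: the synthetic losses, the threshold value, the majority-vote rule over a patient's records, and the function name `patient_level_mia` are not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def patient_level_mia(losses, patient_ids, threshold):
    """Toy loss-threshold membership inference, scored per patient.

    A record is guessed to be a training-set member if its model loss
    falls below `threshold` (members typically have lower loss); a
    patient is flagged if a majority of their records are flagged.
    This contrasts with aggregate auditing, which would only report
    an overall attack success rate across all records.
    """
    losses = np.asarray(losses, dtype=float)
    patient_ids = np.asarray(patient_ids)
    record_guess = losses < threshold  # per-record membership guess
    verdicts = {}
    for pid in np.unique(patient_ids):
        mask = patient_ids == pid
        # Majority vote over this patient's records — patients who
        # contribute many records can be easier to identify.
        verdicts[pid] = bool(record_guess[mask].mean() > 0.5)
    return verdicts

# Synthetic example: patient "A" contributed 6 training records (low
# loss); patient "B" contributed none (higher loss on their 4 records).
losses = np.concatenate([
    rng.normal(0.2, 0.05, size=6),  # member records
    rng.normal(1.0, 0.10, size=4),  # non-member records
])
ids = np.array(["A"] * 6 + ["B"] * 4)

verdicts = patient_level_mia(losses, ids, threshold=0.5)
print(verdicts)
```

Reporting a per-patient verdict like this, rather than one pooled success rate, is what lets an audit surface the disparate risks the talk describes, such as higher exposure for under-represented groups or for patients with many records in the training set.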
Syllabus
Disparate Privacy Risks from Medical AI - An Investigation into Patient-level Privacy Risk
Taught by
Google TechTalks