
Going Back and Beyond - Emerging (Old) Threats in LLM Privacy and Poisoning

Google TechTalks via YouTube

Overview

Explore emerging privacy and security threats in Large Language Models through this Google TechTalk, which examines how traditional machine learning vulnerabilities manifest in modern LLM deployments. Learn how adversaries can exploit LLMs' inferential capabilities to reconstruct sensitive user attributes from textual data, moving beyond conventional memorization concerns to demonstrate reconstruction attacks similar to those found in ML fairness research.

Discover how common deployment practices such as quantization and model fine-tuning can be weaponized to introduce stealthy backdoors into LLMs, a significant evolution from traditional data poisoning methods.

Understand the broader threat landscape facing LLM-driven applications as users increasingly share personal data with these systems, and examine potential defensive measures to mitigate privacy risks. The talk argues that expanding beyond well-defined scenarios like training data memorization is crucial for comprehensive LLM security, with practical examples of how inferential capabilities can be turned against user privacy, and emphasizes the importance of adopting broader threat models to ensure the security and privacy of generative AI systems in real-world deployments.
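The quantization point above can be made concrete with a toy numerical sketch. Everything here is invented for illustration (the one-weight "classifier", the trigger value, and the threshold are assumptions, not the talk's actual attack): it only demonstrates the underlying principle that round-to-nearest quantization shifts weights, so a model can behave benignly in full precision yet cross a decision boundary after quantization.

```python
# Toy sketch (illustrative assumption, not the talk's actual attack):
# quantization changes a model's outputs, so an attacker who controls
# fine-tuning could plant a weight that looks benign in full precision
# but crosses a decision boundary only after round-to-nearest quantization.

def quantize(w: float, scale: float = 0.05) -> float:
    """Round-to-nearest quantization onto a fixed grid (toy int-style scheme)."""
    return round(w / scale) * scale

# Hypothetical one-weight "classifier": score = weight * trigger_feature.
w_full = 0.34             # crafted weight: just below the malicious region
trigger_feature = 10.0    # input feature that would activate the backdoor
threshold = 3.5           # decision boundary for the malicious behavior

score_full = w_full * trigger_feature             # 3.4: stays benign
score_quant = quantize(w_full) * trigger_feature  # 0.35 * 10: crosses 3.5

print("full-precision model triggers backdoor:", score_full >= threshold)
print("quantized model triggers backdoor:", score_quant >= threshold)
```

In a real attack the crafted behavior would be distributed across millions of parameters, but the mechanism is the same: the quantization step, a routine deployment practice, is what flips the model from benign to backdoored.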

Syllabus

Going Back and Beyond: Emerging (Old) Threats in LLM Privacy and Poisoning

Taught by

Google TechTalks

