Overview
Learn about privacy-preserving techniques for adapting large language models through this technical presentation, which examines the privacy vulnerabilities of LLM prompting and proposes novel solutions. Discover how membership inference attacks can exploit data contained in prompts, demonstrating that privacy concerns about current LLM usage are well founded.

Explore approaches to private prompt learning, including gradient descent methods for obtaining soft prompts privately and ensemble-based techniques using "flocks of stochastic parrots" for discrete prompt generation. Examine evaluations comparing privacy guarantees and performance across adaptation methods, including private parameter-efficient fine-tuning (PEFT) and full fine-tuning. Analyze threat models and performance under varying privacy levels within differential privacy frameworks, across different LLM architectures and multiple datasets for both classification and generation tasks.

Understand the key findings on data leakage in closed LLMs, where both query data and training data can be exposed to the LLM provider, while methods that protect private data require local open LLMs. Compare the performance of closed and open LLM adaptation methods, and evaluate the monetary costs of training and querying each privacy-preserving approach. Gain insight into why open LLMs currently offer superior privacy protection, performance, and cost-effectiveness for truly privacy-preserving LLM adaptation.
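To make the differential-privacy machinery concrete, here is a minimal sketch of the DP-SGD-style gradient step that private soft-prompt learning builds on: per-example gradients are clipped to bound sensitivity, then Gaussian noise calibrated to the clipping norm is added. Function and parameter names are illustrative, not from the talk.

```python
import numpy as np

def dp_noisy_gradient(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD-style aggregation step (illustrative sketch).

    per_example_grads: list of per-example gradient arrays (same shape)
    clip_norm: L2 bound applied to each per-example gradient (sensitivity)
    noise_multiplier: Gaussian noise scale relative to clip_norm
    rng: numpy random Generator
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale each gradient down so its L2 norm is at most clip_norm
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    summed = np.sum(clipped, axis=0)
    # Gaussian mechanism: noise std proportional to the sensitivity bound
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

# Example: a single large gradient is clipped to norm 1.0 before noising
rng = np.random.default_rng(0)
grads = [np.ones(4) * 10.0]
update = dp_noisy_gradient(grads, clip_norm=1.0, noise_multiplier=0.0, rng=rng)
```

The privacy accounting (converting `noise_multiplier` and the number of steps into an (epsilon, delta) guarantee) is a separate component not shown here.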
Syllabus
Private Adaptations of Large Language Models
Taught by
Google TechTalks