Overview
Explore security vulnerabilities and auditing methodologies for Large Language Models in this 23-minute conference talk from Conf42 LLMs 2025. Begin with an introduction to transformers in natural language processing and understand the training and fine-tuning processes of language models. Examine critical security concerns including injection attacks and their mitigation strategies, then delve into adversarial attacks targeting machine learning models. Learn about specialized tools for evaluating model robustness and discover privacy-focused security auditing frameworks. Gain practical insights into protecting LLM deployments through comprehensive security assessment techniques and best practices for maintaining model integrity in production environments.
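The injection-attack mitigation the talk covers can be illustrated with a minimal heuristic sketch. The deny-list and the `flag_prompt_injection` helper below are hypothetical illustrations, not tools presented in the talk; a production audit would pair such heuristics with a trained classifier.

```python
import re

# Hypothetical deny-list of phrases commonly seen in prompt-injection
# attempts (illustrative only, not an exhaustive or talk-endorsed list).
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now (in )?developer mode",
]

def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

A deny-list like this catches only known phrasings; the talk's broader point is that mitigation must be layered, since paraphrased attacks slip past fixed patterns.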
Syllabus
00:00 Introduction to Security Auditing Tools
00:47 Understanding Transformers in NLP
02:20 Training and Fine-Tuning Language Models
03:22 Security Concerns in Language Models
03:52 Injection Attacks and Mitigation
08:38 Adversarial Attacks on Machine Learning Models
15:09 Tools for Evaluating Model Robustness
16:53 Privacy and Security Auditing Tools
22:22 Conclusion and Resources
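The robustness-evaluation segment (15:09) can be sketched as a small perturbation harness: repeatedly apply tiny input changes and measure how often the model's prediction holds. The toy keyword classifier and `stability_score` helper below are illustrative stand-ins, not tools named in the talk.

```python
import random

def toy_sentiment(text: str) -> str:
    """Stand-in classifier: 'positive' if a positive keyword appears."""
    return "positive" if any(w in text.lower() for w in ("good", "great")) else "negative"

def perturb(text: str, rng: random.Random) -> str:
    """Swap two adjacent characters at a random position (a typo-style edit)."""
    if len(text) < 2:
        return text
    i = rng.randrange(len(text) - 1)
    return text[:i] + text[i + 1] + text[i] + text[i + 2:]

def stability_score(text: str, trials: int = 50, seed: int = 0) -> float:
    """Fraction of perturbed inputs that keep the original prediction."""
    rng = random.Random(seed)
    original = toy_sentiment(text)
    kept = sum(toy_sentiment(perturb(text, rng)) == original for _ in range(trials))
    return kept / trials
```

Dedicated robustness toolkits apply the same idea with semantically meaningful perturbations (synonym swaps, paraphrases) against real models rather than this keyword stand-in.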
Taught by
Conf42