Overview
Explore critical security vulnerabilities and defense strategies for Large Language Model applications in this 20-minute conference talk from Conf42 LLMs 2025. Begin with an introduction to the unique security challenges posed by LLMs and understand how these AI systems differ from traditional applications in terms of attack vectors. Examine prompt injection attacks and learn how malicious inputs can manipulate model behavior, then investigate insecure output handling practices that can lead to system compromises. Discover the risks of training data poisoning and how contaminated datasets can affect model integrity, followed by an analysis of model theft and extraction techniques used by attackers. Understand excessive agency risks where LLMs are given too much autonomy, and explore how sensitive information disclosure can occur through model responses. Analyze supply chain vulnerabilities in LLM development and deployment pipelines, then examine the dangers of over-reliance on AI decision-making systems. Learn about denial of service attacks specific to LLM infrastructure and conclude with comprehensive best practices for securing LLM applications throughout their lifecycle.
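To make the insecure output handling and excessive agency points above concrete, here is a minimal Python sketch; the fake_llm stub, the ALLOWED_COMMANDS set, and the helper names are illustrative assumptions, not material from the talk. The underlying idea is to treat model output like input from an untrusted user: escape it before rendering and validate it against an allowlist before executing anything.

```python
import html
import shlex
import subprocess

# Hypothetical stand-in for a real model call; any LLM client could slot in here.
def fake_llm(prompt: str) -> str:
    return "<script>alert('pwned')</script> ls -la /etc"

# Unsafe pattern the talk warns about: trusting raw model output.
def render_unsafe(user_prompt: str) -> str:
    return f"<div>{fake_llm(user_prompt)}</div>"  # raw output in HTML -> potential XSS

# Safer pattern: escape model output before it reaches the browser.
def render_safe(user_prompt: str) -> str:
    return f"<div>{html.escape(fake_llm(user_prompt))}</div>"

# Illustrative allowlist limiting what an "agentic" model may run.
ALLOWED_COMMANDS = {"ls", "date", "whoami"}

def run_suggested_command(suggestion: str) -> str:
    tokens = shlex.split(suggestion)
    if not tokens or tokens[0] not in ALLOWED_COMMANDS:
        return "refused: command not on allowlist"
    # Arguments passed as a list, no shell=True, so no shell injection.
    return subprocess.run(tokens, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(render_unsafe("summarize my files"))                 # demonstrates the risk
    print(render_safe("summarize my files"))                   # escaped output
    print(run_suggested_command(fake_llm("what should I run?")))  # refused by allowlist
```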
Syllabus
00:00 Introduction to Securing Large Language Model Applications
00:51 The Rise of Large Language Models
01:33 Unique Vulnerabilities of LLMs
03:34 Prompt Injection Attacks
04:45 Insecure Output Handling
05:59 Training Data Poisoning
07:17 Model Theft and Extraction
08:41 Excessive Agency Risks
10:06 Sensitive Information Disclosure
11:33 Supply Chain Vulnerabilities
13:07 Over-Reliance on AI Decisions
16:00 Denial of Service Attacks
17:14 Best Practices Summary
18:34 Conclusion and Final Thoughts
Taught by
Conf42