Overview
Coursera Spring Sale
40% Off Coursera Plus Annual!
Grab it
Learn the fundamentals of attacking Large Language Models (LLMs) in red team exercises through this 43-minute conference talk from BSidesLV. Understand your responsibilities as a red teamer when addressing emerging technologies, particularly as LLMs become an increasingly significant attack surface in enterprise environments. Gain foundational knowledge of how LLMs operate without getting bogged down in complex mathematics, then explore key attack strategies including prompt injection techniques and jailbreak methods. Examine real-world examples drawn from both research findings and actual operational scenarios to understand practical attack vectors. Discover how to effectively target applications and AI agents that leverage LLM technology, equipping yourself with the essential skills needed to assess and exploit these systems during red team engagements. Master the core concepts and methodologies required to identify vulnerabilities in LLM-powered systems and incorporate these techniques into your offensive security toolkit.
Syllabus
- Date/Time: Monday, 17:00–17:45
Taught by
BSidesLV