Overview
Coursera Flash Sale
40% Off Coursera Plus for 3 Months!
Grab it
Learn to identify and defend against prompt injection attacks in AI applications through this comprehensive conference talk that explores both basic and advanced attack techniques. Discover how attackers manipulate large language models through instruction overrides and hidden prompts, while examining escalation methods that amplify security risks. Gain practical knowledge of mitigation strategies to secure AI interactions and protect applications from emerging threats in the rapidly evolving landscape of AI security.
Syllabus
Understanding Prompt Injection Techniques, Challenges, and Advanced Escalation by Brian Vermeer
Taught by
Devoxx