Most AI Pilots Fail to Scale. MIT Sloan Teaches You Why — and How to Fix It
Free courses from frontend to fullstack and AI
Overview
Google, IBM & Meta Certificates — All 10,000+ Courses at 40% Off
One annual plan covers every course and certificate on Coursera. 40% off for a limited time.
Get Full Access
Learn about prompt injection attacks and defenses in this conference presentation from USENIX Security '24, where researchers from Penn State and Duke University present a comprehensive framework for understanding and evaluating these security threats. Explore how malicious instructions can be injected into LLM-Integrated Applications to manipulate outputs, and examine the systematic evaluation of 5 different attack methods and 10 defense strategies across 10 Large Language Models and 7 distinct tasks. Discover a new hybrid attack method that combines existing approaches, and gain access to an open-source platform for conducting further research in this emerging security field. The presentation addresses current limitations in prompt injection research by providing a formal framework and establishing a common benchmark for quantitative evaluation of future attacks and defenses.
Syllabus
USENIX Security '24 - Formalizing and Benchmarking Prompt Injection Attacks and Defenses
Taught by
USENIX