Overview
Coursera Spring Sale
40% Off Coursera Plus Annual!
Grab it
Explore six essential security design patterns for protecting Large Language Model (LLM) agents from prompt injection attacks through this comprehensive paper review. Delve into the research by Beurer-Kellner et al. that presents structural defenses against one of the most critical vulnerabilities in LLM systems. Learn about the fundamental problem space of prompt injection attacks and understand why traditional defenses often fall short. Examine each of the six core patterns in detail: Agent Selector for routing requests safely, Plan-Then-Execute for separating planning from execution, LLM Map-Reduce for distributed processing, Dual LLM for verification workflows, Code-Then-Execute for structured output generation, and Context Minimization for reducing attack surface. Discover practical implementation approaches through working code examples that demonstrate how each pattern addresses specific security challenges while maintaining system functionality. Analyze real-world case studies that illustrate the application of these patterns in production environments. Master best practices for securing LLM agents, including engineering considerations for implementation and deployment. Gain insights into the trade-offs between security and performance for each pattern, enabling informed decisions about which approaches best suit specific use cases and threat models.
Syllabus
00:00 - Introduction
03:14 - Problem Space
05:27 - Prompt Injection Defences
08:11 - The Problem with Prompt Injection Defences
09:03 - Core Principle
09:48 - Pattern 1: Agent Selector
12:28 - Pattern 2: Plan-Then-Execute
15:03 - Pattern 3: LLM Map-Reduce
17:05 - Pattern 4: Dual LLM
20:27 - Pattern 5: Code-Then-Execute
24:40 - Pattern 6: Context Minimization
26:22 - Case Studies
27:08 - Best Practices for Securing LLM Agents
31:37 - Engineering Considerations
32:27 - Summary
Taught by
Donato Capitella