Breaking AI Agents - Exploiting Managed Prompt Templates to Take Over Amazon Bedrock Agents
fwd:cloudsec via YouTube
Overview
Coursera Flash Sale
40% Off Coursera Plus for 3 Months!
Grab it
Explore critical security vulnerabilities in AI agent systems through this 22-minute conference talk that demonstrates how attackers can exploit AWS Bedrock Agents using prompt injection techniques. Learn about the concerning security implications as cloud providers rapidly deploy services for building AI-driven applications, with researchers Jay Chen and Royce Lu from Palo Alto Networks revealing how inadequately secured prompt instructions combined with AI models' probabilistic nature create exploitable attack vectors. Discover specific techniques that enable information leakage, agent hijacking, unauthorized tool execution, and manipulation of persistent agent memory within managed AI agent frameworks. Understand the methodology behind identifying these vulnerabilities, examine key research findings, and review proposed mitigation strategies while considering broader implications for similar agent frameworks beyond AWS Bedrock. Gain insights into the intersection of cloud security and AI safety, with emphasis on proactive approaches to addressing emerging security challenges in autonomous AI systems that perform planning, decision-making, and environmental interaction tasks.
Syllabus
Breaking AI Agents: Exploiting Managed Prompt Templates to Take Over Amazon Bedrock Agents
Taught by
fwd:cloudsec