Breaking AI Agents - Exploiting Managed Prompt Templates to Take Over Amazon Bedrock Agents
fwd:cloudsec via YouTube
Learn Excel & Financial Modeling the Way Finance Teams Actually Use Them
AI, Data Science & Cloud Certificates from Google, IBM & Meta
Overview
Google, IBM & Meta Certificates — All 10,000+ Courses at 40% Off
One annual plan covers every course and certificate on Coursera. 40% off for a limited time.
Get Full Access
Explore critical security vulnerabilities in AI agent systems through this 22-minute conference talk that demonstrates how attackers can exploit AWS Bedrock Agents using prompt injection techniques. Learn about the concerning security implications as cloud providers rapidly deploy services for building AI-driven applications, with researchers Jay Chen and Royce Lu from Palo Alto Networks revealing how inadequately secured prompt instructions combined with AI models' probabilistic nature create exploitable attack vectors. Discover specific techniques that enable information leakage, agent hijacking, unauthorized tool execution, and manipulation of persistent agent memory within managed AI agent frameworks. Understand the methodology behind identifying these vulnerabilities, examine key research findings, and review proposed mitigation strategies while considering broader implications for similar agent frameworks beyond AWS Bedrock. Gain insights into the intersection of cloud security and AI safety, with emphasis on proactive approaches to addressing emerging security challenges in autonomous AI systems that perform planning, decision-making, and environmental interaction tasks.
Syllabus
Breaking AI Agents: Exploiting Managed Prompt Templates to Take Over Amazon Bedrock Agents
Taught by
fwd:cloudsec