Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

YouTube

Breaking AI Agents - Exploiting Managed Prompt Templates to Take Over Amazon Bedrock Agents

fwd:cloudsec via YouTube

Overview

Coursera Flash Sale
40% Off Coursera Plus for 3 Months!
Grab it
Explore critical security vulnerabilities in AI agent systems through this 22-minute conference talk that demonstrates how attackers can exploit AWS Bedrock Agents using prompt injection techniques. Learn about the concerning security implications as cloud providers rapidly deploy services for building AI-driven applications, with researchers Jay Chen and Royce Lu from Palo Alto Networks revealing how inadequately secured prompt instructions combined with AI models' probabilistic nature create exploitable attack vectors. Discover specific techniques that enable information leakage, agent hijacking, unauthorized tool execution, and manipulation of persistent agent memory within managed AI agent frameworks. Understand the methodology behind identifying these vulnerabilities, examine key research findings, and review proposed mitigation strategies while considering broader implications for similar agent frameworks beyond AWS Bedrock. Gain insights into the intersection of cloud security and AI safety, with emphasis on proactive approaches to addressing emerging security challenges in autonomous AI systems that perform planning, decision-making, and environmental interaction tasks.

Syllabus

Breaking AI Agents: Exploiting Managed Prompt Templates to Take Over Amazon Bedrock Agents

Taught by

fwd:cloudsec

Reviews

Start your review of Breaking AI Agents - Exploiting Managed Prompt Templates to Take Over Amazon Bedrock Agents

Never Stop Learning.

Get personalized course recommendations, track subjects and courses with reminders, and more.

Someone learning on their laptop while sitting on the floor.