Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

YouTube

When the Model Takes Control - The Hidden Risks of AI Autonomy Through MCP - 12

BruCON Security Conference via YouTube

Overview

Coursera Spring Sale
40% Off Coursera Plus Annual!
Grab it
Explore the emerging security risks of AI autonomy in this conference talk that examines the Model Context Protocol (MCP) and its implications for cybersecurity. Discover how MCP enables large language models to autonomously chain tools and make decisions without human oversight, fundamentally changing the relationship between AI reasoning and execution. Learn about the dangerous scenarios that emerge when AI systems gain the ability to browse the web, write and execute code, and interact with various tools independently. Understand how this autonomy creates new attack vectors where prompt injection exploits and rogue plugins can cause cascading security failures, turning innocent outputs into malicious inputs that trigger unintended command sequences. Examine real-world examples and hypothetical scenarios of AI agents operating beyond intended boundaries in coding assistants, autonomous agent frameworks like AutoGen and CrewAI, OpenAI Agent SDK, and AI desktop environments. Gain insights into the critical need for rethinking trust boundaries and sandboxing strategies when AI systems themselves are making autonomous decisions, and discover why traditional security models may be insufficient for this new paradigm of AI-driven automation.

Syllabus

12 - BruCON 0x11 - When the Model Takes Control: The Hidden Risks of AI Autonomy Through MCP

Taught by

BruCON Security Conference

Reviews

Start your review of When the Model Takes Control - The Hidden Risks of AI Autonomy Through MCP - 12

Never Stop Learning.

Get personalized course recommendations, track subjects and courses with reminders, and more.

Someone learning on their laptop while sitting on the floor.