AI Adoption - Drive Business Value and Organizational Impact
Earn Your CS Degree, Tuition-Free, 100% Online!
Overview
Coursera Flash Sale
40% Off Coursera Plus for 3 Months!
Grab it
Learn about SAFE-MCP, a comprehensive security framework designed to protect Model Context Protocol (MCP) implementations in AI systems through this 22-minute conference talk. Discover how this community-driven framework addresses real-world security threats by mapping 65 attack techniques across 14 tactics based on 2024-2025 security research. Explore critical vulnerabilities including Tool Poisoning attacks that embed hidden instructions in tool descriptions to compromise AI behavior, Command Injection vulnerabilities affecting 43% of MCP servers according to Equixly research, and emerging threats like MCP Rug Pull attacks. Examine case studies from major organizations and security research conducted by Invariant Labs, Equixly, and Backslash Security to understand how attackers exploit MCP integrations. Gain practical knowledge on identifying vulnerabilities in AI deployments, implementing effective security controls, and developing detection strategies for MCP-based systems. Learn how to contribute to this evolving security framework and apply actionable intelligence to defend against sophisticated attacks targeting AI tool integration protocols.
Syllabus
SAFE-MCP: A Security Framework for AI+MCP (Model Context Protocol) - Frederick Kautz, TestifySec
Taught by
OpenSSF