
Agentic AI Red Teaming - Scaling Security Testing for Complex AI Systems

Cloud Security Alliance via YouTube

Overview

Explore advanced red teaming methodologies specifically designed for agentic AI systems in this 34-minute conference talk from the Agentic AI Security Summit 2025. Learn how red teaming agentic AI fundamentally differs from traditional LLM testing by focusing on dynamic, goal-directed behaviors that evolve across time and context rather than static prompt-response interactions.

Discover the frameworks, automation tools, and infrastructure needed to scale red teaming for increasingly sophisticated AI systems. Examine critical threat vectors emerging from AI autonomy, persistent memory, tool integration, and long-horizon decision-making. Master best practices for designing and executing effective agentic AI red teaming workflows, along with new measurement approaches including behavioral reliability assessment, systemic robustness evaluation, and risk mitigation strategies.

Gain insights from Rob van der Veer, Chief AI Officer at Software Improvement Group, and Ken Huang, Co-Chair of CSA AI Safety Working Groups at Cloud Security Alliance, as they share practical expertise on securing autonomous AI systems through adversarial testing.

Syllabus

Agentic AI Red Teaming | Rob van der Veer & Ken Huang | AI Summit 2025

Taught by

Cloud Security Alliance

Reviews

