Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

YouTube

Red Teaming AI - 50 Years of Failure, But This Time, For Sure!

RSA Conference via YouTube

Overview

Google, IBM & Meta Certificates — All 10,000+ Courses at 40% Off
One annual plan covers every course and certificate on Coursera. 40% off for a limited time.
Get Full Access
Learn how to secure AI systems through threat modeling and design-first security strategies in this conference talk from RSA Conference. Explore why traditional "penetrate and patch" security approaches have failed over 50 years of penetration testing and discover how shifting security left through threat modeling is finally gaining traction. Examine the unique challenges of securing Large Language Models (LLMs) where code and data are intermingled, making traditional security approaches inadequate. Understand the limitations of reactive security measures and why building secure systems remains difficult despite decades of security practices. Discover practical, achievable strategies for delivering AI that is secure by design rather than secured as an afterthought. Gain insights into modern security approaches that move beyond the failed paradigms of the past and learn how to apply threat modeling specifically to AI systems. Master techniques for identifying and addressing security vulnerabilities in AI applications before they reach production, focusing on proactive rather than reactive security measures.

Syllabus

Red Teaming AI: 50 Years of Failure, But This Time, For Sure!

Taught by

RSA Conference

Reviews

Start your review of Red Teaming AI - 50 Years of Failure, But This Time, For Sure!

Never Stop Learning.

Get personalized course recommendations, track subjects and courses with reminders, and more.

Someone learning on their laptop while sitting on the floor.