Overview
Learn why current AI red teaming approaches are fundamentally flawed and overlook the commercial-viability issues that matter most in this conference talk from RSA Conference. Discover how the first two Generative Red Teams at DEF CON 31 and 32 revealed that existing red teaming efforts focus on minor issues while missing the major reliability problems that keep LLMs from becoming commercially viable. Explore why models like DeepSeek, despite impressive technical capabilities, remain unusable for most commercial applications because of reliability concerns. Understand the case for a disclosure program designed specifically for artificial intelligence systems, moving beyond traditional security testing approaches. Gain insight into what makes LLMs commercially viable and why the AI security community should shift its focus from superficial red teaming exercises to the fundamental model-reliability challenges that actually matter for real-world deployment.
Syllabus
Most AI Red Teaming is Useless: Lessons from AI Village
Taught by
RSA Conference