AI Engineer - Learn how to integrate AI into software applications
Future-Proof Your Career: AI Manager Masterclass
Overview
Coursera Flash Sale
40% Off Coursera Plus for 3 Months!
Grab it
Explore the critical intersection of AI security and automated testing in this 20-minute conference talk from USENIX Security '25. Learn about AI Red Teaming fundamentals and discover how automated approaches can identify ethical and security vulnerabilities in Generative AI systems at scale. Examine the complex risks and safety concerns that emerge as GenAI transforms industries from healthcare to military defense, and understand why manual testing alone cannot keep pace with rapid AI development. Discover how open-source AI red teaming tools are democratizing access to these essential security techniques, and gain insights into proactive strategies for identifying vulnerabilities before AI systems reach deployment. Master the principles of scalable, efficient, and adaptive red teaming efforts that are crucial for building a safer and more ethical AI future as these powerful systems continue to evolve and introduce novel challenges to society.
Syllabus
USENIX Security '25 (Enigma Track) - AI Red Teaming and Automation: Exploring Societal Risks in...
Taught by
USENIX