Please (Don't) Stop the Music: Adversarial Red-Teaming of AI...
Overview
Explore adversarial red-teaming techniques for AI music generation models in this 14-minute conference talk from USENIX Security '25. Learn how harmful music serves as a propaganda and recruitment tool for extremist groups, and how generative AI significantly lowers the barrier to producing such content. Examine the two main providers of generative music AI from a security-testing perspective, practice bypassing safety filters, and study systematic approaches to security-testing novel AI models. Consider the ethical implications of red-teaming assessments and gain insight into designing more reliable classification solutions that prevent generative models from causing harm. Learn what it takes to defend AI solutions against the unique misuse vectors of generative artificial intelligence; no prior machine learning experience is required.
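To make the idea of "bypassing safety filters" concrete, here is a minimal, purely illustrative sketch of a prompt-level deny-list filter gating a music-generation request. Every name in it (`BLOCKLIST`, `check_prompt`, `generate_song`) is a hypothetical stand-in invented for this example, not any provider's actual API; real products use learned classifiers rather than keyword lists.

```python
# Toy deny-list standing in for a safety classifier (assumption, not a
# real provider's filter).
BLOCKLIST = {"recruitment anthem", "extremist"}

def check_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the (toy) safety filter."""
    text = prompt.lower()
    return not any(term in text for term in BLOCKLIST)

def generate_song(prompt: str) -> str:
    """Hypothetical generation endpoint guarded by the filter."""
    if not check_prompt(prompt):
        return "REFUSED"
    return f"<audio for: {prompt}>"  # stand-in for a real model call

# A literal harmful request is blocked, but a light paraphrase with the
# same intent slips through -- the kind of gap red-teaming exposes.
print(generate_song("an extremist recruitment anthem"))  # REFUSED
print(generate_song("a rousing song to rally new members"))
```

The paraphrase bypass illustrates why the talk argues for more reliable classification solutions: surface-level matching cannot capture intent.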
Syllabus
USENIX Security '25 (Enigma Track) - Please (Don't) Stop the Music: Adversarial Red-Teaming of AI...
Taught by
USENIX