AI/ML Security - Understanding Jailbreak Prompts and Adversarial Illusions in Large Language Models

RSA Conference via YouTube

Overview

Explore critical security vulnerabilities in artificial intelligence and machine learning systems in this 47-minute RSA Conference talk. Delve into two USENIX Security research papers presented by PhD researchers from Cornell Tech and Washington University in St. Louis. Learn how jailbreak prompts bypass safety measures in large language models and what these techniques imply for AI security. Examine adversarial illusions in multi-modal embeddings, and discover how attackers can manipulate AI systems that process several types of data at once. Gain insight into current academic research on AI/ML security threats, defensive strategies, and the evolving landscape of machine learning vulnerabilities that security professionals need to understand and address.
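To make the embedding-manipulation idea concrete, here is a minimal sketch of an adversarial-illusion-style attack, assuming a CLIP-style PyTorch model that exposes an encode_image method. The method name, the perturbation budget eps, and the optimizer settings are illustrative assumptions, not details taken from the talk or the papers. The loop nudges an image within a small L-infinity budget until its embedding aligns with an attacker-chosen target embedding, so downstream multi-modal tasks treat the image as if it carried the target's meaning.

import torch
import torch.nn.functional as F

def adversarial_illusion(model, image, target_emb, eps=8/255, steps=100, lr=1e-2):
    """Perturb `image` (pixels in [0, 1]) within an L-inf ball of radius `eps`
    so its embedding aligns with an attacker-chosen target embedding.
    Assumes `model.encode_image` maps a batch of images to embeddings."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = model.encode_image((image + delta).clamp(0, 1))
        # Maximize cosine similarity to the target, i.e. minimize its negative.
        loss = -F.cosine_similarity(emb, target_emb, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation imperceptible
    return (image + delta).detach().clamp(0, 1)

The papers evaluate attacks of this flavor against real multi-modal encoders under various threat models; the sketch shows only the core optimization loop.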

Syllabus

AI/ML Security

Taught by

RSA Conference

