

Securing AI at Scale - Practical Defenses against Prompt Injection, Adversarial Attacks, and Model Poisoning

USENIX via YouTube

Overview

Learn practical approaches to securing AI systems at scale in this 39-minute conference talk from SREcon25 EMEA. Discover defense strategies against three critical AI security threats: prompt injection, adversarial attacks, and model poisoning. Explore filtering techniques for detecting harmful input, methods for keeping system prompts isolated from user content, and sandboxing approaches that contain prompt injection vulnerabilities. Understand how to train models for resilience against adversarial inputs, introduce randomness to improve robustness, and establish output-cleaning processes for more reliable model responses. Examine data source tracking, anomaly detection, and federated learning as countermeasures to model poisoning. Finally, gain insight into integrating these security controls with SRE workflows and incident response procedures, and see why layered, zero-trust architectures combined with continuous adversarial testing have become fundamental to maintaining both reliability and trust in AI-driven services.
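The prompt-injection defenses the talk covers — input filtering, system prompt isolation, and output cleaning — can be sketched in a few lines. This is a minimal illustration, not the speaker's implementation: the regex patterns, `SYSTEM_PROMPT` text, and function names are all hypothetical, and a production system would replace the pattern list with a trained classifier and a far broader rule set.

```python
import re

# Hypothetical patterns standing in for a harmful-input classifier.
INJECTION_PATTERNS = [
    re.compile(r"ignore\s+(all\s+)?(previous|prior)\s+instructions", re.IGNORECASE),
    re.compile(r"reveal\s+(your|the)\s+system\s+prompt", re.IGNORECASE),
]

# Placeholder system prompt; any real deployment would have its own.
SYSTEM_PROMPT = "You are a support assistant. Never disclose internal data."


def looks_like_injection(user_text: str) -> bool:
    """Input filtering: flag text that matches known injection phrasing."""
    return any(p.search(user_text) for p in INJECTION_PATTERNS)


def build_request(user_text: str) -> dict:
    """System prompt isolation: keep trusted instructions and untrusted
    user content in separate message roles instead of concatenating them
    into a single string the user could override."""
    if looks_like_injection(user_text):
        raise ValueError("rejected: possible prompt injection")
    return {
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_text},
        ]
    }


def clean_output(model_text: str) -> str:
    """Output cleaning: redact responses that echo the system prompt."""
    return model_text.replace(SYSTEM_PROMPT, "[redacted]")
```

The key design choice is layering: even if a crafted input slips past the filter, role separation limits what it can override, and output cleaning catches leaks on the way out.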

Syllabus

SREcon25 Europe/Middle East/Africa - Securing AI at Scale: Practical Defenses against Prompt Injection, Adversarial Attacks, and Model Poisoning

Taught by

USENIX

