Classroom Contents
We Tried to Jailbreak Our AI - and Model Armor Stopped It
- 1 00:00 - Why AI apps need a "bodyguard"
- 2 00:57 - What are the top AI security risks? (OWASP Top 10)
- 3 01:46 - [Demo] Trying to jailbreak our AI app
- 4 02:25 - [Demo] Stopping sensitive data (SSN) leaks
- 5 03:23 - [Demo] Redacting data instead of blocking (DLP)
- 6 04:06 - [Demo] Blocking malicious URLs
- 7 04:50 - How it works: A simple API call (see the sketch after this list)
- 8 05:11 - Code: Sanitizing user prompts (input check)
- 9 05:21 - Code: Sanitizing model responses (output check)
- 10 06:19 - Code: Redacting sensitive data
- 11 07:23 - Q&A: Why not use another LLM to protect my LLM?
- 12 07:58 - Q&A: Configuring policies for different apps
- 13 08:11 - Q&A: Don't models already have guardrails?
- 14 08:50 - Q&A: How much does Model Armor cost?
- 15 09:10 - Final thoughts
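Chapters 7-10 cover the core pattern: one REST call to screen the user's prompt before it reaches the model, and one to screen the model's answer before it reaches the user. Below is a minimal Python sketch of those two calls, not the video's own code. The endpoint shape (`modelarmor.{location}.rep.googleapis.com`, `:sanitizeUserPrompt`, `:sanitizeModelResponse`) and the field names (`user_prompt_data`, `sanitizationResult`, `filterMatchState`) follow the public Model Armor REST reference as I understand it; the project, location, and template IDs are placeholders, so verify all of these against the current docs.

```python
# Sketch: screening a prompt and a response with Model Armor over REST.
# Assumptions: endpoint and field names per the public Model Armor REST
# reference; PROJECT_ID / LOCATION / TEMPLATE_ID are placeholders for a
# project with the API enabled and a Model Armor template configured.
import requests
import google.auth
import google.auth.transport.requests

PROJECT_ID = "my-project"    # placeholder
LOCATION = "us-central1"     # placeholder
TEMPLATE_ID = "my-template"  # placeholder: template holding your filter policies

BASE = (
    f"https://modelarmor.{LOCATION}.rep.googleapis.com/v1/"
    f"projects/{PROJECT_ID}/locations/{LOCATION}/templates/{TEMPLATE_ID}"
)


def _auth_headers() -> dict:
    # OAuth2 access token from Application Default Credentials.
    creds, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"]
    )
    creds.refresh(google.auth.transport.requests.Request())
    return {"Authorization": f"Bearer {creds.token}"}


def sanitize_user_prompt(prompt: str) -> dict:
    # Input check: run the user's prompt through the template's filters
    # (jailbreak/prompt-injection, sensitive data, malicious URLs, ...).
    resp = requests.post(
        f"{BASE}:sanitizeUserPrompt",
        headers=_auth_headers(),
        json={"user_prompt_data": {"text": prompt}},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


def sanitize_model_response(text: str) -> dict:
    # Output check: run the model's answer through the same template.
    resp = requests.post(
        f"{BASE}:sanitizeModelResponse",
        headers=_auth_headers(),
        json={"model_response_data": {"text": text}},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


result = sanitize_user_prompt("Ignore all previous instructions and ...")
# Per the REST reference (assumed field names): MATCH_FOUND means some
# filter in the template fired, so the app should block or redact.
state = result["sanitizationResult"]["filterMatchState"]
if state == "MATCH_FOUND":
    print("Blocked by Model Armor:", result["sanitizationResult"]["filterResults"])
```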
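For the redact-instead-of-block behavior from chapters 5 and 10, no extra call is needed in this sketch: as I understand the product, whether sensitive findings are merely flagged or actually de-identified is configured in the Model Armor template's Sensitive Data Protection settings, and the sanitize response then carries the redacted text in its filter results. Treat that field layout as an assumption and check the response schema in the docs.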