We Tried to Jailbreak Our AI - and Model Armor Stopped It

Google Cloud Tech via YouTube

Classroom Contents

  1. 00:00 - Why AI apps need a "bodyguard"
  2. 00:57 - What are the top AI security risks? (OWASP Top 10)
  3. 01:46 - [Demo] Trying to jailbreak our AI app
  4. 02:25 - [Demo] Stopping sensitive data (SSN) leaks
  5. 03:23 - [Demo] Redacting data instead of blocking (DLP)
  6. 04:06 - [Demo] Blocking malicious URLs
  7. 04:50 - How it works: A simple API call
  8. 05:11 - Code: Sanitizing user prompts (input check)
  9. 05:21 - Code: Sanitizing model responses (output check)
  10. 06:19 - Code: Redacting sensitive data
  11. 07:23 - Q&A: Why not use another LLM to protect my LLM?
  12. 07:58 - Q&A: Configuring policies for different apps
  13. 08:11 - Q&A: Don't models already have guardrails?
  14. 08:50 - Q&A: How much does Model Armor cost?
  15. 09:10 - Final thoughts
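Chapters 7-10 describe the core pattern: both the input check and the output check are single API calls against a Model Armor template, where the template holds the policy (jailbreak detection, sensitive-data filters, malicious-URL screening). The video's own code is not reproduced on this page; the sketch below is a minimal illustration of that pattern using the google-cloud-modelarmor Python client. The project, location, and template IDs are placeholders, and the response handling should be checked against the current Model Armor documentation.

```python
# Minimal sketch of the "input check" and "output check" from chapters 7-10.
# Assumes a Model Armor template already exists; the template defines which
# filters (jailbreak, Sensitive Data Protection, malicious URLs) are enforced.
from google.api_core.client_options import ClientOptions
from google.cloud import modelarmor_v1

# Placeholder resource name: project, location, and template ID are examples.
TEMPLATE = "projects/my-project/locations/us-central1/templates/my-template"

# Model Armor is served from regional endpoints.
client = modelarmor_v1.ModelArmorClient(
    transport="rest",
    client_options=ClientOptions(
        api_endpoint="modelarmor.us-central1.rep.googleapis.com"
    ),
)


def prompt_is_safe(user_prompt: str) -> bool:
    """Input check: screen the user's prompt before it reaches the model."""
    response = client.sanitize_user_prompt(
        request=modelarmor_v1.SanitizeUserPromptRequest(
            name=TEMPLATE,
            user_prompt_data=modelarmor_v1.DataItem(text=user_prompt),
        )
    )
    # MATCH_FOUND means at least one filter configured on the template fired.
    return (
        response.sanitization_result.filter_match_state
        != modelarmor_v1.FilterMatchState.MATCH_FOUND
    )


def response_is_safe(model_response: str) -> bool:
    """Output check: screen the model's answer before showing it to the user."""
    response = client.sanitize_model_response(
        request=modelarmor_v1.SanitizeModelResponseRequest(
            name=TEMPLATE,
            model_response_data=modelarmor_v1.DataItem(text=model_response),
        )
    )
    return (
        response.sanitization_result.filter_match_state
        != modelarmor_v1.FilterMatchState.MATCH_FOUND
    )
```

The "redact instead of block" behavior demonstrated in chapters 5 and 10 is, roughly, a template-level choice rather than a different API call: with Sensitive Data Protection de-identification enabled on the template, the sanitization result carries redacted text back instead of a hard block verdict.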
