Power BI Fundamentals - Create visualizations and dashboards from scratch
All Coursera Certificates 40% Off
Overview
Coursera Flash Sale
40% Off Coursera Plus for 3 Months!
Grab it
Learn how to systematically stress test and evaluate LLM-integrated applications through adversarial red teaming techniques in this 24-minute conference talk from DevSecCon. Discover why traditional testing approaches fall short for generative AI systems and explore comprehensive red teaming methodologies including adversarial prompt engineering, model behavior probing, jailbreak techniques, and novel evasion strategies that mirror real-world threat actor tactics. Master the art of building AI-specific adversarial testing playbooks, simulating realistic misuse scenarios, and integrating red teaming practices directly into your software development lifecycle. Understand how to transform unpredictable LLM behavior into testable, repeatable, and secure-by-design applications through systematic evaluation frameworks that expose vulnerabilities before they reach production environments.
Syllabus
Red Teaming AI How to Stress Test LLM Integrated Apps Like an Attacker
Taught by
DevSecCon