Learn Backend Development Part-Time, Online
Finance Certifications Goldman Sachs & Amazon Teams Trust
Overview
Google, IBM & Meta Certificates — All 10,000+ Courses at 40% Off
One annual plan covers every course and certificate on Coursera. 40% off for a limited time.
Get Full Access
Discover an open-source toolkit designed to automatically evaluate and secure Large Language Model (LLM) system prompts through comprehensive testing and hardening techniques. Learn how to implement automated evaluation, hardening, and adversarial testing using LLMs themselves, while applying advanced security methods including spotlighting, random sequence enclosure, instruction defense, and role consistency. Explore injection testing methodologies that utilize categorized payloads based on the OWASP Top 10 for LLM Applications 2025 framework. Experience live demonstrations of both command-line interface and web-based user interface tools for strengthening prompt security, and understand practical approaches to defending against prompt injection attacks in production LLM systems.
Syllabus
- Date/Time: Monday, 15:00–15:25
Taught by
BSidesLV