The Investment Banker Certification
Power BI Fundamentals - Create visualizations and dashboards from scratch
Overview
Google, IBM & Meta Certificates — All 10,000+ Courses at 40% Off
One annual plan covers every course and certificate on Coursera. 40% off for a limited time.
Get Full Access
Learn about a powerful open-source tool for evaluating and testing Large Language Models (LLMs) against prompt injection attacks in this technical presentation. Explore the key differences between jailbreaking and prompt injection techniques while discovering how to leverage Spikee for comprehensive LLM security testing. Dive into practical use cases, examine the Targeted-12-2024 dataset specifically designed for testing, and review detailed benchmark results across various LLM models. Understand how to implement and assess guardrails to protect against potential security vulnerabilities in AI systems. Access hands-on demonstrations and implementation guidance through the available GitHub repository and official documentation at spikee.ai.
Syllabus
00:00 - Introduction
02:47 - Jailbreaking vs Prompt Injection
13:39 - Spikee's Use Cases
15:36 - Targeted-12-2024 Dataset
20:32 - LLM Benchmark Results
26:02 - Guardrail Benchmark
Taught by
Donato Capitella