Power BI Fundamentals - Create visualizations and dashboards from scratch
Free AI-powered learning to build in-demand skills
Overview
Coursera Flash Sale
40% Off Coursera Plus for 3 Months!
Grab it
Learn about a powerful open-source tool for evaluating and testing Large Language Models (LLMs) against prompt injection attacks in this technical presentation. Explore the key differences between jailbreaking and prompt injection techniques while discovering how to leverage Spikee for comprehensive LLM security testing. Dive into practical use cases, examine the Targeted-12-2024 dataset specifically designed for testing, and review detailed benchmark results across various LLM models. Understand how to implement and assess guardrails to protect against potential security vulnerabilities in AI systems. Access hands-on demonstrations and implementation guidance through the available GitHub repository and official documentation at spikee.ai.
Syllabus
00:00 - Introduction
02:47 - Jailbreaking vs Prompt Injection
13:39 - Spikee's Use Cases
15:36 - Targeted-12-2024 Dataset
20:32 - LLM Benchmark Results
26:02 - Guardrail Benchmark
Taught by
Donato Capitella