Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

YouTube

Spikee: A Simple Prompt Injection Kit for Evaluation and Exploitation of LLMs

Donato Capitella via YouTube

Overview

Coursera Flash Sale
40% Off Coursera Plus for 3 Months!
Grab it
Learn about a powerful open-source tool for evaluating and testing Large Language Models (LLMs) against prompt injection attacks in this technical presentation. Explore the key differences between jailbreaking and prompt injection techniques while discovering how to leverage Spikee for comprehensive LLM security testing. Dive into practical use cases, examine the Targeted-12-2024 dataset specifically designed for testing, and review detailed benchmark results across various LLM models. Understand how to implement and assess guardrails to protect against potential security vulnerabilities in AI systems. Access hands-on demonstrations and implementation guidance through the available GitHub repository and official documentation at spikee.ai.

Syllabus

00:00 - Introduction
02:47 - Jailbreaking vs Prompt Injection
13:39 - Spikee's Use Cases
15:36 - Targeted-12-2024 Dataset
20:32 - LLM Benchmark Results
26:02 - Guardrail Benchmark

Taught by

Donato Capitella

Reviews

Start your review of Spikee: A Simple Prompt Injection Kit for Evaluation and Exploitation of LLMs

Never Stop Learning.

Get personalized course recommendations, track subjects and courses with reminders, and more.

Someone learning on their laptop while sitting on the floor.