
YouTube

Understanding and Mitigating Risks in LLM-Powered Web Apps

Conf42 via YouTube

Overview

Learn to identify and defend against prompt injection attacks in large language model-powered web applications through this 20-minute conference talk from Conf42 LLMs 2025. Explore the fundamentals of prompt injection vulnerabilities, starting with practical examples using a tech store chatbot scenario to understand how these attacks work in real-world applications. Discover how prompt injection techniques extend beyond text-based models to affect image and audio processing systems, broadening your understanding of potential attack vectors. Watch a live demonstration showing how to exploit chatbot vulnerabilities, providing concrete insight into the methods attackers use. Master comprehensive prevention strategies and security best practices to protect your LLM-powered applications from these emerging threats. Gain essential knowledge for building secure AI systems as you examine the critical importance of implementing proper security measures in modern AI-driven web applications.

Syllabus

00:00 Introduction to Prompt Injection Attacks
00:32 Understanding Prompt Injection
00:59 Example: Tech Store Chatbot
02:03 The Importance of Secure AI
03:37 Prompt Injection in Image Models
05:42 Prompt Injection in Audio Models
09:47 Demo: Breaking the Chatbot
11:27 Strategies to Prevent Prompt Injection
19:35 Conclusion and Final Thoughts

Taught by

Conf42

