Free courses from frontend to fullstack and AI
AI Engineer - Learn how to integrate AI into software applications
Overview
Google, IBM & Meta Certificates — All 10,000+ Courses at 40% Off
One annual plan covers every course and certificate on Coursera. 40% off for a limited time.
Get Full Access
Learn to identify and defend against prompt injection attacks in large language model-powered web applications through this 20-minute conference talk from Conf42 LLMs 2025. Explore the fundamentals of prompt injection vulnerabilities, starting with practical examples using a tech store chatbot scenario to understand how these attacks work in real-world applications. Discover how prompt injection techniques extend beyond text-based models to affect image and audio processing systems, broadening your understanding of potential attack vectors. Watch a live demonstration showing how to exploit chatbot vulnerabilities, providing concrete insight into the methods attackers use. Master comprehensive prevention strategies and security best practices to protect your LLM-powered applications from these emerging threats. Gain essential knowledge for building secure AI systems as you examine the critical importance of implementing proper security measures in modern AI-driven web applications.
Syllabus
00:00 Introduction to Prompt Injection Attacks
00:32 Understanding Prompt Injection
00:59 Example: Tech Store Chatbot
02:03 The Importance of Secure AI
03:37 Prompt Injection in Image Models
05:42 Prompt Injection in Audio Models
09:47 Demo: Breaking the Chatbot
11:27 Strategies to Prevent Prompt Injection
19:35 Conclusion and Final Thoughts
Taught by
Conf42