Overview
Coursera Flash Sale
40% Off Coursera Plus for 3 Months!
Grab it
Learn how to defend large language models against hacking attempts and prompt injection attacks in this 14-minute video from IBM. Discover the critical security risks facing LLMs including data leaks, jailbreaks, and malicious prompt exploitation. Explore comprehensive defense strategies using policy engines, proxies, and defense-in-depth approaches to protect generative AI systems from advanced threats. Gain insights into securing AI applications and implementing robust safeguards for enterprise-level LLM deployments.
Syllabus
LLM Hacking Defense: Strategies for Secure AI
Taught by
IBM Technology