Learn Generative AI, Prompt Engineering, and LLMs for Free
Python, Prompt Engineering, Data Science — Build the Skills Employers Want Now
Overview
Google, IBM & Meta Certificates — All 10,000+ Courses at 40% Off
One annual plan covers every course and certificate on Coursera. 40% off for a limited time.
Get Full Access
Explore the critical security vulnerabilities specific to Large Language Models through this 29-minute conference talk that examines the OWASP Top 10 for LLMs framework. Learn about the most significant security risks facing LLM applications, including prompt injection attacks, data leakage, inadequate sandboxing, unauthorized code execution, SSRF vulnerabilities, overreliance on LLM-generated content, inadequate AI alignment, insufficient access controls, improper error handling, and training data poisoning. Understand how these vulnerabilities differ from traditional web application security concerns and discover practical mitigation strategies for each risk category. Gain insights into secure development practices for AI-powered applications, risk assessment methodologies for LLM implementations, and the unique challenges of securing systems that incorporate artificial intelligence components.
Syllabus
OWASP Top 10 for LLMs
Taught by
OWASP Foundation