Explore the critical security vulnerabilities specific to Large Language Models in this 29-minute conference talk examining the OWASP Top 10 for LLMs framework. Learn about the most significant security risks facing LLM applications:

- Prompt injection attacks
- Data leakage
- Inadequate sandboxing
- Unauthorized code execution
- SSRF vulnerabilities
- Overreliance on LLM-generated content
- Inadequate AI alignment
- Insufficient access controls
- Improper error handling
- Training data poisoning

Understand how these vulnerabilities differ from traditional web application security concerns, and discover practical mitigation strategies for each risk category. Gain insights into secure development practices for AI-powered applications, risk assessment methodologies for LLM implementations, and the unique challenges of securing systems that incorporate artificial intelligence components.
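As a flavor of the mitigation strategies the talk covers, here is a minimal sketch of two common first-line defenses against prompt injection: a naive deny-list check on user input and explicit delimiting of system instructions versus user data. All names and phrases here are illustrative assumptions, not taken from the talk, and a deny list alone is widely known to be insufficient against determined attackers.

```python
# Illustrative sketch only; marker phrases and function names are assumptions.
# A deny-list check is a weak defense by itself -- it is shown here to make
# the risk category concrete, not as a complete mitigation.

INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard the system prompt",
    "you are now",
]

def looks_like_injection(user_input: str) -> bool:
    """Flag user input containing common injection phrases (case-insensitive)."""
    lowered = user_input.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

def build_prompt(system_prompt: str, user_input: str) -> str:
    """Keep trusted instructions and untrusted user data in clearly
    delimited sections, so downstream auditing can tell them apart."""
    if looks_like_injection(user_input):
        raise ValueError("possible prompt injection detected")
    return (
        f"<system>\n{system_prompt}\n</system>\n"
        f"<user_data>\n{user_input}\n</user_data>"
    )
```

In practice, such filters are combined with stronger controls the talk alludes to: least-privilege access for LLM tool use, output validation before code execution, and human review for high-impact actions.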