
YouTube

LLMs in AppSec - Why They Still Need a Chaperone

LASCON via YouTube

Overview

Explore the current limitations and challenges of applying Large Language Models to application security in this 40-minute conference talk. Examine how LLMs are being integrated into vulnerability detection, code review, and penetration testing, and why these AI tools still require human supervision despite their promising capabilities. Learn about real-world failures in which AI-assisted security tools missed critical threats or introduced new vulnerabilities, including hallucinated security issues and insecure fix recommendations. Discover why LLMs, for all their confidence and creativity, remain unreliable when operating without oversight in security contexts. Analyze specific cases where AI tools shifted security problems rather than solving them, creating unpredictable new challenges for security teams. Understand the ongoing need for human oversight in AI-driven threat modeling, secure code generation, and policy enforcement. Gain insight into how future AI models may improve and potentially reduce the need for constant human supervision in application security workflows.

Syllabus

Vineeta Sangaraju - LLMs in AppSec: Why They Still Need a Chaperone

Taught by

LASCON

