Explore the challenges and realities of using AI to automatically fix security vulnerabilities in code through this conference talk, which chronicles both spectacular failures and eventual breakthroughs in building an automated remediation system. Learn from real-world experiences where initial attempts with LLMs produced absurd solutions like deleting entire functions to "fix" vulnerabilities, and discover how simple approaches such as zero-shot classifiers, tree-of-thought prompting, and reflexion loops often yielded impractical 200-line refactors or incorrectly dismissed serious security issues as false positives.

Understand why basic RAG and prompting techniques proved insufficient, and examine the evolution toward more sophisticated solutions involving constraint-based action planning, feedback loops driven by actual developer behavior, and multi-agent workflows that debate solutions before implementing changes. Gain practical insight into the technical and human factors that determine whether developers will trust and adopt AI-assisted code remediation tools, learning which approaches to avoid and which strategies show promise for creating automated security fixes that developers actually want to use.