Overview
Explore a comprehensive conference talk examining the evolution of AI-powered automatic code vulnerability remediation through six months of intensive research and development. Learn how a security team challenged their initial skepticism about AI auto-fixing by building and testing a sophisticated system that combines curated remediation guidance, real-world vulnerable code samples, and test-driven prompt engineering to generate accurate and verifiable security fixes.

Discover the systematic approach developed to guide large language models toward producing high-quality vulnerability remediation while maintaining the ability to fall back to deterministic logic when necessary. Understand the methodology behind generating 80-120 validated fix rules per month and rapidly scaling support for new programming languages within weeks rather than months.

Gain insights into the patterns, common mistakes, and critical lessons learned during the development process, including how the same system can generate prompts for identifying false positives, validating whether security issues are genuine, and determining appropriate remediation strategies. Examine the limitations and ongoing challenges of this approach through practical examples, and participate in an interactive review session analyzing code fixes that appear secure but contain hidden vulnerabilities, providing a blueprint for organizations interested in implementing similar AI-assisted security remediation systems.
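The generate-and-verify loop with deterministic fallback described in the talk might be sketched roughly as follows. This is a minimal illustration only; all function names (`llm_fix`, `deterministic_fix`, `run_tests`) are hypothetical stand-ins, and the talk's actual system is not public:

```python
def remediate(vuln, llm_fix, deterministic_fix, run_tests, max_attempts=3):
    """Try LLM-generated fixes, validating each candidate against tests;
    fall back to deterministic rule-based logic if no candidate passes.
    All callables are hypothetical stand-ins for the components the
    talk describes."""
    for attempt in range(max_attempts):
        candidate = llm_fix(vuln, attempt)
        if run_tests(candidate):  # a fix counts only if it is verifiable
            return candidate, "llm"
    # No validated LLM fix: fall back to deterministic logic.
    return deterministic_fix(vuln), "deterministic"

# Demo with stub components: the "LLM" succeeds only on its second try.
fix, source = remediate(
    "CWE-89",
    llm_fix=lambda vuln, n: f"fix-{n}",
    deterministic_fix=lambda vuln: "rule-based-fix",
    run_tests=lambda candidate: candidate == "fix-1",
)
```

The key design point, as framed in the abstract, is that test validation gates every AI-generated fix, so deterministic logic remains the safety net rather than the default.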
Syllabus
Eitan Worcel - Fixing My Fixing Talk: What We Got Wrong (and Right) About AI Auto-Remediation
Taught by
LASCON