Explore a comprehensive conference talk examining the evolution of AI-powered automatic code vulnerability remediation over six months of intensive research and development. Learn how a security team challenged its initial skepticism about AI auto-fixing by building and testing a system that combines curated remediation guidance, real-world vulnerable code samples, and test-driven prompt engineering to generate accurate, verifiable security fixes.

Discover the systematic approach developed to guide large language models toward high-quality vulnerability remediation while retaining the ability to fall back to deterministic logic when necessary. Understand the methodology behind generating 80-120 validated fix rules per month and scaling support for new programming languages within weeks rather than months.

Gain insights into the patterns, common mistakes, and critical lessons learned during development, including how the same system can generate prompts that identify false positives, validating whether security issues are genuine and determining appropriate remediation strategies. Examine the limitations and ongoing challenges of this approach through practical examples, and participate in an interactive review session analyzing code fixes that appear secure but contain hidden vulnerabilities, providing a blueprint for organizations interested in implementing similar AI-assisted security remediation systems.
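The description above mentions review exercises on fixes that appear secure but hide vulnerabilities. A minimal sketch of one such failure mode, assuming a path-traversal scenario (the function names and payload are illustrative, not taken from the talk):

```python
import posixpath

def naive_fix(user_path: str) -> str:
    # Looks secure: strips "../" sequences from the input.
    # Hidden flaw: a single non-recursive replace can reassemble
    # traversal sequences, e.g. "....//" collapses back to "../".
    return user_path.replace("../", "")

def robust_fix(user_path: str) -> str:
    # Normalize against a virtual root so ".." segments cannot
    # escape upward, then strip the leading slash.
    return posixpath.normpath("/" + user_path).lstrip("/")

payload = "....//....//etc/passwd"
print(naive_fix(payload))   # "../../etc/passwd" -- traversal survives
print(robust_fix(payload))  # "..../..../etc/passwd" -- no ".." segments
```

A test-driven check such as asserting that `".."` never appears as a path segment in the fixed output is the kind of validation gate that can accept or reject a generated fix before it is merged.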