Overview
This conference talk explores how formal methods can improve code generation from large language models (LLMs) by using Dafny as an intermediate verification layer. Learn how researchers from MIT and Amazon Web Services developed a system where LLMs generate code in Dafny first, allowing automatic validation against specifications before compilation to target languages, all while keeping the verification process invisible to users. The presentation covers their prototype implementation and its performance on the HumanEval Python benchmark, demonstrating a promising approach to improving the reliability of LLM-generated code. The 15-minute talk was presented at the Dafny 2025 workshop on January 19, 2025, sponsored by ACM SIGPLAN.
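To make the core idea concrete, here is a minimal sketch of the "validate before emitting" pattern the talk describes. This is not the authors' system: Dafny checks specifications statically at verification time, whereas this toy Python version checks an executable specification dynamically over sample inputs. All names (`spec_abs`, `candidate_abs`, `validated`) are illustrative inventions.

```python
# Toy analogy (assumption, not the authors' implementation): a candidate
# implementation is accepted only if it satisfies a specification.
# Dafny proves such properties statically; here we merely test them.

def spec_abs(x, result):
    """Specification: result is the absolute value of x."""
    return result >= 0 and (result == x or result == -x)

def candidate_abs(x):
    """Imagine this came from an LLM."""
    return x if x >= 0 else -x

def buggy_abs(x):
    """A faulty candidate that should be rejected."""
    return x  # wrong for negative inputs

def validated(candidate, spec, inputs):
    """Accept the candidate only if it meets the spec on all inputs."""
    return all(spec(x, candidate(x)) for x in inputs)

print(validated(candidate_abs, spec_abs, [-3, 0, 7]))  # True
print(validated(buggy_abs, spec_abs, [-3, 0, 7]))      # False
```

In the system described in the talk, this gate sits between generation and compilation: only Dafny programs that pass the verifier are translated to the target language, so the user never sees the intermediate verification step.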
Syllabus
[Dafny'25] Dafny as Verification-Aware Intermediate Language for Code Generation
Taught by
ACM SIGPLAN