Dafny-Annotator: AI-Assisted Verification of Dafny Programs
ACM SIGPLAN via YouTube
Overview
This conference talk explores the development of dafny-annotator, a tool that uses Large Language Models (LLMs) and search techniques to automatically add logical annotations to Dafny programs until they can be formally verified. Learn how researchers from Stanford and Harvard addressed a key barrier to formal verification adoption by creating AI assistance for Dafny users. Discover their approach combining Llama 3.1 models with search strategies, which initially achieved only 15.7% success on DafnyBench test methods. Follow their solution to the limited-training-data problem: DafnySynth, a synthetic dataset generated by having LLMs formulate ideas, implement them, and propose incremental changes validated by Dafny itself. See how fine-tuning on both datasets raised the success rate to 50.6%, demonstrating a promising path toward AI assistants for programming languages with few human-written examples and potentially lowering the barrier to formal verification adoption.
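To illustrate the kind of annotation the tool targets, here is a minimal, hypothetical Dafny sketch (not taken from the talk): a method whose loop Dafny cannot verify on its own, together with the loop invariants an annotator such as dafny-annotator would be expected to propose.

```dafny
// Hypothetical example: without the invariants below, Dafny cannot
// prove the postcondition, because it does not know what the loop maintains.
method SumTo(n: nat) returns (s: nat)
  ensures s == n * (n + 1) / 2
{
  s := 0;
  var i := 0;
  while i < n
    // Annotations an AI annotator would search for and insert:
    invariant 0 <= i <= n
    invariant s == i * (i + 1) / 2
  {
    i := i + 1;
    s := s + i;
  }
}
```

In this workflow, candidate invariants proposed by the LLM are checked by running the Dafny verifier; only candidates that make the program verify are kept.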
Syllabus
[Dafny'25] dafny-annotator: AI-Assisted Verification of Dafny Programs
Taught by
ACM SIGPLAN