Overview
This conference talk presents DafnyBench, the largest benchmark to date for training and evaluating machine learning systems on formal software verification. Learn how researchers from Harvard, MIT, Stanford, and other institutions tested leading large language models, including GPT-4 and Claude 3, on their ability to generate the annotations (such as loop invariants and assertions) that the Dafny verification engine needs to prove programs correct. The benchmark comprises over 750 programs totaling roughly 53,000 lines of code, with the best model achieving a 68% success rate. Discover how performance improves with error-message feedback and degrades as code complexity increases. The presentation aims to establish a baseline for future improvements as both language models and verification techniques advance. The talk was delivered at the Dafny 2025 workshop on January 19, 2025, sponsored by ACM SIGPLAN.
Syllabus
[Dafny'25] DafnyBench: A Benchmark for Formal Software Verification
Taught by
ACM SIGPLAN