YouTube

Interleaving Large Language Models for Compiler Testing

ACM SIGPLAN via YouTube

Overview

Explore a 15-minute conference presentation from OOPSLA 2025 that introduces LegoFuzz, a novel compiler testing framework addressing key limitations of current AI-based compiler testing. Learn how researchers Yunbo Ni and Shaohua Li from the Chinese University of Hong Kong decouple testing into an offline and an online phase to overcome two problems at once: overly simple generated test programs and computationally expensive LLM usage. Discover how the offline phase uses large language models to generate a collection of small, feature-rich code pieces, while the online phase strategically combines these pieces into high-quality, valid test programs for compiler testing.

Examine the results LegoFuzz achieved in testing C compilers, including 66 bugs found in GCC and LLVM, nearly half of them serious miscompilation bugs that existing LLM-based tools failed to detect. Understand how this efficient design opens new possibilities for applying AI models to software testing beyond C compilers, with implications for compiler reliability and testing practice across the software development ecosystem.
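The offline/online split described above can be sketched loosely as follows. This is an illustrative toy, not LegoFuzz's actual code or API: the hard-coded C snippets stand in for the LLM-generated piece pool built offline, and `build_test_program` stands in for the online step that splices pieces into one compilable C program whose printed output could then be compared across compilers (e.g., GCC vs. LLVM at different optimization levels).

```python
import random

# Offline phase (stand-in): a pool of small, feature-rich C code pieces.
# In the real framework these would be generated by an LLM ahead of time.
PIECES = {
    "shift_xor": "int shift_xor(int x) { return (x << 2) ^ 0x5a; }",
    "loop_sum":  "int loop_sum(int x) { int s = 0; "
                 "for (int i = 0; i < (x & 7); i++) s += i * x; return s; }",
    "cond_mix":  "int cond_mix(int x) { return x > 16 ? x - 3 : x * 5; }",
}

def build_test_program(seed: int) -> str:
    """Online phase (stand-in): deterministically combine a random
    selection of pieces into one self-contained, valid C program.
    A differential harness would compile the result with two compilers
    and flag any disagreement in the printed checksum."""
    rng = random.Random(seed)
    names = rng.sample(sorted(PIECES), k=2)
    body = "\n".join(PIECES[n] for n in names)
    calls = " + ".join(f"{n}({rng.randint(1, 100)})" for n in names)
    return (
        "#include <stdio.h>\n"
        f"{body}\n"
        "int main(void) {\n"
        f'    printf("%d\\n", {calls});\n'
        "    return 0;\n"
        "}\n"
    )

program = build_test_program(42)
print(program)
```

Because the generation is seeded, any program that exposes a compiler disagreement can be reproduced exactly, and no LLM call happens on the hot path of fuzzing.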

Syllabus

[OOPSLA'25] Interleaving Large Language Models for Compiler Testing

Taught by

ACM SIGPLAN
