Scaling Laws of Formal Reasoning in Large Language Models - Lecture 7
MICDE University of Michigan via YouTube
Overview
Explore critical advances in improving the formal reasoning abilities of Large Language Models (LLMs) for scientific applications in this 28-minute conference talk. Delve into two key research directions: Llemma, a foundation model designed specifically for mathematics, and "easy-to-hard" generalization. Learn how Llemma leverages the extensive Proof-Pile-2 corpus to improve the relationship between training compute and reasoning ability, yielding significant accuracy gains. Discover the potential of training strong evaluator models on easier problems to enable generalization to harder ones. Gain insights into the importance of scaling high-quality data collection and of further algorithmic development for enhancing formal reasoning capabilities in LLMs.
Syllabus
07. SciFM24 Sean Welleck: Scaling Laws of Formal Reasoning
Taught by
MICDE University of Michigan