Overview
Learn about research in Large Language Model (LLM) training through this technical presentation from NVIDIA and Stanford University researchers. Explore a two-phase pretraining approach that enhances LLM accuracy and scalability, focusing on the core training process rather than external components. Researchers Steven Y. Feng, Shrimai Prabhumoye, Kezhi Kong, Dan Su, Mostofa Patwary, Mohammad Shoeybi, and Bryan Catanzaro present their collaborative work on maximizing data potential in LLM development. In just 19 minutes, gain a clear technical understanding of how this training methodology can improve model precision and performance.
Syllabus
Two-Phase Pretraining: Unlocking LLM Scalability & Precision (NVIDIA, Stanford)
Taught by
Discover AI