
YouTube

Two-Phase Pretraining - Enhancing LLM Accuracy and Scalability

Discover AI via YouTube

Overview

Learn about recent research on Large Language Model (LLM) training in this technical presentation from NVIDIA and Stanford University researchers. Explore a novel two-phase pretraining approach that improves LLM accuracy and scalability by rethinking how training data is blended and ordered, rather than by adding external components. Discover insights from researchers Steven Y. Feng, Shrimai Prabhumoye, Kezhi Kong, Dan Su, Mostofa Patwary, Mohammad Shoeybi, and Bryan Catanzaro as they present their collaborative work on maximizing data potential in LLM development. Gain a technical understanding of how this training methodology can improve model precision and performance in just 19 minutes of concentrated learning.
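The central idea of two-phase pretraining, as described above, is to split training into phases that draw on different blends of data sources. A minimal sketch of that mechanism is below: all source names, mixture weights, and the phase-split fraction are illustrative assumptions, not values from the presentation or the underlying paper.

```python
import random

def two_phase_schedule(step, total_steps, phase1_frac, phase1_mix, phase2_mix):
    """Return the data mixture for the current step: one blend for the
    first phase1_frac of training, a second blend for the remainder."""
    return phase1_mix if step < phase1_frac * total_steps else phase2_mix

def sample_batch(sources, weights, batch_size, rng):
    """Draw a batch by first sampling a source according to the mixture
    weights, then sampling a document from that source."""
    names = list(sources)
    w = [weights[n] for n in names]
    batch = []
    for _ in range(batch_size):
        name = rng.choices(names, weights=w)[0]
        batch.append(rng.choice(sources[name]))
    return batch

# Toy corpora standing in for real data sources (hypothetical).
sources = {
    "web":  [f"web_doc_{i}" for i in range(100)],
    "code": [f"code_doc_{i}" for i in range(100)],
    "math": [f"math_doc_{i}" for i in range(100)],
}
phase1_mix = {"web": 0.8, "code": 0.1, "math": 0.1}    # broad-coverage blend
phase2_mix = {"web": 0.3, "code": 0.35, "math": 0.35}  # upweighted second blend

rng = random.Random(0)
total_steps = 1000
for step in (0, 900):  # one step from each phase
    mix = two_phase_schedule(step, total_steps, 0.8, phase1_mix, phase2_mix)
    batch = sample_batch(sources, mix, batch_size=4, rng=rng)
```

The sketch only captures the scheduling shape (a hard switch between two data blends at a chosen fraction of training); the actual phase boundary, data sources, and weights studied in the work are what the talk covers.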

Syllabus

Two-Phase Pretraining: Unlocking LLM Scalability & Precision (NVIDIA, Stanford)

Taught by

Discover AI

