Heterogeneous Hybrid Distributed Training for Large-Scale Language Models
OpenInfra Foundation via YouTube
Overview
Learn about the technical challenges and solutions in heterogeneous distributed training for large language models (LLMs) in this 11-minute conference talk. Explore how integrating computing resources from different vendors for distributed parallel acceleration can support the training of LLMs with hundreds of billions of parameters. Discover the research conducted by China Mobile and industry partners to overcome challenges posed by GPU architecture differences, memory constraints, and incompatibilities between vendor hardware. Gain insight into the core functional components of a training system designed to let heterogeneous GPUs work together effectively, contributing to a broader intelligent computing ecosystem.
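The talk does not publish its implementation, but one core idea in heterogeneous training is load balancing across GPUs of unequal speed. The sketch below is a hypothetical illustration, not the system described in the talk: it splits a global batch across devices in proportion to measured per-device throughput so that faster and slower vendor GPUs finish each step at roughly the same time. All device names and throughput figures are made up.

```python
# Minimal sketch (illustrative, not from the talk): split a global batch
# across heterogeneous GPUs in proportion to each device's throughput,
# so devices of different speeds finish a training step at about the same time.

def partition_batch(global_batch: int, throughputs: dict[str, float]) -> dict[str, int]:
    """Assign each device a micro-batch proportional to its samples/sec
    throughput, rounding while preserving the global batch total."""
    total = sum(throughputs.values())
    shares = {d: global_batch * t / total for d, t in throughputs.items()}
    alloc = {d: int(s) for d, s in shares.items()}
    # Hand leftover samples to the devices with the largest rounding remainders.
    leftover = global_batch - sum(alloc.values())
    for d in sorted(shares, key=lambda d: shares[d] - alloc[d], reverse=True)[:leftover]:
        alloc[d] += 1
    return alloc

if __name__ == "__main__":
    # Hypothetical throughputs (samples/sec) for two different vendor GPUs.
    devices = {"vendor_a_gpu": 310.0, "vendor_b_gpu": 190.0}
    print(partition_batch(1024, devices))  # {'vendor_a_gpu': 635, 'vendor_b_gpu': 389}
```

In a real system this split would feed a data-parallel or pipeline-parallel scheduler, and throughputs would be profiled at runtime rather than hard-coded; the proportional-allocation step shown here is only the simplest form of that balancing.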
Syllabus
Heterogeneous Hybrid Distributed Training Helps the Development of Large-Scale Language Model
Taught by
OpenInfra Foundation