Distributed Training: Hybrid Parallelism and Gradient Optimization - Lecture 20
MIT HAN Lab via YouTube
Overview
Learn advanced distributed training concepts through a recorded MIT lecture that explores hybrid parallelism, auto-parallelization, and strategies for overcoming bandwidth and latency bottlenecks. Dive into gradient compression methods, including gradient pruning for sparse communication, deep gradient compression, and gradient quantization techniques such as 1-Bit SGD and TernGrad. Learn how delayed gradient updates help hide communication latency in distributed systems. Taught by Professor Song Han, this lecture from MIT's 6.5940 course provides essential knowledge for optimizing large-scale machine learning training.
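As a rough illustration of the two compression families the lecture covers, the sketch below is not taken from the lecture itself: the function names, the sparsity ratio, and the scaling choice are illustrative assumptions. It shows top-k gradient sparsification with local error accumulation in the spirit of deep gradient compression, and a sign-based quantizer with error feedback in the spirit of 1-Bit SGD, written in plain PyTorch.

```python
import torch


def dgc_compress(grad: torch.Tensor, residual: torch.Tensor, ratio: float = 0.01):
    """Top-k gradient sparsification with local error accumulation.

    `residual` holds gradient mass not communicated in earlier steps and is
    updated in place. Returns the indices and values selected for communication.
    """
    residual += grad                      # accumulate the new gradient locally
    flat = residual.view(-1)              # view, so writes below reach `residual`
    k = max(1, int(flat.numel() * ratio))
    _, idx = torch.topk(flat.abs(), k)    # pick the largest-magnitude entries
    values = flat[idx]                    # fancy indexing returns a copy
    flat[idx] = 0.0                       # everything else stays as local residual
    return idx, values


def dgc_decompress(idx: torch.Tensor, values: torch.Tensor, shape) -> torch.Tensor:
    """Rebuild a dense gradient from the sparse (idx, values) message."""
    dense = torch.zeros(shape, dtype=values.dtype).view(-1)
    dense[idx] = values
    return dense.view(shape)


def one_bit_quantize(grad: torch.Tensor, residual: torch.Tensor):
    """Sign-based quantization with error feedback, in the spirit of 1-Bit SGD.

    Transmits only the sign of each entry, scaled by the mean magnitude
    (an illustrative scaling choice); the quantization error is carried
    over to the next step.
    """
    g = grad + residual
    scale = g.abs().mean()
    q = torch.sign(g) * scale
    new_residual = g - q                  # error feedback for the next iteration
    return q, new_residual


if __name__ == "__main__":
    torch.manual_seed(0)
    grad = torch.randn(4, 8)
    residual = torch.zeros_like(grad)

    idx, values = dgc_compress(grad, residual, ratio=0.1)
    restored = dgc_decompress(idx, values, grad.shape)
    print("sparse entries sent:", values.numel(), "of", grad.numel())

    q, q_residual = one_bit_quantize(grad, torch.zeros_like(grad))
    print("distinct transmitted values:", q.unique().tolist())
```

In an actual data-parallel run, the sparse (idx, values) pairs or the quantized tensors would be exchanged through an all-gather or all-reduce rather than returned locally, and refinements such as momentum correction and gradient clipping from the deep gradient compression paper would be layered on top; both are omitted here to keep the sketch short.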
Syllabus
EfficientML.ai Lecture 20 - Distributed Training Part 2 (Zoom Recording) (MIT 6.5940, Fall 2024)
Taught by
MIT HAN Lab