Overview
Attend this seminar to explore the fundamental principles and practical techniques for scaling and accelerating large language model (LLM) training. Learn how scaling laws explain the rationale behind large-scale training runs. Discover how to apply parallelization techniques effectively by identifying where they deliver the greatest performance benefit. Explore low-precision training methods designed to maximize cluster utilization and computational efficiency. The presentation, delivered by Andrea Pilzer, Ph.D., of the NVIDIA AI Technology Center in Italy, offers insights into optimizing LLM training workflows for high-performance computing environments. Gain practical knowledge about balancing computational resources, training speed, and model quality when training large language models in distributed settings.
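The seminar abstract does not spell out which scaling laws are covered; as one concrete point of reference, here is a minimal sketch of the Chinchilla-style parametric loss from Hoffmann et al. (2022). The constants below are the published fitted estimates (they vary by fitting approach), and the 20-tokens-per-parameter rule is a common rule of thumb derived from that work, not a statement from this seminar.

```python
# A minimal sketch of a Chinchilla-style scaling law (Hoffmann et al., 2022).
# Constants are the published fitted estimates and are illustrative only.

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for n_params parameters trained on n_tokens tokens."""
    E, A, B = 1.69, 406.4, 410.7   # irreducible loss and fitted coefficients
    alpha, beta = 0.34, 0.28       # fitted exponents
    return E + A / n_params**alpha + B / n_tokens**beta

def compute_optimal_tokens(n_params: float) -> float:
    """Rule-of-thumb compute-optimal token budget (~20 tokens per parameter)."""
    return 20 * n_params

if __name__ == "__main__":
    n = 7e9                          # a hypothetical 7B-parameter model
    d = compute_optimal_tokens(n)    # ~1.4e11 tokens
    print(f"tokens: {d:.2e}, predicted loss: {chinchilla_loss(n, d):.3f}")
```

Sketches like this show why larger training runs are rational: both more parameters and more tokens drive the predicted loss toward the irreducible floor, and the fit tells you how to split a fixed compute budget between the two.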
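The seminar discusses where parallelization techniques pay off; the abstract does not say which ones. As a minimal sketch of just one such technique, the following shows data parallelism with PyTorch DistributedDataParallel (DDP), assuming a launch via `torchrun --nproc_per_node=<gpus> train.py`. The model and objective here are stand-ins, not anything from the talk.

```python
# A minimal data-parallelism sketch with PyTorch DDP.
# Assumes launch via torchrun, which sets LOCAL_RANK and the rendezvous env vars.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")                  # one process per GPU
    rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(4096, 4096).cuda(rank)   # stand-in for an LLM block
    model = DDP(model, device_ids=[rank])            # gradients all-reduced across ranks
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    x = torch.randn(8, 4096, device=rank)            # each rank sees its own shard of data
    loss = model(x).square().mean()                  # dummy objective
    loss.backward()                                  # DDP overlaps communication with backward
    opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Data parallelism is typically the first technique to apply because it scales throughput with GPU count at the cost of gradient all-reduce bandwidth; tensor and pipeline parallelism become necessary once a single model replica no longer fits on one device.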
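Likewise, the abstract does not list which low-precision methods the talk covers. As one widely used example, here is a minimal sketch of mixed-precision training with PyTorch's bfloat16 autocast (float16 would additionally require loss scaling via `torch.amp.GradScaler`).

```python
# A minimal mixed-precision sketch using bfloat16 autocast in PyTorch.
import torch

model = torch.nn.Linear(4096, 4096).cuda()  # stand-in for an LLM block
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(8, 4096, device="cuda")
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = model(x).square().mean()   # matmuls run in bf16; parameters stay fp32
loss.backward()                       # gradients w.r.t. fp32 parameters are fp32
opt.step()
```

Running matrix multiplications in 16-bit precision roughly doubles tensor-core throughput and halves activation memory, which is how low-precision methods raise effective cluster performance without changing the training algorithm.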
Syllabus
NHR PerfLab Seminar: Scaling and accelerating LLM trainings
Taught by
NHR@FAU