
Accelerate Model Training with PyTorch 2.X

Packt via Coursera

Overview

This course teaches you techniques to dramatically speed up model training using the latest features in PyTorch 2.X. Mastering these optimization strategies is essential for professionals building scalable, high-performance AI systems. You’ll learn how to refine your training workflow, improve computation efficiency, and achieve faster, more reliable model iterations. Each module translates performance concepts into practical techniques you can immediately apply. The course blends deep technical foundations with real-world optimization workflows, ensuring you understand both why each method works and how to execute it effectively. You’ll practice using compiled models, mixed precision, distributed strategies, and more. This course is ideal for developers, data scientists, and ML engineers with basic PyTorch experience who want to train models faster and scale training across hardware configurations.

Syllabus

  • Deconstructing the Training Process
    • In this section, we explore the training process of neural networks, analyze factors contributing to computational burden, and evaluate elements influencing training time (see the profiling sketch under Code Sketches below).
  • Training Models Faster
    • In this section, we explore techniques to accelerate model training by modifying the software stack and scaling resources. Key concepts include vertical and horizontal scaling, application and environment layer optimizations, and practical strategies for improving efficiency.
  • Compiling the Model
    • In this section, we explore the PyTorch 2.0 Compile API to accelerate deep learning model training, focusing on graph mode benefits, API usage, and workflow components for performance optimization (see the torch.compile sketch below).
  • Using Specialized Libraries
    • In this section, we explore using OpenMP for multithreading and IPEX to optimize PyTorch on Intel CPUs, enhancing performance through specialized libraries (see the OpenMP/IPEX sketch below).
  • Building an Efficient Data Pipeline
    • In this section, we explore building efficient data pipelines to prevent training bottlenecks. Key concepts include configuring workers, optimizing GPU memory transfer, and ensuring continuous data flow for ML model training (see the DataLoader sketch below).
  • Simplifying the Model
    • In this section, we explore model simplification through pruning and compression techniques to improve efficiency without sacrificing performance, using the Microsoft NNI toolkit for practical implementation (see the pruning sketch below).
  • Adopting Mixed Precision
    • In this section, we explore mixed precision strategies to optimize model training efficiency by reducing computational and memory demands without sacrificing accuracy, focusing on PyTorch implementation and hardware utilization (see the mixed-precision sketch below).
  • Distributed Training at a Glance
    • In this section, we explore distributed training principles, parallel strategies, and PyTorch implementation to enhance model training efficiency through resource distribution.
  • Training with Multiple CPUs
    • In this section, we explore distributed training on multiple CPUs, focusing on benefits, implementation, and using Intel oneCCL for efficient communication in resource-constrained environments (see the multi-CPU sketch below).
  • Training with Multiple GPUs
    • In this section, we explore multi-GPU training strategies, analyze interconnection topologies, and configure NCCL for efficient distributed deep learning operations (see the multi-GPU sketch below).
  • Training with Multiple Machines
    • In this section, we explore distributed training on computing clusters, focusing on Open MPI and NCCL for efficient communication and resource management across multiple machines (the multi-GPU sketch below also shows a multi-node torchrun launch).
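
Code Sketches

The snippets below are minimal, illustrative sketches of the techniques the syllabus names, not course materials; toy models, data shapes, script names, and hyperparameters are placeholder assumptions throughout.

For "Deconstructing the Training Process", a sketch of measuring where a single training step spends its time, using PyTorch's built-in profiler:

```python
import torch
import torch.nn as nn
from torch.profiler import profile, ProfilerActivity

# Hypothetical toy model and batch, just to have something to measure.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
inputs = torch.randn(64, 512)
targets = torch.randint(0, 10, (64,))

# Profile a single training step to see which operations dominate step time.
with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()

print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```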
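
For "Compiling the Model", the core of the PyTorch 2.x Compile API is a one-line wrap with torch.compile; the toy model here is an assumption:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# torch.compile captures the model into a graph and JIT-compiles optimized
# kernels; the first call pays the compilation cost, later calls reuse it.
compiled_model = torch.compile(model)

x = torch.randn(64, 512)
y = compiled_model(x)  # compilation is triggered on first use
```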
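
For "Using Specialized Libraries", a sketch combining OpenMP thread control with IPEX's optimize call; it assumes an Intel CPU and the separately installed intel_extension_for_pytorch package:

```python
import torch
import torch.nn as nn

# OpenMP governs PyTorch's intra-op CPU parallelism; the thread count can be
# set in code or via the OMP_NUM_THREADS environment variable before launch.
torch.set_num_threads(8)  # assumption: an 8-core CPU

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# IPEX (pip install intel-extension-for-pytorch) applies Intel-CPU-specific
# optimizations to the model and optimizer in a single call.
import intel_extension_for_pytorch as ipex
model, optimizer = ipex.optimize(model, optimizer=optimizer)
```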
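
For "Building an Efficient Data Pipeline", a sketch of the standard DataLoader knobs the module describes (worker processes, pinned memory, non-blocking transfers); the in-memory dataset is a stand-in for real training data:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# A hypothetical in-memory dataset standing in for real training data.
dataset = TensorDataset(torch.randn(10_000, 512), torch.randint(0, 10, (10_000,)))

# num_workers loads batches in background processes so the GPU never waits;
# pin_memory allocates page-locked host memory to speed host-to-GPU copies.
loader = DataLoader(dataset, batch_size=64, shuffle=True,
                    num_workers=4, pin_memory=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

if __name__ == "__main__":  # guard needed where workers are spawned (Windows/macOS)
    for inputs, targets in loader:
        # non_blocking=True overlaps the copy with computation on pinned memory.
        inputs = inputs.to(device, non_blocking=True)
        targets = targets.to(device, non_blocking=True)
        break  # one batch is enough for the illustration
```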
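
For "Simplifying the Model", the course itself uses the Microsoft NNI toolkit; as a self-contained stand-in, this sketch shows the same idea (L1-magnitude pruning) with PyTorch's built-in torch.nn.utils.prune utilities instead:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

for module in model.modules():
    if isinstance(module, nn.Linear):
        # Zero out the 50% of weights with the smallest L1 magnitude.
        prune.l1_unstructured(module, name="weight", amount=0.5)
        # Fold the pruning mask into the weight tensor to make it permanent.
        prune.remove(module, "weight")
```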
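
For "Adopting Mixed Precision", a sketch of PyTorch's automatic mixed precision with autocast and gradient scaling; it assumes a CUDA-capable GPU:

```python
import torch
import torch.nn as nn

device = torch.device("cuda")  # assumes a CUDA-capable GPU
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()  # rescales the loss to avoid fp16 underflow

inputs = torch.randn(64, 512, device=device)
targets = torch.randint(0, 10, (64,), device=device)

optimizer.zero_grad()
# Inside autocast, ops run in float16 where safe and float32 where needed.
with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = criterion(model(inputs), targets)
scaler.scale(loss).backward()  # backward on the scaled loss
scaler.step(optimizer)         # unscales gradients, then steps
scaler.update()                # adapts the scale factor for the next step
```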
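
For "Training with Multiple CPUs", a sketch of CPU-only distributed data parallelism. It uses PyTorch's built-in gloo backend to stay self-contained; per Intel's bindings, the oneCCL backend the course covers is a drop-in swap (install and import oneccl_bindings_for_pytorch, then pass backend="ccl"):

```python
# Launch with one process per CPU worker, e.g.:
#   torchrun --nproc_per_node=4 train_cpu.py   (train_cpu.py is a placeholder name)
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# "gloo" is PyTorch's built-in CPU backend; swap in backend="ccl" for oneCCL.
dist.init_process_group(backend="gloo")

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
ddp_model = DDP(model)  # gradients are all-reduced across ranks on backward
optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

inputs = torch.randn(64, 512)   # each rank would load its own data shard
targets = torch.randint(0, 10, (64,))
optimizer.zero_grad()
loss = criterion(ddp_model(inputs), targets)
loss.backward()  # triggers cross-process gradient synchronization
optimizer.step()

dist.destroy_process_group()
```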
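
For the multi-GPU and multi-machine modules, a DistributedDataParallel sketch using the NCCL backend; the torchrun commands in the comments show single-node and multi-node launches, with script name, host, and port as placeholders:

```python
# Single machine, one process per GPU:
#   torchrun --nproc_per_node=<num_gpus> train_gpu.py
# Multiple machines add rendezvous flags, e.g.:
#   torchrun --nnodes=2 --nproc_per_node=8 \
#            --rdzv_backend=c10d --rdzv_endpoint=<host>:<port> train_gpu.py
# (train_gpu.py, <host>, and <port> are placeholders.)
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")      # NCCL: the standard GPU backend
local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
torch.cuda.set_device(local_rank)            # bind this process to its GPU

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).cuda()
ddp_model = DDP(model, device_ids=[local_rank])
optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

inputs = torch.randn(64, 512, device="cuda")   # each rank loads its own shard
targets = torch.randint(0, 10, (64,), device="cuda")
optimizer.zero_grad()
loss = criterion(ddp_model(inputs), targets)
loss.backward()
optimizer.step()

dist.destroy_process_group()
```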

Taught by

Packt - Course Instructors

