
YouTube

Multi-GPU Training with Unsloth

Trelis Research via YouTube

Overview

Learn to accelerate machine learning model training by implementing multi-GPU configurations with Unsloth in this comprehensive 54-minute tutorial. Explore the fundamental differences between data parallel, pipeline parallel, and fully sharded data parallel training to understand which approach best suits your computational needs. Master the process of converting Jupyter notebooks to Python scripts suitable for distributed training, and discover the key differences between using Unsloth and Transformers for multi-GPU setups. Follow step-by-step instructions for modifying fine-tuning scripts to support Distributed Data Parallel (DDP), including the essential script changes and the installation requirements for Unsloth, TensorBoard, and the uv package manager. Gain hands-on experience with LoRA training script implementation, gradient accumulation configuration, dataset loading, and training parameter optimization. Practice running both single-GPU and multi-GPU configurations via accelerate config, and learn to troubleshoot common issues such as training hangs with torchrun. Finally, address limitations that were open at the time of recording, including loss reporting problems and constraints when using Unsloth with batch sizes larger than one, along with practical workarounds for these known issues.
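In the video, the notebook is downloaded as a .py file from the Jupyter interface; the same conversion can also be scripted with nbconvert. A minimal sketch (the filenames here are illustrative placeholders, not the notebook used in the video):

```python
import nbformat
from nbconvert import PythonExporter

# Convert a training notebook into a plain Python script that can be
# launched once per process by accelerate or torchrun.
nb = nbformat.read("finetune.ipynb", as_version=4)   # placeholder filename
source, _ = PythonExporter().from_notebook_node(nb)
with open("finetune.py", "w") as f:
    f.write(source)
```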
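The DDP modifications shown in the video apply to a specific Unsloth notebook; the sketch below is not that script, just a rough illustration of the kinds of DDP-aware changes described, using standard Hugging Face Transformers APIs. The paths, run name, and hyperparameter values are placeholders:

```python
import os
import torch
from transformers import TrainingArguments

# accelerate launch / torchrun set LOCAL_RANK per process; -1 means single-GPU.
local_rank = int(os.environ.get("LOCAL_RANK", -1))
if local_rank != -1:
    torch.cuda.set_device(local_rank)

world_size = int(os.environ.get("WORLD_SIZE", 1))

args = TrainingArguments(
    output_dir="outputs",               # placeholder path
    per_device_train_batch_size=1,      # the video notes Unsloth issues above 1
    gradient_accumulation_steps=8,      # placeholder value
    logging_steps=1,
    report_to="tensorboard",
    run_name="lora-ddp-test",           # placeholder run name
    ddp_find_unused_parameters=False,   # typical setting for LoRA under DDP
)

# Log only from the main process to avoid one line of output per GPU.
if local_rank in (-1, 0):
    effective_batch = (args.per_device_train_batch_size
                       * args.gradient_accumulation_steps
                       * world_size)
    print(f"Effective global batch size: {effective_batch}")
```

A multi-GPU run would then typically be launched with `accelerate launch finetune.py` after answering the `accelerate config` prompts, the flow the video favors over `torchrun` given the hangs it reports.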

Syllabus

0:00 Faster training with multiple GPUs
0:39 Video Overview
1:24 Data parallel vs pipeline parallel vs fully sharded data parallel
6:38 Downloading a Jupyter notebook as a Python script for multi-GPU, e.g. an Unsloth notebook
7:44 Unsloth vs Transformers for multi-GPU
8:13 Modifying a fine-tuning script for distributed data parallel
9:03 Starting up a GPU in one-click for fine-tuning
10:27 Converting a Jupyter notebook to a Python script
11:30 Installation notes for Unsloth, TensorBoard, and uv
13:32 Script modifications required for DDP
18:50 Training script run-through, for LoRA
22:46 Setting gradient accumulation steps
24:07 Dataset loading
26:22 Setting up the run name and training parameters
29:30 Running without multi-GPU (single-GPU check)
35:47 Running with multiple GPUs using accelerate config (note: torchrun can result in run hangs)
41:02 Sanity check of running with accelerate and a single GPU
44:48 Issues open at the time of recording: loss reporting, and using Unsloth with a batch size larger than one (see the sketch after this syllabus)
53:11 Conclusion and shout-outs to spr1nter and rakshith
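The video does not prescribe a specific fix for the loss-reporting issue; as a generic illustration of one common DDP pattern (averaging the logged loss across ranks so that rank 0 reports a global figure — an assumption here, not necessarily the video's workaround):

```python
import torch
import torch.distributed as dist

def loss_averaged_over_ranks(loss: torch.Tensor) -> torch.Tensor:
    """Average a scalar training loss across all DDP ranks.

    Each rank computes loss on its own data shard; without a reduction,
    the value logged on rank 0 reflects only one GPU's batch.
    """
    if dist.is_available() and dist.is_initialized():
        loss = loss.detach().clone()
        dist.all_reduce(loss, op=dist.ReduceOp.AVG)  # AVG requires a recent PyTorch with NCCL
    return loss
```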

Taught by

Trelis Research

