YouTube

Unlock Scikit-learn GPU Powers with NVIDIA cuML - Tutorial and Benchmarks

Python Simplified via YouTube

Overview

This tutorial demonstrates how to accelerate Scikit-learn with NVIDIA cuML GPU acceleration without changing any code. Learn how to set up cuML on both Google Colab (using free Tesla T4 GPU) and local systems, then explore three use cases where GPUs significantly outperform CPUs for machine learning tasks. Witness impressive benchmarks including a workflow that runs in 87 seconds on GPU versus 53 minutes on CPU. Discover when to leverage GPU acceleration for giant datasets and complex algorithms, with complete code examples and performance comparisons. The tutorial includes installation instructions, benchmark visualizations, and accuracy comparisons between standard Scikit-learn and GPU-accelerated implementations. Perfect for data scientists, ML practitioners, and anyone looking to dramatically speed up their machine learning workflows without rewriting existing Scikit-learn code.
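The zero-code-change setup described above can be sketched as follows. This is a hedged sketch, not the video's exact commands: the package name `cuml-cu12` and the extra index URL assume a CUDA 12 system and a recent cuML release (25.02+, where the `cuml.accel` accelerator mode was introduced); consult NVIDIA's RAPIDS install guide for your GPU and CUDA version.

```shell
# Install cuML (assumes CUDA 12; package name varies by CUDA version)
pip install --extra-index-url=https://pypi.nvidia.com cuml-cu12

# Run an existing, unmodified scikit-learn script with GPU acceleration
python -m cuml.accel my_sklearn_script.py

# In a Jupyter/Colab notebook, the equivalent is a magic in the first cell:
#   %load_ext cuml.accel
```

Estimators that cuML supports are transparently dispatched to the GPU; anything unsupported falls back to standard scikit-learn on the CPU, which is why no code changes are needed.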

Syllabus

01:04 - Set up cuML sklearn in Google Colab
01:57 - Set up cuML sklearn locally
03:23 - Which workflows are better for GPU?
03:47 - Use GPU for giant datasets
07:29 - Use GPU for complex algorithms
11:17 - CPU vs GPU benchmark charts
11:40 - cuML vs sklearn accuracy
12:29 - Use GPU for giant datasets and complex algorithms
13:12 - Advanced CPU vs GPU benchmark charts
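The "giant datasets" benchmarks in the syllabus follow a simple pattern: time a standard scikit-learn fit, then run the identical script under `python -m cuml.accel` and compare. Here is a minimal CPU-side sketch of that pattern (dataset sizes are shrunk so it runs anywhere; the video's benchmarks use far larger data, which is where the GPU speedup appears):

```python
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a "giant dataset" (kept small here for portability)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Time the fit; the same script run under `python -m cuml.accel`
# dispatches RandomForestClassifier to the GPU with no code changes
start = time.perf_counter()
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
elapsed = time.perf_counter() - start

accuracy = model.score(X, y)
print(f"fit took {elapsed:.2f}s, training accuracy = {accuracy:.3f}")
```

Comparing both the elapsed time and the accuracy between the plain and accelerated runs mirrors the video's CPU vs GPU charts and its cuML vs sklearn accuracy check.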

Taught by

Python Simplified
