Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

YouTube

Fine-Tuning LLM on Custom Dataset with Single GPU - Complete Tutorial for Sentiment Analysis

Venelin Valkov via YouTube

Overview

Learn to fine-tune the Qwen3 0.6B large language model on a custom dataset for sentiment analysis of financial news, all on a single GPU. Begin by understanding when fine-tuning is appropriate, then set up your notebook environment and prepare the custom dataset for training. Load the Qwen3 0.6B model and tokenizer, run a token-counting analysis, and examine predictions from the base (not yet fine-tuned) model to establish a baseline. Configure training with LoRA (Low-Rank Adaptation) adapters, tune hyperparameters such as the learning rate, and monitor training progress in TensorBoard. Save the trained model and run a comprehensive evaluation comparing the fine-tuned model against automated prompt engineering with DSPy. Finish by uploading the trained model to the HuggingFace Hub for sharing and deployment, gaining hands-on experience with the full machine learning pipeline, from data preparation to model deployment.
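The training setup in the course is built on LoRA adapters. As a rough illustration of the idea (not the tutorial's code), LoRA freezes a pretrained weight matrix W and learns only a low-rank update B @ A, which shrinks the number of trainable parameters dramatically. The dimensions and scaling below are illustrative assumptions:

```python
import numpy as np

# Illustrative LoRA sketch: instead of updating a full weight matrix
# W (d_out x d_in), LoRA trains two small matrices A (r x d_in) and
# B (d_out x r) and uses W + (alpha / r) * B @ A in the forward pass.
d_out, d_in, r, alpha = 1024, 1024, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))    # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01 # small random init
B = np.zeros((d_out, r))                  # zero init, so W is unchanged at start

W_eff = W + (alpha / r) * B @ A           # effective weight during training

full_params = d_out * d_in                # what full fine-tuning would update
lora_params = r * (d_in + d_out)          # what LoRA actually trains
print(f"full: {full_params:,}  lora: {lora_params:,}  "
      f"ratio: {lora_params / full_params:.2%}")
```

With these toy dimensions, the adapter trains well under 2% of the parameters of the full matrix, which is what makes single-GPU fine-tuning of an LLM practical.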

Syllabus

00:00 - When to fine-tune?
04:22 - Notebook setup
06:30 - Dataset preparation
09:54 - Load Qwen3 0.6B model and tokenizer
12:20 - Token counting
13:57 - Untrained model predictions
15:20 - Training setup: LoRA, optimizer, learning rate
23:32 - Training logs in TensorBoard
24:51 - Save the trained model
26:07 - Evaluation
29:58 - Upload to HuggingFace Hub
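Two of the steps above, dataset preparation (06:30) and reading off model predictions (13:57, 26:07), can be sketched in plain Python. The prompt template, label set, and function names below are illustrative assumptions, not code from the video:

```python
# Hypothetical sketch of dataset preparation for financial-news sentiment:
# turn a headline and its label into a prompt/completion pair, and parse a
# label back out of the model's raw text output.
LABELS = ("negative", "neutral", "positive")

def format_example(headline: str, label: str) -> dict:
    """Build one training example (template is an assumption)."""
    prompt = (
        "Classify the sentiment of this financial news headline as "
        "negative, neutral, or positive.\n"
        f"Headline: {headline}\nSentiment:"
    )
    return {"prompt": prompt, "completion": f" {label}"}

def parse_prediction(text: str) -> str:
    """Return the first known label found in the model output, else 'neutral'."""
    lowered = text.lower()
    for label in LABELS:
        if label in lowered:
            return label
    return "neutral"

example = format_example("Company X beats quarterly earnings estimates", "positive")
print(example["prompt"])
print(parse_prediction("Sentiment: Positive"))
```

A tolerant parser like this matters for the baseline step: a model that has not been fine-tuned often wraps the label in extra text, so evaluation has to extract it rather than compare strings exactly.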

Taught by

Venelin Valkov

