
YouTube

We Fine-Tuned GPT OSS 20B to Rap Like Eminem

Oxen via YouTube

Overview

Learn how to fine-tune large language models by following a practical demonstration of customizing GPT OSS 20B to generate rap lyrics in Eminem's distinctive style. Explore the complete fine-tuning pipeline, from data preparation and storage to model deployment, including how to tag explicit content and automate the fine-tuning process. Discover the internal architecture of the 20B- and 120B-parameter models, understand attention mechanisms and attention sinks, and examine OpenAI's new Harmony format for structured conversations. Compare training results against baseline models like Llama 3.2 1B, troubleshoot common deployment challenges, and gain insights into scaling fine-tuning workflows to models as large as 120B parameters using Oxen.ai's automated tools and data versioning capabilities.
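The data-preparation step described above can be sketched in a few lines. This is a hedged illustration, not Oxen.ai's actual pipeline: the `make_record` helper and the `/explicit` prefix convention are hypothetical stand-ins for the explicit-content tagging the video discusses, and the JSONL chat-message layout is just a common fine-tuning dataset shape.

```python
import json

def make_record(prompt, lyrics, explicit=False):
    """Format one training example as a chat-style record.

    Hypothetical convention: lyrics flagged as explicit get an
    "/explicit " prefix so the model learns to emit the tag itself.
    """
    completion = ("/explicit " if explicit else "") + lyrics
    return {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": completion},
        ]
    }

# Write the dataset as JSONL, one record per line -- a common
# input format for fine-tuning jobs.
records = [
    make_record("Write a verse about late nights in Detroit",
                "Streetlights hum while the city sleeps...",
                explicit=True),
]
with open("train.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```

From here, the JSONL file would be versioned alongside the rest of the training data and handed to whatever fine-tuning job runs downstream.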

Syllabus

0:00 Intro: Fine-tuning GPT OSS 20B-120B
4:05 The Task: Rapping in the Style of Eminem
6:15 Where We Kept the Data and the /Explicit Tag
8:08 How We Automate Fine-tuning
10:35 Understanding the Internals of the Models
15:13 The Attention Sinks in 20B and 120B
17:37 OpenAI’s New Harmony Format
23:45 Double Check your Templates
27:20 Question: Is the Harmony Format Only for Agentic Use-Cases?
28:25 How the Training Runs Went
32:40 Comparing against Llama 3.2 1B
34:10 Deployment Gotchas: How to Deploy After Training
37:48 Next Up: Supporting GPT OSS 120B
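The Harmony format covered in the syllabus wraps each message of a conversation in special tokens. Below is a minimal, unofficial sketch of how such a rendering might look; the token names follow OpenAI's published description of the format, but this is an illustrative toy, not the reference implementation, and it omits features such as channels and the developer role.

```python
def render_harmony(messages):
    """Render a conversation as a single Harmony-style string.

    Simplified sketch: each message becomes
    <|start|>{role}<|message|>{content}<|end|>.
    """
    parts = []
    for msg in messages:
        parts.append(f"<|start|>{msg['role']}<|message|>{msg['content']}<|end|>")
    return "".join(parts)

conversation = [
    {"role": "system", "content": "You are a rap lyricist."},
    {"role": "user", "content": "Drop a bar about persistence."},
]
print(render_harmony(conversation))
```

Getting this template exactly right matters: as the "Double Check your Templates" chapter suggests, a mismatch between the template used at training time and at inference time can quietly degrade a fine-tuned model.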

Taught by

Oxen


