
freeCodeCamp

LLMs from Scratch - Practical Engineering from Base Model to PPO RLHF

via freeCodeCamp

Overview

Master the complete engineering process of building large language models from the ground up in this comprehensive 6-hour course using pure PyTorch. Begin with foundational transformer architecture concepts and progress through training a basic LLM, then advance to modern architectural improvements and scaling techniques. Explore cutting-edge approaches including Mixture-of-Experts (MoE) implementations, supervised fine-tuning methods, reward modeling frameworks, and reinforcement learning from human feedback using Proximal Policy Optimization (PPO). Gain hands-on experience with each stage of the LLM development lifecycle, from core transformer components through advanced alignment techniques, enabling you to build and customize your own language models with deep technical understanding of the underlying engineering principles.
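To give a flavor of the course's starting point, the core transformer component covered in Part 1 is scaled dot-product attention. The sketch below is illustrative only (plain Python rather than the course's PyTorch, with made-up names and toy shapes), not code from the course:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V,
    with Q, K, V given as lists of row vectors."""
    d = len(Q[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # Weighted sum of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

In the course this is built with PyTorch tensors and batched over heads; the arithmetic per query, however, is exactly this weighted average.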

Syllabus

0:00:00 Part 0 - Introduction
0:05:43 Part 1 - Core Transformer Architecture
0:40:24 Part 2 - Training a Tiny LLM
1:30:27 Part 3 - Modernizing the Architecture
2:33:53 Part 4 - Scaling Up
3:17:22 Part 5 - Mixture-of-Experts (MoE)
3:44:19 Part 6 - Supervised Fine-Tuning (SFT)
4:23:44 Part 7 - Reward Modeling
4:59:55 Part 8 - RLHF with PPO
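Part 8's alignment stage centers on PPO's clipped surrogate objective, which limits how far each policy update can move from the old policy. A minimal plain-Python sketch for a single action (the function name and default clip value are illustrative, not taken from the course):

```python
import math

def ppo_clip_objective(logp_new, logp_old, advantage, eps=0.2):
    """PPO clipped surrogate objective for one action (to be maximized):
    min(r * A, clip(r, 1 - eps, 1 + eps) * A), where r = pi_new / pi_old."""
    ratio = math.exp(logp_new - logp_old)          # probability ratio r
    unclipped = ratio * advantage
    clipped = max(min(ratio, 1 + eps), 1 - eps) * advantage
    # Taking the min keeps the update pessimistic: large ratio changes
    # stop improving the objective once they leave the clip range.
    return min(unclipped, clipped)
```

In RLHF the advantage comes from the learned reward model (plus a KL penalty against the base policy); this snippet shows only the clipping mechanism itself.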

Taught by

freeCodeCamp.org

Reviews

