LLM Tokenizers Explained - BPE, SentencePiece, Pretrained vs Custom - Lecture 3

Code With Aarohi via YouTube

Overview

Learn how to implement and integrate production-level tokenizers into large language models in this comprehensive 42-minute tutorial. Move beyond manual tokenization to real-world tokenization systems by exploring multiple approaches, including BPE (Byte Pair Encoding), SentencePiece, and pretrained tokenizers from popular models such as GPT-2, BERT, LLaMA, and T5.

Discover how tokenizers convert text into tokens and token IDs, understand how vocabulary size and domain-specific text affect tokenization, and learn to train custom tokenizers on your own datasets. Work hands-on with essential libraries: sentencepiece for custom tokenizer training, the tokenizers library for BPE, gensim for Word2Vec and FastText embeddings, and transformers for Hugging Face tokenizers.

Explore how embedding layers convert token IDs into vectors, and integrate these tokenization systems into a TinyGPT model built from scratch. Access the complete code through the provided GitHub repositories and follow along with practical examples that highlight the differences between tokenization approaches and their real-world applications in LLM development.
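To give a feel for what the BPE portion of the lecture covers, here is a minimal sketch of the core BPE training loop in plain Python: repeatedly count adjacent symbol pairs and merge the most frequent one. The toy corpus and merge count are illustrative assumptions, not taken from the video.

```python
from collections import Counter

def pair_counts(words):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = Counter()
    for symbols, freq in words.items():
        for pair in zip(symbols, symbols[1:]):
            pairs[pair] += freq
    return pairs

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with its concatenation."""
    merged = Counter()
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] += freq
    return merged

# Toy corpus (illustrative only)
corpus = "low low low lower lowest newer newest".split()
# Start from characters, with an end-of-word marker
words = Counter(tuple(w) + ("</w>",) for w in corpus)

merges = []
for _ in range(6):  # the number of merges is the vocabulary-size knob
    pairs = pair_counts(words)
    if not pairs:
        break
    best = max(pairs, key=pairs.get)  # most frequent adjacent pair
    merges.append(best)
    words = merge_pair(words, best)

print(merges)  # learned merge rules, most frequent first
```

Production tokenizers such as the tokenizers library and sentencepiece build on this same merge loop but add pre-tokenization, byte-level fallback, special tokens, and fast implementations, which is why the tutorial recommends them over hand-rolled code.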

Syllabus

L-3 | LLM Tokenizers Explained: BPE, SentencePiece, Pretrained vs Custom (Full Hands-On Guide)

Taught by

Code With Aarohi

