Overview
Explore the fundamental role of tokenization in large language models in this 27-minute conference talk, which examines how LLMs process and predict human and machine text as sequences of tokens. Learn what tokens are, how they represent text, and why tokenization is key to understanding how LLMs generate output. The talk surveys word-based, subword-based, and character-level tokenization algorithms, with detailed coverage of widely used methods such as Byte Pair Encoding and WordPiece, and shows how tokenization choices shape the trade-off between correctness and computational performance that underpins every LLM's text processing.
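To make the Byte Pair Encoding idea mentioned above concrete, here is a minimal sketch of the classic BPE training loop: repeatedly count adjacent symbol pairs in a word-frequency corpus and merge the most frequent pair into a new token. The corpus and number of merges are hypothetical toy values, not anything from the talk.

```python
from collections import Counter

def get_pair_counts(words):
    """Count adjacent symbol pairs across a word -> frequency corpus."""
    pairs = Counter()
    for word, freq in words.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, words):
    """Replace every occurrence of the pair with its merged symbol."""
    old = " ".join(pair)
    new = "".join(pair)
    return {word.replace(old, new): freq for word, freq in words.items()}

def learn_bpe(words, num_merges):
    """Learn a sequence of merge rules greedily, most frequent pair first."""
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_counts(words)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        words = merge_pair(best, words)
        merges.append(best)
    return merges, words

# Toy corpus: words pre-split into characters, with frequencies (hypothetical).
corpus = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}
merges, segmented = learn_bpe(corpus, 3)
print(merges)  # e.g. first merge is ('e', 's'), the most frequent pair
```

The vocabulary size (here, the number of merges) is the knob the talk's correctness-versus-performance trade-off turns on: more merges yield longer subword tokens and shorter sequences, at the cost of a larger vocabulary.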
Syllabus
Which Is To Be Master? Understanding LLM Tokenization - DevConf.US 2025
Taught by
DevConf