AI Mastery: LLMs Explained with Math (Transformers, Attention Mechanisms & More)

via Zero To Mastery

Overview

Unlock the mathematics behind transformers such as GPT and BERT. This course walks through tokenization, attention mechanisms, positional encodings, and embeddings step by step, so you can understand, build on, and innovate with modern language models. You will learn:
  • How tokenization transforms text into model-readable data
  • The inner workings of attention mechanisms in transformers (a worked attention sketch follows the syllabus below)
  • How positional encodings preserve sequence order in AI models (see the sketch after this list)
  • The role of matrices in encoding and processing language
  • Building dense word representations with multi-dimensional embeddings
  • Differences between bidirectional and masked language models
  • Practical applications of dot products and vector mathematics in AI
  • How transformers process, understand, and generate human-like text
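
As a taste of the positional-encoding material, here is a minimal NumPy sketch of the sinusoidal encodings from the original Transformer paper ("Attention Is All You Need"). The formula is standard; the sequence length and embedding size below are illustrative choices, not values taken from the course.

```python
import numpy as np

def positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal positional encodings.

    PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))  -> even indices
    PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))  -> odd indices
    """
    positions = np.arange(seq_len)[:, np.newaxis]      # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]     # the 2i values, (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)

    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even embedding indices use sine
    pe[:, 1::2] = np.cos(angles)   # odd embedding indices use cosine
    return pe

# Illustrative example: a 10-token sequence with 16-dimensional embeddings.
print(positional_encoding(10, 16).shape)  # (10, 16)
```

Sine is applied at even embedding indices and cosine at odd ones, which is why the syllabus below treats even and odd indices separately.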

Syllabus

  •   Introduction
    • AI Mastery: LLMs Explained with Math
    • Exercise: Meet Your Classmates and Instructor
  •   Introduction to Tokenization and Encodings
    • Creating Our Optional Experiment Notebook - Part 1
    • Creating Our Optional Experiment Notebook - Part 2
    • Encoding Categorical Labels to Numeric Values
    • Understanding the Tokenization Vocabulary
    • Encoding Tokens
    • Practical Example of Tokenization and Encoding
  •   Embeddings and Positional Encodings
    • DistilBERT vs. BERT Differences
    • Embeddings In A Continuous Vector Space
    • Introduction To Positional Encodings
    • Positional Encodings - Part 1
    • Positional Encodings - Part 2 (Even and Odd Indices)
    • Why Use Sine and Cosine Functions
    • Understanding the Nature of Sine and Cosine Functions
    • Visualizing Positional Encodings in Sine and Cosine Graphs
    • Solving the Equations to Get the Values for Positional Encodings
  •   Attention Mechanism, Multi-Head Attention, Masked Language Modeling, and More
    • Introduction to Attention Mechanism
    • Query, Key, and Value Matrices
    • Getting Started with Our Step-by-Step Attention Calculation
    • Calculating Key Vectors
    • Query Matrix Introduction
    • Calculating Raw Attention Scores
    • Understanding the Mathematics Behind Dot Products and Vector Alignment
    • Visualizing Raw Attention Scores in 2D
    • Converting Raw Attention Scores to Probability Distributions with Softmax
    • Normalization
    • Understanding the Value Matrix and Value Vector
    • Calculating the Final Context Aware Rich Representation for the Word "River"
    • Understanding the Output
    • Understanding Multi-Head Attention
    • Multi-Head Attention Example and Subsequent Layers
    • Masked Language Modeling
  •   Where To Go From Here?
    • Review This Byte!
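
For a concrete feel for the step-by-step attention calculation covered above (key vectors, raw attention scores, softmax, and the value matrix), here is a minimal NumPy sketch of single-head scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. The toy matrices and dimensions are invented for illustration and are not the course's own examples.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    # Subtract each row's max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(X, W_q, W_k, W_v):
    """Single-head scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    Q = X @ W_q                      # query vectors, one per token
    K = X @ W_k                      # key vectors
    V = X @ W_v                      # value vectors
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # raw attention scores, scaled
    weights = softmax(scores)        # each row becomes a probability distribution
    return weights @ V               # context-aware representations

# Toy example: 3 tokens, 4-dimensional embeddings, 2-dimensional head.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))                        # token embeddings (illustrative)
W_q, W_k, W_v = (rng.normal(size=(4, 2)) for _ in range(3))
print(attention(X, W_q, W_k, W_v).shape)           # (3, 2)
```

Multi-head attention repeats this same computation with several independent W_q, W_k, W_v triples and concatenates the results before the subsequent layers.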

Taught by

Patrik Szepesi

