

Math Behind LLMs, Transformers and Modern Computer Vision

via Udemy

Overview

From Multi-Head Attention and Embeddings to Transformers, Vision Transformers, Modern Image Segmentation + LLMs and SAM

What you'll learn:
  • Mathematics Behind Large Language Models
  • Modern Image Segmentation
  • Positional Encodings
  • Compare CNNs and Vision Transformers mathematically
  • Compute prompt self-attention and image–prompt cross-attention
  • Multi-Head Attention
  • Query, Key, and Value matrices
  • Attention Masks
  • Masked Language Modeling
  • Dot Products and Vector Alignments
  • Nature of Sine and Cosine functions in Positional Encodings
  • How models like ChatGPT work under the hood
  • Bidirectional Models
  • Context-aware word representations
  • Vision Transformers
  • Word Embeddings
  • How dot products work
  • Modern Computer Vision
  • Understand quadratic complexity in Vision Transformers
  • Matrix multiplication
  • Programmatically create tokens
  • Derive self-attention, multi-head attention, and cross-attention from scratch
  • Analyze the full Vision Transformer pipeline
  • Break down the mathematics of Meta’s Segment Anything Model (SAM)
  • Understand prompt encoders in modern segmentation models

Welcome to Math Behind LLMs, Transformers and Modern Computer Vision, a rigorous deep dive into the mathematical foundations powering today’s most advanced AI systems.

This course is designed for learners who want more than intuition. We derive and analyze the core equations behind Large Language Models, Vision Transformers, and modern image segmentation systems.

You will begin with tokenization and embedding mathematics, understanding how raw text becomes high-dimensional vector representations through algorithms like WordPiece. From there, we mathematically unpack the heart of transformer architectures: query, key, and value matrices, attention score computation, scaling behavior, and multi-head attention.
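The query/key/value computation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the course's code: the sequence length, model dimension, and random projection matrices are placeholder assumptions standing in for learned parameters.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V, the core transformer equation."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # (seq, seq) attention scores
    scores -= scores.max(axis=-1, keepdims=True)      # subtract max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted sum of value vectors

# Toy setup: 3 tokens, d_model = 4 (random weights stand in for trained ones)
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))                           # token embeddings
W_q, W_k, W_v = (rng.normal(size=(4, 4)) for _ in range(3))
out = scaled_dot_product_attention(X @ W_q, X @ W_k, X @ W_v)
print(out.shape)  # one context-aware vector per token
```

Multi-head attention repeats this computation with several independent Q/K/V projections and concatenates the results; the division by sqrt(d_k) is the "scaling behavior" the paragraph refers to.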

We examine attention masks, contextual encoding, and positional encodings — including the sine and cosine formulations that preserve sequence structure. You’ll build strong geometric intuition around vectors, dot products, cosine similarity, and dense embeddings.
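The sine/cosine formulation and the dot-product geometry mentioned above can be sketched as follows; the sequence length and dimension are illustrative choices, not values from the course.

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d)); PE[pos, 2i+1] = cos(same angle)."""
    pos = np.arange(seq_len)[:, None]            # (seq_len, 1) positions
    i = np.arange(d_model // 2)[None, :]         # (1, d_model/2) frequency index
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.empty((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                 # even dimensions: sine
    pe[:, 1::2] = np.cos(angles)                 # odd dimensions: cosine
    return pe

def cosine_similarity(u, v):
    """Dot product normalized by vector lengths: cosine of the angle between u and v."""
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

pe = sinusoidal_positional_encoding(seq_len=50, d_model=16)
print(pe.shape)
# Nearby positions get similar encodings; distant positions drift apart,
# which is how the sinusoids preserve sequence structure.
```

Because each sine/cosine pair shares an angle, every row satisfies sin^2 + cos^2 = 1 per frequency, giving all positions the same norm.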

The course then expands beyond language.

You’ll compare Convolutional Neural Networks with Vision Transformers, analyze quadratic attention operations, and walk through the complete Vision Transformer pipeline from patch embeddings to final predictions.
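A quick back-of-envelope calculation shows where the quadratic cost comes from. The 224x224 image and 16x16 patch size below are the common ViT-Base defaults, used here only as an assumed example configuration.

```python
# Why ViT self-attention is quadratic in the number of tokens.
image_size, patch_size = 224, 16                 # assumed ViT-Base-style config
num_patches = (image_size // patch_size) ** 2    # 14 * 14 = 196 patches
seq_len = num_patches + 1                        # +1 for the [CLS] token

# Every attention head builds a (seq_len x seq_len) score matrix, so both
# compute and memory scale as O(n^2) in the token count.
attention_entries = seq_len ** 2
print(num_patches, attention_entries)
```

Doubling the image resolution quadruples the patch count and thus multiplies the attention cost by sixteen, which is why resolution scaling is a central concern in ViT design.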

In an advanced section, we dissect the mathematics behind Meta’s Segment Anything Model (SAM). You will explore prompt encoders, self-attention, cross-attention between prompts and images, attention score computation in segmentation models, and how these systems are trained at scale.
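The prompt-to-image cross-attention described above differs from self-attention only in where the matrices come from: queries are built from prompt tokens, while keys and values come from image embeddings. The sketch below is a generic illustration with made-up shapes, not SAM's actual architecture or dimensions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # shift for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Cross-attention: Q from prompt tokens, K and V from image patch embeddings.
# All sizes below are illustrative assumptions.
rng = np.random.default_rng(1)
d = 8
prompts = rng.normal(size=(2, d))             # e.g. 2 point-prompt tokens
image = rng.normal(size=(64, d))              # e.g. an 8x8 grid of patch embeddings

scores = prompts @ image.T / np.sqrt(d)       # (2, 64) prompt-to-patch scores
weights = softmax(scores)                     # each prompt attends over all patches
attended = weights @ image                    # (2, d) image-informed prompt tokens
print(attended.shape)
```

Each prompt token ends up as a weighted mixture of image patches, which is how a click or box prompt gathers the visual evidence needed to produce a mask.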

By the end of this course, you won’t just understand how transformers work — you will understand why they work at the equation level across language and vision.

If you aim to build deep technical mastery and develop the mathematical intuition required for cutting-edge AI research and engineering, this course will elevate your expertise.

Syllabus

  • Course Overview
  • Tokenization and Multidimensional Word Embeddings
  • Positional Encodings
  • Attention Mechanism and Transformer Architecture

Taught by

Patrik Szepesi

Reviews

4.5 rating at Udemy based on 1074 ratings


