
YouTube

Low Level Technicals of LLMs - Analysis, Finetuning, and Deep Technical Implementation

AI Engineer via YouTube

Overview

Learn the low-level technical aspects of Large Language Models through this comprehensive workshop covering debugging, fine-tuning, and mathematical foundations. Dive deep into analyzing and fixing bugs in popular models such as Gemma, Phi-3, and Llama, and address the tokenizer issues that commonly arise in production environments.

Master advanced fine-tuning techniques using Unsloth, including continued pretraining, reward modeling, and QLoRA optimization methods that achieve 2x faster training with 70% less VRAM usage. Explore the mathematical underpinnings of LLMs by hand-deriving derivatives and learning state-of-the-art fine-tuning tricks used by industry professionals.

Gain practical experience through hands-on exercises that require Python with PyTorch and Unsloth, with the option to use Google Colab or Kaggle for cloud-based development. Benefit from insights shared by Daniel Han, the algorithms expert behind Unsloth, who has identified and resolved critical bugs in major models, including 8 Google Gemma bugs, Phi-3 SWA issues, and Llama-3 tokenization problems, drawing on his experience at NVIDIA optimizing GPU algorithms and helping NASA engineers process Mars rover data more efficiently.
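The hand-derivation of derivatives mentioned above can be illustrated with a small, self-contained sketch (not taken from the course materials): the well-known gradient of softmax cross-entropy, dL/dz_i = p_i - 1[i = target], checked against a finite-difference approximation in plain Python.

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(z, target):
    """Loss = -log(softmax(z)[target])."""
    return -math.log(softmax(z)[target])

def analytic_grad(z, target):
    """Hand-derived gradient: dL/dz_i = p_i - 1[i == target]."""
    p = softmax(z)
    return [p_i - (1.0 if i == target else 0.0) for i, p_i in enumerate(p)]

def numeric_grad(z, target, eps=1e-6):
    """Central finite differences, used here only to sanity-check the derivation."""
    grads = []
    for i in range(len(z)):
        zp = list(z); zp[i] += eps
        zm = list(z); zm[i] -= eps
        grads.append((cross_entropy(zp, target) - cross_entropy(zm, target)) / (2 * eps))
    return grads

logits = [2.0, -1.0, 0.5]  # arbitrary example logits
analytic = analytic_grad(logits, target=0)
numeric = numeric_grad(logits, target=0)
assert all(abs(a - n) < 1e-5 for a, n in zip(analytic, numeric))
```

Verifying an analytic gradient against finite differences in this way is the same sanity check practitioners apply when implementing custom backward passes.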

Syllabus

Low Level Technicals of LLMs: Daniel Han

Taught by

AI Engineer

