Low Level Technicals of LLMs - Analysis, Finetuning, and Deep Technical Implementation
AI Engineer via YouTube
Overview
Learn the low-level technical aspects of Large Language Models in this comprehensive workshop covering debugging, fine-tuning, and mathematical foundations. Dive deep into analyzing and fixing bugs in popular models such as Gemma, Phi-3, and Llama, and address tokenizer issues that commonly arise in production environments.

Master advanced fine-tuning techniques using Unsloth, including continued pretraining, reward modeling, and QLoRA optimization methods that achieve 2x faster training with 70% less VRAM usage. Explore the mathematical underpinnings of LLMs by hand-deriving derivatives and learning state-of-the-art fine-tuning tricks used by industry professionals. Gain practical experience through hands-on exercises that require Python with PyTorch and Unsloth, with the option to use Google Colab or Kaggle for cloud-based development.

Benefit from insights shared by Daniel Han, the algorithms expert behind Unsloth, who has identified and resolved critical bugs in major models, including 8 Google Gemma bugs, Phi-3 sliding window attention (SWA) issues, and Llama-3 tokenization problems. He draws on his experience at NVIDIA optimizing GPU algorithms and on helping NASA engineers process Mars rover data more efficiently.
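To illustrate the kind of hand-derivation exercise the workshop covers, here is a minimal sketch (in plain Python, not taken from the workshop materials) that checks the hand-derived gradient of softmax cross-entropy, dL/dz_i = p_i − 1[i = y], against a finite-difference approximation. All function names here are illustrative.

```python
import math

def softmax(z):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(z)
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def cross_entropy(z, y):
    # Loss for logits z with true class index y.
    return -math.log(softmax(z)[y])

def analytic_grad(z, y):
    # Hand-derived result: dL/dz_i = p_i - 1[i == y].
    p = softmax(z)
    return [p[i] - (1.0 if i == y else 0.0) for i in range(len(z))]

def numeric_grad(z, y, eps=1e-6):
    # Central finite differences as an independent check.
    g = []
    for i in range(len(z)):
        zp, zm = z[:], z[:]
        zp[i] += eps
        zm[i] -= eps
        g.append((cross_entropy(zp, y) - cross_entropy(zm, y)) / (2 * eps))
    return g

z, y = [2.0, -1.0, 0.5], 0
assert all(abs(a - n) < 1e-5
           for a, n in zip(analytic_grad(z, y), numeric_grad(z, y)))
```

The same check generalizes to any hand-derived gradient: compare the closed-form expression against central differences before trusting it inside a larger backward pass.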
Syllabus
Low Level Technicals of LLMs: Daniel Han
Taught by
AI Engineer