Overview
Explore how Parameter-Efficient Fine-Tuning (PEFT) revolutionizes large language model adaptation by training only 0.1-2% of a model's parameters while achieving performance comparable to full fine-tuning, in this 12-minute conference talk. Learn how PEFT reduces memory requirements by 3-4x and shrinks checkpoint sizes from gigabytes to megabytes, making LLM fine-tuning accessible even with limited computational resources. Discover the PyTorch architectural features, including the module system and the autograd engine, that enable practical PEFT implementation, and examine popular methods such as LoRA and Prefix Tuning. Understand how PyTorch's nn.ModuleDict facilitates dynamic adapter management and how custom CUDA extensions optimize performance for efficient model adaptation. Gain practical knowledge for implementing PEFT methods and leveraging PyTorch's advanced features to overcome a significant barrier: fine-tuning billion-parameter models traditionally requires over 80GB of GPU memory.
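To make the ideas above concrete, here is a minimal sketch (not from the talk itself) of how a LoRA adapter can wrap a frozen `nn.Linear` layer and how `nn.ModuleDict` can hold multiple task-specific adapters over one shared backbone. The class name `LoRALinear`, the ranks, and the adapter keys are illustrative assumptions, not API from the talk:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA adapter: the pretrained weight stays frozen;
    only the low-rank matrices A and B are trained."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pretrained weights
            p.requires_grad = False
        self.scaling = alpha / rank
        # Low-rank update delta_W = B @ A with A: (rank, in), B: (out, rank).
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # B is zero-initialized, so at the start of training the adapted
        # layer computes exactly the same output as the base layer.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# nn.ModuleDict lets several adapters share one frozen backbone and be
# swapped dynamically by name (the task names here are hypothetical).
base = nn.Linear(768, 768)
adapters = nn.ModuleDict({
    "summarization": LoRALinear(base, rank=8),
    "translation": LoRALinear(base, rank=4),
})

trainable = sum(p.numel() for p in adapters["summarization"].parameters()
                if p.requires_grad)
total = sum(p.numel() for p in adapters["summarization"].parameters())
print(f"trainable fraction: {trainable / total:.3%}")
```

Because only `lora_A` and `lora_B` carry gradients, optimizer state and checkpoints cover just the adapter tensors, which is where the megabyte-scale checkpoint sizes mentioned above come from.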
Syllabus
Memory-Efficient AI: How PEFT and PyTorch Enable Accessible LLM Fine-Tuning - DevConf.IN 2026
Taught by
DevConf