Overview
Explore how Parameter-Efficient Fine-Tuning (PEFT) revolutionizes large language model adaptation by training only 0.1-2% of model parameters while achieving performance comparable to full fine-tuning, in this 12-minute conference talk. Learn how PEFT reduces memory requirements by 3-4x and shrinks checkpoint sizes from gigabytes to megabytes, making LLM fine-tuning accessible even with limited computational resources. Discover the PyTorch architectural features, including the module system and autograd engine, that enable practical PEFT implementation, and examine popular methods such as LoRA and Prefix Tuning. Understand how PyTorch's nn.ModuleDict facilitates dynamic adapter management and how custom CUDA extensions optimize performance for efficient model adaptation. Gain practical knowledge for implementing PEFT methods and leveraging PyTorch's advanced features to overcome the significant barrier of fine-tuning billion-parameter models, which traditionally require over 80GB of GPU memory.
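As a rough illustration of the LoRA idea the talk covers (the talk itself is not reproduced here), the sketch below shows the core trick: freeze a pretrained weight matrix W and train only two small low-rank factors A and B, so the adapted weight is W + B @ A. All dimensions, names, and the NumPy implementation are illustrative assumptions, not the speaker's code; a real implementation would use PyTorch modules.

```python
import numpy as np

# Illustrative LoRA sketch (assumed dimensions, not from the talk).
d_out, d_in, rank = 1024, 1024, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # small random init
B = np.zeros((d_out, rank))              # zero init: adapter starts as a no-op

# Before any training, B @ A is all zeros, so behavior is unchanged.
W_adapted = W + B @ A

# Only A and B would be trained; compare their size to the full matrix.
full_params = W.size
lora_params = A.size + B.size
fraction = lora_params / full_params
print(f"trainable fraction: {fraction:.4%}")
```

For this toy 1024x1024 matrix with rank 8, the trainable fraction is 16/1024 (about 1.56%), consistent with the 0.1-2% range quoted in the overview; on billion-parameter models with low ranks the fraction is smaller still, which is what shrinks checkpoints from gigabytes to megabytes.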
Syllabus
Memory-Efficient AI: How PEFT and PyTorch Enable Accessible LLM Fine-Tuning - DevConf.IN 2026
Taught by
DevConf