Parameter Efficient Fine-Tuning with Multiple LoRA Adapters for Large Language Models
Discover AI via YouTube
Overview
Dive deep into Parameter Efficient Fine-Tuning (PEFT) with multiple LoRA adapters in this comprehensive technical video. Explore the intricacies of Low-Rank Adaptation (LoRA) and master its various configurations, including all 16 LoRA_config parameters essential for efficient model fine-tuning. Learn to manage multiple PEFT adapters by switching between them and activating or deactivating them on pre-trained Large Language Models (LLMs) or Vision-Language Models (VLMs). Understand the fundamental concepts of matrix factorization and Singular Value Decomposition (SVD), and discover how to combine multiple PEFT-LoRA adapters into a single unified adapter for enhanced model performance.
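The core ideas in the overview can be sketched in plain numpy: a LoRA adapter is a pair of low-rank factors (B, A) whose scaled product is added to a frozen weight matrix, switching adapters just changes which deltas are applied, and several adapters can be merged into one by summing their deltas and re-factoring with a truncated SVD. This is an illustrative sketch, not the PEFT library's internals; the names `make_adapter`, `effective_weight`, and the specific shapes are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of a hypothetical linear layer (d_out x d_in).
d_out, d_in, r = 8, 6, 2
W = rng.normal(size=(d_out, d_in))

def make_adapter(rank, alpha):
    # LoRA init: A small random, B zeros, so the initial delta is zero.
    A = rng.normal(size=(rank, d_in)) * 0.01
    B = np.zeros((d_out, rank))
    return {"A": A, "B": B, "alpha": alpha, "r": rank}

def effective_weight(W, adapters, active):
    """Effective weight: frozen W plus (alpha/r) * B @ A for each active adapter."""
    W_eff = W.copy()
    for name in active:
        ad = adapters[name]
        W_eff += (ad["alpha"] / ad["r"]) * ad["B"] @ ad["A"]
    return W_eff

adapters = {"task_a": make_adapter(r, alpha=16),
            "task_b": make_adapter(r, alpha=16)}

# Stand-in for training: give each adapter a nonzero B.
for ad in adapters.values():
    ad["B"] = rng.normal(size=(d_out, ad["r"])) * 0.1

# Deactivating all adapters recovers the frozen base model exactly.
assert np.allclose(effective_weight(W, adapters, []), W)

# "Switching" adapters = choosing which deltas are added.
W_task_a = effective_weight(W, adapters, ["task_a"])

# Merging: sum the scaled deltas into one matrix, then re-factor it with
# a truncated SVD into a single low-rank adapter (B_merged, A_merged).
delta = sum((ad["alpha"] / ad["r"]) * ad["B"] @ ad["A"]
            for ad in adapters.values())
U, S, Vt = np.linalg.svd(delta, full_matrices=False)
k = 2 * r                      # rank budget for the merged adapter
B_merged = U[:, :k] * S[:k]    # fold the singular values into B
A_merged = Vt[:k]

# delta has rank <= 2r, so the rank-2r factorization reconstructs it.
assert np.allclose(B_merged @ A_merged, delta, atol=1e-8)
```

The SVD step is the matrix-factorization idea mentioned in the overview: because each adapter's delta has rank at most r, the sum of two adapters has rank at most 2r and can be represented exactly by one adapter of that rank.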
Syllabus
PEFT w/ Multi LoRA explained (LLM fine-tuning)
Taught by
Discover AI