Fine-tuning LLMs Without Maxing Out Your GPU - LoRA for Parameter-Efficient Training
Data Centric via YouTube
Overview
Learn how to use LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning of large language models in this 47-minute video. Follow along as the instructor fine-tunes RoBERTa to classify consumer finance complaints on Google Colab with a V100 GPU. The walkthrough covers the end-to-end process and includes access to a detailed notebook and technical blog post. Discover how to reduce GPU memory usage while still fine-tuning effectively, and explore additional resources on building LLM-powered applications, understanding precision and recall, and booking consultations for further guidance.
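The core idea behind LoRA is that instead of updating a full pretrained weight matrix W, you freeze W and train a low-rank update (alpha/r)·B·A, where A and B have only r rows/columns each. A minimal NumPy sketch of that idea is below; note this is illustrative only (the video uses the Hugging Face `peft` library with RoBERTa, and the dimensions, rank, and alpha here are assumed values, not taken from the video):

```python
import numpy as np

# Illustrative LoRA adapter on a single linear layer.
# d, k match RoBERTa-base hidden size; rank r and alpha are assumptions.
d, k, r = 768, 768, 8
alpha = 16  # LoRA scaling factor (hypothetical choice)

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # zero-init, so training starts at W

def lora_forward(x):
    # y = x W^T + (alpha / r) * x A^T B^T : adds a rank-r update to the frozen layer
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

full_params = W.size          # parameters updated by full fine-tuning
lora_params = A.size + B.size # parameters updated by LoRA
print(f"trainable: {lora_params} vs {full_params} "
      f"({100 * lora_params / full_params:.2f}%)")
```

Because B starts at zero, the adapted layer initially computes exactly the frozen layer's output, and only the small A and B matrices (about 2% of the layer's parameters at rank 8) receive gradients, which is what keeps GPU memory low.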
Syllabus
Fine-tune your LLMs, Without Maxing out Your GPU!
Taught by
Data Centric