YouTube

Fine-tune LLMs from Kaggle Models using (Q)LoRA

Data Science Conference via YouTube

Overview

Discover how to fine-tune Large Language Models available in the Kaggle environment using quantized Low-Rank Adaptation (QLoRA) in this 32-minute conference talk from DSC EUROPE 24 in Belgrade. Gabriel Preda demonstrates practical techniques for fine-tuning LLMs efficiently, showing how to leverage Kaggle's tools and resources to streamline the fine-tuning process and improve model performance. Learn implementation strategies you can apply to adapt language models while working within the Kaggle ecosystem.
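To give a flavor of the approach the talk covers, here is a minimal configuration sketch of a typical QLoRA setup using the Hugging Face `transformers`, `peft`, and `bitsandbytes` libraries. This is an illustrative example, not code from the talk: the model id, rank, and target modules are assumptions chosen for demonstration, and running it requires a GPU and access to the model weights.

```python
# Illustrative QLoRA setup (assumptions, not the speaker's code):
# load a base model with 4-bit quantized weights, then attach
# trainable low-rank adapters so only a small fraction of
# parameters is updated during fine-tuning.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# The "Q" in QLoRA: quantize the frozen base weights to 4-bit NF4,
# while computations run in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b",  # example checkpoint; Kaggle Models hosts similar ones
    quantization_config=bnb_config,
    device_map="auto",
)

# The "LoRA" part: small rank-r update matrices on the attention
# projections are the only trainable weights.
lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank updates
    lora_alpha=16,                        # scaling factor for the updates
    target_modules=["q_proj", "v_proj"],  # which layers get adapters
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total params
```

After this setup, the wrapped model can be passed to a standard training loop or a `Trainer`; memory use stays low because gradients flow only through the adapter weights.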

Syllabus

Fine-tune LLMs from Kaggle Models using (Q)LoRA | Gabriel Preda | DSC EUROPE 24

Taught by

Data Science Conference

