Overview
Discover how to fine-tune Large Language Models available in the Kaggle environment using Quantized Low-Rank Adaptation (QLoRA) in this 32-minute conference talk from DSC EUROPE 24 in Belgrade. Gabriel Preda demonstrates practical techniques for fine-tuning LLMs efficiently, offering insight into how Kaggle's tools and resources can streamline the fine-tuning process and improve model performance. Learn implementation strategies for adapting language models while working within the Kaggle ecosystem.
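To make the core idea concrete, below is a minimal NumPy sketch of the low-rank update that LoRA (and, by extension, QLoRA) applies to a frozen weight matrix. The dimensions, rank, and scaling factor are illustrative assumptions, not values from the talk; QLoRA additionally stores the base weights in 4-bit precision, which this sketch does not model.

```python
import numpy as np

# Hypothetical dimensions for illustration (not from the talk).
d_out, d_in, r = 64, 64, 8      # rank r << d is the low-rank bottleneck
alpha = 16                      # LoRA scaling factor

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen base weight (not trained)
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-init

# Effective weight: W' = W + (alpha / r) * B @ A
# Because B starts at zero, W' == W at initialization, so fine-tuning
# begins from the unmodified pretrained model.
W_eff = W + (alpha / r) * (B @ A)

# Parameter savings: only A and B are trained, not all of W.
full_params = W.size            # 64 * 64 = 4096
lora_params = A.size + B.size   # 8 * 64 + 64 * 8 = 1024
print(full_params, lora_params)
```

At larger model scales the savings are far more dramatic, since `r` stays small while the weight matrices grow; this is what makes fine-tuning feasible within Kaggle's GPU limits.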
Syllabus
Fine-tune LLMs from Kaggle Models using (Q)LoRA | Gabriel Preda | DSC EUROPE 24
Taught by
Data Science Conference