Overview
Discover how to fine-tune Large Language Models available in the Kaggle environment using quantized Low-Rank Adaptation (QLoRA) in this 32-minute conference talk from DSC EUROPE 24 in Belgrade. Gabriel Preda demonstrates practical techniques for fine-tuning LLMs efficiently, offering insight into how Kaggle's tools and resources can streamline the fine-tuning process and improve model performance. Learn implementation strategies for adapting language models while working within the Kaggle ecosystem.
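As background for the talk: LoRA freezes the base model's weights and learns a low-rank update ΔW = B·A, which is why fine-tuning fits in constrained environments like Kaggle notebooks. A minimal sketch of the parameter savings, using illustrative layer sizes (the 4096×4096 projection and rank 16 are assumed examples, not figures from the talk):

```python
# LoRA replaces a full d_out x d_in weight update with two low-rank
# factors B (d_out x r) and A (r x d_in), so only r * (d_out + d_in)
# parameters are trained instead of d_out * d_in.

def full_update_params(d_out: int, d_in: int) -> int:
    """Trainable parameters for a full fine-tune of one weight matrix."""
    return d_out * d_in

def lora_params(d_out: int, d_in: int, r: int) -> int:
    """Trainable parameters for a rank-r LoRA update of the same matrix."""
    return r * (d_out + d_in)

# Example: one 4096x4096 attention projection, rank 16 (assumed values)
d, rank = 4096, 16
full = full_update_params(d, d)   # 16,777,216 trainable params
lora = lora_params(d, d, rank)    # 131,072 trainable params
print(f"reduction: {full / lora:.0f}x")  # prints "reduction: 128x"
```

QLoRA goes one step further by storing the frozen base weights in 4-bit quantized form, so only the small LoRA factors are kept in higher precision during training.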
Syllabus
Fine-tune LLMs from Kaggle Models using (Q)LoRA | Gabriel Preda | DSC EUROPE 24
Taught by
Data Science Conference