
Coursera

Fine-tuning Text Models with PEFT

Coursera via Coursera

Overview

Fine-tuning Text Models with PEFT is designed for developers, engineers, and technical product builders who are new to generative AI but already have intermediate machine learning knowledge, basic Python proficiency, and familiarity with development environments such as VS Code, and who want to build, customize, and deploy open generative AI solutions while avoiding vendor lock-in.

The course introduces parameter-efficient fine-tuning (PEFT) methods that make it possible to adapt large language models on limited hardware. Learners start with the foundations of PEFT and Low-Rank Adaptation (LoRA), examining their advantages over full fine-tuning in memory, cost, and flexibility. The course then moves to QLoRA, which combines quantization with LoRA to enable high-performance fine-tuning on consumer GPUs. Learners practice setting up training environments, preparing datasets, optimizing hyperparameters, and managing checkpoints. The final module focuses on evaluation, using metrics such as perplexity, BLEU, ROUGE, and BERTScore to measure improvement. By the end, learners will have implemented a complete fine-tuning pipeline and produced a domain-adapted LLM with performance documentation.
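To make the "parameter-efficient" part concrete, here is a minimal NumPy sketch of the LoRA idea described above: the pretrained weight matrix stays frozen, and only a small low-rank update B·A is trained. The layer size and rank below are illustrative choices, not values from the course.

```python
import numpy as np

# Hypothetical dimensions for a single projection layer.
d_model, rank = 1024, 8

rng = np.random.default_rng(0)

# Frozen pretrained weight: never updated during PEFT.
W = rng.standard_normal((d_model, d_model))

# Trainable low-rank factors. B starts at zero, so the adapted
# layer initially behaves exactly like the pretrained one.
A = rng.standard_normal((rank, d_model)) * 0.01
B = np.zeros((d_model, rank))

def lora_forward(x, scale=1.0):
    """Adapted projection: frozen W @ x plus the low-rank update B @ (A @ x)."""
    return W @ x + scale * (B @ (A @ x))

full_params = W.size           # parameters in the frozen layer
lora_params = A.size + B.size  # parameters actually trained
print(f"trainable fraction: {lora_params / full_params:.2%}")  # ~1.56%
```

The ratio printed at the end is the reason LoRA fits on modest hardware: optimizer state and gradients are needed only for the small A and B factors, not the full weight matrix.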

Syllabus

  • Understanding PEFT and LoRA
    • Learn how to fine-tune large language models with parameter-efficient techniques that make advanced training possible on everyday hardware. You’ll explore the principles and advantages of PEFT, implement QLoRA for practical fine-tuning, and design hyperparameter strategies that balance accuracy and efficiency. You’ll also apply evaluation metrics and build complete pipelines from data preparation to model assessment, gaining hands-on experience with workflows that shape today’s practice while preparing you to adapt as methods continue to advance.
  • Implementing Fine-Tuning with QLoRA
    • See how parameter-efficient fine-tuning (PEFT) concepts form the foundation for QLoRA. You’ll examine QLoRA’s architecture, set up the training environment with the right dependencies, and prepare datasets for efficient fine-tuning on consumer hardware. You’ll also design hyperparameter strategies and manage checkpoints and model versions, gaining hands-on experience with a workflow that plays a central role in modern fine-tuning. Along the way, you’ll strengthen principles that help you adapt as fine-tuning methods continue to advance.
  • Hyperparameter Optimization
    • Focus on the role of hyperparameters in fine-tuning and how to adjust them for the best results. You’ll learn strategies for setting and refining learning rates, batch sizes, and rank values, along with techniques for identifying the “sweet spot” that balances efficiency and accuracy. You’ll also implement checkpointing and manage model versions to track progress and avoid wasted runs. These skills give you the ability to adapt hyperparameter choices to different problems and build stronger, more reliable models.
  • Evaluating Fine-Tuned Models
    • Learn how to evaluate whether your fine-tuned model is bringing value and why benchmarks are critical for proving it. You’ll apply a suite of metrics, such as perplexity, ROUGE, BLEU, and BERTScore, while also using qualitative checks to capture dimensions numbers can miss. You’ll analyze trade-offs in accuracy, inference speed, and memory use, and create dashboards that make results easy to interpret. These practices ensure you can confidently measure performance and deliver fine-tuned models that meet real-world standards.
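Of the metrics named in the evaluation module, perplexity is the simplest to compute: it is the exponential of the mean per-token cross-entropy loss. The sketch below uses made-up loss values purely for illustration; the `perplexity` helper and the numbers are not from the course.

```python
import math

# Hypothetical per-token cross-entropy losses (in nats) from an
# evaluation run, before and after fine-tuning.
losses_base = [3.2, 3.5, 3.1, 3.4]
losses_tuned = [2.6, 2.8, 2.5, 2.7]

def perplexity(token_losses):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    return math.exp(sum(token_losses) / len(token_losses))

print(f"base model perplexity:       {perplexity(losses_base):.1f}")
print(f"fine-tuned model perplexity: {perplexity(losses_tuned):.1f}")
```

A lower perplexity after fine-tuning means the model assigns higher probability to the held-out domain text, which is the quantitative signal the module pairs with qualitative checks.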

Taught by

Professionals from the Industry

