
Coursera

Fine-Tune & Optimize Generative AI Models

Coursera via Coursera

Overview

In today’s AI-driven world, optimizing large language models for specific domains while managing cost is a key competitive skill. This course trains AI engineers, ML practitioners, and data scientists to transform baseline generative models into efficient, production-ready solutions. Through hands-on labs using Hugging Face Transformers, PEFT, and Evaluate, you’ll master decoding strategies (temperature, top-k, top-p, beam search), automated evaluation (BLEU, ROUGE, BERTScore, and custom metrics), and parameter-efficient fine-tuning with LoRA, which cuts trainable parameters by roughly 99% without sacrificing quality. Real-world projects cover fine-tuning 7B+ models for legal, medical, and financial applications while analyzing GPU and inference costs. The capstone simulates real constraints (limited GPU memory, latency, budget, and compliance) and requires technical, analytical, and executive deliverables. By course end, you’ll confidently optimize and evaluate LLMs, balancing quality, performance, and cost for advanced roles in LLM engineering, MLOps, and AI product development.

Participants should have basic proficiency in Python, an understanding of machine learning fundamentals, and familiarity with natural language processing (NLP) concepts and machine learning frameworks to fully engage with the course content.

Syllabus

  • Understanding and Controlling Generative Model Outputs
    • This module introduces learners to decoding strategies and parameters that control how generative AI models produce text. Learners will explore the mechanics of temperature, top-k, top-p sampling, and beam search, understanding how these parameters influence output diversity, coherence, and relevance. Through hands-on experimentation, learners will gain practical skills in tuning these parameters for different use cases.
  • Evaluating Generative AI Output Quality
    • This module equips learners with systematic approaches to evaluate AI-generated text using automated metrics and evaluation frameworks. Learners will explore metrics like BLEU, ROUGE, perplexity, BERTScore, and task-specific evaluation methods, understanding both their capabilities and limitations. The module emphasizes when automated metrics suffice and when human evaluation remains essential.
  • Parameter-Efficient Fine-Tuning for Domain Adaptation
    • This module introduces learners to parameter-efficient fine-tuning (PEFT) techniques that enable domain adaptation of large language models without the computational and memory costs of full fine-tuning. Learners will explore methods like LoRA, prefix tuning, and adapter layers, understanding the cost-performance trade-offs and practical implementation strategies for real-world applications.
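To make the first module's decoding parameters concrete, here is a minimal, self-contained sketch of how temperature, top-k, and top-p (nucleus) sampling filter a model's next-token distribution. The `sample_next_token` helper is hypothetical and written from scratch for illustration; in practice these knobs are passed to a library such as Hugging Face Transformers' `generate` method.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=None, top_p=None):
    """Sample a token id from raw logits.

    Illustrative helper (not a real library API) showing how the
    decoding knobs from the module interact:
      - temperature rescales logits (lower = sharper, more greedy)
      - top_k keeps only the k most likely tokens
      - top_p keeps the smallest set whose probability mass >= p
    """
    # Temperature: scale logits before the softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]  # shift by max for stability
    total = sum(exps)
    probs = [e / total for e in exps]

    # Rank token ids by probability, highest first.
    ranked = sorted(enumerate(probs), key=lambda pair: -pair[1])

    if top_k is not None:
        ranked = ranked[:top_k]
    if top_p is not None:
        kept, cum = [], 0.0
        for tok, p in ranked:  # nucleus: stop once cumulative mass >= top_p
            kept.append((tok, p))
            cum += p
            if cum >= top_p:
                break
        ranked = kept

    # Renormalize the truncated distribution and sample from it.
    mass = sum(p for _, p in ranked)
    r, cum = random.random() * mass, 0.0
    for tok, p in ranked:
        cum += p
        if cum >= r:
            return tok
    return ranked[-1][0]
```

With `top_k=1` this reduces to greedy decoding (always the argmax), while a high temperature with no truncation spreads probability mass across many tokens, producing more diverse but less coherent output.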
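The overlap metrics in the evaluation module can be demystified with a small from-scratch example. The function below computes ROUGE-1 F1 (unigram precision/recall overlap) between a candidate and a reference; it is an illustrative simplification, and real coursework would use the Hugging Face Evaluate library rather than this hand-rolled version.

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall.

    Simplified sketch using whitespace tokenization; production
    evaluation should rely on a maintained metrics library.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

A perfect match scores 1.0 and disjoint texts score 0.0. Note the module's caveat: surface-overlap metrics like this miss paraphrases ("attorney" vs. "lawyer"), which is why embedding-based scores such as BERTScore, and ultimately human evaluation, remain essential.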
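The "99% fewer trainable parameters" claim for LoRA follows directly from its low-rank factorization: instead of updating a full weight matrix W (d_out x d_in), LoRA trains two small matrices B (d_out x r) and A (r x d_in) and applies W + BA. This sketch (hypothetical helper, not the PEFT library API) shows the arithmetic for a single 4096x4096 layer at rank 8, a typical setting for 7B-class models:

```python
def lora_param_counts(d_in: int, d_out: int, r: int):
    """Compare trainable-parameter counts for full fine-tuning of one
    weight matrix vs. a rank-r LoRA update W + B @ A.

    B has shape (d_out, r), A has shape (r, d_in); only A and B train.
    Illustrative arithmetic only; real setups use peft.LoraConfig.
    """
    full = d_in * d_out            # every entry of W is trainable
    lora = r * (d_in + d_out)      # only the two low-rank factors train
    fraction_saved = 1 - lora / full
    return full, lora, fraction_saved

full, lora, saved = lora_param_counts(4096, 4096, 8)
# full = 16,777,216; lora = 65,536; saved ≈ 0.996 (about 99.6% fewer)
```

Because the saving ratio is roughly 2r/d for square layers, the cost-performance trade-off the module discusses comes down to choosing the rank r: larger r captures more domain-specific variation but narrows the memory advantage.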

Taught by

Sonali Sen Baidya and Starweaver

