Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

Coursera

Generative AI: Fine-Tuning LLMs and Diffusion Models

Board Infinity via Coursera

Overview

Master Generative AI with hands-on training in Large Language Models (LLMs), PEFT techniques (LoRA, QLoRA), and Diffusion Models using Hugging Face's diffusers, peft, trl, and bitsandbytes libraries. This course takes you from the internals of decoder-only transformers to building a specialist fine-tuned LLM and generating high-quality, controllable images with ControlNet.

In Module 1, you explore decoder-only transformer architectures: self-attention, causal masking, KV caching, and token flow mechanics. Module 2 focuses on Parameter-Efficient Fine-Tuning (PEFT), where you implement LoRA, QLoRA, and 4-bit quantization to fine-tune large models on consumer GPUs using supervised fine-tuning (SFT) pipelines. Module 3 dives into diffusion models, covering the forward and reverse processes, the UNet, schedulers (DDIM, Euler, DPM++), and ControlNet conditioning. Module 4 is a capstone in which you build a specialist LLM, from dataset creation to adapter export and evaluation.

By the end of this course, you will be able to:

  • Build and optimize decoder-only transformer pipelines with KV caching
  • Fine-tune 7B+ LLMs using LoRA, QLoRA, and SFT pipelines on limited hardware
  • Configure diffusers pipelines with ControlNet for controllable image generation
  • Train, export, and evaluate a domain-specialized LLM adapter end-to-end

Disclaimer: This is an independent educational resource created by Board Infinity for informational and educational purposes only. This course is not affiliated with, endorsed by, sponsored by, or officially associated with any company, organization, or certification body unless explicitly stated. The content provided is based on industry knowledge and best practices but does not constitute official training material for any specific employer or certification program. All company names, trademarks, service marks, and logos referenced are the property of their respective owners and are used solely for educational identification and comparison purposes.
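The causal masking idea covered in Module 1 can be illustrated in a few lines of NumPy. This is a generic sketch of the mechanism, not course code: each token's attention weights are restricted to itself and earlier positions by setting future-position scores to negative infinity before the softmax.

```python
import numpy as np

def causal_attention(scores):
    """Apply a causal mask to raw attention scores, then softmax row-wise.

    scores: (seq_len, seq_len) matrix of query-key dot products.
    Row i holds token i's scores against all positions; the mask ensures
    token i can only attend to positions 0..i (itself and the past).
    """
    seq_len = scores.shape[0]
    # True above the diagonal = "future" positions that must be hidden
    mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    masked = np.where(mask, -np.inf, scores)
    # Numerically stable softmax over the key dimension
    masked = masked - masked.max(axis=-1, keepdims=True)
    weights = np.exp(masked)  # exp(-inf) -> 0, so future positions get zero weight
    return weights / weights.sum(axis=-1, keepdims=True)

# With uniform scores, each token spreads its weight evenly over visible positions.
w = causal_attention(np.zeros((4, 4)))
```

With all-zero scores, row 0 attends only to itself, row 1 splits 50/50 over the first two tokens, and so on; the upper triangle is exactly zero, which is what makes KV caching possible (past keys and values never change as new tokens arrive).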

Syllabus

  • Transformer Internals & Decoder-Only Architectures
    • Explore the inner workings of decoder-only transformer architectures, including token flow, self-attention, causal masking, and KV cache optimization.
  • PEFT - LoRA, QLoRA, & SFT Pipelines
    • Master parameter-efficient fine-tuning techniques including LoRA, QLoRA with 4-bit quantization, and building supervised fine-tuning pipelines using peft and trl.
  • Diffusion Models & Image Generation
    • Understand the forward and reverse diffusion processes, configure diffusers pipelines with various schedulers, and apply ControlNets for conditioned image generation.
  • The Hands-On Project - The Specialist LLM
    • Apply all course concepts in a capstone project building a specialist LLM through dataset creation, QLoRA training, and adapter exporting with rigorous evaluation.
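The QLoRA workflow from the PEFT module can be sketched as a configuration like the following. This is an illustrative setup, not course material: the model ID, rank, and target modules are placeholder choices, and running it requires transformers, peft, bitsandbytes, and a GPU.

```python
# Illustrative QLoRA configuration sketch; model ID and hyperparameters
# are example values, not taken from the course.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization keeps the frozen base weights small enough
# to fit a 7B-class model on a consumer GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # placeholder base model
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA injects small trainable low-rank matrices into the attention
# projections; only these adapter weights are updated during SFT.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters

# After training with trl's SFTTrainer, only the adapter is saved:
# model.save_pretrained("specialist-adapter")
```

Because the base weights stay frozen and quantized, the exported artifact is just the small adapter, which matches the capstone's "adapter export" step.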
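Similarly, the diffusion module's scheduler and ControlNet concepts map onto a diffusers pipeline roughly like this. Again a hedged sketch, not course code: the checkpoint IDs are common public examples, the blank edge map is a placeholder for a real Canny edge image, and running it requires diffusers, torch, and a GPU.

```python
# Illustrative ControlNet pipeline sketch; checkpoint IDs and parameters
# are example values, not taken from the course.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, DDIMScheduler

# Load a ControlNet trained on Canny edge maps and attach it to a base SD pipeline.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Swap the default scheduler for DDIM; Euler or DPM++ schedulers
# are exchanged the same way via from_config.
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

# Placeholder conditioning image; in practice this would be Canny edges
# extracted from a reference photo.
edge_map = Image.new("RGB", (512, 512))

image = pipe(
    "a watercolor house, soft light",
    image=edge_map,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
```

The ControlNet conditions each denoising step on the edge map, which is what makes the generation controllable rather than purely prompt-driven.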

Taught by

Board Infinity

