

Optimizing Models for Production

Coursera via Coursera

Overview

Optimizing Models for Production is designed for developers, engineers, and technical product builders who are new to generative AI but bring intermediate machine-learning knowledge, basic Python proficiency, and familiarity with development environments such as VS Code, and who want to engineer, customize, and deploy open generative AI solutions while avoiding vendor lock-in. The course prepares learners to make generative AI models more efficient, scalable, and cost-effective for real-world deployment.

Learners begin with quantization, applying INT8 and INT4 precision reduction using tools such as bitsandbytes while balancing accuracy against efficiency. Next, they explore inference optimization strategies, including batching, KV-cache management, and token-level computation scheduling, to reduce latency in interactive applications. The course also covers memory-footprint reduction and adaptive batch sizing for dynamic workloads. In the final module, learners apply practical hardware optimization techniques such as GPU memory tuning and mixed-precision inference, using profiling tools like nvidia-smi and PyTorch Profiler to identify bottlenecks. By the end, learners will be able to deliver optimized models across diverse hardware environments, supported by performance benchmarks and reproducible deployment pipelines.
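The course materials aren't public, but the "adaptive batch sizing for dynamic workloads" idea mentioned above can be illustrated with a minimal sketch: grow the serving batch when requests pile up, shrink it when traffic is light. The function name and the doubling/halving thresholds are purely illustrative assumptions, not the course's actual implementation.

```python
def adaptive_batch_size(queue_depth, current_batch, min_batch=1, max_batch=64):
    """Adjust batch size from observed queue depth (illustrative thresholds).

    Doubling when the queue is deep trades a little latency per request
    for much higher throughput; halving when traffic is light keeps
    per-request latency low.
    """
    if queue_depth > current_batch * 2:
        return min(current_batch * 2, max_batch)
    if queue_depth < current_batch // 2:
        return max(current_batch // 2, min_batch)
    return current_batch

# 100 queued requests against a batch of 16 -> grow to 32
print(adaptive_batch_size(100, 16))
# only 2 queued requests -> shrink to 8
print(adaptive_batch_size(2, 16))
```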

Syllabus

  • Quantization Techniques (INT8/INT4)
    • Learn how quantization makes large models faster and easier to run without requiring high-end hardware. You’ll apply INT8 and INT4 methods, compare post-training vs. quantization-aware training, and measure how accuracy is affected. You’ll also use calibration techniques to minimize trade-offs, giving you the skills to balance efficiency with performance in real-world scenarios.
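To make the post-training quantization idea concrete, here is a minimal NumPy sketch of symmetric per-tensor INT8 quantization: one scale maps the float range onto [-127, 127]. Real tools like bitsandbytes are far more sophisticated (per-channel scales, outlier handling, calibration data), so treat this only as an illustration of the core trade-off.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization: floats -> int8 plus a scale."""
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights; error is bounded by scale / 2."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.01, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
```

The accuracy loss the course asks you to measure comes from that rounding step: each weight moves by at most half a quantization step (`scale / 2`), which is why calibration, choosing the scale from representative data, matters.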
  • Inference Optimization Strategies
    • Discover how to streamline inference so models respond faster and run more efficiently in production. You’ll practice advanced batching, KV-cache management, and token scheduling to cut latency while improving throughput. You’ll also explore memory-saving techniques beyond quantization, ensuring your models remain reliable and cost-effective under real-world system loads.
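The benefit of KV-cache management can be shown with simple arithmetic: during autoregressive decoding, a cache means only the newest token needs key/value projection each step, while a cache-free decoder re-projects every past token. The toy counter below (an illustration, not the course's code) shows the quadratic-to-linear reduction.

```python
def generate_steps(num_tokens, use_cache=True):
    """Count key/value projection ops across an autoregressive generation.

    With a KV cache: 1 projection per step (the new token only).
    Without: step t re-projects all t tokens, summing to t*(t+1)/2 total.
    """
    ops = 0
    for step in range(1, num_tokens + 1):
        ops += 1 if use_cache else step
    return ops

print(generate_steps(10, use_cache=True))   # 10 projections
print(generate_steps(10, use_cache=False))  # 55 projections
```

The same accounting explains why cache memory grows linearly with sequence length, which is what batching and cache-eviction policies then have to manage.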
  • Practical Hardware Optimization
  • Practical Hardware Optimization
    • Learn how to make the most of available hardware by tuning GPU performance. You’ll use tools like nvidia-smi and PyTorch Profiler to spot bottlenecks, and apply strategies such as mixed precision, gradient checkpointing, and memory mapping. These practices help you adapt models to limited resources while maintaining stability and quality in training or inference.
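Why mixed precision matters for limited hardware comes down to back-of-envelope memory math: weight memory is roughly parameter count times bytes per parameter. The sketch below uses an illustrative 7B-parameter model and ignores activations, KV cache, and optimizer state, so real requirements are higher.

```python
def model_memory_gb(num_params, bytes_per_param):
    """Rough weight-only memory estimate in GiB (illustrative)."""
    return num_params * bytes_per_param / 1024**3

PARAMS_7B = 7_000_000_000          # assumed model size for illustration
fp32 = model_memory_gb(PARAMS_7B, 4)    # ~26 GiB
fp16 = model_memory_gb(PARAMS_7B, 2)    # ~13 GiB
int8 = model_memory_gb(PARAMS_7B, 1)    # ~6.5 GiB
int4 = model_memory_gb(PARAMS_7B, 0.5)  # ~3.3 GiB
```

Numbers like these are what `nvidia-smi` memory readings get compared against when deciding whether a model fits a given GPU at all, before profiling where the remaining memory goes.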
  • Deployment & Benchmarking
    • Prepare models for deployment across platforms and measure how well they perform once optimized. You’ll convert models into formats like ONNX for cross-platform use and benchmark them to evaluate speed, memory, and throughput. By practicing these workflows, you’ll gain the ability to deliver models that are portable, production-ready, and backed by clear performance data.
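A benchmarking workflow like the one described can be sketched with the standard library alone: warm up first (so caches, JIT compilation, and lazy initialization don't skew results), then report latency percentiles rather than a single average. This generic harness is an assumption about the shape of such a tool, not the course's own.

```python
import time
import statistics

def benchmark(fn, warmup=3, runs=20):
    """Time a zero-argument callable and report latency stats in milliseconds."""
    for _ in range(warmup):          # discard warm-up iterations
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
        "mean_ms": statistics.fmean(samples),
    }

stats = benchmark(lambda: sum(range(10_000)))
```

Reporting p95 alongside the median matters in production: tail latency, not the average, is what users of an interactive model actually feel.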

Taught by

Professionals from the Industry

Reviews

Start your review of Optimizing Models for Production
