

Optimize AI Inference Speed & Accuracy

Coursera via Coursera

Overview

Production ML models failing your latency targets? Learn how to make them run 3-5x faster without losing accuracy. This course helps ML engineers and data scientists optimize neural network inference for real-world deployment across mobile, edge, and cloud environments. If you face slow model inference, high infrastructure costs, or deployment constraints, this course provides practical solutions.

You'll master profiling techniques to identify performance bottlenecks, apply quantization to reduce numerical precision where accuracy allows, and make smart trade-offs between speed, accuracy, and resource constraints. You'll learn to benchmark optimization techniques and select the right approach for each deployment scenario, exploring inference profiling and metrics, pruning strategies, and quantization methods. You'll practice with real-world cases, from streaming platforms to autonomous vehicles, using industry-standard tools like PyTorch Profiler, TensorRT, and PyTorch's pruning utilities.

This course is ideal for machine learning engineers, data scientists, and AI practitioners who are deploying or optimizing models in production. It's also valuable for MLOps professionals and systems engineers responsible for performance tuning in resource-constrained environments (e.g., mobile, embedded, or cloud inference systems).

Learners should have a good grasp of Python and basic experience with PyTorch or TensorFlow. Familiarity with machine learning concepts such as model training and evaluation is expected, and an understanding of how neural networks work, along with basic performance metrics like latency and accuracy, will help you get the most from this course.

By the end of this course, you'll confidently optimize production models, cut inference costs, meet latency goals, and deploy ML systems that scale efficiently.
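The bottleneck analysis the course describes can be sketched with PyTorch's built-in profiler. This is a minimal illustration, not course material: the model, layer sizes, and batch shape below are hypothetical stand-ins.

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Hypothetical stand-in for a production model (sizes are illustrative).
model = torch.nn.Sequential(
    torch.nn.Linear(512, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 10),
).eval()
batch = torch.randn(32, 512)

# Record operator-level CPU timings for one inference pass.
with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    with torch.no_grad():
        model(batch)

# Rank operators by total CPU time to see where latency is actually spent.
table = prof.key_averages().table(sort_by="cpu_time_total", row_limit=5)
print(table)
```

In practice you would profile representative production batches (and GPU activity, where relevant) and use the ranking to decide which layers are worth optimizing first.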

Syllabus

  • Foundations: Profiling and Understanding Inference Bottlenecks
    • In this module, learners will master profiling techniques to identify bottlenecks and understand the fundamental trade-offs in model inference optimization. You'll use industry-standard tools like PyTorch Profiler to diagnose where models waste time—whether in computation, memory bandwidth, or data transfer. By the end, you'll confidently analyze profiling data, prioritize optimization efforts, and establish performance baselines for production ML systems.
  • Model Pruning: Reducing Complexity Without Losing Power
    • In this module, learners will master pruning techniques to reduce neural network complexity without sacrificing accuracy. You'll explore both structured and unstructured pruning approaches, implement them using PyTorch pruning utilities, and discover how to recover accuracy through fine-tuning and knowledge distillation. By the end, you'll confidently apply pruning to optimize models for resource-constrained environments like mobile devices and edge hardware.
  • Quantization and Secure Deployment: Speed Meets Security
    • In this module, learners will master quantization techniques to reduce numerical precision while maintaining model accuracy. You'll implement both post-training quantization and quantization-aware training using PyTorch, then compare quantization against pruning across speed, accuracy, and security dimensions. By the end, you'll understand how optimization choices affect adversarial robustness and confidently select the right technique for secure, high-performance deployments in mission-critical applications.

Taught by

Starweaver and Ritesh Vajariya

