
Optimizing AI Workflows and Deploying Edge Models

Coursera via Coursera

Overview

Modern AI systems require efficient training workflows, scalable data pipelines, and deployment strategies that meet real-world performance constraints. In this course, you'll learn how to optimize machine learning workflows and deploy AI models in production environments, including edge devices.

You'll begin by working with PyTorch to implement neural network components using tensor operations and automatic differentiation. You'll analyze GPU utilization and training performance to identify computational bottlenecks and improve throughput. Next, you'll explore tools and techniques used to visualize and evaluate machine learning experiments. You'll learn how to compare model variants using performance metrics and design standardized workflows that improve experiment reproducibility.

The course also covers building efficient data pipelines that maximize hardware utilization during model training. Finally, you'll evaluate model robustness across data slices and learn how to prepare optimized models for deployment on edge devices where latency and resource constraints matter. By the end of the course, you'll be able to design efficient ML pipelines, analyze performance bottlenecks, and deploy optimized AI models in real-world environments.

Syllabus

  • Optimize PyTorch: Build and Accelerate Layers: Custom Layers in PyTorch: From Building Blocks to Squeeze-and-Excite
    • You will move beyond the standard “out-of-the-box” components in PyTorch by building your own custom building block called Squeeze-and-Excite. You will understand why these custom components matter for real-world problems, and you will create one step by step while ensuring it behaves correctly. You will see how data flows through this custom block, how its parameters are stored and updated during learning, and how to verify that everything is connected properly. By the end, you will understand a general pattern you can reuse to build many other custom components for your neural networks.
  • Optimize PyTorch: Build and Accelerate Layers: Speed Up Your AI Training: Double Your GPU Power
    • You will learn how to find and fix slowdowns in your AI training code, improving performance from data processing to model training. You will use built-in tools to identify issues such as slow data loading, then apply two practical techniques: one that makes mathematical computations faster while using less memory, and another that allows you to train with larger batches of data without running out of memory. Through quizzes, ready-to-copy code examples, and clear explanations, you will see how to keep your GPU working at full speed instead of sitting idle. By the end, you will be able to streamline complex training workflows into efficient processes that support business success.
  • Evaluate and Create ML Workflows Visually: Visualizing and Evaluating ML Experiments
    • You will explore how visual dashboards help you understand model behavior and compare different training runs. You will learn how to interpret accuracy curves, loss trajectories, and compute trade-offs so you can choose the model variant that is best for the task. By the end, you will know how to evaluate experiments using clear visual evidence rather than guesswork.
  • Evaluate and Create ML Workflows Visually: Build Better: Creating Reusable and Standardized ML Workflows
    • You will practice structuring reusable ML workflows using modular components. You will explore LightningModule and DataModule patterns, strengthen your documentation habits, and understand how structured templates reduce errors.
  • Optimize AI: Build Fast Efficient Pipelines: Build High-Throughput Data Pipelines
    • You will explore how data loading, batching, caching, and prefetching impact training speed. You will learn how frameworks like tf.data and PyTorch DataLoader parallelize input operations to keep GPUs busy.
  • Optimize AI: Build Fast Efficient Pipelines: Analyze & Prune Model Computational Graphs
    • You will explore how computational graphs work, why redundant operations exist, and how pruning them improves model inference latency. You will analyze a model graph, identify unnecessary reshape and identity operations, prune them, re-export the SavedModel, and measure the resulting latency improvements.
  • Optimize and Deploy Edge AI Models: Evaluating Model Robustness on Real-World Data Slices
    • You will explore how to evaluate ML models using slice-based performance analysis. You will discover how different environments, devices, and usage-context slices can expose hidden weaknesses in an otherwise accurate model. Through TFMA workflows and hands-on exploration, you will identify a real 5% drop in performance on low-light smartphone images and generate actionable recommendations to improve data quality and fairness. This lesson emphasizes practical robustness evaluation rather than purely theoretical metrics.
  • Optimize and Deploy Edge AI Models: Optimizing and Deploying Models on Edge Devices with TensorFlow Lite
    • You will optimize and deploy models to edge hardware using TensorFlow Lite. You will convert a SavedModel into a quantized TFLite model, explore weight and integer quantization options, and deploy the optimized model on a Jetson Nano. You will measure changes in file size, inference speed (FPS), and accuracy, then summarize your results in a reproducible hand-off guide. By the end, you will understand the practical trade-offs between speed, footprint, and accuracy in real edge deployments.
  • Project: Optimization and Edge Deployment Strategy Brief
    • Real-world computer vision systems move through several stages before they are ready for deployment. Engineers must evaluate model experiments, diagnose workflow inefficiencies, improve training pipelines, and ensure that models can operate reliably under real-world and device constraints. These activities require combining performance analysis with practical engineering decisions about system design and deployment readiness. In this integration project, you will act as a machine learning engineer preparing a computer vision model for deployment on edge devices in a resource-constrained environment. You will analyze experiment results, identify performance bottlenecks, evaluate slice-level robustness, and propose workflow and deployment optimizations. The project integrates key engineering activities involved in preparing vision systems for production, including GPU performance diagnosis, experiment visualization and comparison, data pipeline optimization, workflow standardization, and edge deployment trade-off analysis. Rather than focusing on isolated techniques, you will evaluate the full machine learning workflow—from training inefficiencies and experiment interpretation to robustness risks and deployment feasibility. Your final deliverable will be an Optimization and Edge Deployment Strategy Brief, a structured technical report that identifies workflow bottlenecks, proposes targeted optimization strategies, evaluates slice-level risks, and presents a justified edge-deployment recommendation. The project reflects real-world ML engineering responsibilities where professionals must balance accuracy, speed, maintainability, and hardware constraints before approving production deployment.
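To give a flavor of the custom-layer pattern covered in the first module, a Squeeze-and-Excite block can be sketched in a few lines of PyTorch. This is an illustrative implementation, not course material; the channel count and reduction ratio below are arbitrary.

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Channel-attention block: squeeze (global pool), then excite (gate channels)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: B x C x H x W -> B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # excite: rescale each channel by its learned gate

# quick shape check: output matches input, so the block drops into any CNN
se = SqueezeExcite(channels=8)
out = se(torch.randn(2, 8, 16, 16))
```

The same pattern — subclass `nn.Module`, register parameters in `__init__`, define `forward` — generalizes to most custom components you might build.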
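The two speed-up techniques described in the GPU module sound like mixed-precision training and gradient accumulation (an assumption based on their descriptions, not a confirmed detail of the course). A minimal sketch of how they combine in a PyTorch training loop, with placeholder model, data, and hyperparameters:

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(32, 4).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))
accum_steps = 4  # effective batch size = accum_steps x micro-batch size

w0 = model.weight.detach().clone()  # snapshot to confirm training happened
opt.zero_grad()
for step in range(8):
    x = torch.randn(16, 32, device=device)
    y = torch.randn(16, 4, device=device)
    # autocast runs eligible ops in half precision (a no-op on CPU here)
    with torch.autocast(device_type=device, enabled=(device == "cuda")):
        loss = nn.functional.mse_loss(model(x), y) / accum_steps
    scaler.scale(loss).backward()  # gradients accumulate across micro-batches
    if (step + 1) % accum_steps == 0:
        scaler.step(opt)   # one optimizer step per accum_steps micro-batches
        scaler.update()
        opt.zero_grad()
```

Dividing the loss by `accum_steps` keeps gradient magnitudes comparable to a single large batch, which is why the larger effective batch fits in the same memory budget.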
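The data-pipeline module's ideas about parallel loading and prefetching map onto a handful of `DataLoader` arguments. A sketch with a toy in-memory dataset standing in for a real image pipeline (all sizes are arbitrary):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# 256 samples of 32 features each, with integer labels
ds = TensorDataset(torch.randn(256, 32), torch.randint(0, 10, (256,)))

loader = DataLoader(
    ds,
    batch_size=64,
    shuffle=True,
    num_workers=2,        # worker processes decode/transform while the GPU trains
    prefetch_factor=2,    # each worker keeps 2 batches staged ahead of time
    pin_memory=torch.cuda.is_available(),  # page-locked memory speeds host-to-GPU copies
)

n_batches = sum(1 for _ in loader)  # 256 / 64 = 4 batches
```

Tuning `num_workers` and `prefetch_factor` against a profiler trace is typically how you confirm the GPU is no longer waiting on input.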
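Slice-based evaluation, as in the TFMA lesson, boils down to grouping predictions by a metadata feature and computing metrics per group. A framework-free sketch with invented records (the slice names and outcomes are purely illustrative):

```python
from collections import defaultdict

# toy prediction log, each record tagged with a "slice" feature
records = [
    {"slice": "daylight",  "correct": True},
    {"slice": "daylight",  "correct": True},
    {"slice": "daylight",  "correct": False},
    {"slice": "low_light", "correct": True},
    {"slice": "low_light", "correct": False},
    {"slice": "low_light", "correct": False},
]

totals, hits = defaultdict(int), defaultdict(int)
for r in records:
    totals[r["slice"]] += 1
    hits[r["slice"]] += r["correct"]

# per-slice accuracy exposes gaps that a single overall number would hide
accuracy = {s: hits[s] / totals[s] for s in totals}
```

An aggregate accuracy of 50% here looks unremarkable; the per-slice view shows the low-light slice doing markedly worse, which is exactly the kind of hidden weakness the lesson targets.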
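The edge-deployment module works with TensorFlow Lite conversion and quantization; PyTorch's post-training dynamic quantization is an analogous, easily runnable illustration of the same size/speed trade-off. This is a stand-in sketch with a placeholder model, not the course's TFLite workflow:

```python
import torch
import torch.nn as nn

float_model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
float_model.eval()

# Post-training dynamic quantization: Linear weights are stored as int8 and
# dequantized on the fly, shrinking the model and often speeding up CPU inference.
quant_model = torch.ao.quantization.quantize_dynamic(
    float_model, {nn.Linear}, dtype=torch.qint8
)

out = quant_model(torch.randn(1, 128))  # same interface, smaller weights
```

As with TFLite's weight quantization, the deployment question is whether the accuracy drop (if any) is acceptable for the latency and footprint gains — which is what the Jetson Nano measurements in the lesson quantify.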

Taught by

Professionals from the Industry
