

AWS: ML Workflows with SageMaker, Storage & Security

Whizlabs via Coursera

Overview

AWS: ML Workflows with SageMaker, Storage & Security is the fourth course in the Exam Prep (MLA-C01): AWS Certified Machine Learning Engineer – Associate Specialization. This course enables learners to design secure, scalable, and efficient machine learning workflows on AWS, focusing on three key pillars: data storage, model development, and security.

Learners begin by exploring how to collect, store, and stream ML data using services like Amazon S3, Amazon Kinesis, and Amazon Redshift. The course then transitions into hands-on model development with Amazon SageMaker, including data preparation, training, and deployment. Later modules introduce the critical aspects of security and data protection, showing how to secure ML pipelines using IAM, KMS, encryption, and network controls. This course prepares learners to build production-grade ML systems that not only scale efficiently but also meet enterprise-level compliance and security requirements.

The course consists of four comprehensive modules, each divided into focused lessons and practical demonstrations. Learners will get approximately 3–3.5 hours of video content, featuring step-by-step tutorials using AWS services and real-world ML pipeline examples. Graded and ungraded quizzes are included in every module to test knowledge and practical readiness.
  • Module 1: Data Storage & Real-Time Streaming on AWS
  • Module 2: Data Preparation & ML Model Development with Amazon SageMaker
  • Module 3: Security, Identity & Data Protection on AWS
  • Module 4: Monitoring, Visualization & Operational Insights

By the end of this course, learners will be able to:

  • Design end-to-end ML workflows using AWS storage, compute, and ML services
  • Process streaming and batch data sources for ML model development
  • Secure ML pipelines using IAM, encryption, and network controls
  • Build compliance-ready ML solutions using Amazon SageMaker and supporting services

This course is ideal for cloud developers, ML engineers, and data professionals with hands-on experience in AWS who are looking to master the integration of machine learning workflows with enterprise-grade data management and security. It is especially valuable for those preparing for the AWS Certified Machine Learning Engineer – Associate (MLA-C01) exam, with a focus on storage, model development, and secure deployment practices.

Syllabus

  • Data Storage & Real-Time Streaming on AWS
    • Welcome to Week 1 of the AWS: End-to-End ML Workflows with SageMaker, Storage & Security course. This week, you’ll explore the core data infrastructure and streaming services that power scalable machine learning workflows on AWS. We’ll start by reviewing storage options such as Amazon S3, EBS, EFS, and FSx for NetApp ONTAP, and discuss how to select the right storage service based on performance and ML use case requirements. Next, you’ll examine database options for ML, followed by an in-depth look at real-time data ingestion and streaming using services like Amazon Kinesis, Amazon Managed Streaming for Apache Kafka, and Amazon Managed Service for Apache Flink. You’ll also complete a hands-on activity where you’ll create a data streaming pipeline using Kinesis Streams, Amazon S3, and AWS Lambda, enabling real-time data collection and processing for machine learning applications.
  • Data Preparation & ML Model Development with Amazon SageMaker
    • Welcome to Week 2 of the AWS: End-to-End ML Workflows with SageMaker, Storage & Security course. This week, you'll explore the broader capabilities of Amazon SageMaker and how it supports the full machine learning lifecycle. We’ll begin with an introduction and demo of SageMaker, highlighting its core services and development environment. You’ll then take a deeper dive into SageMaker Data Wrangler for efficient data preparation, followed by a detailed walkthrough of the SageMaker Feature Store, which enables consistent feature reuse across training and inference. As we move forward, you'll learn how to monitor model performance using SageMaker Model Monitor, helping ensure reliability and detect data drift in production. We’ll wrap up the week by using SageMaker JumpStart to quickly deploy pre-built models and solution templates, accelerating your ML experimentation and deployment process.
  • Security, Identity & Data Protection on AWS
    • Welcome to Week 3 of the AWS: End-to-End ML Workflows with SageMaker, Storage & Security course. This week, you'll focus on securing and governing your machine learning workloads on AWS. We’ll start by exploring AWS Key Management Service (KMS) and AWS Secrets Manager, which help you securely store, manage, and encrypt sensitive data such as API keys and credentials. Next, we’ll cover AWS WAF and AWS Shield, two essential services for protecting ML applications from web threats and Distributed Denial of Service (DDoS) attacks. You’ll also learn how to use Amazon Macie to detect and protect sensitive data within S3 buckets, ensuring compliance with data privacy standards. We’ll wrap up the week with AWS Trusted Advisor, a powerful tool that provides real-time recommendations to improve security, performance, and fault tolerance across your AWS environment—enabling you to maintain a secure and cost-efficient ML infrastructure.
  • Monitoring, Visualization & Operational Insights
    • Welcome to Week 4 of the AWS: End-to-End ML Workflows with SageMaker, Storage & Security course. This week, you’ll explore tools that help you monitor, visualize, and optimize your machine learning workflows in production. We’ll begin with Amazon QuickSight, where you’ll learn how to analyze and visualize ML outputs for better business insights. You’ll then dive into SageMaker Model Monitor to detect anomalies in deployed models and ensure ongoing performance. To strengthen observability, you’ll work with AWS X-Ray and CloudWatch Logs to trace model behavior, debug issues, and gain insights into operational metrics. We’ll wrap up by using AWS Cost Explorer and Trusted Advisor to monitor usage and cost, and explore SageMaker Inference Recommender to choose optimal instance types for model deployment—ensuring cost-effective and high-performance inference at scale.
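As a rough illustration of Week 1's hands-on activity, the Lambda half of a Kinesis-to-S3 pipeline largely amounts to decoding the base64-encoded records that a Kinesis trigger delivers. A minimal sketch (the event shape follows the standard Kinesis trigger format; the actual S3 write is omitted, and the field names in the sample payload are hypothetical):

```python
import base64
import json

def handler(event, context=None):
    """Decode the records in a Kinesis-triggered Lambda event.

    Kinesis delivers each record's data base64-encoded under
    event['Records'][i]['kinesis']['data']; a real handler would then
    batch the decoded payloads into an S3 object via boto3 (omitted).
    """
    decoded = []
    for record in event.get("Records", []):
        payload = base64.b64decode(record["kinesis"]["data"])
        decoded.append(json.loads(payload))
    return decoded

# Example event shaped like a Kinesis trigger (payload fields are made up)
event = {
    "Records": [
        {"kinesis": {"data": base64.b64encode(
            json.dumps({"feature": 1.5}).encode()).decode()}}
    ]
}
print(handler(event))  # → [{'feature': 1.5}]
```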
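Week 2's Feature Store walkthrough revolves around feature definitions: each feature is a name plus one of three Feature Store types (Integral, Fractional, or String). A hypothetical helper that maps a plain Python schema to that shape — in practice these definitions are passed to SageMaker's create_feature_group API rather than used standalone:

```python
def feature_definitions(schema):
    """Map a {column: python_type} schema to SageMaker Feature Store
    feature definitions. Hypothetical helper; the three type names are
    the ones the Feature Store actually accepts."""
    type_map = {int: "Integral", float: "Fractional", str: "String"}
    return [
        {"FeatureName": name, "FeatureType": type_map[py_type]}
        for name, py_type in schema.items()
    ]

# Hypothetical customer-features schema
defs = feature_definitions({"customer_id": str, "age": int, "ltv": float})
```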
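On the Week 3 side, one common KMS-backed data-protection control is an S3 bucket policy that denies any upload not using SSE-KMS encryption. A sketch of such a policy document (the bucket name is hypothetical; in practice this JSON would be attached with put_bucket_policy):

```python
def require_sse_kms_policy(bucket):
    """Build an S3 bucket policy that denies PutObject requests lacking
    SSE-KMS server-side encryption. Sketch only; the condition key
    s3:x-amz-server-side-encryption is the standard one S3 evaluates."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "aws:kms",
                }
            },
        }],
    }

# Hypothetical training-data bucket
policy = require_sse_kms_policy("ml-training-data")
```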
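For Week 4's operational insights, publishing a custom inference metric to CloudWatch is a single put_metric_data call; the sketch below only builds the argument dict for such a call (the namespace, metric, and model names are hypothetical, and the actual boto3 call is omitted):

```python
from datetime import datetime, timezone

def latency_metric(model_name, latency_ms):
    """Build the keyword arguments a boto3 cloudwatch.put_metric_data
    call would take for a custom inference-latency datapoint."""
    return {
        "Namespace": "ML/Inference",  # hypothetical custom namespace
        "MetricData": [{
            "MetricName": "ModelLatency",
            "Dimensions": [{"Name": "ModelName", "Value": model_name}],
            "Timestamp": datetime.now(timezone.utc),
            "Value": float(latency_ms),
            "Unit": "Milliseconds",
        }],
    }

metric = latency_metric("churn-model", 42.0)
```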

Taught by

Whizlabs Instructor

