
Scaling Foundation Model Inference on Amazon SageMaker AI

AWS Events via YouTube

Overview

Discover how to optimize and deploy popular open-source foundation models like Qwen3, GPT-OSS, and Llama4 using advanced inference engines such as vLLM on Amazon SageMaker in this 53-minute conference talk from AWS re:Invent 2025. Explore key features, including bidirectional streaming for audio and text applications, while learning proven optimization techniques for model inference. Live demonstrations cover performance-boosting strategies such as KV caching, intelligent routing, and autoscaling to keep the system stable under varying workloads. Learn to build agentic workflows by integrating SageMaker AI with LangChain and Amazon Bedrock AgentCore, and take away best practices for confidently moving from prototype development to production-ready AI experiences that deliver real user value.
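To make the KV-caching idea the talk demonstrates concrete, here is a minimal, self-contained sketch (not from the talk itself): during autoregressive decoding, keys and values for past tokens are stored once and reused, so each step only computes the projections for the newest token. The `attention` and `KVCache` names are illustrative, not part of any SageMaker or vLLM API.

```python
import numpy as np

def attention(q, K, V):
    # Scaled dot-product attention for a single query vector.
    scores = q @ K.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V

class KVCache:
    """Accumulates key/value rows so each decode step reuses
    the keys/values already computed for earlier tokens."""
    def __init__(self, d):
        self.K = np.empty((0, d))
        self.V = np.empty((0, d))

    def append(self, k, v):
        self.K = np.vstack([self.K, k])
        self.V = np.vstack([self.V, v])
        return self.K, self.V

rng = np.random.default_rng(0)
d = 4
cache = KVCache(d)

# Decode three tokens: per step, only the new token's k/v is
# computed; everything older comes straight from the cache.
outputs = []
for _ in range(3):
    q = rng.standard_normal(d)
    k = rng.standard_normal((1, d))
    v = rng.standard_normal((1, d))
    K, V = cache.append(k, v)
    outputs.append(attention(q, K, V))
```

Production engines like vLLM manage this cache in paged GPU memory and share it across requests, which is what makes high-throughput serving on a SageMaker endpoint feasible.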

Syllabus

AWS re:Invent 2025 - Scaling foundation model inference on Amazon SageMaker AI (AIM424)

Taught by

AWS Events

