Scaling Foundation Model Inference on Amazon SageMaker AI
AWS Events via YouTube
Overview
Discover how to optimize and deploy popular open-source foundation models such as Qwen3, GPT-OSS, and Llama 4 using advanced inference engines like vLLM on Amazon SageMaker in this 53-minute conference talk from AWS re:Invent 2025. Explore key features, including bidirectional streaming for audio and text applications, and learn proven optimization techniques for model inference. Live demonstrations cover performance-boosting strategies such as KV caching, intelligent routing, and autoscaling to keep systems stable under varying workloads. Learn to build agentic workflows by integrating SageMaker AI with LangChain and Amazon Bedrock AgentCore, and pick up best practices for confidently moving from prototype to production-ready AI experiences that deliver real user value.
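To give a feel for one of the optimization techniques the talk covers, here is a minimal conceptual sketch of why KV caching speeds up autoregressive decoding. This is illustrative Python only, not SageMaker or vLLM code; `compute_kv` is a hypothetical stand-in for a model's per-token key/value projection.

```python
# Conceptual sketch of KV caching in autoregressive decoding.
# Without a cache, generating each new token recomputes keys/values
# for the entire prefix; with a cache, only the newest token's K/V
# pair is computed and appended.

def compute_kv(token):
    # Hypothetical stand-in for the key/value projection of one token.
    return (hash(token) % 97, hash(token) % 89)

class KVCache:
    def __init__(self):
        self.keys = []
        self.values = []
        self.computations = 0  # number of per-token K/V projections

    def step(self, token):
        # Compute K/V only for the new token; reuse everything cached.
        k, v = compute_kv(token)
        self.computations += 1
        self.keys.append(k)
        self.values.append(v)
        return self.keys, self.values

def projections_without_cache(tokens):
    # Naive decoding: step i recomputes K/V for all i prefix tokens.
    computations = 0
    for i in range(1, len(tokens) + 1):
        for t in tokens[:i]:
            compute_kv(t)
            computations += 1
    return computations

tokens = ["Scaling", "foundation", "model", "inference"]
cache = KVCache()
for t in tokens:
    cache.step(t)

print(cache.computations)            # 4 projections with a cache
print(projections_without_cache(tokens))  # 10 without (1+2+3+4)
```

With a cache the work per step is constant, while the naive approach grows quadratically with sequence length — the same trade-off engines like vLLM exploit at scale.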
Syllabus
AWS re:Invent 2025 - Scaling foundation model inference on Amazon SageMaker AI (AIM424)
Taught by
AWS Events