Overview
Explore how to design and operate local Generative AI inference architectures using AWS IoT Greengrass in this conference talk from AWS re:Invent 2025. Discover when and why local execution is preferable to cloud-based inference, and learn practical implementation strategies through a live demonstration featuring a robotic arm. Compare cloud-based and local inference approaches by examining critical trade-offs, including latency, connectivity requirements, and model update frequency. Gain insights into delivering and updating AI models at the edge to keep Generative AI deployments reliable in real-world environments, and into the technical considerations for maintaining operational efficiency and model performance in distributed IoT systems.
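To make the edge-deployment theme concrete, the sketch below shows what a minimal AWS IoT Greengrass v2 component recipe for a local inference service could look like. The component name, publisher, script, and S3 artifact URI are hypothetical placeholders, not material from the talk; the recipe structure follows the Greengrass v2 component recipe format.

```yaml
# Hypothetical Greengrass v2 component recipe for a local inference service.
# Deploying a new ComponentVersion is how a model or serving script
# would be updated on devices at the edge.
RecipeFormatVersion: "2020-01-25"
ComponentName: "com.example.LocalGenAIInference"   # placeholder name
ComponentVersion: "1.0.0"
ComponentDescription: "Runs a generative AI model locally for on-device inference."
ComponentPublisher: "Example"
Manifests:
  - Platform:
      os: linux
    Lifecycle:
      # Start the local inference server when the component runs.
      Run: "python3 -u {artifacts:path}/inference_server.py"
    Artifacts:
      # Placeholder artifact; a real deployment would also ship model weights.
      - URI: "s3://example-bucket/artifacts/inference_server.py"
```

Versioned components like this let a fleet pull model updates through a Greengrass deployment rather than ad-hoc device access, which is one way to address the model update frequency challenge the talk raises.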
Syllabus
AWS re:Invent 2025 - Designing local Generative AI inference with AWS IoT Greengrass (DEV316)
Taught by
AWS Events