AI Infrastructure Best Practices - Enterprise Do's and Don'ts

CNCF [Cloud Native Computing Foundation] via YouTube

Overview

Explore enterprise AI infrastructure best practices in this 41-minute panel discussion featuring industry experts who address critical decisions facing organizations deploying AI workloads on Kubernetes. Learn from power users with extensive AI deployment experience, builders of modern Kubernetes-based AI frameworks like Ray, and practitioners managing heterogeneous AI use cases in enterprise environments.

Discover whether AI workloads introduce unique enterprise readiness requirements beyond traditional Kubernetes considerations like logging, monitoring, analytics, security, and multi-tenancy. Examine strategies for managing the high cost and limited availability of hardware accelerators such as GPUs, and evaluate architectural decisions such as whether to implement siloed stacks for pre-training, post-training, serving, and batch workloads or to consolidate multiple stacks on a single cluster.

Consider cluster sizing approaches, comparing many small clusters against a few large ones, and assess deployment strategies for single-region versus multi-region, multi-cloud, and neo-cloud federation scenarios.

Syllabus

Panel: AI Infra Best Practices: Enterprise Do’s and Don’ts - Madhuri Yechuri & Andrew Leung

Taught by

CNCF [Cloud Native Computing Foundation]

