Overview
Learn how to optimize AI inference deployment across geographically distributed networks in this conference talk from SREcon25 EMEA. Discover the unique challenges that traditional cloud-centric SRE practices fail to address in edge deployments: managing geographically dispersed processing units, vast model catalogs, and complex resource constraints under network variability. Explore the central objective of keeping processing-unit utilization high, balancing operational cost (CapEx efficiency) against service quality through careful latency trade-offs. Understand how underutilized models impose a significant financial burden while over-utilization degrades user experience, and gain insight into strategies for managing this balance in distributed AI inference systems.
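To make the utilization-versus-latency trade-off concrete, here is a minimal sketch of one way a request router might score edge nodes. This is purely illustrative and not taken from the talk: the node names, cost function, and weights are all assumptions.

```python
# Hypothetical sketch: routing an inference request across edge nodes by
# trading off utilization against latency. All names, weights, and the
# cost function itself are illustrative assumptions, not from the talk.

from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    latency_ms: float   # network latency from the user to this node
    utilization: float  # fraction of processing capacity in use (0.0-1.0)

def routing_cost(node: EdgeNode, latency_weight: float = 1.0,
                 util_weight: float = 100.0) -> float:
    """Lower is better. The quadratic utilization term makes nearly-full
    nodes sharply more expensive, modeling the user-experience cost of
    over-utilization, while lightly loaded nodes stay attractive so that
    paid-for capacity (CapEx) does not sit idle."""
    return latency_weight * node.latency_ms + util_weight * node.utilization ** 2

def pick_node(nodes: list[EdgeNode]) -> EdgeNode:
    return min(nodes, key=routing_cost)

nodes = [
    EdgeNode("nearby-busy", latency_ms=10, utilization=0.95),
    EdgeNode("regional-idle", latency_ms=40, utilization=0.30),
]
print(pick_node(nodes).name)  # prints "regional-idle": the farther but
                              # less-loaded node wins under these weights
```

Tuning the two weights shifts the operating point: a higher latency weight favors nearby nodes even when busy, while a higher utilization weight spreads load toward idle capacity at the cost of extra network round-trip time.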
Syllabus
SREcon25 Europe/Middle East/Africa - Utilization Is the Key to Efficiency: What It Takes to Run...
Taught by
USENIX