Streamlining MLOps Pipeline With Kubeflow on Arm64 Locally
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Explore how to deploy and optimize Kubeflow for MLOps pipelines on Arm64 architecture in a local, resource-constrained environment in this 15-minute lightning talk. Discover the practical implementation of Kubeflow's core qualities—composability, modularity, scalability, and portability—on a single host system with up to 192 CPU cores. Learn about deploying AI pipelines using Kubeflow Pipelines and serving large language models with KServe without requiring large GPUs or cloud services. Examine performance measurements across various workloads and understand how to meet service level objectives for advanced AI applications, including agentic AI, using local infrastructure. Gain insights into running comprehensive MLOps workflows in a "box" configuration while maintaining the full capabilities of the Kubeflow ecosystem.
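As a rough illustration of the KServe-based LLM serving the talk describes, a minimal InferenceService manifest might look like the sketch below. The model name, model ID, and CPU/memory requests are illustrative assumptions, not values from the talk; on a local Arm64 host you would size the resource requests to fit your cores and memory rather than request a GPU.

```yaml
# Hypothetical sketch: serving a Hugging Face model on CPU with KServe.
# Names and resource figures are assumptions for illustration only.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: local-llm            # placeholder service name
spec:
  predictor:
    model:
      modelFormat:
        name: huggingface
      args:
        - --model_name=local-llm
        - --model_id=example-org/example-model   # placeholder model ID
      resources:
        requests:
          cpu: "16"          # CPU-only serving; no GPU requested
          memory: 32Gi
        limits:
          cpu: "32"
          memory: 64Gi
```

Applied with `kubectl apply -f`, a manifest of this shape asks KServe to pull and serve the model on CPU, which matches the talk's premise of meeting service level objectives without large GPUs or cloud services.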
Syllabus
Lightning Talk: Streamlining MLOps Pipeline With Kubeflow on Arm64 Locally - Jeffery Tu
Taught by
CNCF [Cloud Native Computing Foundation]