How Cookpad Leverages Triton Inference Server to Boost Model Serving
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
Google, IBM & Meta Certificates — All 10,000+ Courses at 40% Off
One annual plan covers every course and certificate on Coursera. 40% off for a limited time.
Get Full Access
Discover how Cookpad optimizes its machine learning model deployment using Triton Inference Server in this 32-minute conference talk. Learn about the challenges Machine Learning Platform teams face when scaling model deployment, including managing diverse frameworks and infrastructure requirements. Explore how Triton Inference Server, NVIDIA's open-source inference serving software, simplifies the deployment process and improves resource utilization. Gain insights into running multiple models concurrently on a single GPU or CPU and on multi-GPU servers, and understand how Cookpad's ML Platform Engineers leverage this technology to boost their model serving capabilities.
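
For context on the concurrency point above: Triton controls how many copies of a model run in parallel through the instance_group setting in each model's config.pbtxt, so scaling instances on a GPU is a server-side configuration change rather than a client change. Below is a minimal sketch of a Python client sending a request to a Triton server; the model name recipe_classifier and the tensor names and shapes are hypothetical placeholders for illustration, not details from the talk.

# Minimal sketch: query a model served by Triton over HTTP.
# Requires the tritonclient package (pip install tritonclient[http]).
# "recipe_classifier" and the tensor names/shapes below are
# hypothetical, chosen only for illustration.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build one inference request; Triton schedules it onto whichever
# model instance (as configured via instance_group) is free.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
inputs = [httpclient.InferInput("input", list(batch.shape), "FP32")]
inputs[0].set_data_from_numpy(batch)
outputs = [httpclient.InferRequestedOutput("output")]

response = client.infer(model_name="recipe_classifier",
                        inputs=inputs, outputs=outputs)
print(response.as_numpy("output"))

Because scheduling happens inside the server, the same client code works whether a model has one instance or several, which is part of how Triton simplifies deployment for a platform team.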
Syllabus
How Cookpad Leverages Triton Inference Server To Boost Their Model Serving - Jose Navarro & Prayana Galih
Taught by
CNCF [Cloud Native Computing Foundation]