
AI Inference Performance Acceleration: Methods, Tools, and Deployment Workflows

CNCF [Cloud Native Computing Foundation] via YouTube

Overview

Explore AI inference performance acceleration methods, tools, and deployment workflows in this 43-minute conference talk by Yifei Zhang and 磊 钱 from ByteDance. Discover cloud-native solutions to storage performance issues and tools for evaluating inference performance across different configurations. Gain insights into optimizing GPU selection, serving-framework configuration, and model/data loading to improve inference efficiency. Understand how inference performance affects user experience and how optimization can reduce costs. Explore strategies that use technologies such as Fluid and model optimization to improve inference performance. Receive guidance on hardware selection based on performance and cost analysis of various GPUs. Learn about a performance testing tool that evaluates and recommends the best combinations of models, hardware, and acceleration schemes, aligning deployment workflows with the test results.

Syllabus

AI Inference Performance Acceleration: Methods, Tools, and Deployment Workflows - Yifei Zhang & 磊 钱

Taught by

CNCF [Cloud Native Computing Foundation]

