
Unifying Real-Time and Batch ML Inference Using BentoML and Apache Spark

The ASF via YouTube

Overview

Discover how to unify real-time and batch machine learning inference using BentoML and Apache Spark in this 28-minute conference talk. Learn from Bo Jiang, a Product Engineer at BentoML, as he explores the integration of these powerful tools. Gain insights into packaging models with BentoML, deploying BentoServices to production, and invoking them from Spark for scalable batch inference. Understand how to leverage the same models for both real-time and batch predictions, ensuring consistency in inference logic across different workloads. Explore the run_in_spark API, which automatically distributes models and inference logic across Spark worker nodes. Discover how this unified approach eliminates concerns about divergence in inference logic, promotes version control, and maintains consistent library dependencies. Master the art of managing both real-time and batch inference under the same standards, ultimately fostering efficient AI service development and deployment.
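The core idea above — one shared inference function serving both the real-time endpoint and the batch job — can be illustrated with a minimal, hypothetical Python sketch. This is not the BentoML API itself; the function names and the toy model are assumptions chosen only to show how sharing a single `predict` keeps the two paths from diverging.

```python
# Illustrative sketch (not the BentoML API): one inference function
# shared by a real-time path and a batch path, so the logic cannot diverge.

def predict(features):
    # Stand-in for a real model: doubles each feature and sums the result.
    return sum(2 * x for x in features)

def realtime_endpoint(request_features):
    # Real-time path: one request in, one prediction out.
    return predict(request_features)

def batch_job(rows):
    # Batch path: the same predict() applied to every row, analogous to
    # how run_in_spark fans the model out across Spark worker nodes.
    return [predict(row) for row in rows]

print(realtime_endpoint([1, 2, 3]))    # → 12
print(batch_job([[1, 2, 3], [4, 5]]))  # → [12, 18]
```

Because both paths call the same `predict`, a model update or dependency bump changes real-time and batch behavior together — the consistency property the talk attributes to the `run_in_spark` approach.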

Syllabus

Unifying Real-Time and Batch ML Inference Using BentoML and Apache Spark

Taught by

The ASF

