
Scale to 0 LLM Inference: Cost Efficient Open Model Deployment on Serverless GPUs

Devoxx via YouTube

Overview

In this 17-minute conference talk from Devoxx, learn how to deploy open large language models cost-efficiently on serverless GPUs that scale to zero when inactive. Discover how companies can retain full control over their LLM deployments by using open models such as Gemma and DeepSeek, including choice of deployment options, timing of model upgrades, and protection of private data. Watch a practical demonstration of running Ollama, an open-source LLM inference server, on serverless GPU infrastructure that rapidly scales up and down with demand, eliminating costs during periods of inactivity.
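The summary above does not name a specific serverless GPU platform, so the following is an illustrative sketch only, assuming Google Cloud Run; the service name, model choice, and resource sizes are assumptions, not details from the talk. The key idea is that a minimum instance count of zero means no GPU is billed while the service is idle.

```shell
# Hypothetical scale-to-zero deployment of the Ollama inference server
# on Google Cloud Run with an attached GPU (shown as one example of a
# serverless GPU platform; all names and sizes are illustrative).
#
# --min-instances 0 lets the service scale to zero when inactive;
# --port 11434 is Ollama's default HTTP port.
gcloud run deploy ollama-server \
  --image ollama/ollama \
  --port 11434 \
  --gpu 1 \
  --gpu-type nvidia-l4 \
  --memory 16Gi \
  --cpu 8 \
  --min-instances 0 \
  --max-instances 1 \
  --no-cpu-throttling
```

Once deployed, a client could pull an open model such as Gemma through Ollama's REST API at the service URL; the first request after an idle period incurs a cold start while a GPU instance spins up.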

Syllabus

Scale to 0 LLM inference: Cost efficient open model deployment on serverless GPUs by Wietse Venema

Taught by

Devoxx

