Overview
In this 17-minute conference talk from Devoxx, learn how to deploy open large language models cost-efficiently using serverless GPUs that scale to zero when inactive. Discover how companies can gain full control over their LLM deployments through open models like Gemma and DeepSeek, deciding for themselves where to deploy, when to upgrade models, and how to keep private data secure. Watch a practical demonstration of running Ollama, an open-source LLM inference server, on serverless GPU infrastructure that rapidly scales up and down with demand, eliminating costs during periods of inactivity.
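The deployment pattern described above can be sketched as a single deploy command. This is a minimal sketch assuming Google Cloud Run as the serverless GPU platform and an NVIDIA L4 GPU; the talk's summary does not tie the demo to a specific provider, GPU type, or resource sizes, so all of those values here are illustrative assumptions.

```shell
# Sketch: deploy the public ollama/ollama container image as a serverless
# GPU service (assumed platform: Google Cloud Run; assumed GPU: NVIDIA L4).
# --min-instances=0 is what enables scale to zero, so the service incurs
# no cost while no requests arrive. Ollama listens on port 11434 by default.
gcloud run deploy ollama-server \
  --image=ollama/ollama \
  --port=11434 \
  --gpu=1 \
  --gpu-type=nvidia-l4 \
  --cpu=8 \
  --memory=32Gi \
  --no-cpu-throttling \
  --min-instances=0 \
  --max-instances=1 \
  --region=us-central1
```

Keeping `--max-instances` low caps the GPU spend during traffic spikes, while `--min-instances=0` lets the platform tear the instance down after idle periods; the trade-off is a cold start (container pull plus model load) on the first request after scale-down.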
Syllabus
Scale to 0 LLM inference: Cost efficient open model deployment on serverless GPUs by Wietse Venema
Taught by
Devoxx