YouTube

Run LLMs Locally with Docker Model Runner - Simplify AI Dev with Docker Desktop

Kubesimplify via YouTube

Overview

Learn how to run and test large language models (LLMs) locally using Docker Model Runner, a new feature in Docker Desktop 4.40. This 28-minute tutorial from Kubesimplify, featuring guest speaker Kevin Wittek, walks through the complete workflow for local AI development. Discover why Docker created the tool and how it simplifies the development process, enables GPU acceleration on Apple silicon, packages models as OCI artifacts, and integrates with Hugging Face. The video covers the feature's current capabilities and future roadmap, showing how it streamlines the local development loop for GenAI applications and LLM experimentation. Access the official documentation to try Docker Model Runner yourself and enhance your AI development workflow.
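To make the workflow concrete, here is a minimal sketch of talking to Docker Model Runner from the host once a model is running. It assumes the runner's OpenAI-compatible API is exposed on the host at localhost:12434 and that a model named ai/llama3.2 has been pulled — both are assumptions based on Docker Desktop 4.40 defaults, not details confirmed by the video; adjust the endpoint and model name for your setup.

```python
import json
import urllib.request

# Assumed Docker Desktop 4.40 defaults -- verify against your installation
# and the official Docker Model Runner documentation.
BASE_URL = "http://localhost:12434/engines/v1"
MODEL = "ai/llama3.2"  # hypothetical model name; check what you have pulled


def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request for the local runner."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def ask(prompt: str) -> str:
    """Send the request and return the model's reply.

    Requires Docker Model Runner to be running locally with the model loaded.
    """
    with urllib.request.urlopen(build_chat_request(prompt)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the API mirrors the OpenAI chat-completions shape, existing GenAI tooling can often be pointed at the local endpoint by changing only the base URL, which is the "local development loop" benefit the video highlights.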

Syllabus

Run LLMs Locally with Docker Model Runner | Simplify AI Dev with Docker Desktop

Taught by

Kubesimplify
