Run LLMs Locally with Docker Model Runner - Simplify AI Dev with Docker Desktop
Kubesimplify via YouTube
Overview
Learn how to run and test large language models (LLMs) locally using Docker Model Runner, a new feature available in Docker Desktop 4.40. This 28-minute tutorial from Kubesimplify features guest speaker Kevin Wittek and walks through the complete workflow for local AI development. Discover why Docker created this tool and how it simplifies the development process, enables GPU acceleration on Apple silicon, packages models as OCI artifacts, and integrates with Hugging Face. The video covers the current capabilities and future roadmap of this feature, which streamlines the local development loop for GenAI applications and LLM experimentation. Access the official documentation to try Docker Model Runner yourself and enhance your AI development workflow.
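To give a flavor of the workflow the video demonstrates: Docker Model Runner exposes an OpenAI-compatible HTTP API for locally pulled models. The sketch below builds such a chat-completion request from the host. The base URL, port, and model name (`ai/smollm2`) are assumptions for illustration; verify the actual endpoint and available models against the official Docker Model Runner documentation for your install.

```python
# Minimal sketch of calling a locally running model through Docker Model
# Runner's OpenAI-compatible API. The port (12434), path, and model name
# below are assumptions -- check the Docker Model Runner docs for the
# values on your machine.
import json
import urllib.request

BASE_URL = "http://localhost:12434/engines/v1"  # assumed default host endpoint

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request (constructed, not sent)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With Docker Model Runner enabled, this request could be sent via
# urllib.request.urlopen(req) to get a completion from the local model.
req = build_chat_request("ai/smollm2", "Say hello in one word.")
print(req.full_url)
```

Because the API mirrors OpenAI's chat-completions shape, existing OpenAI client libraries can typically be pointed at the local endpoint by overriding their base URL, which is what makes local testing a drop-in step in the development loop.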
Syllabus
Run LLMs Locally with Docker Model Runner | Simplify AI Dev with Docker Desktop
Taught by
Kubesimplify