Run LLMs Locally with Docker Model Runner - Simplify AI Dev with Docker Desktop
Kubesimplify via YouTube
Overview
Learn how to run and test large language models (LLMs) locally using Docker Model Runner, a new feature available in Docker Desktop 4.40. This 28-minute tutorial from Kubesimplify features guest speaker Kevin Wittek and walks through the complete workflow for local AI development. Discover why Docker created this tool and how it simplifies the development process: it enables GPU acceleration on Apple silicon, packages models as OCI artifacts, and integrates with Hugging Face. The video covers the feature's current capabilities and future roadmap, showing how it streamlines the local development loop for GenAI applications and LLM experimentation. Access the official documentation to try Docker Model Runner yourself and enhance your AI development workflow.
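As a rough illustration of the local development loop the video describes, here is a minimal Python sketch that talks to a locally running model through an OpenAI-compatible chat-completions endpoint. The endpoint URL, port, and model name below are assumptions for illustration, not confirmed details from the video; check the official Docker Model Runner documentation for the actual values on your setup.

```python
import json
import urllib.request

# Assumed local endpoint of an OpenAI-compatible chat-completions API
# exposed by Docker Model Runner (host, port, and path are assumptions).
ENDPOINT = "http://localhost:12434/engines/v1/chat/completions"


def build_chat_request(model: str, prompt: str) -> bytes:
    """Build an OpenAI-style chat-completions payload as JSON bytes."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload).encode("utf-8")


def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local model and return the reply text."""
    req = urllib.request.Request(
        ENDPOINT,
        data=build_chat_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses put the reply under choices[0].message.content
    return body["choices"][0]["message"]["content"]
```

Because the API shape is OpenAI-compatible, existing GenAI application code can often be pointed at the local endpoint simply by swapping the base URL, which is the core of the faster local development loop the video highlights.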
Syllabus
Run LLMs Locally with Docker Model Runner | Simplify AI Dev with Docker Desktop
Taught by
Kubesimplify