Overview
Learn to run large language models locally on your machine using Docker in this 14-minute tutorial. Discover the benefits of local AI deployment, including eliminating per-request inference costs and keeping complete control over your models and data. Follow step-by-step instructions to install Docker Desktop, enable the features needed for running AI models, and download models for local use.

Explore how to build a basic application that talks to the local Docker endpoint using API calls similar to those of OpenAI or Google's services. Learn how to configure Docker for good model performance, understand the model download and execution workflow, and develop AI-powered applications that run entirely on your own infrastructure. The tutorial also covers troubleshooting techniques and optimization tips for running models efficiently without relying on cloud services, and includes practical demonstrations plus a downloadable demo application, making it ideal for developers building cost-effective, privacy-focused AI applications.
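The "API calls similar to OpenAI" approach described above can be sketched as follows. This is a minimal illustration, not the tutorial's actual demo app: the base URL and model tag below are assumptions (Docker Desktop's Model Runner commonly exposes an OpenAI-compatible HTTP endpoint on the host; check your own Docker settings for the exact address and the name of the model you pulled).

```python
import json
import urllib.request

# Assumed values -- adjust to match your Docker Desktop configuration
# and whichever model you have downloaded locally.
BASE_URL = "http://localhost:12434/engines/v1"  # assumed local endpoint
MODEL = "ai/llama3.2"                           # hypothetical model tag


def build_chat_request(prompt: str, model: str = MODEL) -> dict:
    """Build an OpenAI-style chat-completions payload for the local endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(prompt: str) -> str:
    """POST the request to the locally running model and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Usage (requires a model running locally behind BASE_URL):
# reply = ask("Summarize Docker Desktop in one sentence.")
```

Because the request and response shapes mirror the OpenAI chat-completions format, an app written against a cloud provider can often be pointed at the local endpoint by changing only the base URL and model name.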
Syllabus
00:00 Introduction to Local AI Models
01:23 Setting Up Docker Desktop
02:10 Configuring Docker for AI Models
03:38 Downloading and Running AI Models
07:26 Developing AI-Powered Apps
10:05 Advanced Tips and Troubleshooting
13:42 Conclusion and Final Thoughts
Taught by
MattVidPro AI