Overview
This 19-minute tutorial demonstrates how to set up and run private GPT models locally on your own computer without relying on paid services. Follow a comprehensive walkthrough covering Ollama installation, model selection (including Deepseek, Gemma, and QwQ), and implementation on GPU or CPU with Docker support.

Learn to optimize your system for local LLMs, create a secure offline AI assistant, manage different models while ensuring data privacy, and troubleshoot common issues across Windows, Linux, and Mac. The step-by-step process includes installing Ollama, downloading appropriate models (starting with smaller ones to test compatibility), setting up Docker, and implementing OpenWebUI for a user-friendly interface.

Perfect for engineers, developers, and AI enthusiasts looking to enhance productivity while maintaining data security through locally hosted large language models.
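The install-and-run workflow described above can be sketched with Ollama's standard CLI. This is a minimal sketch, not the exact commands from the video; the model tags shown (`gemma3`, `deepseek-r1`) are examples, and the curl installer applies to Linux (Windows and Mac use the installer from ollama.com):

```shell
# Install Ollama (Linux; on Windows/Mac, download the installer from ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a smaller model first to confirm your hardware can run it
ollama pull gemma3

# Start an interactive chat session in the terminal
ollama run gemma3

# Manage installed models: list what you have, remove what you don't need
ollama list
ollama rm deepseek-r1
```

Starting with a small model, as the tutorial suggests, is a cheap way to verify GPU/CPU compatibility before downloading multi-gigabyte weights.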
Syllabus
0:00 – Introduction & Overview
0:44 – Ollama overview
2:31 – Installing and Running Ollama
4:20 – Install LLM with Ollama
6:50 – Chatting with local LLM model
7:52 – OpenWebUI overview
8:45 – Installing and running OpenWebUI with Docker
10:55 – Play around with Ollama and OpenWebUI
14:05 – Chatting with Gemma 3 model
17:45 – Concluding thoughts
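The OpenWebUI-with-Docker step covered in the syllabus is typically a single `docker run` command. This is a sketch based on OpenWebUI's documented quick-start, not necessarily the exact invocation used in the video; port and volume names are conventional defaults:

```shell
# Run OpenWebUI in Docker, connecting to the Ollama server on the host.
# --add-host lets the container reach Ollama at host.docker.internal:11434;
# the named volume persists chats and settings across container restarts.
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

Once the container is up, the chat interface is served at http://localhost:3000 in a browser, with all inference still happening locally through Ollama.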
Taught by
Python Lessons