Overview
Learn to build a complete AI self-hosting stack using Open WebUI and Ollama in this comprehensive 23-minute tutorial. Discover how to set up and configure Ollama for local AI model hosting, then integrate it with Open WebUI to create a powerful interface with RAG (Retrieval-Augmented Generation) capabilities and API functionality. Walk through the complete setup process, including Ollama installation, Open WebUI configuration, and model installation directly from the interface. Explore practical applications by building an AI chatbot with self-hosted models, implementing document-based RAG for enhanced responses, and creating APIs for your applications. Compare local versus cloud deployment options, understand hardware requirements, and navigate the Open WebUI dashboard for model management. Master both local development and cloud deployment scenarios, including adding and removing models and implementing API endpoints in cloud environments for scalable AI solutions.
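As a rough illustration of the "API for your app" idea covered in the tutorial (this sketch is not taken from the video), the Python snippet below queries a self-hosted model through Ollama's default local REST endpoint on port 11434. The model name, prompt, and timeout are placeholder assumptions; substitute whatever model you have pulled.

```python
# Minimal sketch: calling a locally hosted Ollama model from application code.
# Assumes Ollama is running on its default port (11434) and that a model
# named "llama3" has already been pulled -- both are assumptions, not
# specifics from the tutorial.
import requests

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    # Ollama's /api/generate endpoint returns the full completion in one
    # JSON payload when streaming is disabled.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("Explain what RAG is in one sentence."))
```

The same pattern extends to a cloud deployment: point the request at your hosted instance's URL instead of localhost and add whatever authentication your setup requires.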
Syllabus
Intro
Ollama / LM Studio / llama.cpp
Open WebUI RAG, API
AI Chatbot with self-hosted AI example
Ollama setup
Open WebUI setup
Install model from Open WebUI
Chat
RAG documents
API for your app
Local vs Cloud
Which specs?
Dashboard
Open WebUI dashboard in cloud
Add & Remove models
API in cloud
Taught by
ByteGrad