YouTube

Set Up Your Own LLM Server at Home - Run Local AI Models with Ollama and NVIDIA DGX Spark

Jeff Heaton via YouTube

Overview

Learn to set up an Ollama server on the NVIDIA DGX Spark to run large language models locally, including models with 70B+ parameters. Deploy Ollama in Docker using NVIDIA's DGX Spark notebooks, and access the Ollama WebUI remotely for an intuitive chat interface. Connect to your local API with OpenAI-compatible Python code, and manage large models such as DeepSeek R1 70B while balancing performance against context-length requirements. The configuration process works on nearly any Unix-based system, not just the DGX Spark, so you can build a fully functional local LLM environment for experimentation or for integration into your own applications. Gain practical experience with Docker deployment, remote access setup, API integration, and the model management techniques essential for running large language models on local hardware.
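As a rough sketch of the API-integration step described above: Ollama exposes an OpenAI-compatible endpoint at `/v1/chat/completions` on its default port 11434, so plain Python can talk to it without extra dependencies. The host URL and the `deepseek-r1:70b` model tag below are assumptions for illustration; substitute whatever model you have pulled on your own server.

```python
import json
import urllib.request

# Assumed local server address: Ollama's default port is 11434, and it
# serves an OpenAI-compatible chat endpoint under /v1.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(model: str, prompt: str) -> str:
    """POST the payload to the local Ollama server and return the reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses nest the text under choices[0].message.content.
    return body["choices"][0]["message"]["content"]


# Example usage (requires a running Ollama server with the model pulled):
#   reply = chat("deepseek-r1:70b", "Summarize what a KV cache does.")
#   print(reply)
```

Because the endpoint is OpenAI-compatible, the official `openai` Python package also works: point its `base_url` at `http://localhost:11434/v1` and use any placeholder API key.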

Syllabus

Set Up Your Own LLM Server at Home | Run Local AI Models with Ollama + NVIDIA DGX Spark

Taught by

Jeff Heaton

Reviews

Start your review of Set Up Your Own LLM Server at Home - Run Local AI Models with Ollama and NVIDIA DGX Spark
