
YouTube

Why I Don't Run AI Models Locally - Only in Docker or Cloud

ByteGrad via YouTube

Overview

Learn about three approaches to running AI models in development environments in this 18-minute tutorial. Explore the pros and cons of running AI models locally with Ollama, containerizing them with Docker, and deploying them in cloud environments. Discover why cloud-based solutions may be preferable for many developers, especially when working with resource-intensive AI models. Follow along as the instructor demonstrates creating an AI template using Ollama and a Llama model, setting up an AI workspace, and leveraging the benefits of a Cloud Development Environment (CDE) for more efficient AI development workflows. Gain practical insights into choosing the right deployment strategy for your AI projects based on factors such as resource requirements, scalability, and team collaboration needs.
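To make the Docker option concrete: a minimal sketch of running Ollama in a container might look like the compose file below. This is an illustrative configuration based on the official `ollama/ollama` image, not the exact setup shown in the video; the volume name and port mapping are conventional defaults.

```yaml
# docker-compose.yml — illustrative sketch, not the instructor's exact config
services:
  ollama:
    image: ollama/ollama          # official Ollama image
    ports:
      - "11434:11434"             # Ollama's default API port
    volumes:
      - ollama:/root/.ollama      # persist downloaded model weights
volumes:
  ollama:
```

After `docker compose up -d`, a model can be pulled and run inside the container with `docker exec -it <container> ollama run llama3` (model name assumed for illustration), keeping the model and its dependencies out of the host environment.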

Syllabus

00:00 Local vs Docker vs Cloud
01:19 Option 1: run AI locally with Ollama
02:17 Option 2: run AI in Docker
03:12 Option 3: run AI in cloud
06:38 Create AI-template Ollama, Llama model
11:10 Create AI-workspace
16:51 Cloud Development Environment CDE benefits

Taught by

ByteGrad


