Run LLMs with Docker Model Runner - No Python, PyTorch, or CUDA Required

KodeKloud via YouTube


Class Central Classrooms

YouTube videos curated by Class Central.

Classroom Contents

  1. Introduction: The LLM Dependency Challenge
  2. Dependency Hell Explained
  3. How Docker Solves Dependency Management
  4. Understanding Inference Engines
  5. DevOps and MLOps Benefits
  6. Free Lab Introduction
  7. Task 1: Installing Docker Model Plugin
  8. Task 2: Pulling AI Models as OCI Artifacts
  9. Task 3: Testing Models Interactively
  10. Task 4: Starting Background Inference Service
  11. Task 5: Querying via OpenAI API
  12. Task 6: Creating Custom Personas
  13. Task 7: Packaging for Offline Deployment
  14. Conclusion and Next Steps
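Tasks 4–6 above come down to POSTing an OpenAI-style chat request to Model Runner's local OpenAI-compatible endpoint, with the "persona" expressed as the system message. A minimal sketch of that request body — the port 12434 and the `ai/smollm2` model tag are assumptions for illustration; check `docker model status` and `docker model list` on your own machine:

```python
import json

# Assumed defaults for illustration only -- verify locally.
BASE_URL = "http://localhost:12434/engines/v1"   # Model Runner's OpenAI-compatible API
MODEL = "ai/smollm2"                             # a model tag pulled via `docker model pull`

def chat_payload(prompt: str, persona: str = "You are a helpful assistant.") -> str:
    """Build the JSON body for a POST to {BASE_URL}/chat/completions.

    The system message carries the custom persona (Task 6); the user
    message carries the actual query (Task 5).
    """
    body = {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": persona},
            {"role": "user", "content": prompt},
        ],
    }
    return json.dumps(body)

if __name__ == "__main__":
    payload = chat_payload("What is an OCI artifact?")
    print(f"POST {BASE_URL}/chat/completions")
    print(payload)
```

The same body can be sent with any HTTP client (e.g. `curl -d "$payload" $BASE_URL/chat/completions`); since the endpoint mirrors the OpenAI API shape, existing OpenAI client libraries can be pointed at `BASE_URL` unchanged.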
