YouTube videos curated by Class Central.
Classroom Contents
Run LLMs with Docker Model Runner - No Python, PyTorch, or CUDA Required
1. Introduction: The LLM Dependency Challenge
2. Dependency Hell Explained
3. How Docker Solves Dependency Management
4. Understanding Inference Engines
5. DevOps and MLOps Benefits
6. Free Lab Introduction
7. Task 1: Installing Docker Model Plugin
8. Task 2: Pulling AI Models as OCI Artifacts
9. Task 3: Testing Models Interactively
10. Task 4: Starting Background Inference Service
11. Task 5: Querying via OpenAI API
12. Task 6: Creating Custom Personas
13. Task 7: Packaging for Offline Deployment
14. Conclusion and Next Steps
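
The lab portion of the course revolves around the `docker model` CLI plugin. A minimal sketch of the pull-and-test flow from Tasks 2 and 3, assuming the plugin is installed and using `ai/smollm2` as a stand-in for whichever model the lab actually uses:

```bash
# Task 2: pull a model from Docker Hub; Model Runner stores it as an
# OCI artifact, so it moves through registries like any container image.
docker model pull ai/smollm2

# Confirm the pull; models are tracked separately from container images.
docker model list

# Task 3: smoke-test the model with a one-shot prompt.
docker model run ai/smollm2 "Summarize what an inference engine does."
```

Running `docker model run` without a trailing prompt should instead drop into an interactive chat session, which is the mode Task 3 exercises.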
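With the background inference service running (Task 4), Model Runner exposes an OpenAI-compatible chat-completions endpoint, so Task 5 needs nothing beyond curl. The host port (12434) and the `/engines/v1` path below are assumed defaults for TCP host access; check your own Model Runner configuration. The system message is the same hook Task 6 uses to define a custom persona:

```bash
# Task 5: query the OpenAI-compatible endpoint. Port 12434 and the
# /engines/v1 path are assumptions; adjust to your Model Runner setup.
curl -s http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ai/smollm2",
        "messages": [
          {"role": "system", "content": "You are a terse DevOps assistant."},
          {"role": "user", "content": "Why run LLMs locally instead of via a hosted API?"}
        ]
      }'
```

Because the wire format matches OpenAI's, an existing OpenAI client can target this endpoint by overriding its base URL, which is what makes the offline package from Task 7 a drop-in replacement for a hosted API.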