
Running AI Models Locally with HuggingFace and LangChain

Tech with Tim via YouTube

Overview

Learn to download and run thousands of AI models locally on your computer using HuggingFace and LangChain in this comprehensive tutorial video. Master the process of setting up your environment, managing dependencies, and integrating your HuggingFace token for model access. Explore practical demonstrations of running transformer models on a GPU, selecting different model architectures, and implementing text generation and question-answering systems. Follow along with detailed code examples and step-by-step instructions for using the PyCharm IDE, the CUDA toolkit, and the essential Python libraries. Gain hands-on experience with free AI models while learning the fundamentals of HuggingFace and LangChain integration through practical examples and real-world applications.
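The core workflow the video covers can be sketched as follows. This is a minimal illustration, not the video's exact code: the model id (`gpt2`), the `text-generation` task, and the generation settings are illustrative assumptions, and the sketch uses the `langchain-huggingface` package's `HuggingFacePipeline` wrapper.

```python
# Minimal sketch of running a HuggingFace model locally via LangChain.
# Assumes `pip install langchain-huggingface transformers torch`.

def build_prompt(question: str) -> str:
    """Format a question into a simple QA-style prompt string."""
    return f"Question: {question}\nAnswer:"

def load_local_llm(model_id: str = "gpt2"):
    """Download the model once from the HuggingFace Hub, then run it
    entirely on the local machine (pass device=0 in model_kwargs or use
    a CUDA-enabled torch build to run on GPU, as shown in the video)."""
    # Imported here so the helper above works without the package installed.
    from langchain_huggingface import HuggingFacePipeline
    return HuggingFacePipeline.from_model_id(
        model_id=model_id,           # illustrative; any Hub text-gen model works
        task="text-generation",
        pipeline_kwargs={"max_new_tokens": 50},
    )

if __name__ == "__main__":
    llm = load_local_llm()
    print(llm.invoke(build_prompt("What is LangChain?")))
```

Because the model weights are cached locally after the first download, subsequent runs need no network access and no paid API key; the HuggingFace token is only needed for gated models.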

Syllabus

00:00 | Overview
00:25 | HuggingFace & LangChain Explained
02:03 | Environment Setup
03:07 | Virtual Environment & Dependencies
06:02 | Adding Your HuggingFace Token
07:30 | Using a Simple Transformer Model
10:35 | Running on GPU
14:00 | Selecting Different Models
16:54 | Example 1 - Text Generation
19:55 | Example 2 - Text Question & Answer

Taught by

Tech With Tim

Reviews

