Running Open Source LLMs Locally on RTX 5090 - Performance and Capabilities
MattVidPro AI via YouTube
Overview
Syllabus
00:00 Introduction to Running LLMs Locally
01:27 Setting Up LM Studio
01:58 Testing DeepSeek R1 on RTX 5090
02:42 Exploring Model Settings and Performance
03:48 Generating Content with DeepSeek R1
06:19 Loading Larger Models
09:44 Pushing the Limits with 32B Models
12:47 Reflections on Local AI Performance
16:55 Introduction to Gemma 3 27B
17:15 Setting Up the Model
18:00 First Impressions and Performance
18:47 Roasting with Gemma
22:48 Analyzing Memes and Humor
27:53 Exploring Smaller LLMs
30:49 Conclusion and Future Plans
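The workflow covered in the video (loading a model in LM Studio and chatting with it) can also be driven programmatically: LM Studio exposes an OpenAI-compatible local server (by default at http://localhost:1234/v1). A minimal sketch, assuming that server is enabled and a model is loaded; the model name used here is illustrative, not a guarantee of what the video loads:

```python
# Minimal sketch: querying a model loaded in LM Studio through its
# OpenAI-compatible local server (default http://localhost:1234/v1).
# Assumes the local server is enabled in LM Studio; the model name
# below is a hypothetical example of a DeepSeek R1 distill.
import json
import urllib.request


def build_chat_request(prompt, model="deepseek-r1-distill-qwen-32b", temperature=0.7):
    """Build the JSON body for a /v1/chat/completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }


def ask_local_llm(prompt, base_url="http://localhost:1234/v1"):
    """Send a chat request to the locally running server and return the reply text."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read())
    return data["choices"][0]["message"]["content"]
```

With LM Studio's server running, `ask_local_llm("Summarize this meme")` would return the model's reply entirely from local hardware, no cloud API involved.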
Taught by
MattVidPro AI