Running Open Source LLMs Locally on RTX 5090 - Performance and Capabilities


MattVidPro AI via YouTube


YouTube videos curated by Class Central.

Classroom Contents


  1. 00:00 Introduction to Running LLMs Locally
  2. 01:27 Setting Up LM Studio
  3. 01:58 Testing DeepSeek R1 on RTX 5090
  4. 02:42 Exploring Model Settings and Performance
  5. 03:48 Generating Content with DeepSeek R1
  6. 06:19 Loading Larger Models
  7. 09:44 Pushing the Limits with 32B Models
  8. 12:47 Reflections on Local AI Performance
  9. 16:55 Introduction to Gemma 3 27B
  10. 17:15 Setting Up the Model
  11. 18:00 First Impressions and Performance
  12. 18:47 Roasting with Gemma
  13. 22:48 Analyzing Memes and Humor
  14. 27:53 Exploring Smaller LLMs
  15. 30:49 Conclusion and Future Plans
