
YouTube

How RAG Finds Answers in Millions of Documents - Embeddings, Vector Databases, LangChain and Supabase

Venelin Valkov via YouTube

Overview

Learn to build a scalable semantic search system that turns text documents into an AI-searchable knowledge base using embeddings, vector databases, and Retrieval-Augmented Generation (RAG). Discover what embeddings are and how they let machines capture the meaning of text, illustrated with a practical toy example. Learn to use pre-trained embedding models with LangChain and how to choose the right embedding model for your use case. Explore the debate around whether you need a vector database at all, then get hands-on experience installing Supabase, setting it up, and integrating it with LangChain for vector storage and retrieval. Finally, apply metadata filtering to improve search precision and learn how to scale the solution to handle millions of documents efficiently.
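The core idea behind the embeddings portion of the course can be sketched in a few lines: each text maps to a vector, and semantically similar texts end up with similar vectors, which can be compared with cosine similarity. The hand-crafted 3-dimensional vectors below are invented for illustration; real embedding models (such as those loaded through LangChain) produce hundreds of learned dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 means more similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" (dimensions could be read roughly as: pet-ness,
# vehicle-ness, food-ness). A real model learns these from data.
embeddings = {
    "cat":   [0.9, 0.1, 0.2],
    "dog":   [0.8, 0.2, 0.3],
    "truck": [0.1, 0.9, 0.1],
}

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))    # high: similar meaning
print(cosine_similarity(embeddings["cat"], embeddings["truck"]))  # low: different meaning
```

Semantic search is then just "embed the query, return the stored texts whose vectors have the highest cosine similarity."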

Syllabus

00:00 - What are Embeddings?
02:00 - Toy example
05:56 - Using pre-trained embedding model with LangChain
09:28 - How to choose embedding model
11:01 - Do you need a vector database?
12:45 - Supabase install and setup
15:16 - Use Supabase vectors with LangChain
18:47 - Metadata filtering
20:22 - Conclusion
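The vector-database and metadata-filtering steps in the syllabus can be mimicked in miniature: a vector store keeps rows of (text, embedding, metadata) and answers "nearest vectors to this query," optionally restricted by metadata. The in-memory class below (all names hypothetical, not the LangChain or Supabase API) sketches what Supabase's pgvector-backed tables do at scale:

```python
import math

def cosine_similarity(a, b):
    """Similarity score used to rank stored rows against a query vector."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

class ToyVectorStore:
    """In-memory stand-in for a vector database table: each row holds a
    document, its embedding, and a metadata dict."""

    def __init__(self):
        self.rows = []

    def add(self, text, embedding, metadata):
        self.rows.append({"text": text, "embedding": embedding, "metadata": metadata})

    def similarity_search(self, query_embedding, k=1, filter=None):
        # Metadata filtering: keep only rows matching every key/value in
        # `filter`, mirroring a WHERE clause on a metadata column.
        candidates = [
            row for row in self.rows
            if filter is None
            or all(row["metadata"].get(key) == value for key, value in filter.items())
        ]
        # Rank the surviving rows by similarity to the query vector.
        candidates.sort(
            key=lambda row: cosine_similarity(query_embedding, row["embedding"]),
            reverse=True,
        )
        return [row["text"] for row in candidates[:k]]

store = ToyVectorStore()
store.add("Cats purr.",   [0.9, 0.1], {"source": "pets"})
store.add("Dogs bark.",   [0.8, 0.3], {"source": "pets"})
store.add("Trucks haul.", [0.1, 0.9], {"source": "vehicles"})

query = [0.9, 0.1]  # pretend this came from embedding a user's question
print(store.similarity_search(query, k=1))                                 # best match overall
print(store.similarity_search(query, k=1, filter={"source": "vehicles"}))  # restricted by metadata
```

A linear scan like this breaks down at millions of documents, which is where a real vector database with approximate nearest-neighbor indexing (the scaling topic of the video) comes in.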

Taught by

Venelin Valkov

