Overview
Learn how to improve the accuracy and reliability of LLM-based apps by implementing Retrieval-Augmented Generation (RAG) using embeddings and a vector database.
Syllabus
- Your next big step in AI engineering
- What are embeddings?
- Set up environment variables
- Create an embedding
- Challenge: Pair text with embedding
- Vector databases
- Supabase Dependency Upgrade Warning
- Set up your vector database
- Store vector embeddings
- Semantic search
- Query embeddings using similarity search
- Create a conversational response using OpenAI
- Chunking text from documents
- Challenge: Split text, get vectors, insert into Supabase
- Error handling
- Query database and manage multiple matches
- AI chatbot proof of concept
- Solo Project: PopChoice
- Want to become a Scrimbassador?
- You made it to the finish line!
- How to Utilize Your Certificate
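The core pipeline this syllabus walks through — chunk text from documents, create embeddings, store them, and query with a similarity search — can be sketched roughly as follows. This is a minimal offline sketch, not the course's actual code: the `embed` function is a hypothetical stand-in for a real embedding model (such as one from OpenAI's embeddings API), and the in-memory `store` array stands in for a Supabase vector table.

```javascript
// Hypothetical toy "embedding": keyword-group counts stand in for a real
// embedding model so this sketch runs offline with no API key.
function embed(text) {
  const groups = [/movie|film/gi, /music|song/gi, /food|pizza/gi];
  return groups.map((re) => (text.match(re) || []).length);
}

// Cosine similarity: the metric commonly used for semantic search.
function cosine(a, b) {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const mag = (v) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (mag(a) * mag(b) || 1);
}

// Split a document into fixed-size chunks (real splitters usually add
// overlap and respect sentence boundaries).
function chunk(text, size = 50) {
  const chunks = [];
  for (let i = 0; i < text.length; i += size) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}

// "Store vector embeddings": pair each chunk with its embedding vector.
const docs = [
  "A film about space travel. A movie classic.",
  "A song about pizza and food.",
];
const store = docs
  .flatMap((d) => chunk(d))
  .map((content) => ({ content, embedding: embed(content) }));

// "Query embeddings using similarity search": embed the query, then rank
// stored chunks by cosine similarity and return the top matches.
function search(query, topK = 1) {
  const q = embed(query);
  return store
    .map((row) => ({ ...row, score: cosine(q, row.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}

console.log(search("recommend a movie")[0].content);
// → "A film about space travel. A movie classic."
```

In the course itself the toy `embed` call is replaced by a real embeddings API and the in-memory search by a database-side similarity query; the matched chunks are then passed to a chat model to produce the conversational response.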