Overview
Dive into a comprehensive 16-minute tutorial on implementing Retrieval Augmented Generation (RAG) from scratch using Python and Ollama. Learn how to parse and manipulate documents, explore embeddings as numeric representations of meaning, and implement an effective method for surfacing relevant document sections based on a query. Follow along to build a script that lets a locally hosted language model answer questions about your own documents. Gain insights into environment setup, function implementation, embedding techniques, caching strategies, and cosine similarity for comparing embeddings. Explore potential improvements and learn how to pass retrieved context to your LLM. By the end, you'll have a solid foundation for building RAG systems and enhancing LLM interactions with custom datasets.
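The comparison step the overview mentions relies on cosine similarity between embedding vectors. As a rough sketch of the idea (the tutorial's own helper may differ in name and use NumPy instead), a pure-Python version looks like this:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity = dot(a, b) / (|a| * |b|), ranging from -1 to 1.
    # Vectors pointing the same direction score near 1, unrelated ones near 0.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0  # guard against zero-length vectors
    return dot / (norm_a * norm_b)
```

Because it divides by the vector magnitudes, the score depends only on direction, not length, which is why it works well for comparing embeddings of texts of different sizes.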
Syllabus
- Intro
- Environment Setup
- Function review
- Source Document
- Starting the project
- parse_file
- Understanding embeddings
- Implementing embeddings
- Timing embedding
- Caching embeddings
- Prompt embedding
- Cosine similarity for embedding comparison
- Brainstorming improvements
- Giving context to our LLM
- CLI input
- Next steps
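The flow the syllabus outlines (parse the source document, embed each chunk, cache the embeddings, then rank chunks against the prompt's embedding) could be sketched as follows. This is a minimal illustration, not the tutorial's script: `embed` here is a toy character-frequency placeholder standing in for a call to Ollama's embedding endpoint, and except for `parse_file` (which the syllabus names) all identifiers are hypothetical:

```python
import json
import math
import os

def parse_file(path):
    # Split the source document into paragraph-sized chunks (one per blank line).
    with open(path) as f:
        text = f.read()
    return [p.strip() for p in text.split("\n\n") if p.strip()]

def embed(text):
    # Toy placeholder embedding: a 26-dim letter-frequency vector.
    # A real script would call Ollama's embedding API here instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity, with a guard for zero-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def load_or_build_embeddings(chunks, cache_path="embeddings.json"):
    # Cache embeddings on disk so repeated runs skip the slow embedding step.
    if os.path.exists(cache_path):
        with open(cache_path) as f:
            cached = json.load(f)
        if len(cached) == len(chunks):
            return cached
    embeddings = [embed(c) for c in chunks]
    with open(cache_path, "w") as f:
        json.dump(embeddings, f)
    return embeddings

def top_chunks(query, chunks, embeddings, k=3):
    # Rank chunks by similarity to the query embedding; return the k best.
    q = embed(query)
    scored = sorted(zip(chunks, embeddings),
                    key=lambda ce: cosine(q, ce[1]), reverse=True)
    return [c for c, _ in scored[:k]]
```

The chunks returned by `top_chunks` would then be prepended to the user's question as context in the prompt sent to the local model, which is the "giving context to our LLM" step.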
Taught by
Decoder