Overview
Get ready to put your generative AI engineering skills into practice! In this hands-on guided project, you’ll apply the knowledge and techniques gained throughout the previous courses in the program to build your own real-world generative AI application.
You’ll begin by filling in key knowledge gaps, such as using LangChain’s document loaders to ingest documents from various sources. You’ll then explore and apply text-splitting strategies to improve model responsiveness and use IBM watsonx to embed documents. These embeddings will be stored in a vector database, which you’ll connect to LangChain to develop an effective document retriever.
As your project progresses, you’ll implement retrieval-augmented generation (RAG) to enhance retrieval accuracy, construct a question-answering bot, and build a simple Gradio interface for interactive model responses.
By the end of the course, you’ll have a complete, portfolio-ready AI application that showcases your skills and serves as compelling evidence of your ability to engineer real-world generative AI solutions. If you're ready to elevate your career with hands-on experience, enroll today and take the next step toward becoming a confident AI engineer.
Syllabus
- Document Loader Using LangChain
- In this module, you will explore essential techniques for loading, preparing, and structuring documents to build effective retrieval-augmented generation (RAG) applications using LangChain. You will learn how to use LangChain’s document loaders to import content from various sources, apply best practices for document ingestion, and implement text-splitting strategies to enhance model responsiveness. You will also examine when and how to incorporate entire documents into prompts for optimal output. Through hands-on labs, you’ll gain practical experience by loading documents and applying text-splitting techniques in real-world scenarios.
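To illustrate the chunking idea behind text splitters, here is a minimal pure-Python sketch (not LangChain's implementation — the course uses LangChain's splitter classes, which add recursive, separator-aware logic on top of this basic fixed-size-with-overlap scheme):

```python
def split_text(text: str, chunk_size: int = 20, overlap: int = 5) -> list[str]:
    """Split text into fixed-size chunks, overlapping neighbours so that
    context straddling a boundary is not lost to the retriever."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

doc = "LangChain loaders ingest documents; splitters chunk them for retrieval."
chunks = split_text(doc, chunk_size=30, overlap=10)
# Each chunk repeats the last 10 characters of the previous one.
print(len(chunks))
```

Overlap is the key trade-off the module explores: larger overlap preserves more cross-boundary context for the model, at the cost of storing and embedding redundant text.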
- RAG Using LangChain
- In this module, you will learn how to embed documents using watsonx’s embedding model and store these embeddings using vector databases, such as Chroma DB and FAISS. You will explore the role of embeddings in RAG pipelines, configure vector stores to manage these embeddings, and use LangChain to preprocess documents for embedding. Additionally, you will gain hands-on experience with advanced retrievers in LangChain, such as Vector Store-Based, Multi-Query, Self-Query, and Parent Document retrievers, to extract relevant information from documents efficiently. Finally, you’ll compare RAG-based approaches with fine-tuning using InstructLab to evaluate their trade-offs and applicability.
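Conceptually, a vector store pairs each chunk with its embedding and ranks chunks by similarity to the query embedding. The toy sketch below uses a bag-of-words counter as a stand-in for a real embedding model (such as watsonx's) and a plain list as a stand-in for Chroma DB or FAISS; the document texts are invented for illustration:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real pipeline calls an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# In-memory "vector store": (embedding, original text) pairs.
docs = [
    "vector databases store embeddings",
    "gradio builds simple web interfaces",
    "langchain chains llm calls together",
]
store = [(embed(d), d) for d in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k stored texts most similar to the query."""
    q = embed(query)
    ranked = sorted(store, key=lambda pair: cosine(q, pair[0]), reverse=True)
    return [text for _, text in ranked[:k]]

print(retrieve("where are embeddings stored?"))
```

The advanced retrievers covered in the module (Multi-Query, Self-Query, Parent Document) all build on this same similarity-search core, adding query rewriting, metadata filtering, or chunk-to-parent resolution around it.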
- Create a QA Bot to Read Your Document
- In this module, you will combine all the components you’ve learned to build a complete generative AI application using LangChain and RAG. You’ll learn how to implement RAG to improve information retrieval, set up user interfaces using Gradio, and construct a question-answering bot that leverages LLMs and LangChain to respond to queries from loaded documents. Through hands-on labs, you’ll practice building a Gradio interface and developing your own QA bot. In the final project, you will build an AI application using RAG and LangChain. Supporting materials, such as a cheat sheet and glossary, will reinforce your understanding and build confidence in your implementation skills, and a graded quiz will assess your learning. You’ll leave this module with a deployable AI-powered assistant and a clear set of next steps for advancing your skills.
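The retrieve-then-generate loop at the heart of such a QA bot can be sketched in a few lines of pure Python. The retriever and the LLM below are deliberately naive stand-ins (the course wires up a real retriever and a watsonx model through LangChain, and wraps the function in a Gradio interface); the document chunks are invented examples:

```python
def retrieve(question: str, chunks: list[str]) -> str:
    """Naive retriever: return the chunk sharing the most words with the question."""
    q = set(question.lower().split())
    return max(chunks, key=lambda c: len(q & set(c.lower().split())))

def fake_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; just echoes the retrieved context."""
    return "Answer based on: " + prompt.split("Context: ")[1].split("\n")[0]

def qa_bot(question: str, chunks: list[str]) -> str:
    """RAG in miniature: retrieve relevant context, then prompt the model with it."""
    context = retrieve(question, chunks)
    prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
    return fake_llm(prompt)

chunks = ["The warranty lasts two years.", "Returns are accepted within 30 days."]
print(qa_bot("How long is the warranty?", chunks))
```

In the full project, `qa_bot` becomes the callback passed to a Gradio interface, so users can type questions against their own loaded documents in a browser.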
Taught by
Kang Wang and Wojciech 'Victor' Fulmyk