Overview
Data scientists, AI researchers, robotics engineers, and other professionals who work with Retrieval-Augmented Generation (RAG) can expect entry-level salaries ranging from USD 93,386 to USD 110,720 annually, with highly experienced AI engineers earning up to USD 172,468 annually (Source: ZipRecruiter).
In this beginner-friendly short course, you’ll begin by exploring RAG fundamentals—learning how RAG enhances information retrieval and user interactions—before building your first RAG pipeline.
Next, you’ll discover how to create user-friendly generative AI applications using Python and Gradio, gaining experience moving from project planning to a working QA bot that answers questions using information contained in source documents.
Finally, you’ll learn about LlamaIndex, a popular framework for building RAG applications. You’ll compare LlamaIndex with LangChain and develop a RAG application using LlamaIndex.
Throughout this course, you’ll engage in interactive hands-on labs and leverage multiple LLMs, gaining the skills needed to design, implement, and deploy AI-driven solutions that deliver meaningful, context-aware user experiences.
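The pipeline described above can be sketched in plain Python. This is a toy illustration only, not the course's code: the course uses LangChain and LlamaIndex with real embedding models and LLMs, while here a bag-of-words "embedding" and cosine similarity stand in so the chunk → embed → retrieve → prompt flow is runnable end to end.

```python
# Dependency-free sketch of the RAG pipeline shape: split a document into
# chunks, embed them, retrieve the chunks most similar to a query, and
# assemble an augmented prompt for an LLM.
import math
from collections import Counter

def split_document(text: str, chunk_size: int = 40) -> list[str]:
    """Naive fixed-size splitter (real frameworks add overlap and separators)."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def embed(text: str) -> Counter:
    """Toy embedding: word counts. Real pipelines use a neural embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Augment the query with retrieved context before sending it to an LLM."""
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}\nAnswer:"

doc = ("RAG combines retrieval with generation. "
       "A retriever finds relevant chunks. "
       "The LLM answers using those chunks as context.")
chunks = split_document(doc, chunk_size=8)
query = "What does a retriever do?"
prompt = build_prompt(query, retrieve(query, chunks))
print(prompt)
```

In the course labs, the splitter, embedder, and retriever are swapped for framework components, but the overall shape of the pipeline stays the same.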
Enroll now to gain valuable RAG skills!
Syllabus
- Introduction to RAG
- This module provides an overview of Retrieval-Augmented Generation (RAG), illustrating how it can enhance information retrieval and summarization for AI applications. The module features a lab designed to introduce the fundamental components of building RAG applications, presented in an easy-to-use Jupyter Notebook format. Through this hands-on project, you’ll learn to split and embed documents and implement retrieval chains using LangChain.
- Build Apps with RAG
- In this module, you'll learn to build a Retrieval-Augmented Generation (RAG) application using LangChain, gaining hands-on experience in transforming an idea into a fully functional AI solution. You'll also explore Gradio as a user-friendly interface layer for your models, setting up a simple Gradio interface to facilitate real-time interactions. Finally, you'll construct a QA Bot leveraging LangChain and an LLM to answer questions from loaded documents, reinforcing your understanding of end-to-end RAG workflows.
- Build RAG Apps with LlamaIndex
- This module introduces you to LlamaIndex as an alternative to LangChain, helping you understand how to apply your RAG knowledge across different frameworks. You will explore the differences between these frameworks and gain hands-on experience by building a bot with IBM Granite and LlamaIndex that offers users suggestions for engaging in conversation. In completing this project, you will implement key concepts such as vector databases, embedding models, document chunking, retrievers, and prompt templates to generate high-quality responses.
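Two of the concepts the final module names, document chunking and prompt templates, can be sketched without any framework. This is an illustrative, hypothetical sketch, not LlamaIndex's API: LlamaIndex provides these as node parsers and `PromptTemplate` objects, and the names below are invented for the example.

```python
# Overlapping chunking: adjacent chunks share `overlap` words, so a fact that
# straddles a chunk boundary still appears whole in at least one chunk.
def chunk_with_overlap(text: str, size: int = 6, overlap: int = 2) -> list[str]:
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

# A prompt template: a fixed instruction frame with slots for the retrieved
# context and the user's question.
TEMPLATE = (
    "You are a helpful conversation coach.\n"
    "Context:\n{context}\n"
    "Question: {question}\n"
    "Answer:"
)

def fill_template(context: str, question: str) -> str:
    return TEMPLATE.format(context=context, question=question)

chunks = chunk_with_overlap(
    "Ask open questions. Listen actively. Reflect back what you hear.")
print(chunks)
print(fill_template(chunks[0], "How do I start a conversation?"))
```

The lab's retriever would select which chunks fill the `{context}` slot before the filled prompt is sent to the Granite model.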
Taught by
Wojciech 'Victor' Fulmyk and IBM Skills Network Team