

Generative AI Engineering: LLMs, RAG, and Agentic Systems

via Udemy

Overview

Learn to design and implement GenAI workflows and multi-agent systems using LangChain, LangGraph, and MCP, and to fine-tune models

What you'll learn:
  • Master Generative AI foundations, how LLMs work, and how modern AI systems are designed and applied in real-world products.
  • Design and build end-to-end Generative AI systems using LLMs, retrieval pipelines, tools, and agentic workflows.
  • Implement Retrieval-Augmented Generation (RAG), embeddings, vector search, reranking, and advanced retrieval patterns.
  • Build AI agents, multi-step reasoning systems, and multi-agent workflows using LangChain and LangGraph.
  • Develop production-style applications with structured outputs, validation, memory, and human-in-the-loop workflows.
  • Create MCP servers and clients to connect LLMs to real tools, services, and enterprise systems.
  • Fine-tune and optimize models using Hugging Face workflows, dataset preparation, and quantization techniques.
  • Apply system-level best practices for cost, reliability, scalability, and responsible deployment of GenAI applications.
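To make the retrieval ideas in the list above concrete, here is a minimal, framework-free sketch of the core RAG retrieval step (illustrative code, not material from the course): documents and the query are represented as embedding vectors, and retrieval ranks documents by cosine similarity. The tiny 3-dimensional vectors and document names are invented for the example; a real pipeline would use model-generated embeddings with hundreds of dimensions and a vector database.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, doc_vecs, top_k=2):
    """Rank documents by similarity to the query embedding."""
    scored = [(cosine_similarity(query_vec, v), doc_id)
              for doc_id, v in doc_vecs.items()]
    scored.sort(reverse=True)
    return [doc_id for _, doc_id in scored[:top_k]]

# Toy 3-dimensional "embeddings" standing in for real model output.
docs = {
    "refund-policy": [0.9, 0.1, 0.0],
    "shipping-faq":  [0.2, 0.8, 0.1],
    "api-reference": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # e.g. the embedded question "How do refunds work?"
print(retrieve(query, docs))  # most similar documents first
```

The retrieved documents would then be stuffed into the LLM prompt as context; reranking, as mentioned above, is an extra scoring pass over this candidate list.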

Build Real-World Generative AI Systems with LLMs, RAG, and AI Agents

Go beyond prompts and chatbots. This course takes you on a complete, progressive journey from Generative AI fundamentals to advanced system-level techniques.

You’ll start by mastering the core concepts of LLMs, NLP, and AI model behavior, then move into applying RAG pipelines, vector search, and prompting patterns. Finally, you’ll tackle advanced topics such as agentic systems, multi-agent orchestration, LangGraph workflows, MCP, and model fine-tuning.

Learn to design and implement intelligent AI workflows and system components using multiple LLMs, LangChain, LangGraph, embeddings, and agentic reasoning—without the pressure of building full production applications.

Skip the beginner fluff—this is for engineers, architects, and technical founders who want to understand how modern GenAI systems are actually structured and engineered.


What You Will Learn

  1. Understand Generative AI foundations and how LLMs work, including OpenAI, Claude, Gemini, and Hugging Face models.

  2. Apply RAG pipelines, vector search, embeddings, and structured outputs to create robust AI workflows.

  3. Learn prompting techniques, in-context learning, and fine-tuning strategies for advanced LLM behavior.

  4. Build and test agentic and multi-agent systems using LangChain and LangGraph.

  5. Explore MCP servers and clients to integrate LLM reasoning with external tools and services.

  6. Understand system-level best practices for efficiency, scalability, cost, and responsible AI deployment.


Hands-On Learning

This is a learning-by-doing course focused on frameworks, patterns, and exercises rather than fully functional apps. You will:

  • Work with multiple LLMs and open-source models to understand their behavior.

  • Implement retrieval pipelines, multi-agent patterns, and workflows in hands-on exercises.

  • Explore LangChain, LangGraph, embeddings, vector databases, and MCP integration in manageable components.

  • Gain practical, reusable code snippets and exercises without the stress of shipping a full product.
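The agentic patterns mentioned above reduce to a simple loop that frameworks like LangChain and LangGraph implement for you: at each step the model either requests a tool call or returns a final answer. Below is a framework-free, illustrative sketch of that loop; `fake_llm` is a scripted stand-in for a real chat-model call, and the calculator tool is a toy.

```python
def calculator(expression: str) -> str:
    # Toy tool; a real agent would sandbox or whitelist expressions.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_llm(history):
    """Pretend model: asks for the calculator once, then answers."""
    if not any(msg["role"] == "tool" for msg in history):
        return {"tool": "calculator", "input": "6 * 7"}
    result = [m for m in history if m["role"] == "tool"][-1]["content"]
    return {"answer": f"The result is {result}."}

def run_agent(question, llm=fake_llm, max_steps=5):
    history = [{"role": "user", "content": question}]
    for _ in range(max_steps):
        decision = llm(history)
        if "answer" in decision:            # model is done
            return decision["answer"]
        tool_out = TOOLS[decision["tool"]](decision["input"])
        history.append({"role": "tool", "content": tool_out})
    raise RuntimeError("agent did not finish within max_steps")

print(run_agent("What is 6 * 7?"))
```

Multi-agent systems layer routing and shared state on top of this same loop, which is essentially what LangGraph's graph abstraction formalizes.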


Who This Course Is For

  • Software engineers & application developers learning system-level GenAI design

  • Solution and platform architects designing LLM-powered workflows and pipelines

  • Cloud, platform, and backend engineers transitioning into Generative AI engineering roles

  • Startup builders and technical founders exploring AI-native system architecture

  • Professionals preparing for Generative AI Engineer / Applied AI / Architect roles

Not for beginners expecting “easy prompts and chatbots,” or for data scientists seeking a math-heavy course.


Course Features

  • 29+ Hours of Video Content

  • Hands-On Projects and Coding Exercises

  • Real-World Examples

  • Quizzes for Learning Reinforcement

  • GitHub Repository with Solutions

  • Web-Based Course Guide


By the end of this course, you'll be well-equipped to leverage Generative AI for a wide range of applications, from natural language processing to content generation and beyond.


Recent Course Updates

  • Jan 2026 – New section on multi-agent system patterns

  • Sep 2025 – 2 new sections on building agents with LangGraph

  • Aug 2025 – Added lessons on chat models (Subscriber ask)

  • Jul 2025 – Updated MCP content after protocol changes

  • Jun 2025 – Expanded MCP lessons (Subscriber ask)

  • May 2025 – Model Context Protocol (MCP) section added

  • May 2025 – Python UV environment support

  • Mar 2025 – Multiple curriculum expansions

  • Feb 2025 – LLM fine-tuning lessons added (Subscriber ask)



Syllabus

  • Introduction
  • Setup development environment
  • Generative AI: Fundamentals
  • Generative AI applications
  • Hugging Face Models: Fundamentals
  • (Optional) Hugging Face Models: Advanced
  • LLM challenges & prompt engineering
  • LangChain: Prompts, Chains & LCEL
  • Dealing with structured responses from LLMs
  • Datasets for model training and testing
  • Vectors, embeddings & semantic search
  • Vector databases
  • Conversational User Interface
  • Advanced Retrieval Augmented Generation
  • Agentic RAG
  • Model Context Protocol (MCP)
  • Fine-tuning
  • Dataset preparation for fine-tuning
  • Pre-training & fine-tuning with Hugging Face Trainer
  • Quantization
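As a taste of the idea behind the quantization section (this sketch is not course material), symmetric int8 quantization maps floating-point weights into the integer range [-127, 127] and stores only the integers plus a single scale factor, trading a little precision for a large reduction in memory.

```python
def quantize_int8(values):
    """Symmetric int8 quantization: scale floats into [-127, 127].
    Assumes at least one nonzero value."""
    scale = max(abs(v) for v in values) / 127
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from integers and the scale."""
    return [x * scale for x in q]

weights = [0.42, -1.27, 0.08, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Real libraries quantize per-tensor or per-channel and handle outliers, but the core arithmetic is this simple.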

Taught by

Rajeev Sakhuja

Reviews

4.6 rating at Udemy based on 914 ratings

