LangChain Core Abstractions - LLMs, Structured Output, RAG, Tool Calling & MLflow Tracing
Overview
Master the fundamental building blocks of LangChain in this 33-minute tutorial, which breaks complex AI agent development down into its essential engineering components. Learn to work with multiple LLM providers, including OpenAI, Gemini, and Qwen3, while tracking token usage and managing cost.

Build a simple chatbot using messages, ChatPromptTemplate, and chat history management, then advance to structured JSON output with Pydantic models. Explore document processing by loading an external PDF and implementing a simple RAG (Retrieval-Augmented Generation) pipeline with embeddings and a vector store so you can chat with the PDF. Discover tool calling, including tool documentation, tool execution, and ToolMessage handling, and learn to debug and trace your applications with MLflow for observability.

Follow along with the hands-on notebook setup and step-by-step implementation of each core abstraction that makes LLM applications maintainable and scalable, preparing you to build more complex AI agents on a solid foundation in LangChain's ecosystem.
Syllabus
00:00 - Welcome
01:22 - LangChain ecosystem with LangGraph
02:00 - Notebook setup
03:15 - Calling an LLM (OpenAI)
04:51 - Multiple LLM provider support (Gemini, Qwen3)
06:57 - Token usage
08:48 - Simple chatbot: messages, ChatPromptTemplate, chat history
13:09 - Structured output: JSON mode with Pydantic
17:20 - Loading an external PDF document
18:45 - Simple RAG: embeddings, vector store, and chat with a PDF
24:14 - Tool calling: tool documentation, tool execution, and ToolMessage
28:08 - Debugging and tracing with MLflow
32:22 - Conclusion
Taught by
Venelin Valkov