Overview
Learn to build a custom Compacting Chat Memory Advisor in Spring AI that automatically summarizes conversation history as the context window approaches its limit. Discover how to create intelligent advisors that go beyond Spring AI's built-in options, implementing configurable thresholds that trigger automatic compaction instead of simply clearing all messages when a token limit is hit.

Master the fundamentals of Spring AI advisors as AOP-like functions wrapped around LLM calls, understand the critical difference between stateless LLMs and stateful chat applications, and explore MessageChatMemoryAdvisor configuration and its inherent limitations. Implement conversation summarization techniques that optimize token usage while preserving important context, similar to Claude Code's /compact command.

Set up comprehensive debug logging to monitor advisor behavior and gain insight into context window management for production-ready AI applications. Build configurable systems that automatically trigger compaction at specified thresholds, ensuring efficient memory management without losing conversational continuity. Explore practical solutions to real-world Spring AI development challenges, transforming conference discussions into production-ready features that enhance user experience and optimize resource utilization.
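To make the idea concrete, here is a minimal, self-contained sketch of the threshold-based compaction the course describes: when an estimated token count crosses a configurable limit, older messages are folded into a single summary entry rather than discarded. The class name `CompactingMemory`, the 4-characters-per-token estimate, and the stubbed summary string are illustrative assumptions, not Spring AI APIs; in the course, a real advisor would wrap the ChatClient call chain and ask the LLM to produce the summary.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of threshold-triggered compaction (not Spring AI API).
public class CompactingMemory {
    private final List<String> messages = new ArrayList<>();
    private final int maxTokens;   // configurable context budget (assumed)
    private final int keepRecent;  // recent messages kept verbatim (assumed)

    public CompactingMemory(int maxTokens, int keepRecent) {
        this.maxTokens = maxTokens;
        this.keepRecent = keepRecent;
    }

    // Rough token estimate: ~1 token per 4 characters.
    private int estimateTokens() {
        int chars = messages.stream().mapToInt(String::length).sum();
        return chars / 4;
    }

    public void add(String message) {
        messages.add(message);
        if (estimateTokens() > maxTokens) {
            compact();
        }
    }

    // A real advisor would call the LLM to summarize the older messages;
    // here the summary is stubbed as a placeholder string.
    private void compact() {
        int cut = Math.max(0, messages.size() - keepRecent);
        if (cut == 0) return;
        List<String> old = messages.subList(0, cut);
        String summary = "[summary of " + cut + " messages]";
        old.clear();              // drop the compacted prefix
        messages.add(0, summary); // replace it with one summary entry
    }

    public List<String> messages() { return messages; }

    public static void main(String[] args) {
        // Tiny budget so compaction triggers quickly in the demo.
        CompactingMemory mem = new CompactingMemory(20, 2);
        for (int i = 1; i <= 6; i++) {
            mem.add("user message number " + i + " with some extra words");
        }
        System.out.println(mem.messages());
    }
}
```

The key design point, mirrored from the course description: compaction preserves conversational continuity (a summary stays in context) instead of clearing memory outright when the limit is reached.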
Syllabus
Build a Smart Chat Memory Advisor in Spring AI That Auto-Compacts Context
Taught by
Dan Vega