From AI Agents to LLMs as Judges - Reshaping Observability in the Era of Generative AI
Platform Engineering via YouTube
Overview
Explore how generative AI is transforming observability practices in this 14-minute conference talk that examines the integration of AI agents into traditional observability workflows and demonstrates how observability can evaluate and optimize generative AI system performance. Begin by understanding the architecture of transformers and generative models, learning how these foundational components combine to create effective AI agent workflows. Discover key use cases where generative AI enables fully automated operations, significantly improving response times while reducing manual effort in DevOps environments. Examine the emerging concept of using large language models as "judges" to make contextual decisions within observability frameworks, providing intelligent automation for system monitoring and evaluation. Gain practical insights into applying generative AI to DevOps platforms and learn how machine learning principles can enhance the observability and reliability of modern systems. Develop a comprehensive understanding of how AI-driven approaches are reshaping traditional monitoring and observability practices, enabling more intelligent and automated system management in the era of generative AI.
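The "LLM as judge" idea mentioned above can be sketched in a few lines. This is an illustrative assumption, not code from the talk: the prompt wording, the decision labels, and the `call_llm` stub (which stands in for a real LLM API call) are all hypothetical. The key pattern is that the model's free-text answer is validated against a fixed decision schema before it drives any automation.

```python
# Minimal sketch of the "LLM as judge" pattern for observability alerts.
# All names are illustrative; a real system would replace `call_llm`
# with an actual LLM API call.

JUDGE_PROMPT = """You are an observability judge. Given an alert and its
context, answer with exactly one word: ESCALATE, AUTO_REMEDIATE, or IGNORE.

Alert: {alert}
Context: {context}
Decision:"""

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM call; returns a canned decision."""
    if "CPU" in prompt:
        return "AUTO_REMEDIATE"
    return "ESCALATE"

def judge_alert(alert: str, context: str) -> str:
    """Ask the LLM judge for a contextual decision and validate the answer."""
    raw = call_llm(JUDGE_PROMPT.format(alert=alert, context=context)).strip()
    allowed = {"ESCALATE", "AUTO_REMEDIATE", "IGNORE"}
    # Fall back to human escalation if the judge answers outside the schema,
    # so malformed model output can never trigger automation on its own.
    return raw if raw in allowed else "ESCALATE"

print(judge_alert("High CPU on web-01", "Known batch job window"))
```

The validation step is the design point: the LLM contributes contextual judgment, but only schema-conformant decisions are allowed to act, which keeps the automated pipeline predictable.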
Syllabus
From AI agents to LLMs as judges: Reshaping observability in the era of generative AI - Diana Todea
Taught by
Platform Engineering