LLM Observability Panel - Monitoring and Evaluation in Production AI Systems
MLOps World: Machine Learning in Production via YouTube
Overview
Join a panel discussion featuring industry experts from Galileo, DraftKings, Target Corporation, and PIMCO as they explore the landscape of LLM observability in production environments. Learn from Atin Sanyal (Founder & CTO at Galileo), Naresh Kumar Batthula (Principal Developer Advocate at DraftKings), Bali Varadarajan (Lead AI Engineer, Digital Personalization at Target Corporation), and Anupama Garani (AI & ML Systems at PIMCO) as they share practical insights on monitoring, evaluating, and maintaining reliability in large language model systems. Discover key considerations for implementing effective observability frameworks in LLM deployments, hear lessons from enterprise AI teams across diverse industries, and understand how monitoring and evaluation approaches are evolving to meet the unique challenges of generative AI. The panel also covers real-world strategies for keeping production AI systems reliable, emerging best practices in model monitoring, and the infrastructure requirements for operating robust LLM systems at scale.
Syllabus
Observability Panel | Galileo, DraftKings, Target Corporation, PIMCO | MLOps World 2025
Taught by
MLOps World: Machine Learning in Production