Overview
Watch a 28-minute conference talk exploring the critical need for utility verification in Large Language Model (LLM) applications and the introduction of AgentEval, a framework designed to assess application effectiveness. Learn how Dr. Julia Kiseleva, a researcher at MultiOn, addresses the challenge of evaluating LLM-powered applications that facilitate multi-agent collaboration and human task assistance. Discover how AgentEval automatically generates tailored evaluation criteria for applications, enabling comprehensive utility assessment and ensuring alignment between functionality and user needs. Gain insights into the development of safe and reliable AI agents, interactive grounded language understanding, and user-driven evaluation methodologies for interactive systems.
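The talk describes AgentEval as a two-step pipeline: first derive task-specific evaluation criteria, then score an application's output against each criterion. As a rough illustration only, the sketch below mimics that flow in plain Python with stubbed-out logic; the function names (`propose_criteria`, `quantify`) and the fixed criteria are hypothetical stand-ins, not AgentEval's actual API, and a real implementation would prompt an LLM at both steps.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    description: str
    accepted_values: list[str]

def propose_criteria(task_description: str) -> list[Criterion]:
    # Hypothetical stand-in for the "critic" step, which in AgentEval
    # would ask an LLM to generate criteria tailored to the task.
    return [
        Criterion("accuracy", "Does the output solve the task correctly?",
                  ["poor", "fair", "good"]),
        Criterion("clarity", "Is the output easy to understand?",
                  ["poor", "fair", "good"]),
    ]

def quantify(criteria: list[Criterion], solution: str) -> dict[str, str]:
    # Hypothetical stand-in for the "quantifier" step; a trivial
    # heuristic replaces the LLM judgment here.
    return {c.name: ("good" if solution.strip() else "poor")
            for c in criteria}

criteria = propose_criteria("Summarize a research paper in one paragraph.")
scores = quantify(criteria, "A one-paragraph summary of the paper...")
```

The point of the separation is that criteria are produced per application rather than fixed in advance, which is what lets the utility assessment track what users of that particular application actually need.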
Syllabus
Task Utility in LLM-Powered Applications
Taught by
MLOps.community