Overview
Explore the critical role of AI observability in ensuring the safety and security of large language models in this fireside chat with experts. Topics include LLMOps, validation and testing, prompt engineering, response instability detection, and user interaction analysis. Gain practical approaches for leveraging LLMs and generative AI through specialized analytics tools and observability platforms, and learn when observability tooling is needed and how it can improve the development and deployment of LLM-powered applications. Ideal for technical architects, engineers, and organizational leaders seeking to harness large language models while maintaining safety and security.
Syllabus
– Introduction
– LLMOps and observability
– When is an observability tool needed?
– Validation and testing
– Prompts and observability tools
– Does an observability tool help detect response instability?
– User interaction and behavior
– Q&A
Taught by
Data Science Dojo