Overview
Coursera Flash Sale
40% Off Coursera Plus for 3 Months!
Grab it
Learn to evaluate, test, and secure Large Language Model applications through comprehensive measurement techniques for prompts and RAG pipelines in this 39-minute conference talk. Explore evaluation frameworks including Vertex AI Evaluation, DeepEval, and Promptfoo to assess LLM performance and reliability. Discover security implementation strategies using LLM Guard to protect applications against prompt injections and harmful responses. Master the development of robust input-output guardrails essential for maintaining LLM application resilience and safety in production environments.
Syllabus
Beyond the Prompt: Evaluating, Testing, and Securing LLM Applications by Mete Atamel
Taught by
Devoxx