AI Engineer - Learn how to integrate AI into software applications
Stuck in Tutorial Hell? Learn Backend Dev the Right Way
Overview
Google, IBM & Meta Certificates — All 10,000+ Courses at 40% Off
One annual plan covers every course and certificate on Coursera. 40% off for a limited time.
Get Full Access
Learn to evaluate, test, and secure Large Language Model applications through comprehensive measurement techniques for prompts and RAG pipelines in this 39-minute conference talk. Explore evaluation frameworks including Vertex AI Evaluation, DeepEval, and Promptfoo to assess LLM performance and reliability. Discover security implementation strategies using LLM Guard to protect applications against prompt injections and harmful responses. Master the development of robust input-output guardrails essential for maintaining LLM application resilience and safety in production environments.
Syllabus
Beyond the Prompt: Evaluating, Testing, and Securing LLM Applications by Mete Atamel
Taught by
Devoxx