Gain a Splash of New Skills - Coursera+ Annual Just ₹7,999
Get 35% Off CFI Certifications - Code CFI35
Overview
Coursera Flash Sale
40% Off Coursera Plus for 3 Months!
Grab it
Learn how to mitigate risks in LLM-powered applications through effective guardrail strategies in this 32-minute conference talk presented by Don Shin of CrossComm at All Things Open AI 2025. Discover pre-processing techniques to protect against end-user prompt manipulation, approaches for evaluating LLM outputs using LLM-as-a-judge, and an overview of open source guardrail frameworks. The presentation addresses how enterprises can overcome adoption barriers caused by hallucinations - where LLMs respond inaccurately but with complete confidence - and includes a live demonstration of implementing these risk mitigation strategies in real-world production settings.
Syllabus
Hallucinations, Prompt Manipulations, and Mitigating Risk Putting Guardrails around your LLM Powered
Taught by
All Things Open