Overview
Learn how to mitigate risks in LLM-powered applications through effective guardrail strategies in this 32-minute conference talk presented by Don Shin of CrossComm at All Things Open AI 2025. Discover pre-processing techniques that protect against end-user prompt manipulation, approaches for evaluating LLM outputs using LLM-as-a-judge, and an overview of open source guardrail frameworks. The presentation addresses how enterprises can overcome adoption barriers caused by hallucinations, where LLMs respond inaccurately but with complete confidence, and includes a live demonstration of implementing these risk mitigation strategies in real-world production settings.
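To illustrate the two guardrail stages the talk covers (input pre-processing and LLM-as-a-judge output evaluation), here is a minimal Python sketch. The injection patterns, the 0.7 confidence threshold, and the `generate`/`judge` callables are all illustrative assumptions, not details from the talk; in practice the judge would be a second LLM call scoring the answer.

```python
import re

# Hypothetical guardrail sketch (not from the talk): screen user input for
# common prompt-injection phrases, then score the model's output with a
# pluggable "judge" callable before returning it.

INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
    r"you are now",
]

def pre_check(user_input: str) -> bool:
    """Return True if the input passes the prompt-injection screen."""
    lowered = user_input.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def judged_response(prompt, generate, judge, threshold=0.7):
    """Pre-check the input, generate an answer, and gate it on a judge score.

    `generate` maps a prompt to an answer string; `judge` maps
    (prompt, answer) to a 0-1 grounding score. Both are assumptions
    standing in for real LLM calls.
    """
    if not pre_check(prompt):
        return "Request blocked: possible prompt manipulation."
    answer = generate(prompt)
    if judge(prompt, answer) < threshold:
        return "Response withheld: low confidence (possible hallucination)."
    return answer
```

With stub callables, a benign prompt passes through while a low judge score or an injection attempt is intercepted, which is the basic shape of the pipeline the talk demonstrates.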
Syllabus
Hallucinations, Prompt Manipulations, and Mitigating Risk: Putting Guardrails around your LLM Powered
Taught by
All Things Open