Google AI Professional Certificate - Learn AI Skills That Get You Hired
AI Engineer - Learn how to integrate AI into software applications
Overview
Coursera Flash Sale
40% Off Coursera Plus for 3 Months!
Grab it
Learn about mitigating risks in LLM-powered applications in this 32-minute conference talk presented by Don Shin of CrossComm at All Things Open AI 2025. Discover effective strategies for implementing guardrails around large language models to address their key challenges, including hallucinations (when LLMs respond inaccurately with apparent confidence) and vulnerability to prompt manipulation. Explore pre-processing techniques to protect against user prompt manipulation, methods for evaluating LLM outputs using LLM-as-a-judge approaches, and an overview of open source guardrail frameworks. The presentation includes a live demonstration showing how to implement these risk mitigation strategies in real-world production environments, making this essential viewing for organizations looking to safely deploy LLM technology.
Syllabus
Hallucinations, Prompt Manipulations, and Mitigating Risk Putting Guardrails around your LLM Powered
Taught by
All Things Open