

Hallucinations, Prompt Manipulations, and Mitigating Risk: Putting Guardrails around your LLM-Powered Applications

All Things Open via YouTube

Overview

Learn how to mitigate risks in LLM-powered applications through effective guardrail strategies in this 32-minute conference talk presented by Don Shin of CrossComm at All Things Open AI 2025. Discover pre-processing techniques that protect against end-user prompt manipulation, approaches for evaluating LLM outputs with an LLM-as-a-judge, and an overview of open-source guardrail frameworks. The presentation addresses how enterprises can overcome adoption barriers caused by hallucinations, where LLMs respond inaccurately but with complete confidence, and includes a live demonstration of implementing these risk mitigation strategies in real-world production settings.
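To make the two techniques named in the overview concrete, here is a minimal Python sketch, not material from the talk itself: a pre-processing filter against common prompt-manipulation phrasings, and an LLM-as-a-judge check on outputs. The `call_llm` function, the `INJECTION_PATTERNS` list, and the judge prompt are all hypothetical stand-ins for whatever model client and policy your application actually uses.

```python
import re

# Hypothetical list of phrasings often seen in prompt-manipulation attempts.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"reveal (the )?system prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Pre-processing guardrail: flag likely prompt-manipulation attempts."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)

def judge_output(question: str, answer: str, call_llm) -> bool:
    """LLM-as-a-judge: ask a second model whether the answer looks grounded."""
    verdict = call_llm(
        "You are a strict evaluator. Reply only YES or NO.\n"
        f"Question: {question}\nAnswer: {answer}\n"
        "Is the answer relevant and free of fabricated claims?"
    )
    return verdict.strip().upper().startswith("YES")

def guarded_answer(question: str, call_llm) -> str:
    """Wrap a model call with a pre-filter and a post-hoc judge check."""
    if looks_like_injection(question):
        return "Sorry, I can't help with that request."
    answer = call_llm(question)
    if not judge_output(question, answer, call_llm):
        return "I'm not confident in that answer; please rephrase the question."
    return answer
```

In production, pattern matching alone is a weak defense, so the open-source guardrail frameworks the talk surveys typically combine it with classifier-based input screening and structured output validation.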

Syllabus

Hallucinations, Prompt Manipulations, and Mitigating Risk: Putting Guardrails around your LLM-Powered Applications

Taught by

All Things Open

