Overview
Explore essential safeguards for large language models in this 34-minute conference talk covering security, privacy, compliance, and anti-hallucination measures. Learn from Daniel Whitenack, founder of Prediction Guard and co-host of the Practical AI podcast, as he draws on over ten years of experience developing and deploying machine learning models at scale to address critical challenges in LLM implementation. Discover practical approaches to securing LLM deployments, protecting user privacy, ensuring regulatory compliance, and mitigating hallucination risks in production environments. Gain insights into building robust safeguards that enable safe and reliable deployment of large language models in enterprise settings, with real-world examples and best practices from someone who has built data teams across startups and international organizations.
Syllabus
LLM Safeguards: Security, Privacy, Compliance, Anti-Hallucination (Daniel Whitenack)
Taught by
AI Engineer