
Building Trustworthy AI with NVIDIA NeMo Guardrails Live Demo

Data Science Dojo via YouTube

Overview

Learn to implement programmable AI guardrails through a hands-on demonstration of NVIDIA NeMo Guardrails, an open-source toolkit designed to enhance the safety, security, and reliability of LLM-powered applications. Explore its key features, including topical safety, content safety, and jailbreak prevention.

Master the folder structure and configuration files such as config.yml, prompts, and rails.co, and discover how to define custom prompts and flows for your specific requirements. Follow along with live coding demonstrations that show how to implement both input and output guardrails effectively, and learn techniques for customizing refusal messages and prompts to match your use case.

Test real-world examples, including blocking harmful or off-topic requests and handling sophisticated jailbreak attempts that could compromise your AI system. Gain insights into best practices for configuring and deploying guardrails, and access additional learning resources to deepen your understanding of AI safety principles and advanced guardrail design, making this tutorial ideal for developers, AI enthusiasts, and anyone committed to building responsible, aligned AI applications.
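As a rough sketch of the folder structure the demo covers, a minimal NeMo Guardrails configuration pairs a config.yml with a Colang rails file. The specific model name, topic, and refusal wording below are illustrative assumptions, not taken from the video:

```
config/
├── config.yml    # model and rails configuration
├── prompts.yml   # custom prompt overrides
└── rails.co      # Colang flow definitions

# config.yml (minimal sketch; engine/model are placeholder choices)
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo

# rails.co — a Colang flow that refuses a hypothetical off-topic request
define user ask off topic
  "Can you write my homework essay for me?"

define bot refuse off topic
  "Sorry, I can only help with questions about this product."

define flow off topic
  user ask off topic
  bot refuse off topic
```

When the toolkit matches a user message to the `ask off topic` intent, the flow responds with the canned refusal instead of forwarding the request to the LLM, which is the input-guardrail pattern demonstrated in the session.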

Syllabus

Building Trustworthy AI with NVIDIA NeMo Guardrails Live Demo #ai #guardrails

Taught by

Data Science Dojo

