

Trust, but Verify - Enhancing LLM Safety and Reliability with Guardrails AI

AI Engineer via YouTube

Overview

Explore the challenges and solutions for deploying Large Language Models in production environments in this 20-minute conference talk from the AI Engineer Summit 2023. Learn about the inherent brittleness of models like ChatGPT and discover how to address consistency and accuracy issues when using LLMs as software abstraction layers. Dive into Guardrails AI, an open-source platform designed to mitigate risks and improve the safety and reliability of large language models in real-world applications. Examine specific techniques and control mechanisms that let developers move beyond basic implementation to production-grade reliability. Gain insights from a decade of machine learning experience, including work as a founding engineer at Predibase, time in Apple's Special Projects Group, and autonomous driving perception development at Drive.ai.
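The core idea the talk covers is validating LLM output against a schema and re-asking the model with the validation error when it fails. Below is a minimal, self-contained sketch of that validate-and-reask pattern in plain Python; the function names (`validate_json_keys`, `ask_with_guardrails`) are illustrative placeholders, not the Guardrails AI library's actual API.

```python
import json
from typing import Callable

def validate_json_keys(output: str, required_keys: set[str]) -> tuple[bool, str]:
    """Check that the raw LLM output is valid JSON containing the required keys."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False, "Output was not valid JSON."
    missing = required_keys - data.keys()
    if missing:
        return False, f"Missing keys: {sorted(missing)}"
    return True, ""

def ask_with_guardrails(llm: Callable[[str], str], prompt: str,
                        required_keys: set[str], max_retries: int = 2) -> dict:
    """Call the LLM, validate its output, and re-ask with the error on failure."""
    current_prompt = prompt
    for _ in range(max_retries + 1):
        output = llm(current_prompt)
        ok, error = validate_json_keys(output, required_keys)
        if ok:
            return json.loads(output)
        # Re-ask: feed the validation error back so the model can self-correct.
        current_prompt = (f"{prompt}\nYour previous answer failed validation: "
                          f"{error}\nReturn only valid JSON.")
    raise ValueError("LLM output failed validation after retries.")
```

In practice the `llm` argument would wrap a real model call; frameworks like Guardrails AI layer richer validators (regex, type checks, semantic checks) on top of this same loop.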

Syllabus

Trust, but Verify: Shreya Rajpal

Taught by

AI Engineer

