
Responsible Generative AI Evaluation Best Practices and Tools

AWS Events via YouTube

Overview

Explore a comprehensive conference talk from AWS re:Invent 2024 focused on implementing responsible evaluation practices for generative AI applications. Discover essential methodologies for measuring performance and mitigating risks in applications built on large language models (LLMs), including features such as Retrieval Augmented Generation (RAG), agents, and guardrails. Gain insights into open-access libraries and AWS services that support evaluation, and learn the critical steps of creating an effective evaluation plan: defining use cases, conducting risk assessments, selecting appropriate metrics and release criteria, developing evaluation datasets, and interpreting results to implement actionable risk-mitigation strategies. Delivered by AWS experts, this 54-minute session provides practical knowledge for responsible AI development and deployment in cloud environments.
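The evaluation-plan steps outlined above can be sketched in code. The snippet below is a minimal, illustrative Python outline of that workflow; the class, metric names, and thresholds are hypothetical examples, not an AWS API or a tool from the talk.

```python
from dataclasses import dataclass, field

# Illustrative sketch of an evaluation plan: define the use case,
# note the risks, pick metrics with release criteria, attach an
# evaluation dataset, then check measured scores against the criteria.
# All names and thresholds are assumptions for demonstration only.

@dataclass
class EvalPlan:
    use_case: str
    risks: list[str]
    metrics: dict[str, float]  # metric name -> minimum score to release
    dataset: list[dict] = field(default_factory=list)  # prompt/reference pairs

    def evaluate(self, scores: dict[str, float]) -> dict[str, bool]:
        """Compare measured scores against the release criteria."""
        return {name: scores.get(name, 0.0) >= threshold
                for name, threshold in self.metrics.items()}

plan = EvalPlan(
    use_case="customer-support RAG assistant",
    risks=["hallucination", "toxic output"],
    metrics={"faithfulness": 0.90, "toxicity_free_rate": 0.99},
)

# Suppose an offline evaluation run produced these aggregate scores:
results = plan.evaluate({"faithfulness": 0.93, "toxicity_free_rate": 0.97})
print(results)  # faithfulness meets its criterion; toxicity_free_rate does not
```

In a real project, the per-metric scores would come from an evaluation library or service run over the dataset; the point here is that release criteria are fixed up front and results are interpreted against them, not chosen after the fact.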

Syllabus

AWS re:Invent 2024 - Responsible generative AI: Evaluation best practices and tools (AIM342)

Taught by

AWS Events

