
Participatory and Periodic Red-Teaming of LLMs

Association for Computing Machinery (ACM) via YouTube

Overview

Learn about participatory and periodic red-teaming methodologies for large language models in this 46-minute conference talk presented by researchers from IBM Research, Carnegie Mellon University, All Tech is Human, and Bloomberg. Explore approaches to systematically testing and evaluating LLM vulnerabilities through collaborative red-teaming exercises that involve diverse stakeholders and recur at regular intervals. Discover how participatory methods can improve the identification of risks, biases, and failure modes in language models by incorporating perspectives from affected communities and domain experts. Examine why periodic assessment cycles matter for maintaining robust AI safety practices as models evolve and are deployed in new contexts. Gain insights into practical frameworks for implementing these red-teaming strategies, their role in responsible AI development, and how organizations can establish sustainable processes for ongoing model evaluation and risk mitigation.

Syllabus

Participatory & Periodic Red-Teaming of LLMs

Taught by

ACM FAccT Conference

