
RiskRubric.ai - Standardizing LLM Risk Assessment

Cloud Security Alliance via YouTube

Overview

Discover how to standardize and streamline the assessment of large language model trustworthiness in this 19-minute conference talk from the Cloud Security Alliance's Agentic AI Security Summit 2025. Learn about RiskRubric.ai, a collaborative evaluation tool developed by CSA, Haize Labs, Harmonic Security, and Noma Security that addresses the time-consuming and inconsistent nature of current LLM risk assessment practices. Explore how the platform evaluates over 150 LLMs across six critical risk pillars, producing standardized "report card" formats that clearly identify strengths, weaknesses, and risk factors in today's AI models. Understand the community-driven origins of the initiative and discover opportunities to contribute to this standardization effort. Gain insights from Caleb Sima, Chair of the AI Safety Initiative at Cloud Security Alliance, and Michael Machado, RiskRubric Product Lead, as they demonstrate practical applications of the assessment framework and explain how organizations can leverage these standardized evaluations to make more informed decisions about LLM deployment and trust.

Syllabus

RiskRubric.ai: Standardizing LLM Risk Assessment | CSA AI Summit 2025

Taught by

Cloud Security Alliance

