
Open Source Tools to Empower Ethical and Robust AI Systems

CNCF [Cloud Native Computing Foundation] via YouTube

Overview

This conference talk explores open source tools for evaluating and securing AI models in order to build responsible AI systems. Vicente Herrera presents an ontology showing where each tool can assist in creating ethical and robust AI. Discover tools such as Garak for identifying undesirable model behaviors, LLM Guard and LLM Canary for detecting and preventing adversarial attacks and data disclosures, and Promptfoo for evaluating and optimizing prompt engineering. Learn about solutions for adversarial robustness, including Counterfit, the Adversarial Robustness Toolbox, and BrokenHill. Understand how AI Fairness 360 and Audit AI help ensure models are fair and accountable. The presentation emphasizes selecting AI models not only by size or knowledge-evaluation scores, but also by robustness and fairness. Connect with cloud native computing projects at upcoming KubeCon + CloudNativeCon events in Hong Kong, Tokyo, Hyderabad, and Atlanta.
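To make the Promptfoo use case concrete: it evaluates prompts against models using a declarative config file. The sketch below is a minimal, hedged illustration of that idea; the provider id, prompt wording, and assertion values are assumptions for illustration, not details from the talk.

```yaml
# promptfooconfig.yaml — minimal sketch of a Promptfoo evaluation config.
# Provider, prompt text, and test values here are illustrative assumptions.
prompts:
  - "Answer concisely: what is {{topic}}?"

providers:
  - openai:gpt-4o-mini   # any supported provider id could be used instead

tests:
  - vars:
      topic: "Kubernetes"
    assert:
      # Fail the evaluation if the answer drifts off-topic
      - type: contains
        value: "container"
```

Running `promptfoo eval` against a file like this scores each prompt/provider pair against the declared assertions, which is the prompt-engineering evaluation role the talk attributes to Promptfoo.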

Syllabus

Open Source Tools to Empower Ethical and Robust AI Systems - Vicente Herrera & Alberto Rodríguez Fernandez

Taught by

CNCF [Cloud Native Computing Foundation]

