Open Source Tools to Empower Ethical and Robust AI Systems
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
This conference talk explores open source tools for evaluating and securing AI models in order to build responsible AI systems. Vicente Herrera and Alberto Rodríguez Fernandez present an ontology showing where each tool can assist in creating ethical and robust AI. Discover tools like Garak for identifying undesirable model behaviors, LLM Guard and LLM Canary for detecting and preventing adversarial attacks and data disclosures, and Promptfoo for optimizing prompt engineering. Learn about solutions for adversarial robustness, including Counterfit, the Adversarial Robustness Toolbox, and BrokenHill. Understand how AI Fairness 360 and Audit AI help ensure models are fair and accountable. The presentation emphasizes selecting AI models not just by size or knowledge-evaluation scores, but by robustness and fairness. Connect with cloud native computing projects at upcoming KubeCon + CloudNativeCon events in Hong Kong, Tokyo, Hyderabad, and Atlanta.
Syllabus
Open Source Tools To Empower Ethical and Robust AI Systems - Vicente Herrera & Alberto Rodríguez Fernandez
Taught by
CNCF [Cloud Native Computing Foundation]