Open Source Tools to Empower Ethical and Robust AI Systems
CNCF [Cloud Native Computing Foundation] via YouTube
Overview
This conference talk explores open source tools for evaluating and securing AI models in order to build responsible AI systems. Vicente Herrera presents an ontology that maps where each tool can assist in creating ethical and robust AI. Discover tools such as Garak for identifying undesirable model behaviors, LLM Guard and LLM Canary for detecting and preventing adversarial attacks and data disclosures, and Promptfoo for systematically testing and refining prompts. Learn about solutions for adversarial robustness, including Counterfit, the Adversarial Robustness Toolbox (ART), and BrokenHill. Understand how AI Fairness 360 and Audit AI help ensure models are fair and accountable. The presentation emphasizes selecting AI models not only by size or knowledge-benchmark scores, but also by robustness and fairness. Connect with cloud native computing projects at upcoming KubeCon + CloudNativeCon events in Hong Kong, Tokyo, Hyderabad, and Atlanta.
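To illustrate the kind of prompt evaluation the talk attributes to Promptfoo, here is a minimal configuration sketch. The prompt text, provider, and assertion values below are illustrative assumptions, not taken from the presentation; only the general config shape (prompts, providers, tests with assertions) follows Promptfoo's documented format.

```yaml
# promptfooconfig.yaml — hypothetical minimal setup; values are illustrative
prompts:
  - "Summarize the following text in one sentence: {{text}}"

providers:
  - openai:gpt-4o-mini   # assumed provider; swap in any model Promptfoo supports

tests:
  - vars:
      text: "Open source tools help teams evaluate AI models for robustness and fairness."
    assert:
      - type: contains
        value: "robustness"   # flags outputs that drop the key term
```

Running `promptfoo eval` against a file like this compares model outputs across prompts and providers and reports which assertions pass, which is how prompt variants can be scored side by side.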
Syllabus
Open Source Tools To Empower Ethical and Robust AI... Vicente Herrera & Alberto Rodríguez Fernandez
Taught by
CNCF [Cloud Native Computing Foundation]