
Linux Foundation

Open Source Tools To Empower Ethical and Robust AI Systems

Linux Foundation via YouTube

Overview

Explore open-source tools for evaluating and securing AI models in this 35-minute conference talk from the Linux Foundation's Open Source Summit. Learn about a range of tools, organized through a simple ontology that classifies their use cases: bias and fairness assessment tools such as AIF360; evaluation platforms such as Garak for LLM security assessment, Promptfoo for prompt engineering, and Giskard for custom evaluation datasets; guardrail solutions such as NeMo Guardrails for LLM systems; prompt security tools including LLMGuard and LangKit; and traditional ML model security resources such as the Adversarial Robustness Toolbox. Gain practical insights and examples to help you decide which tools best suit your needs for building responsible and robust AI systems, with guidance from industry experts Alberto Rodríguez (ControlPlane) and Miguel Fontanilla (sennder).
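To give a flavor of what the bias and fairness tools in this category measure, here is a minimal, self-contained sketch of the disparate-impact ratio, one of the standard group-fairness metrics that toolkits such as AIF360 report. The function name and the toy data are illustrative assumptions, not the talk's or AIF360's actual API.

```python
# Sketch of a group-fairness metric (disparate impact), of the kind
# reported by toolkits like AIF360. All names and data are illustrative.

def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    outcomes   -- iterable of 0/1 model decisions (1 = favorable)
    groups     -- iterable of group labels, aligned with outcomes
    privileged -- label of the privileged group
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate_priv = sum(priv) / len(priv)
    rate_unpriv = sum(unpriv) / len(unpriv)
    return rate_unpriv / rate_priv

# Toy example: 8 decisions across groups "A" (privileged) and "B".
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact(outcomes, groups, privileged="A"))  # 0.25 / 0.75 ≈ 0.333
```

A ratio near 1.0 indicates similar favorable-outcome rates across groups; a common rule of thumb flags values below 0.8 for review.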

Syllabus

Open Source Tools To Empower Ethical and Robust AI Systems - Alberto Rodríguez & Miguel Fontanilla

Taught by

Linux Foundation

