Overview
Explore open-source tools for evaluating and securing AI models in this 35-minute conference talk from the Linux Foundation's Open Source Summit. Learn about a comprehensive range of tools, organized through a simple ontology that classifies their use cases: bias and fairness assessment tools such as AIF360; evaluation platforms such as Garak for LLM security assessments, Promptfoo for prompt engineering, and Giskard for custom evaluation datasets; guardrail solutions such as NeMo Guardrails for LLM systems; prompt security tools including LLMGuard and LangKit; and traditional ML model security resources such as the Adversarial Robustness Toolbox. Gain practical insights and examples for deciding which tools best suit your needs when building responsible and robust AI systems, with guidance from industry experts Alberto Rodríguez of ControlPlane and Miguel Fontanilla of sennder.
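To make the bias-and-fairness category concrete, here is a minimal, self-contained sketch of the kind of group-fairness metric toolkits like AIF360 report. It computes statistical parity difference on hypothetical toy data; this is plain Python for illustration, not the AIF360 API.

```python
# Statistical parity difference, a common group-fairness metric:
# P(favorable outcome | unprivileged group) - P(favorable outcome | privileged group).
# A value of 0 indicates parity; negative values indicate the unprivileged
# group receives favorable outcomes less often.

def statistical_parity_difference(outcomes, groups, privileged):
    """outcomes: parallel list of 0/1 favorable labels; groups: group label per record."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    rate = lambda xs: sum(xs) / len(xs)
    return rate(unpriv) - rate(priv)

# Hypothetical toy dataset: group "A" is treated as privileged.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_difference(outcomes, groups, "A"))  # -0.5: disparity against group B
```

Libraries such as AIF360 compute this and related metrics (disparate impact, equal opportunity difference) over full datasets with protected attributes, which is far more convenient in practice than hand-rolling them as above.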
Syllabus
Open Source Tools To Empower Ethical and Robust AI Systems - Alberto Rodríguez & Miguel Fontanilla
Taught by
Linux Foundation