Overview
Learn the fundamentals of AI safety in this comprehensive 35-minute video tutorial, which explores what AI safety means, why it's critical, and how to apply it when developing or using AI systems. Discover real-world examples of unsafe AI systems, and learn practical approaches to building AI systems safely, with and without code, while understanding how to manage AI risks such as bias, model theft, and data poisoning.

The tutorial examines the most important AI safety frameworks and resources, including the NIST AI Risk Management Framework, the Microsoft AI Security Framework, MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems), the Databricks AI Security Framework (DASF 2.0), ISO/IEC 42001:2023 for AI Management Systems, and the IEEE Ethically Aligned Design Initiative. Gain insights into keeping AI trustworthy and aligned, whether you're a developer, a researcher, or simply curious about the hidden risks behind artificial intelligence systems.
Syllabus
ISO/IEC 42001:2023 – AI Management Systems
00:00 Intro
Taught by
Tina Huang