
YouTube

AI Safety in 35 Minutes

Tina Huang via YouTube

Overview

Learn the fundamentals of AI safety in this 35-minute video tutorial that explores what AI safety means, why it matters, and how to apply it when developing or using AI systems. Discover real-world examples of unsafe AI systems and survey the most important AI safety frameworks from NIST, Microsoft, MITRE, Databricks, ISO, and IEEE. Master practical approaches to building AI systems safely, with and without code, while learning to manage AI risks such as bias, model theft, and data poisoning. Examine key frameworks and resources, including the NIST AI Risk Management Framework, the Microsoft AI Security Framework, MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems), the Databricks AI Security Framework (DASF 2.0), ISO/IEC 42001:2023 for AI management systems, and the IEEE Ethically Aligned Design initiative. Gain insight into keeping AI trustworthy and aligned, whether you're a developer, a researcher, or simply curious about the hidden risks behind artificial intelligence systems.

Syllabus

00:00 Intro
ISO/IEC 42001:2023 – AI Management Systems

Taught by

Tina Huang

