
Mitigating LLM Risks - SECtember 2025

Cloud Security Alliance via YouTube

Overview

Explore the critical security challenges facing large language models in this 29-minute conference talk by Mark Russinovich, CTO, Deputy CISO, and Technical Fellow at Microsoft Azure. Delve into the most pressing risks affecting AI model safety and reliability, including hallucinations, indirect prompt injections, jailbreaks, and reasoning limitations. Examine real-world examples of these vulnerabilities and discover effective strategies for mitigating LLM risks. Learn about the latest risk categories impacting large language models and gain insights into best practices for strengthening trust and reliability in AI systems. Understand how to implement robust security measures to protect against prompt injection and other AI-specific threats. Access comprehensive frameworks such as the AI Controls Matrix (AICM), which provides 243 controls across 18 domains mapped to global AI security standards, and explore additional resources for advancing your knowledge of AI governance and cloud security.

Syllabus

Mitigating LLM Risks with Mark Russinovich | SECtember 2025

Taught by

Cloud Security Alliance

