Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

YouTube

A Look at AI Security - Generative AI Risks and Safeguards

Association for Computing Machinery (ACM) via YouTube

Overview

Coursera Flash Sale
40% Off Coursera Plus for 3 Months!
Grab it
Explore the critical security challenges facing generative AI systems in this one-hour conference talk featuring Microsoft Azure CTO Mark Russinovich, moderated by Scott Hanselman. Examine three fundamental vulnerabilities inherent in large language models: hallucination, indirect prompt injection, and jailbreaks (direct prompt injection). Learn about the origins of these security risks, understand their potential impact on systems and users, and discover effective mitigation strategies. Gain insights into how organizations can harness the transformative potential of LLMs while implementing responsible risk management practices. Understand the evolving landscape of AI security threats and the safeguards necessary to protect against them in enterprise and consumer applications.

Syllabus

A Look at AI Security with Mark Russinovich

Taught by

Association for Computing Machinery (ACM)

Reviews

Start your review of A Look at AI Security - Generative AI Risks and Safeguards

Never Stop Learning.

Get personalized course recommendations, track subjects and courses with reminders, and more.

Someone learning on their laptop while sitting on the floor.