A Look at AI Security - Generative AI Risks and Safeguards
Association for Computing Machinery (ACM) via YouTube
Overview
Explore the critical security challenges facing generative AI systems in this one-hour conference talk featuring Microsoft Azure CTO Mark Russinovich, moderated by Scott Hanselman. The talk examines three fundamental vulnerabilities inherent in large language models: hallucination, indirect prompt injection, and jailbreaks (direct prompt injection). It covers the origins of these security risks, their potential impact on systems and users, and effective mitigation strategies. Viewers will gain insight into how organizations can harness the transformative potential of LLMs while practicing responsible risk management, and into the evolving landscape of AI security threats and the safeguards needed to protect enterprise and consumer applications.
Syllabus
A Look at AI Security with Mark Russinovich
Taught by
Association for Computing Machinery (ACM)