A Look at AI Security - Generative AI Risks and Safeguards
Association for Computing Machinery (ACM) via YouTube
Overview
Explore the critical security challenges facing generative AI systems in this one-hour conference talk featuring Microsoft Azure CTO Mark Russinovich, moderated by Scott Hanselman. Examine three fundamental vulnerabilities inherent in large language models: hallucination, indirect prompt injection, and jailbreaks (direct prompt injection). Learn where these security risks originate, understand their potential impact on systems and users, and discover effective mitigation strategies. Gain insight into how organizations can harness the transformative potential of LLMs while practicing responsible risk management, and understand the evolving landscape of AI security threats and the safeguards needed to protect enterprise and consumer applications.
Syllabus
A Look at AI Security with Mark Russinovich
Taught by
Association for Computing Machinery (ACM)