Overview
Explore the critical security challenges facing Large Language Model (LLM) workflows in this conference talk, which examines how non-deterministic AI systems create vulnerabilities to spoofing, privilege escalation, and compliance failures. Learn from real-world social engineering experiments conducted during conversational AI system development to understand how attackers can bypass security guardrails, and discover practical injection attack scenarios along with the underlying vulnerabilities that enable them.

The talk then examines emerging identity patterns, including W3C Verifiable Credentials and blockchain-based verification systems, covering methods for protecting against prompt manipulation attacks and the often-overlooked elements crucial for comprehensive audit logging. Gain insights into building LLM-aware identity ecosystems through policy-as-code enforcement and federated governance models. Through hands-on demonstrations and detailed case studies, acquire actionable security patterns for implementing trust mechanisms in current AI systems while preparing for future decentralized identity architectures.
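The core idea behind using verifiable credentials to resist prompt manipulation can be sketched in miniature: an issuer signs a set of claims, so a verifier can detect if an LLM workflow (or an attacker steering it) has altered them. This is an illustrative sketch only, assuming a shared issuer key; real W3C Verifiable Credentials use JSON-LD with Data Integrity proofs (e.g. Ed25519 signatures), not HMAC, and the function names here are hypothetical, not APIs from the talk.

```python
import hashlib
import hmac
import json

def issue_credential(issuer_key: bytes, claims: dict) -> dict:
    """Sign a claims payload so any later tampering is detectable."""
    payload = json.dumps(claims, sort_keys=True).encode()
    proof = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return {"credentialSubject": claims, "proof": proof}

def verify_credential(issuer_key: bytes, credential: dict) -> bool:
    """Recompute the proof; any change to the claims invalidates it."""
    payload = json.dumps(credential["credentialSubject"], sort_keys=True).encode()
    expected = hmac.new(issuer_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

key = b"issuer-secret"
cred = issue_credential(key, {"id": "did:example:alice", "role": "agent-operator"})
assert verify_credential(key, cred)

# A prompt-injection attempt that rewrites the subject's role fails verification:
cred["credentialSubject"]["role"] = "admin"
assert not verify_credential(key, cred)
```

The design point is that authorization decisions in the workflow consume the verified claims rather than anything the model emits, so a manipulated prompt cannot escalate privileges on its own.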
Syllabus
Building Identity into LLM Workflows with Verifiable Credentials - Ben Dechrai
Taught by
NDC Conferences