Overview
Learn how to implement robust security measures for AI agents and prevent unauthorized data access in enterprise environments through this 33-minute conference talk from DevConf.CZ 2025. Explore the critical challenges of building enterprise-ready AI systems, particularly around data security, scalability, and integration in compliance-regulated industries, and discover how organizations mitigate the risk of Large Language Models (LLMs) exfiltrating sensitive data such as personally identifiable information and confidential company data.

The talk covers the primary mitigation strategy of building guardrails around Retrieval-Augmented Generation (RAG) systems to safeguard data while optimizing query response quality and efficiency. Examine how to implement permissions systems with fine-grained authorization capabilities that return lists of authorized subjects and accessible resources, ensuring LLMs can access only data the requesting user is permitted to see. Gain insight into why authorization is critical for RAG pipelines and learn several techniques for permissions-aware data retrieval.

Finally, watch a practical demonstration implementing fine-grained authorization for RAG using Pinecone, Langchain, OpenAI, and SpiceDB, an open-source authorization database, providing hands-on experience with modern permissions systems that make RAG pipelines both safer and more efficient at scale.
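The permissions-aware retrieval pattern described above can be sketched in plain Python. This is a minimal stdlib-only illustration, not the talk's actual demo: the SpiceDB lookup and Pinecone search are replaced with in-memory stand-ins, and names such as `lookup_accessible_docs` and `retrieve` are hypothetical. The key idea it shows is the pre-filter: resolve the set of resources the user may view first, then run similarity search only over that set.

```python
"""Permissions-aware RAG retrieval sketch: pre-filter vector search results
by an authorization check. SpiceDB and Pinecone are simulated with stdlib
stand-ins; all names here are illustrative assumptions."""

from dataclasses import dataclass


@dataclass
class Doc:
    doc_id: str
    text: str


# Toy "vector store"; in practice this would be a Pinecone index,
# filtered by document-ID metadata.
DOCS = [
    Doc("doc:handbook", "Company handbook: remote work policy."),
    Doc("doc:payroll", "Payroll records for Q3 (restricted)."),
    Doc("doc:roadmap", "Product roadmap draft (restricted)."),
]

# Toy relationship tuples standing in for a SpiceDB schema:
# (resource, relation, subject)
RELATIONSHIPS = {
    ("doc:handbook", "viewer", "user:alice"),
    ("doc:roadmap", "viewer", "user:alice"),
    ("doc:handbook", "viewer", "user:bob"),
}


def lookup_accessible_docs(user: str) -> set[str]:
    """Stand-in for a SpiceDB 'which resources can this subject view?' lookup."""
    return {res for (res, rel, subj) in RELATIONSHIPS
            if rel == "viewer" and subj == user}


def retrieve(query: str, user: str, k: int = 5) -> list[str]:
    """Pre-filter retrieval: search only documents the user is allowed to view."""
    allowed = lookup_accessible_docs(user)
    candidates = [d for d in DOCS if d.doc_id in allowed]
    # Trivial keyword-overlap "similarity"; a real pipeline would rank
    # candidates by embedding distance instead.
    ranked = sorted(
        candidates,
        key=lambda d: -sum(w in d.text.lower() for w in query.lower().split()),
    )
    return [d.doc_id for d in ranked[:k]]


print(retrieve("remote work policy", "user:alice"))
```

Because unauthorized documents are excluded before ranking, restricted content (here, `doc:payroll`) can never reach the LLM's context window, regardless of how well it matches the query. Filtering early also shrinks the search space, which is one reason the talk notes that authorization can improve RAG performance at scale.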
Syllabus
How to Prevent AI Agents from Accessing Unauthorized Data - DevConf.CZ 2025
Taught by
DevConf