Authorization Best Practices for Systems Using Large Language Models
Cloud Security Alliance via YouTube
Overview
Explore authorization best practices for systems that use Large Language Models in this 26-minute conference talk from the Cloud Security Alliance. Gain insight into the security considerations unique to LLM integration, including prompt injection attacks and vector database risks. Discover the components and design patterns found in LLM-based systems, with a focus on the authorization implications of each element. Learn best practices and patterns for common use cases, such as retrieval augmented generation (RAG) with vector databases, API calls to external systems, and SQL queries generated by LLMs. Finally, examine the fundamental concerns around building agentic systems, equipping yourself to build more robust and secure LLM-powered applications.
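One of the patterns the talk covers is authorization for RAG retrieval: enforcing the caller's permissions inside the retrieval step, so that documents the user is not entitled to read can never reach the LLM's prompt. The sketch below illustrates the idea with a hypothetical in-memory store and role-based ACLs (the class names, fields, and substring matching are illustrative assumptions, not an API from the talk; a real system would use embeddings and a production vector database):

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    allowed_roles: set = field(default_factory=set)  # ACL attached at ingestion time

class VectorStore:
    """Toy in-memory store; a real system would use embeddings + ANN search."""
    def __init__(self):
        self.docs = []

    def add(self, doc):
        self.docs.append(doc)

    def search(self, query, user_roles):
        # The authorization filter runs *inside* retrieval, before prompt
        # assembly: the LLM never sees text the caller may not read.
        # (Substring match stands in for vector similarity here.)
        return [d for d in self.docs
                if d.allowed_roles & user_roles and query.lower() in d.text.lower()]

store = VectorStore()
store.add(Document("Q3 revenue forecast: confidential figures", {"finance"}))
store.add(Document("Employee handbook: vacation policy details", {"finance", "staff"}))

# A "staff" user searching for "policy" gets only the handbook document;
# the finance-only forecast is filtered out before the prompt is built.
context = store.search("policy", {"staff"})
```

The key design choice is that the ACL check happens at retrieval time using the end user's identity, rather than relying on the LLM (or a post-hoc filter on its output) to withhold restricted content.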
Syllabus
Authorization best practices for systems using Large Language Models
Taught by
Cloud Security Alliance