Identify, analyze, and defend against the security vulnerabilities that arise when Large Language Models (LLMs) are integrated into production applications. This course begins with how LLMs function in applications—tokenization, next-token prediction, and the architectural patterns that determine attack surface—then surveys real-world application types including Application Programming Interface (API)-based services, embedded-model deployments, and multi-model orchestration pipelines. You will examine each architecture's distinct security profile and the trade-offs that shape deployment decisions.
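The core loop the first module describes, where a model repeatedly predicts the next token from the tokens so far, can be sketched in miniature. This toy uses a whitespace tokenizer and a hard-coded bigram table standing in for a trained model; real LLMs use learned subword tokenizers (e.g. BPE) and transformer weights, so everything here is an illustrative assumption:

```python
def tokenize(text):
    # Toy whitespace tokenizer; production models use subword schemes like BPE.
    return text.split()

# Hypothetical stand-in for a trained model: maps the last token to the
# single most likely next token.
BIGRAMS = {
    "the": "model",
    "model": "predicts",
    "predicts": "the",
}

def next_token(tokens):
    # Greedy decoding: always pick the highest-probability continuation.
    return BIGRAMS.get(tokens[-1], "<eos>")

def generate(prompt, max_tokens=3):
    tokens = tokenize(prompt)
    for _ in range(max_tokens):
        tok = next_token(tokens)
        if tok == "<eos>":  # stop when the "model" has no continuation
            break
        tokens.append(tok)
    return " ".join(tokens)

print(generate("the"))  # the model predicts the
```

The security-relevant point the course builds on is visible even here: the model has no channel separating instructions from data; everything is just tokens in one sequence, which is what makes prompt injection possible.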
The second module provides a systematic walkthrough of LLM-specific vulnerability categories: prompt injection, insecure output handling, model theft and replication through distillation, sensitive information disclosure, insecure plugin design, excessive agency, and denial-of-service attacks. For each vulnerability you will study the attack mechanism, analyze why LLM behavior makes it exploitable, and apply concrete defense patterns including input sanitization, output validation, permission boundaries, and rate limiting. A capstone assessment synthesizes these skills into an end-to-end security evaluation of an LLM-powered system.
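Three of the defense patterns named above, input sanitization, output validation, and rate limiting, might be sketched in simplified form as follows. The function names, regex lists, and thresholds are illustrative assumptions, not any specific framework's API; real deployments layer these heuristics with model-side and infrastructure controls:

```python
import re
import time
from collections import deque

# Heuristic deny-list for prompt-injection phrases (illustrative, not exhaustive).
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (the )?system prompt",
]

def sanitize_input(user_text):
    # Input sanitization: reject requests containing known injection phrases
    # before they reach the model. Pattern matching is a heuristic, not a guarantee.
    for pat in INJECTION_PATTERNS:
        if re.search(pat, user_text, re.IGNORECASE):
            raise ValueError("possible prompt injection detected")
    return user_text

def validate_output(model_text):
    # Insecure output handling defense: treat model output as untrusted and
    # strip markup before it reaches a browser or downstream interpreter.
    return re.sub(r"<[^>]*>", "", model_text)

class RateLimiter:
    """Sliding-window limiter: at most max_calls per window_s seconds."""

    def __init__(self, max_calls, window_s):
        self.max_calls = max_calls
        self.window_s = window_s
        self.calls = deque()  # timestamps of recent calls

    def allow(self):
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] > self.window_s:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            return False  # denial-of-service guard: refuse the call
        self.calls.append(now)
        return True
```

The design choice worth noting is that each control sits at a different trust boundary: sanitization before the model, validation after it, and rate limiting around the whole request path, which mirrors the per-vulnerability defenses the module walks through.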