Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

Coursera

AI Security Fundamentals – LLM Threats & OWASP 2026

Packt via Coursera

Overview

Google, IBM & Meta Certificates — All 10,000+ Courses at 40% Off
One annual plan covers every course and certificate on Coursera. 40% off for a limited time.
Get Full Access
This course features Coursera Coach! A smarter way to learn with interactive, real-time conversations that help you test your knowledge, challenge assumptions, and deepen your understanding as you progress through the course. In this course, you’ll gain a comprehensive understanding of the security principles vital for securing Large Language Model (LLM) applications. You will explore critical vulnerabilities, such as prompt injection, data poisoning, and improper output handling, while learning strategies to mitigate these risks. Through engaging modules, you will analyze real-world examples of successful and unsuccessful LLM implementations, enabling you to understand the delicate balance between functionality and security in AI systems. The course is structured across 12 detailed modules, beginning with an introduction to LLM applications and their associated security challenges. As you progress, you will dive into specific topics such as prompt injection, sensitive information disclosure, and supply chain vulnerabilities, with each module providing practical, hands-on solutions to counter these risks. You’ll also explore essential topics such as the role of third-party models, data minimization, and model poisoning, which are key to securing AI applications at scale. Designed for security professionals and AI developers, this course provides you with the tools needed to address security issues within LLM systems proactively. You’ll walk away with the ability to implement best practices for securing LLM development and deployment processes. Whether you are working in AI development, security, or policy, this course will help you understand and address the security complexities that come with LLM technology. By the end of the course, you will be able to assess LLM vulnerabilities, apply security principles to mitigate risks, design secure LLM applications, and implement strategies to defend against prompt injections and other security threats.

Syllabus

  • Module 1: Introduction to LLM Application Security
    • In this module, we will introduce Large Language Models (LLMs) and explore their applications across various industries. We will also examine the security challenges that arise in LLM applications and discuss why securing LLM development and deployment processes is essential. This section sets the foundation for understanding the security risks associated with LLM technology.
  • Module 2: LLM01:2025 – Prompt Injection
    • In this module, we will focus on the vulnerability of prompt injection in LLM systems, explaining both direct and indirect types of attacks. We will dive into prevention strategies, mitigation techniques, and the evolution of these attacks as they grow more sophisticated over time. You will learn how to safeguard LLM applications against prompt injection risks.
  • Module 3: LLM02:2025 – Sensitive Information Disclosure
    • In this module, we will examine sensitive information disclosure within LLM applications, focusing on common vulnerabilities such as PII leakage. We will also discuss prevention strategies like data sanitization and privacy-enhancing technologies to protect sensitive information, while ensuring compliance with privacy regulations.
  • Module 4: LLM03:2025 – Supply Chain
    • In this module, we will explore the security risks inherent in the LLM supply chain, focusing on third-party models, data, and components. We will examine how to use Software Bill of Materials (SBOMs) to secure LLM systems and emphasize the importance of clear governance policies for using third-party LLM models in applications.
  • Module 5: LLM04:2025 – Data and Model Poisoning
    • In this module, we will delve into the risks of data and model poisoning, exploring how these attacks can alter LLM behavior and compromise security. We will cover different poisoning scenarios and provide prevention strategies, including robustness testing to identify and mitigate poisoning effects.
  • Module 6: LLM05:2025 – Improper Output Handling
    • In this module, we will explore the risks tied to improper handling of LLM outputs, including vulnerabilities like XSS and SQL injection. We will outline strategies for secure coding practices and demonstrate output encoding techniques to protect against injection attacks and other security risks.
  • Module 7: LLM06:2025 – Excessive Agency
    • In this module, we will examine the risks of excessive agency in LLM systems, focusing on autonomy, permissions, and functionality. We will discuss best practices for mitigating these risks, including the implementation of least privilege principles and secure authorization frameworks.
  • Module 8: LLM07:2025 – System Prompt Leakage
    • In this module, we will explore the risks associated with system prompt leakage in LLM systems. We will provide strategies to mitigate these risks, including prompt engineering and defense-in-depth techniques to ensure the security of system prompts and prevent sensitive information exposure.
  • Module 9: LLM08:2025 – Vector and Embedding Weaknesses
    • In this module, we will investigate the vulnerabilities related to vector and embedding usage in LLM applications, focusing on risks such as unauthorized access and data leakage. We will explore security best practices and provide strategies for protecting vector databases and embeddings to enhance LLM security.
  • Module 10: LLM09:2025 – Misinformation
    • In this module, we will explore the challenges of misinformation generated by LLMs and its effects on various domains like healthcare, politics, and finance. We will discuss strategies for preventing and mitigating misinformation spread and examine detection techniques for identifying harmful content.
  • Module 11: LLM10:2025 – Unbounded Consumption
    • In this module, we will discuss the risks of unbounded consumption in LLM systems, focusing on how excessive use can lead to Denial of Service (DoS) attacks and other vulnerabilities. We will cover strategies for mitigating these risks, including rate limiting techniques and model extraction defenses to protect LLM resources.
  • Module 12: Best Practices and Future Trends in LLM Security
    • In this final module, we will summarize the essential security principles for LLM application development and explore future trends and challenges in securing LLM systems. We will discuss the role of emerging technologies and the importance of integrating security standards and regulations to ensure ethical LLM usage.

Taught by

Packt - Course Instructors

Reviews

Start your review of AI Security Fundamentals – LLM Threats & OWASP 2026

Never Stop Learning.

Get personalized course recommendations, track subjects and courses with reminders, and more.

Someone learning on their laptop while sitting on the floor.