Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

Coursera

Generative AI and LLM Security

Edureka via Coursera

Overview

Coursera Flash Sale
40% Off Coursera Plus for 3 Months!
Grab it
This program equips cybersecurity professionals, AI engineers, and security architects with the expertise to identify, analyze, and mitigate vulnerabilities in Generative AI (GenAI) and Large Language Models (LLMs). You’ll begin by exploring the foundations of GenAI threats, examining common attack vectors such as prompt injection, jailbreaks, model theft, and adversarial manipulation. Through practical demonstrations, you will learn how attackers exploit weaknesses in AI-driven systems and how defenders can detect and respond to these risks in real-world environments. Building on these fundamentals, you’ll gain hands-on experience in securing LLM applications, aligning model outputs to security objectives, and applying guardrails, watermarking, and safety evaluation methods. You’ll also work with API integrations using platforms like Gemini API and Google Colab to simulate secure deployment practices and mitigate risks in live systems. Next, the program delves into AI lifecycle security, covering strategies to secure training data, prevent poisoning attacks, and protect AI pipelines. You’ll explore model provenance, dependency scanning, and secure deployment pipelines—ensuring the integrity of AI systems across their entire supply chain. The course also emphasizes AI ethics and compliance, including bias detection, fairness in model design, and global regulatory frameworks like GDPR, CCPA, NIST AI RMF, ISO standards, and the EU AI Act. Using tools like Sola Security, you’ll practice auditing, governance, and risk management to operationalize ethical and compliant AI practices. Finally, you’ll examine frontier threats in emerging domains such as multimodal AI and Agentic AI, exploring adversarial attacks, cross-modal vulnerabilities, and their implications for enterprise cybersecurity. By the end of this program, you will be able to: - Identify and evaluate attack vectors targeting GenAI and LLMs. - Apply secure prompt engineering and defense strategies against prompt injection and jailbreaks. - Design and implement guardrails, safety mechanisms, and watermarking in LLM applications. - Protect AI training data, pipelines, and deployment workflows from poisoning and supply chain risks. - Assess and enforce regulatory compliance with GDPR, CCPA, NIST, ISO, and the EU AI Act. - Recognize and mitigate frontier threats in multimodal and agentic AI systems. - Integrate ethical, transparent, and resilient security practices across the AI lifecycle. This specialization is designed for cybersecurity engineers, LLM developers, AI security specialists, ML engineers, and cloud/edge security architects who want to build advanced expertise in safeguarding GenAI systems. Join us to gain the skills, tools, and strategies required to secure next-generation AI systems against evolving adversarial threats.

Syllabus

  • Threats in Generative AI Systems
    • Uncover the vulnerabilities of Generative AI systems by examining common attack vectors such as prompt injection, jailbreaks, and model theft. Learn how adversaries exploit weaknesses, explore mitigation strategies, and gain hands-on practice in detecting and responding to real-world GenAI risks.
  • AI Lifecycle Security
    • Learn how to secure the AI lifecycle by protecting training data, ensuring supply chain integrity, and safeguarding model deployment pipelines. Explore techniques to detect data poisoning, enforce model provenance, manage dependencies, and implement tamper-proofing strategies. Gain practical skills to apply security best practices, monitor AI systems, and mitigate risks while ensuring ethical, reliable, and compliant AI operations.
  • AI Ethics and Regulatory Compliance
    • Explore how AI systems can operate ethically and comply with regulatory standards while maintaining security. Learn to identify ethical risks, address bias and fairness challenges, and implement transparency and accountability in AI workflows. Gain hands-on experience with compliance frameworks, auditing practices, and tools like Sola Security to ensure AI-driven systems are responsible, transparent, and legally compliant.
  • Frontier Threats in AI Systems
    • Investigate advanced security risks in AI systems, focusing on multimodal and Agentic AI vulnerabilities. Learn to identify and mitigate adversarial threats across diverse data modalities, while understanding defensive strategies and risk management practices. Gain hands-on experience with AI-driven threat detection, cybersecurity triage, and security assessment techniques to ensure robust, resilient, and secure enterprise AI deployments.
  • Course Wrap-Up and Assessment
    • This module is designed to assess an individual on the various concepts and teachings covered in this course. Evaluate your knowledge with a comprehensive graded quiz.

Taught by

Edureka

Reviews

Start your review of Generative AI and LLM Security

Never Stop Learning.

Get personalized course recommendations, track subjects and courses with reminders, and more.

Someone learning on their laptop while sitting on the floor.