Overview
Coursera Flash Sale
40% Off Coursera Plus for 3 Months!
Grab it
Create comprehensive AI security from code to deployment in this 13-course specialization covering the entire AI lifecycle. Learn to secure ML pipelines, implement threat modeling with MITRE ATLAS, optimize model performance, conduct red-teaming exercises, and automate incident response. Through hands-on labs using industry tools like Bandit, Semgrep, PyRIT, and MLflow, you'll build expertise in static analysis, vulnerability assessment, adversarial testing, and mobile AI protection. Gain practical skills to identify AI-specific threats including prompt injection, model extraction, data poisoning, and supply chain attacks while implementing robust security controls, monitoring systems, and recovery strategies for production AI deployments.
Syllabus
- Course 1: Secure AI Code & Libraries with Static Analysis
- Course 2: Secure AI: Threat Model & Test Endpoints
- Course 3: Optimize AI Inference Speed & Accuracy
- Course 4: Harden AI: Secure Your ML Pipelines
- Course 5: Secure AI Model Deployments & Lifecycles
- Course 6: Secure AI Interpret and Protect Models
- Course 7: Secure AI with Privacy and Access Controls
- Course 8: Secure AI: Red-Teaming & Safety Filters
- Course 9: Secure AI Systems Across Lifecycle Stages
- Course 10: Automate AI Anomaly Detection & Response
- Course 11: Harden AI: Patch and Recover Incidents Fast
- Course 12: Secure Mobile AI Models Against Attacks
- Course 13: Detect & Respond to Mobile AI Threats
Courses
-
Smartphones now run powerful on-device AI that learns from your behavior—and that means new risk. In this intermediate course, you’ll learn how AI turns phones into active attack surfaces and how adversaries weaponize deepfakes, side-channel inference, and mobile LLM agents. Through short, focused videos and scenario-based discussions, you’ll see exactly how zero-permission sensors and cache traces reveal activity, how overlays and prompt injection hijack agents, and why “permissions” alone don’t ensure privacy. Then you’ll turn knowledge into action: baseline telemetry, write simple detection rules, verify links and intents, quarantine devices, rotate tokens, and draft a one-page SOP. AI-graded labs provide hands-on practice, and a capstone project ties everything together. By the end, you can detect, respond, and harden against AI-driven mobile threats—skills you can apply immediately at home or in an enterprise. This course is designed for IT professionals, security analysts, mobile administrators, and technical learners who want to strengthen their ability to protect mobile environments from emerging AI-driven threats. It is also valuable for MDM specialists, SOC/incident response teams, and cybersecurity students looking to understand how modern AI models and agents are changing the mobile threat landscape. Learners should have a basic understanding of mobile or IT security concepts, along with some comfort navigating Android settings, ADB, or Mobile Device Management (MDM) tools. General familiarity with AI systems or LLM-based agents will also help learners follow demonstrations and better understand how modern AI features influence mobile risk. By the end of the course, learners will be able to analyze how AI-driven capabilities—such as sensors, on-device models, and autonomous agents—expand the mobile attack surface and enable scams like deepfake social engineering. They will evaluate real-world AI attack paths, including zero-permission inference and multi-layer agent exploits, and will be able to design a practical detection and response plan using clear rules, fast containment steps, and core resilience controls tailored for mobile environments.
-
Imagine deploying a powerful machine learning model that performs flawlessly—until a single unpatched container, a poisoned dependency, or a misconfigured cloud service brings it crashing down. In today’s AI-driven world, securing ML systems is no longer optional; it’s essential to maintaining trust, compliance, and resilience. Harden AI: Secure Your ML Pipelines is an intermediate, scenario-driven cybersecurity and AI governance course that immerses learners in the realities of protecting machine learning infrastructure. Through a blend of theory sessions, guided demonstrations, and AI-assisted coach dialogues, participants explore how to harden ML environments, secure CI/CD workflows, and build resilient pipelines that can withstand compromise. Real-world case studies—ranging from exposed Jupyter notebooks to supply chain attacks and model drift—anchor the learning experience in practical relevance. This course is for ML engineers, DevOps professionals, and AI practitioners who want to secure their ML pipelines. It also suits data scientists and developers managing AI systems in cloud or containerised environments. Learners should have basic knowledge of ML workflows, cloud or container security, and general awareness of cyber threats. By the end of the course, learners will have developed a security-by-design mindset, equipped with both the technical skills and ethical awareness to deploy trustworthy, compliant, and resilient AI systems in real-world environments.
-
Production ML models failing your latency targets? Learn how to make them run 3-5x faster without losing accuracy. This course helps ML engineers and data scientists optimize neural network inference for real-world deployment—across mobile, edge, and cloud environments. If you face slow model inference, high infrastructure costs, or deployment constraints, this course provides practical solutions. You'll master profiling techniques to identify performance bottlenecks, apply quantization to cut precision requirements, and make smart trade-offs between speed, accuracy, and resource constraints. You'll learn to benchmark optimization techniques and select the right approach for deployment scenarios. You'll explore inference profiling and metrics, pruning strategies, and quantization methods. You'll practice with real-world cases—from streaming platforms to autonomous vehicles—using industry-standard tools like PyTorch Profiler, TensorRT, and pruning utilities. This course is ideal for machine learning engineers, data scientists, and AI practitioners who are deploying or optimizing models in production. It’s also valuable for MLOps professionals and system engineers responsible for performance tuning in resource-constrained environments (e.g., mobile, embedded, or cloud inference systems). Learners should have a good grasp of Python and basic experience with PyTorch or TensorFlow. Familiarity with machine learning concepts, such as model training and evaluation, is expected. Understanding how neural networks work and basic performance metrics like latency and accuracy will help you get the most from this course. By the end of this course, you’ll confidently optimize production models, cut inference costs, meet latency goals, and deploy ML systems that scale efficiently.
-
Master comprehensive static analysis workflows for AI security using industry-standard tools like Bandit, Semgrep, and pip-audit. Learn to identify AI-specific vulnerabilities including insecure pickle deserialization, hardcoded secrets in training scripts, and dependency risks that traditional security tools miss. Through hands-on labs with real vulnerable ML codebases, you'll configure automated security scanning in CI/CD pipelines, create custom detection rules for TensorFlow/PyTorch patterns, and implement supply chain security with SBOM generation. Address the unique challenges of ML projects with 50+ dependencies while establishing production-ready security policies. This course is ideal for anyone involved in AI development, automation, or system design, including software developers, data professionals, tech managers, and curious learners who want to understand modern multi-agent systems and how to govern them responsibly. Learners don’t need deep AI expertise to get started. A basic understanding of programming concepts and some familiarity with tools like Python or visual workflow builders will make the experience smoother, but the course guides you step by step from core ideas to more advanced design patterns. By course completion, you'll proactively secure AI systems against the growing threat landscape targeting machine learning workflows, preventing costly post-deployment fixes through early vulnerability detection in development processes.
-
If model rollouts feel risky, monitoring is an afterthought, and updates make you nervous, you’re not alone. As AI moves from prototype to production, the stakes rise: model supply chains, promotion workflows, and runtime behavior need guardrails, not just good intentions. This course is your blueprint for shipping with confidence by baking security into every phase of the AI Model lifecycle. You’ll learn to choose the right deployment strategy for your risk profile, enforce provenance and approvals with a model registry, and wire continuous monitoring for data/feature drift, performance, and safety signals. We also cover securing updates with signed artifacts, CI/CD policy gates, and rapid, auditable rollback. ML engineers, MLOps practitioners, and DevOps teams work together to ensure AI models move smoothly from development to production. ML engineers focus on building and training models, MLOps practitioners streamline and automate the model lifecycle, and DevOps teams manage infrastructure and deployment. Together, they create a reliable, scalable, and efficient pipeline for delivering AI solutions that perform consistently in real-world environments. Git & CI/CD basics, Docker or managed ML platform experience, working knowledge of Python ML workflows and environment/package management. By the end, you’ll ship behind structured change control, track lineage from dataset to container, and respond quickly when reality (or your threat model) changes. Whether you run on Kubernetes, serverless, or managed ML platforms, the practical flows, templates, and hands-on exercises in this course help you harden deployments without slowing delivery; turning ad-hoc launches into repeatable, secure lifecycles from commit to canary to continuous oversight.
-
As artificial intelligence powers our world, it creates a new frontier for complex threats that standard cybersecurity practices can't handle. This course equips you with the specialized, in-demand skills to defend these critical systems from end to end. You will learn to think like an attacker, identifying unique threats like data poisoning, adversarial evasion, and model inference attacks. We'll journey through the entire MLOps lifecycle, pinpointing vulnerabilities from the moment data is collected to the second a model is deployed. But this isn't just theory—you will immediately apply your knowledge in a series of hands-on labs. Using the industry-standard MITRE ATLAS framework, you'll perform a full threat model analysis on a sample AI application. You will then implement practical, code-based mitigation strategies to build more resilient systems, culminating your learning in a final project where you conduct a full security audit. This course is ideal for AI engineers, data scientists, cybersecurity professionals, and anyone involved in the design, development, or deployment of AI systems. It is especially valuable for professionals working in sectors where security is a priority, such as healthcare, finance, and government. Learners should have a foundational understanding of AI, machine learning, and basic cybersecurity concepts. Familiarity with software development practices and system architecture will be beneficial, but not required. By the end of this course, you will have the confidence and tangible skills to protect the next generation of technology and become an essential asset in the world of AI security.
-
As large language models revolutionize business operations, sophisticated attackers exploit AI systems through prompt injection, jailbreaking, and content manipulation—vulnerabilities that traditional security tools cannot detect. This intensive course empowers AI developers, cybersecurity professionals, and IT managers to systematically identify and mitigate LLM-specific threats before deployment. Master red-teaming methodologies using industry-standard tools like PyRIT, NVIDIA Garak, and Promptfoo to uncover hidden vulnerabilities through adversarial testing. Learn to design and implement multi-layered content-safety filters that block sophisticated bypass attempts while maintaining system functionality. Through hands-on labs, you'll establish resilience baselines, implement continuous monitoring systems, and create adaptive defenses that strengthen over time. This course is designed for AI engineers, security professionals, data scientists, and developers interested in ensuring the safety and robustness of AI models. It’s also ideal for technology leaders seeking to implement secure, responsible AI frameworks within their organizations. Learners should have a basic understanding of machine learning, AI model architecture, and programming concepts. No prior experience with AI red-teaming or safety systems is required. By end of this course, you'll confidently conduct professional AI security assessments, deploy robust safety mechanisms, and protect LLM applications from evolving attack vectors in production environments.
-
Master the critical skills needed to secure AI inference endpoints against emerging threats in this comprehensive intermediate-level course. As AI systems become integral to business operations, understanding their unique vulnerabilities is essential for security professionals. You'll learn to identify and evaluate AI-specific attack vectors including prompt injection, model extraction, and data poisoning through hands-on labs and real-world scenarios. Design comprehensive threat models using STRIDE and MITRE ATLAS frameworks specifically adapted for machine learning systems. Create automated security test suites covering unit tests for input validation, integration tests for end-to-end security, and adversarial robustness testing. Implement these security measures within CI/CD pipelines to ensure continuous validation and monitoring. Through practical exercises with Python, GitHub Actions, and monitoring tools, you'll gain experience securing production AI deployments. Perfect for developers, security engineers, and DevOps professionals ready to specialize in the rapidly growing field of AI security. This course is designed for developers, security engineers, and DevOps professionals looking to specialize in AI security. With a solid understanding of Python, APIs, and CI/CD concepts, you'll dive deep into securing AI inference endpoints against emerging threats like prompt injection and data poisoning. Through hands-on labs, you'll learn to design threat models, create automated security tests, and integrate continuous security measures into CI/CD pipelines. Perfect for those eager to enhance their expertise in safeguarding AI systems. A basic knowledge of Python, APIs, web services, and CI/CD concepts is essential for this course. Python will help with scripting, while understanding APIs and CI/CD will enable you to automate and manage deployments effectively. These skills are key to successfully navigating the course. By the end of this course, you'll have the skills to automate and secure your development workflows, leveraging tools like Bitbucket Pipelines. You'll be ready to apply industry best practices to integrate, test, and deploy applications seamlessly, enhancing both efficiency and security in your DevOps processes.
-
AI models are no longer locked in the cloud—they live in your pocket, powering mobile apps for fitness, finance, healthcare, and beyond. But with this power comes new risk: adversarial attacks, model theft, privacy leaks, and silent failures that undermine user trust. Securing Mobile AI Models against Attacks (SMAI) is a hands-on course for mobile app developers, AI engineers, and cybersecurity professionals who want to safeguard AI models on Android and iOS. Through interactive coach dialogues, video lessons, and practical labs, you’ll learn how to embed security from day one, analyze threats like reverse engineering and adversarial inputs, and implement layered defenses using encryption, obfuscation, and OpenTelemetry monitoring. By the end, you will have the skills to design, secure, and continuously monitor mobile AI applications, ensuring resilience, compliance, and user confidence in real-world deployments. Participants should have a basic understanding of AI, machine learning, and mobile development, along with knowledge of security concepts like encryption and data protection. Familiarity with AI model deployment and monitoring tools like OpenTelemetry is also helpful.
-
An outage rarely starts with a red dashboard-it starts as a small anomaly: a spike in latency, a surge in failures, or a subtle change in traffic. The faster you detect and respond, the less damage (and stress) you create. In this course, you’ll build an end-to-end anomaly detection and response loop on Azure. You’ll instrument an app with Application Insights, detect unusual behavior with Azure Monitor smart detection, dynamic thresholds, and KQL time-series functions, and then turn alerts into action using action groups and Logic Apps (with optional Azure Functions for custom remediation). You’ll learn a practical workflow: choose the right signal, set guardrails to reduce noise, enrich alerts with context, and automate a consistent response-notify the right channel, capture evidence, and trigger a safe mitigation step. This course is designed for IT professionals, including DevOps engineers, SREs, and Azure administrators, who want to learn how to automate anomaly detection and response workflows in Azure environments. Learners should be familiar with basic Azure Portal navigation, and JSON familiarity is helpful, along with basic monitoring concepts. No ML prerequisite. By the end, you’ll have a reusable blueprint (queries, alert rules, and automation) you can adapt to real systems to catch problems earlier and respond reliably.
-
Master the critical skills needed to maintain AI systems in production through this hands-on course designed for DevOps engineers, ML engineers, and SREs. As AI deployments grow more complex, the ability to patch safely, recover from incidents quickly, and maintain operational health becomes essential. Through realistic crisis scenarios, you'll learn systematic patching strategies that minimize downtime, conduct blameless post-mortems that transform failures into knowledge, and build monitoring systems that detect issues before users notice. Work with industry tools like MLflow while practicing with real incident data. You'll tackle challenges like emergency vulnerability patches, investigate mysterious model failures, and design monitoring for a million-user scale. Each module features immersive scenarios where you make critical decisions under pressure. Ideal for DevOps, ML engineers, and SREs managing AI systems in production. Perfect for those seeking to strengthen skills in monitoring, incident response, and reliability, or preparing for senior operations roles. Basic knowledge of AI/ML concepts, familiarity with deployment pipelines, and some experience in incident management are recommended for successful course completion. By course completion, you'll confidently handle production AI incidents, implement preventive measures, and lead operational excellence initiatives. Perfect for professionals managing AI in production or preparing for senior DevOps/SRE roles.
-
Ever wonder if your smart AI is actually secure? In this course, we'll ditch the dry theory to show you how to build genuinely resilient AI systems from the ground up, making security a core part of your design, not just an afterthought. You'll begin by stepping into the role of an AI Security Architect, running a “pre-mortem” to think like an attacker and neutralize threats before they even happen. Through focused videos and exercises, you’ll master essential defenses like blocking bad data with input sanitization, ‘vaccinating’ your model against attacks with adversarial training, and protecting user data with differential privacy. This all culminates in a hands-on lab where you'll personally fix a vulnerable model and prove its new resilience. The main goal is to shift your mindset from reactive patching to proactive design, so you’ll walk away with the real-world skills to analyze defense strategies, successfully harden a model in a lab, and design a comprehensive security plan for any new AI project. This course is for AI developers, security engineers, MLOps specialists, and data scientists aiming to master securing AI models against adversarial threats. Proficiency in Python and a machine learning framework (e.g., TensorFlow, PyTorch). Foundational knowledge of building and training AI models. By the end of this course, you’ll have gained the skills to thoroughly analyze and secure AI models, applying advanced defense mechanisms like adversarial training and differential privacy. You’ll be equipped to assess vulnerabilities, implement robust security strategies, and continuously test and improve your models. With hands-on experience fixing real-world AI vulnerabilities, you'll be prepared to design and deploy AI systems that are resilient against adversarial threats, ensuring their integrity and security throughout their lifecycle.
-
Artificial Intelligence brings transformative benefits but also unprecedented privacy, security, and compliance risks. Recent incidents (i.e. Samsung, McDonald’s, OpenAI, Slack) and regulatory actions show what happens when these risks are ignored. This course teaches learners to secure AI systems by implementing privacy-by-design, least privilege, DLP, and dynamic access controls and to map these controls to global regulations. Through case studies, policy drafting, and hands-on labs, learners develop the skills to assess risks, deploy controls, and respond to incidents in real AI environments. No advanced programming or AI expertise is required. All you need is basic IT/security knowledge.
Taught by
Aseem Singhal, Ashish Mohan, Brian Newman, Hanniel Jafaru, Mark Peters, Renaldi Gondosubroto, Reza Moradinezhad, Rifat Erdem Sahin, Ritesh Vajariya, Starweaver and Tom Themeles