
Macquarie University

Cyber Security: Security of AI

Macquarie University via Coursera

Overview

AI Security: Risks, Defences and Safety | Defend the Future of Intelligence

As artificial intelligence becomes embedded in everything from critical infrastructure to everyday applications, a new frontier of cyber risk has emerged. From adversarial attacks to backdoor exploits, AI systems are now prime targets, and powerful tools, in the hands of threat actors.

Secure the Systems That Learn

AI Security is your essential guide to defending intelligent systems. Developed by Macquarie University’s Cyber Skills Academy, ranked in the top 1% of universities globally and recognised as Australia’s leading cyber security school, this course has been co-designed with global tech leaders to ensure alignment with emerging threats and international standards.

Whether you’re a cyber security professional, developer, data scientist, or policy leader, this course equips you to detect, prevent, and respond to the security risks unique to AI. Through deep, applied learning across six core modules, you’ll gain real-world skills to:

  • Understand AI systems, their architecture, and the security risks that arise from adversarial inputs, model poisoning, and data leakage.
  • Assess and mitigate AI-driven cyber-physical risks in Operational Technology (OT) and Industrial Control Systems (ICS).
  • Navigate threats in real-world AI deployments—from deepfakes and misinformation to ethical misuse and privacy violations.
  • Apply security testing strategies and technical controls, including encryption, red/purple/blue team exercises, and robustness benchmarking.
  • Align AI systems with frameworks for Responsible AI, covering fairness, transparency, regulatory compliance, and trust.
  • Look ahead to the evolving risks posed by Artificial General Intelligence (AGI) and deploy proactive defences for the next generation of AI.

AI Is the New Attack Surface

AI is transforming everything, from how we work to how we’re attacked. This course is built to prepare you for both.
You’ll gain the technical fluency, ethical awareness, and strategic insight to secure AI across domains and industries. Lead the defence. Anticipate what’s next. Secure AI now.

Syllabus

  • Introduction and Emergent Threats of AI
    • Artificial Intelligence (AI) is revolutionising industries across the globe, but it’s also introducing a rapidly evolving set of cybersecurity threats. As AI systems become more complex and deeply embedded in everyday operations, understanding their foundational principles and emergent risks is essential. In this topic, you’ll explore the fundamentals of AI: what it is, how it works, and how it’s being applied across sectors. You’ll learn the difference between engineering-driven AI systems and deep learning models, and how each introduces unique security considerations. From there, we shift focus to the new and emerging threat landscape: adversarial AI, model manipulation, deepfakes, AI-driven scams, and the weaponisation of AI for misinformation. You’ll build an essential foundation in both traditional security frameworks and AI-specific risks, setting the stage for deeper exploration of securing AI applications throughout the rest of the course. Get ready to explore the frontline of AI security challenges and understand the urgency of building trusted, robust, and defensible AI systems.
  • ICS (Industrial Control Systems) / OT (Operational Technology) Attacks in the context of Traditional Security Attacks
    • As AI becomes increasingly integrated into critical infrastructure and industrial systems, it brings with it new layers of complexity, and new avenues for attack. In this topic, you’ll explore how Artificial Intelligence is reshaping the security landscape of Industrial Control Systems (ICS) and Operational Technology (OT), and what this means for defenders working in high-risk, high-impact environments. We begin by examining how AI is applied in ICS and OT, enhancing operational efficiency, automation, and predictive maintenance. But with innovation comes risk: AI introduces novel vulnerabilities, from AI-driven manipulation of cyber-physical systems to emerging attack vectors in critical infrastructure such as energy grids and manufacturing lines. Through real-world case studies, you’ll investigate how adversaries exploit AI in industrial environments and how traditional OpSec and DevSecOps practices must be adapted to secure AI-enabled deployments. You'll also learn how to identify sensitive components within AI pipelines and apply context-specific defences based on sector, whether in military-grade applications, industrial settings, or consumer products. AI is powering the future of industry. Here, you’ll learn how to defend it.
  • AI Security and Risks to Real-life Applications
    • As AI systems transition from experimental models to real-world deployment, their exposure to adversarial threats and misuse increases dramatically. In this topic, we’ll explore how AI is being attacked and exploited in practice, and why securing these systems is now a critical focus for cyber professionals. You’ll dive into the mechanics of AI-specific attack vectors such as model poisoning, information leakage, model stealing, and backdoor exploits. These threats not only compromise the performance of AI models, but also pose serious risks to data privacy, intellectual property, and user safety. We’ll also examine the implications of harmful AI outputs, whether they arise from poorly aligned models, biased training data, or deliberate manipulation. You’ll learn how challenges such as output alignment, ethical censorship, and AI-powered surveillance affect both public trust and legal compliance. By analysing real-world case studies and scenarios, this topic will sharpen your ability to identify vulnerabilities in AI systems and understand the broader societal consequences of insecure deployments. AI is already shaping the world; this topic helps ensure it does so securely and responsibly.
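    To make the adversarial attack vectors named above concrete, here is a minimal sketch (illustrative only, not part of the course materials) of a fast-gradient-sign-method (FGSM) style attack against a toy logistic-regression classifier: the input is nudged a small step in the direction that most increases the model’s loss, flipping its prediction.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fgsm_perturb(x, y, w, b, eps):
        """FGSM-style attack on a logistic-regression model.

        Moves the input x by eps in the sign of the loss gradient,
        i.e. the direction that increases the cross-entropy loss
        for the true label y.
        """
        p = sigmoid(w @ x + b)   # model's confidence that x is class 1
        grad_x = (p - y) * w     # gradient of the loss w.r.t. the input
        return x + eps * np.sign(grad_x)

    # A toy "trained" model and an input it classifies correctly.
    w, b = np.array([2.0, -1.0]), 0.0
    x, y = np.array([1.0, 1.0]), 1.0

    x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
    print(sigmoid(w @ x + b) > 0.5)      # True: original input is class 1
    print(sigmoid(w @ x_adv + b) > 0.5)  # False: prediction flipped
    ```

    The same principle scales to deep networks, where the gradient is obtained by backpropagation; the defences covered later in the course (robustness testing, adversarial training) target exactly this kind of perturbation.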
  • Defences (AI Controls) and AI Security Testing
    • As AI systems become more powerful and integrated into critical operations, defending them against emerging threats is no longer optional; it’s mission-critical. In this topic, you’ll explore the technical controls and testing strategies used to secure AI models and protect them from compromise. You’ll learn how to apply AI-specific defences, from secure algorithm design to privacy-preserving techniques like differential privacy. You’ll also examine how to test and validate the robustness of AI models using red, purple, and blue teaming approaches. With a focus on balancing security, utility, and performance, this topic empowers you to make informed trade-offs in high-stakes environments. Whether you’re building or auditing AI systems, you’ll gain the practical skills needed to implement trusted controls and rigorously test for resilience against real-world threats.
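    Differential privacy, one of the privacy-preserving techniques named above, can be sketched in a few lines. The example below (an illustrative sketch, not course material) implements the classic Laplace mechanism: noise scaled to the query’s sensitivity is added to the answer, so no single individual’s presence in the dataset can be confidently inferred from the output.

    ```python
    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon, rng):
        """Release a query answer with epsilon-differential privacy.

        Noise is drawn from Laplace(0, sensitivity / epsilon):
        a smaller epsilon means more noise and stronger privacy.
        """
        scale = sensitivity / epsilon
        return true_value + rng.laplace(loc=0.0, scale=scale)

    rng = np.random.default_rng(0)

    # Counting query over a toy dataset. Sensitivity is 1 because
    # adding or removing one person changes the count by at most 1.
    ages = np.array([23, 35, 41, 29, 52, 60, 19, 44])
    true_count = int(np.sum(ages >= 30))  # exact answer: 5

    noisy_count = laplace_mechanism(true_count, sensitivity=1.0,
                                    epsilon=0.5, rng=rng)
    print(true_count)   # 5
    print(noisy_count)  # privatised answer, typically within a few units of 5
    ```

    The trade-off the topic describes is visible directly in the `epsilon` parameter: lowering it widens the noise distribution (better privacy, worse utility), which is exactly the security-versus-performance balancing act the module addresses.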
  • Responsible AI, Regulation and Governance
    • As AI systems grow in influence and complexity, so too does the imperative to ensure they are designed, deployed, and governed responsibly. This topic introduces the foundational principles of Responsible AI, covering fairness, bias mitigation, transparency, and ethical accountability. You’ll explore how AI decisions can impact individuals and communities, and how to navigate trade-offs between user privacy, model performance, and transparency. Key challenges such as data sourcing, labelling, and the ethical implications of large-scale models will be unpacked, alongside practical strategies for enhancing trust in AI systems. We’ll also dive into global frameworks, policies, and governance models that support secure and ethical AI adoption, equipping you with the knowledge to ensure AI systems are not only functional, but fair, transparent, and aligned with regulatory expectations.
  • The Future of AI: A Look Ahead
    • AI is evolving rapidly, and with it, the scope and complexity of its security challenges. In this final topic, we turn our attention to the road ahead: examining how emerging applications and architectures will shape the next frontier of AI security. You’ll explore speculative but increasingly plausible uses of AI in sectors like healthcare, autonomous vehicles, and programming, unpacking the unique risks each use case presents. We’ll also introduce Artificial General Intelligence (AGI), examining its transformative potential alongside the profound security and ethical implications it may carry. From lightweight AI models for constrained devices to philosophical perspectives on security trade-offs, this topic encourages you to think critically and proactively. The goal: to equip you with the insight and foresight needed to anticipate future risks, influence responsible innovation, and contribute to the safe evolution of intelligent systems.

Taught by

Matt Bushby

Reviews

4.6 rating at Coursera based on 17 ratings
