Class Central is learner-supported. When you buy through links on our site, we may earn an affiliate commission.

Macquarie University

Cyber Security: Security of AI

Macquarie University via Coursera

Overview

  • Watch our course introduction video before you enroll: https://vimeo.com/1176025534

AI Security: Risks, Defences, and Safety. This course prepares cyber security professionals, developers, data scientists, and policy leaders to defend intelligent systems. As artificial intelligence integrates into critical infrastructure and applications, new cyber risks emerge, from adversarial attacks to data leakage. Developed by Macquarie University’s Cyber Skills Academy, this program aligns with emerging threats and international standards.

You will gain real-world skills to:

  • Understand AI systems, architecture, and security risks (adversarial inputs, model poisoning, data leakage).
  • Assess and mitigate AI-driven cyber-physical risks in Operational Technology (OT) and Industrial Control Systems (ICS).
  • Navigate threats in AI deployments, including deepfakes, misinformation, ethical misuse, and privacy violations.
  • Apply security testing strategies and technical controls, such as encryption, red/purple/blue team exercises, and robustness benchmarking.
  • Align AI systems with Responsible AI frameworks, covering fairness, transparency, regulatory compliance, and trust.
  • Anticipate evolving risks from Artificial General Intelligence (AGI) and deploy proactive defences.

To succeed, learners should have a foundational understanding of cyber security concepts. This course provides the technical fluency, ethical awareness, and strategic insight to secure AI across industries. Lead the defence. Secure AI now.

Syllabus

  • Introduction and Emergent Threats of AI
    • Artificial Intelligence (AI) introduces rapidly evolving cybersecurity threats. This module explores AI fundamentals, how it works, and its applications. You will learn the difference between engineering-driven AI systems and deep learning models, and their unique security considerations. We then focus on the emerging threat landscape: adversarial AI, model manipulation, deepfakes, AI-driven scams, and AI weaponization for misinformation. Build a foundation in traditional security frameworks and AI-specific risks, preparing you to secure AI applications. Understand the urgency of building trusted, defensible AI systems.
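To make the adversarial-AI threat mentioned above concrete, here is a minimal sketch of an adversarial input against a toy linear classifier. The model, weights, and attack budget are all illustrative assumptions, not material from the course; the idea (perturbing an input against the gradient to flip a prediction) follows the Fast Gradient Sign Method:

```python
import numpy as np

# Toy linear "model": score = w . x; predict class 1 if score > 0.
# (Weights and input are made-up values for illustration.)
w = np.array([0.5, -0.3, 0.8])
x = np.array([1.0, 1.0, 1.0])   # clean input, score = 1.0 -> class 1

def predict(x):
    return int(w @ x > 0)

# FGSM-style perturbation: for a linear score, the gradient with
# respect to the input is just w, so stepping against sign(w) is
# the most efficient way to lower the score within an L-inf budget.
eps = 0.7                        # attack budget (assumed value)
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))   # the small perturbation flips the class
```

The same principle scales to deep networks: the gradient is computed by backpropagation instead of being constant, but the attack is still an imperceptibly small, deliberately aimed change to the input.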
  • Industrial Control Systems / Operational Technology Attacks in the context of Traditional Security Attacks
    • AI integration into critical infrastructure and industrial systems creates new attack avenues. This module explores how Artificial Intelligence reshapes security in Industrial Control Systems (ICS) and Operational Technology (OT). You will examine AI applications in ICS/OT, which enhance efficiency but also introduce novel vulnerabilities and attack vectors in critical infrastructure. Through case studies, investigate how adversaries exploit AI in industrial environments. Learn to adapt traditional OpSec and DevSecOps practices for AI-enabled deployments. Identify sensitive components within AI pipelines and apply context-specific defences. Learn to defend AI-powered industrial systems.
  • AI Security and Risks to Real-life Applications
    • As AI systems are deployed, exposure to adversarial threats and misuse increases. This module explores how AI is attacked and exploited, a critical focus for cyber professionals. You will dive into AI-specific attack vectors: model poisoning, information leakage, model stealing, and backdoor exploits. These threats compromise AI performance and pose risks to data privacy, intellectual property, and user safety. Examine harmful AI outputs from biased data or manipulation. Learn how output alignment, ethical censorship, and AI-powered surveillance affect public trust and legal compliance. Analyze case studies to identify AI vulnerabilities and understand societal consequences of insecure deployments. Ensure AI shapes the world securely and responsibly.
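Model poisoning, one of the attack vectors this module covers, can be sketched with a deliberately tiny example: an attacker injects a handful of mislabelled training points to corrupt a nearest-centroid classifier. The data, classifier, and injected points are all hypothetical, chosen only to make the effect visible:

```python
import numpy as np

# Two well-separated 1-D clusters: class 0 near 0.0, class 1 near 10.0.
X = np.array([0.0, 1.0, 2.0, 8.0, 9.0, 10.0])
y = np.array([0,   0,   0,   1,   1,   1])

def nearest_centroid_predict(X_train, y_train, x):
    # Classify by distance to each class's mean (centroid).
    c0 = X_train[y_train == 0].mean()
    c1 = X_train[y_train == 1].mean()
    return 0 if abs(x - c0) < abs(x - c1) else 1

print(nearest_centroid_predict(X, y, 9.0))   # clean model: correctly class 1

# Poisoning: the attacker injects points labelled class 1 but placed
# far outside it, dragging centroid 1 away from the real class-1 data.
X_p = np.concatenate([X, np.array([-20.0, -20.0, -20.0])])
y_p = np.concatenate([y, np.array([1, 1, 1])])

print(nearest_centroid_predict(X_p, y_p, 9.0))   # poisoned model: misclassified
```

Three poisoned points out of nine are enough to make the model misclassify a point sitting in the middle of the genuine class-1 cluster, which is why training-data provenance and integrity checks are treated as security controls.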
  • Defences (AI Controls) and AI Security Testing
    • Defending AI systems against emerging threats is critical. This module explores technical controls and testing strategies to secure AI models. You will learn to apply AI-specific defences, from secure algorithm design to privacy-preserving techniques like differential privacy. Examine how to test and validate AI model robustness using red, purple, and blue teaming approaches. Focus on balancing security, utility, and performance to make informed trade-offs. Gain practical skills to implement trusted controls and rigorously test for resilience against real-world threats, whether building or auditing AI systems.
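Of the privacy-preserving techniques this module names, differential privacy is the most mechanical to illustrate. Below is a minimal sketch of the Laplace mechanism for a counting query (the query, count, and epsilon are assumed values for illustration, not course material):

```python
import numpy as np

def laplace_count(true_count, epsilon, rng):
    # Laplace mechanism: a counting query has sensitivity 1 (one
    # person changes the count by at most 1), so adding Laplace noise
    # with scale 1/epsilon satisfies epsilon-differential privacy.
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(0)
true_count = 1000                 # e.g. "how many users matched?" (assumed)
noisy = laplace_count(true_count, epsilon=0.5, rng=rng)
print(round(noisy, 1))            # close to 1000, but hides any individual
```

Smaller epsilon means more noise and stronger privacy; the security-versus-utility trade-off the module highlights is exactly the choice of epsilon.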
  • Responsible AI, Regulation and Governance
    • As AI systems grow, responsible design, deployment, and governance are imperative. This module introduces Responsible AI principles: fairness, bias mitigation, transparency, and ethical accountability. You will explore how AI decisions impact individuals and communities, navigating trade-offs between user privacy, model performance, and transparency. Unpack challenges like data sourcing, labelling, and ethical implications of large-scale models. Learn practical strategies for enhancing trust in AI systems. Dive into global frameworks, policies, and governance models supporting secure, ethical AI adoption. Ensure AI systems are functional, fair, transparent, and aligned with regulatory expectations.
  • Future of AI: Emerging Risks
    • AI is evolving rapidly, increasing security challenges. This module examines how emerging applications and architectures will shape the future of AI security. You will explore plausible AI uses in healthcare, autonomous vehicles, and programming, unpacking unique risks. We introduce Artificial General Intelligence (AGI), its transformative potential, and profound security and ethical implications. From lightweight AI models to philosophical security trade-offs, this module encourages critical, proactive thinking. Gain insight and foresight to anticipate future risks, influence responsible innovation, and contribute to the safe evolution of intelligent systems.

Taught by

Matt Bushby

Reviews

4.7 rating at Coursera based on 23 ratings
