

Secure Mobile AI Models Against Attacks

Coursera via Coursera

Overview

AI models are no longer locked in the cloud—they live in your pocket, powering mobile apps for fitness, finance, healthcare, and beyond. But with this power comes new risk: adversarial attacks, model theft, privacy leaks, and silent failures that undermine user trust.

Securing Mobile AI Models against Attacks (SMAI) is a hands-on course for mobile app developers, AI engineers, and cybersecurity professionals who want to safeguard AI models on Android and iOS. Through interactive coach dialogues, video lessons, and practical labs, you’ll learn how to embed security from day one, analyze threats like reverse engineering and adversarial inputs, and implement layered defenses using encryption, obfuscation, and OpenTelemetry monitoring. By the end, you will have the skills to design, secure, and continuously monitor mobile AI applications, ensuring resilience, compliance, and user confidence in real-world deployments.

Participants should have a basic understanding of AI, machine learning, and mobile development, along with knowledge of security concepts like encryption and data protection. Familiarity with AI model deployment and monitoring tools like OpenTelemetry is also helpful.
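One building block of the layered defenses mentioned above is verifying the integrity of an on-device model before loading it. As an illustrative sketch (not taken from the course materials), the snippet below uses Python's standard-library `hmac` to fingerprint a model blob so a tampered or swapped file fails closed instead of silently misbehaving; the key and model bytes are hypothetical stand-ins.

```python
import hashlib
import hmac

def model_fingerprint(model_bytes: bytes, key: bytes) -> str:
    """Compute a keyed SHA-256 fingerprint of the packaged model."""
    return hmac.new(key, model_bytes, hashlib.sha256).hexdigest()

def verify_model(model_bytes: bytes, key: bytes, expected: str) -> bool:
    """Compare fingerprints in constant time to resist timing attacks."""
    actual = model_fingerprint(model_bytes, key)
    return hmac.compare_digest(actual, expected)

# Hypothetical usage: the app ships or securely fetches the expected digest.
key = b"app-secret-key"            # stand-in for protected key material
model = b"\x00fake-model-weights"  # stand-in for a .tflite/.mlmodel blob
expected = model_fingerprint(model, key)

assert verify_model(model, key, expected)                   # intact model loads
assert not verify_model(model + b"!", key, expected)        # tampered model is rejected
```

In a real deployment, the key would live in platform keystores (Android Keystore, iOS Keychain) rather than in app code.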

Syllabus

  • Foundations of Mobile AI Models
    • This module introduces learners to the unique nature of AI models running on mobile devices and why security cannot be bolted on later. Through an AI-guided dialogue, short lessons, and a design-focused lab, learners see how early choices in packaging and deployment set the stage for resilience or vulnerability. The module emphasizes that security is not a barrier to innovation but the enabler of sustainable mobile AI applications.
  • Evaluating Threats to Mobile AI Models
    • In this module, learners dive deeply into the adversarial landscape, exploring how reverse engineering, data inference, and adversarial inputs compromise mobile AI systems. The AI coach uses a real-world scenario to show how curiosity can become an attack, while lessons and labs reveal the tangible risks of model theft and privacy leaks. The module reinforces the understanding that researching threats is not paranoia but a prerequisite for defending trust and intellectual property, the essential elements of secure mobile AI.
  • Defending and Monitoring Mobile AI Applications
    • This module shifts from analysis to action, equipping learners with strategies to harden models and continuously monitor them in production. Guided by an AI dialogue on stealthy breaches, learners see how OpenTelemetry and layered defenses provide visibility and resilience in the field. Overall, learners discover that securing mobile AI is not a one-time act but a continuous practice of observing, adapting, and improving.
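
To make the continuous-monitoring idea concrete, here is a minimal sketch (an assumption for illustration, not the course's implementation) of the kind of signal such monitoring watches for: a rolling check that flags anomalous inference confidence, which a production app might export as an OpenTelemetry metric. The class name, window size, and threshold are all hypothetical.

```python
import statistics
from collections import deque

class ConfidenceMonitor:
    """Flag inference confidences that deviate sharply from a rolling baseline."""

    def __init__(self, window: int = 100, z_threshold: float = 3.0):
        self.scores = deque(maxlen=window)  # rolling window of recent confidences
        self.z_threshold = z_threshold

    def record(self, confidence: float) -> bool:
        """Record one inference; return True if it looks anomalous."""
        anomalous = False
        if len(self.scores) >= 10:  # wait for a baseline before flagging
            mean = statistics.fmean(self.scores)
            stdev = statistics.pstdev(self.scores)
            if stdev > 0 and abs(confidence - mean) / stdev > self.z_threshold:
                anomalous = True
        self.scores.append(confidence)
        return anomalous

# Hypothetical usage: steady confidences, then a sudden drop such as an
# adversarial input might cause.
monitor = ConfidenceMonitor()
for i in range(50):
    assert not monitor.record(0.90 if i % 2 else 0.94)  # healthy traffic
assert monitor.record(0.10)  # sharp drop is flagged for investigation
```

In practice the flag would feed an alerting pipeline rather than an assertion, and OpenTelemetry would carry the metric off-device.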

Taught by

Mark Peters and Starweaver

