
edX

Optimizing Generative AI on Arm Processors: from Edge to Cloud

Arm Education via edX

Overview

AI models are becoming increasingly powerful—but also increasingly demanding. As generative AI moves from cloud data centers to mobile phones, autonomous systems, and embedded IoT devices, the need to optimize performance across diverse hardware environments has never been more critical. Arm-based processors power more than 300 billion devices globally, from smartphones to hyperscale cloud servers, making them a key foundation for efficient AI deployment across the compute landscape. To meet this growing demand, learners need the skills to translate machine learning models into real-time, hardware-aware implementations across Arm-based platforms.

Optimizing Generative AI on Arm Processors: from Edge to Cloud is designed for intermediate machine learning practitioners who want to bridge the gap between model design and deployment efficiency. Rather than revisiting ML fundamentals, this course dives straight into performance engineering for generative AI on Arm-based platforms, including edge and cloud environments.

You’ll explore real-world constraints, Arm architecture features, and software techniques used to accelerate AI inference—including SIMD (SVE, Neon), low-bit quantization, and the KleidiAI library. Each concept is taught using concise, interactive notebooks and narrated examples, enabling you to measure, tweak, and iterate on actual hardware like the Raspberry Pi 5 or AWS Graviton3 cloud instances.
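To give a flavor of what low-bit quantization means in practice, here is a minimal, course-independent sketch of symmetric int8 weight quantization. The function names and the toy weight values are illustrative assumptions, not taken from the course materials or from KleidiAI.

```python
def quantize_int8(weights):
    # Symmetric per-tensor quantization: scale floats onto the
    # signed 8-bit range [-127, 127] and round to integers.
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    # Recover approximate float weights from the int8 values.
    return [q * scale for q in quantized]

weights = [0.5, -1.2, 0.03, 0.9]           # toy float32 weights
quantized, scale = quantize_int8(weights)  # 8-bit values plus one scale factor
restored = dequantize(quantized, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Storing the int8 values plus a single scale factor uses roughly a quarter of the memory of float32 weights, at the cost of a bounded rounding error (at most half the scale per weight), which is the trade-off the course's optimization modules explore on real Arm hardware.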

This course consists of four modules and hands-on lab exercises:

Module 1: Challenges Facing Cloud and Edge GenAI Inference

Understanding the limitations and constraints of AI inference in different environments.

Module 2: Generative AI Models

Exploring model architectures, training methodologies, and deployment considerations.

Module 3: ML Frameworks and Optimized Libraries

A deep dive into AI software stacks, including PyTorch, llama.cpp, and Arm-specific optimizations.

Module 4: Optimization for CPU Inference

Techniques such as quantization, pruning, and leveraging SIMD instructions for faster AI performance.
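Pruning, another technique named in Module 4, can be sketched just as briefly. The following magnitude-pruning example is a hedged illustration of the general idea (zero out the smallest-magnitude weights), not code from the course or an Arm-specific implementation.

```python
def magnitude_prune(weights, sparsity):
    # Rank weights by absolute value and zero out the smallest
    # `sparsity` fraction, keeping the largest-magnitude ones.
    n_prune = int(len(weights) * sparsity)
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    keep = set(order[n_prune:])
    return [w if i in keep else 0.0 for i, w in enumerate(weights)]

w = [0.8, -0.05, 0.3, -0.9, 0.01, 0.4]
pruned = magnitude_prune(w, 0.5)  # zero the 3 smallest-magnitude weights
```

Sparse weight tensors like `pruned` can then be stored and multiplied more cheaply, which is why pruning pairs naturally with the SIMD and quantization techniques covered in the same module.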


Taught by

Oliver Grainge and Kieran Hejmadi
