
TinyML at the Edge - Deploying and Optimizing AI Workloads on Zephyr RTOS

Linux Foundation via YouTube

Overview

Explore how to deploy and optimize TinyML workloads on Zephyr RTOS in this 36-minute conference talk from the Linux Foundation. Learn to effectively run machine learning inference directly on microcontrollers using Zephyr's lightweight, modular platform. Discover various inference engines including TensorFlow Lite Micro, microTVM, emlearn, and LiteRT, and understand the decision criteria for selecting runtimes based on hardware constraints. Master Zephyr's Linkable Loadable Extensions (LLEXT) for hot-swapping models without reflashing devices. Gain insights into performance optimization techniques such as quantization and operator fusion, and learn to benchmark on physical devices versus Renode simulation. Examine real-world applications including health monitors and predictive maintenance systems, while exploring best practices for over-the-air model updates and the future of embedded AI development with Zephyr RTOS.
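The talk covers enabling an inference engine and Zephyr's Linkable Loadable Extensions (LLEXT) together. As a rough illustration of what that setup can look like, the sketch below is a hypothetical `prj.conf` fragment; the exact Kconfig symbols and values are assumptions based on Zephyr's tflite-micro module and LLEXT subsystem, and the talk itself may use a different configuration.

```
# prj.conf — hypothetical fragment, not taken from the talk.
# Assumes the Zephyr workspace includes the tflite-micro module.

CONFIG_CPP=y                        # TensorFlow Lite Micro is a C++ library
CONFIG_TENSORFLOW_LITE_MICRO=y      # pull in the TFLite Micro module (assumed symbol)

CONFIG_LLEXT=y                      # Linkable Loadable Extensions, for loading
                                    # model/inference code without reflashing
CONFIG_LLEXT_HEAP_SIZE=32           # heap for loaded extensions, in KiB (assumed)
```

With a configuration along these lines, an application can build inference code as an LLEXT extension and swap it in over the air, which is the hot-swapping workflow the talk describes.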

Syllabus

TinyML at the Edge: Deploying and Optimizing AI Workloads on Zephyr RTOS - Amandeep Singh, Welzin

Taught by

Linux Foundation

