Using TensorFlow Lite for Microcontrollers for High-Efficiency NN Inference on Ultra-Low Power Processors

tinyML via YouTube

Overview

Explore the use of TensorFlow Lite for Microcontrollers for high-efficiency neural network inference on ultra-low power processors in this 38-minute tinyML Talks webcast. Discover how specific hardware extensions on embedded processors can significantly accelerate neural network inference operations, allowing performance targets to be met while consuming less power. Learn how optimized neural network inference libraries integrate with popular machine learning front-ends to streamline development flows. Gain insights into the Synopsys MLI Machine Learning Inference library running on a DSP-enhanced DesignWare® ARC® EM processor through practical demonstrations, including a Person Detect Demo and a Deployable System Demo. Delve into topics such as optimizations, power considerations, edge developers, and programmability in the context of deeply embedded AIoT applications.
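To give a flavor of what the hardware extensions discussed in the talk actually accelerate: quantized inference with TensorFlow Lite for Microcontrollers spends most of its cycles in int8 multiply-accumulate loops, and DSP-enhanced cores (such as the ARC EM family) provide single-cycle MAC instructions for exactly this pattern. The sketch below is illustrative only — the function name, zero-point handling, and values are hypothetical and do not come from the Synopsys MLI or TFLite Micro APIs.

```c
#include <stdint.h>

/* Illustrative sketch (not MLI/TFLM API): the int8 dot-product kernel at the
   heart of a quantized fully-connected or convolution layer. Inputs are
   stored as int8 with a per-tensor zero point; the accumulator is 32-bit so
   the sum of int8 products cannot overflow. A DSP MAC extension collapses the
   multiply and add in this loop into a single instruction. */
int32_t dot_product_int8(const int8_t *a, const int8_t *b, int n,
                         int32_t a_zero_point, int32_t b_zero_point)
{
    int32_t acc = 0;  /* 32-bit accumulator for int8 x int8 products */
    for (int i = 0; i < n; i++) {
        acc += ((int32_t)a[i] - a_zero_point) *
               ((int32_t)b[i] - b_zero_point);
    }
    return acc;
}
```

In a real deployment this accumulator would then be requantized (scaled and saturated) back to int8 for the next layer; optimized libraries like Synopsys MLI ship hand-tuned versions of such kernels so that front-ends like TensorFlow Lite for Microcontrollers can dispatch to them.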

Syllabus

Introduction
Overview
Optimizations
Power
Edge Developers
Programmability
Person Detect Demo
Deployable System Demo
Summary

Taught by

tinyML
