
YouTube

Inside STM32N6 - How Stream-Based Acceleration Redefines Edge AI Efficiency

EDGE AI FOUNDATION via YouTube

Overview

Explore the revolutionary NeuralArt stream-based accelerator architecture inside STMicroelectronics' STM32N6 microcontroller in this 15-minute technical deep-dive. Discover how a decade of prototyping has led to breakthrough innovations in edge AI efficiency that prioritize smarter data movement over raw CPU power. Learn about the stream-based architecture that maintains high compute utilization while minimizing bandwidth requirements, and examine the compiler scheduling techniques that reduce data shuttling overhead. Compare digital versus analog in-memory computing (IMC) approaches, analyzing their trade-offs in determinism, density, and power efficiency. Review impressive prototype results achieving 40 TOPS/W and 10 TOPS/mm² performance at 1 GHz operation. Understand the heterogeneous 2D mesh design that combines IMC and stream processing units for optimal performance. Gain insights into the NeoSoC research project and the upcoming 80-nanometer tapeout that will further advance this technology. Master how STMicroelectronics is fundamentally rethinking tensor data flow, layer scheduling algorithms, and compiler intelligence to address the critical bottleneck of memory bandwidth in edge AI applications.
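The quoted efficiency figures can be sanity-checked with simple unit conversions: 40 TOPS/W is equivalent to 25 fJ per operation, and 10 TOPS/mm² at a 1 GHz clock implies roughly 10,000 operations completed per cycle per mm². A minimal sketch of that arithmetic, using the numbers from the description above (the helper names are illustrative, not from STMicroelectronics):

```python
# Back-of-the-envelope check of the prototype figures quoted above:
# 40 TOPS/W and 10 TOPS/mm^2 at 1 GHz. Illustrative arithmetic only.

def energy_per_op_fj(tops_per_watt: float) -> float:
    """Energy per operation in femtojoules, given efficiency in TOPS/W."""
    ops_per_joule = tops_per_watt * 1e12  # 1 TOPS/W = 1e12 ops per joule
    return 1.0 / ops_per_joule * 1e15     # joules -> femtojoules

def ops_per_cycle_per_mm2(tops_per_mm2: float, clock_hz: float) -> float:
    """Operations completed each clock cycle per mm^2 of silicon."""
    return tops_per_mm2 * 1e12 / clock_hz

print(energy_per_op_fj(40.0))             # ~25 fJ per operation
print(ops_per_cycle_per_mm2(10.0, 1e9))   # ~10,000 ops per cycle per mm^2
```

Numbers in this range illustrate why the talk emphasizes data movement over raw compute: at femtojoule-scale arithmetic, fetching operands from off-chip memory can cost orders of magnitude more energy than the operation itself.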

Syllabus

Inside STM32N6: How Stream-Based Acceleration Redefines Edge AI Efficiency

Taught by

EDGE AI FOUNDATION

