muRISCV-NN: Deep-Learning Inference Kernels for Embedded Platforms Using RISC-V Vector and Packed Extensions
EDGE AI FOUNDATION via YouTube
Overview
Watch a technical conference talk exploring muRISCV-NN, an open-source compute library designed for embedded and microcontroller-class systems built on the RISC-V architecture. Learn how the library leverages RISC-V's Vector (V) and Packed (P) extensions to accelerate deep-learning workloads on resource-constrained edge devices. Discover its key features, including bit-accurate results matching CMSIS-NN, integration with TensorFlow Lite for Microcontrollers and microTVM, and support for a range of RISC-V processors and simulators. Explore performance benchmarks demonstrating up to a 9x speedup and a 5x reduction in energy-delay product (EDP) over plain C implementations across the MLPerf Tiny benchmarks. Understand how this vendor-agnostic solution provides a standardized HW/SW interface between industry-standard deep-learning libraries and ultra-low-power compute platforms, eliminating the need for processor designers to implement their own custom compute libraries.
Syllabus
tinyML EMEA - Philipp van Kempen: muRISCV-NN: Deep-Learning Inference Kernels for Embedded...
Taught by
EDGE AI FOUNDATION