muRISCV-NN: Deep-Learning Inference Kernels for Embedded Platforms Using RISC-V Vector and Packed Extensions
EDGE AI FOUNDATION via YouTube
Overview
Watch a technical conference talk exploring muRISCV-NN, an open-source compute library designed for embedded and microcontroller-class systems built on the RISC-V architecture. Learn how this library leverages RISC-V's Vector (V) and Packed (P) extensions to accelerate deep-learning workloads on resource-constrained edge devices. Discover the library's key features, including bit-accuracy with CMSIS-NN, compatibility with TensorFlow Lite for Microcontrollers and microTVM, and support for various RISC-V processors and simulators. Explore performance benchmarks demonstrating up to 9x speedup and 5x EDP (energy-delay product) reduction compared to plain C implementations across MLPerf Tiny benchmarks. Understand how this vendor-agnostic solution provides a standardized HW/SW interface between industry-standard deep-learning libraries and ultra-low-power compute platforms, eliminating the need for processor designers to write custom compute-library implementations.
Syllabus
tinyML EMEA - Philipp van Kempen: muRISCV-NN: Deep-Learning Inference Kernels for Embedded...
Taught by
EDGE AI FOUNDATION