Memory-Oriented Design-Space Exploration of Edge-AI Hardware for XR Applications
EDGE AI FOUNDATION via YouTube
Overview
Watch a technical research symposium presentation exploring memory-oriented design strategies for Edge-AI hardware in XR applications. The talk investigates hand-detection and eye-segmentation workloads, examining how deep neural networks perform under various quantization levels and hardware constraints. It presents comparative analyses of CPU and systolic inference-accelerator implementations across different technology nodes, with a special focus on emerging non-volatile memory technologies such as STT, SOT, and VGSOT MRAM. The results show that integrating non-volatile memory into XR-AI inference pipelines can deliver energy savings of 24% or more at certain inference rates while reducing area requirements by 30% or more compared to traditional SRAM. Key topics include methodology, workload analysis, quantization effects, benchmarking, CMOS workflow considerations, and performance metrics such as inferences per second (IPS) and crossover points.
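The quantization analysis mentioned above can be illustrated with a minimal sketch. The presentation does not specify its quantization scheme, so the symmetric per-tensor int8 quantization below is only a generic, hypothetical example of the kind of precision reduction such accelerators rely on:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric, per-tensor uniform quantization of float weights to int8.

    Illustrative only -- not the scheme from the presentation.
    """
    scale = float(np.max(np.abs(weights))) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover approximate float weights from the int8 representation.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# fp32 -> int8 shrinks the weight memory footprint 4x; the round-trip
# error per weight is bounded by half the quantization step (scale / 2).
```

Lower-precision weights shrink on-chip memory traffic, which is why quantization level interacts directly with the SRAM-versus-MRAM trade-offs the talk benchmarks.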
Syllabus
Introduction
Methodology
Workloads
Quantization
Benchmarking
Baseline
CMOS
Workflow
P0 P1
IPS
Crossover Point
Summary
Conclusion
Taught by
EDGE AI FOUNDATION