Memory-Oriented Design-Space Exploration of Edge-AI Hardware for XR Applications
EDGE AI FOUNDATION via YouTube
Overview
Watch a technical research symposium presentation exploring memory-oriented design strategies for Edge-AI hardware in XR applications. Delve into an investigation of hand detection and eye segmentation workloads, examining how deep neural networks perform under various quantization levels and hardware constraints. Learn about comparative analyses between CPU and systolic inference accelerator implementations across different technology nodes, with special focus on emerging non-volatile memory technologies such as STT, SOT, and VGSOT MRAM. Discover how integrating non-volatile memory into XR-AI inference pipelines can achieve energy savings of 24% or more at specific inference rates, while also reducing area by 30% or more compared to traditional SRAM solutions. Follow along through key topics including methodology, workload analysis, quantization effects, benchmarking, CMOS workflow considerations, and performance metrics such as inferences per second (IPS) and crossover points.
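The crossover-point idea mentioned above can be illustrated with a toy energy model: non-volatile memory trades near-zero standby (leakage) power for higher per-access energy, so below some inference rate the NVM option wins overall. The sketch below is purely illustrative — all energy and leakage numbers are made-up placeholders, not figures from the talk.

```python
# Illustrative sketch only: a hypothetical first-order energy model for
# comparing an SRAM weight buffer against a non-volatile (e.g. MRAM) one.
# Parameter values are placeholders, not measurements from the presentation.

def energy_per_second(ips, e_access_nj, p_leak_mw, accesses_per_inf):
    """Total memory energy per second (mJ) at a given inference rate (IPS)."""
    dynamic_mj = ips * accesses_per_inf * e_access_nj * 1e-6  # nJ -> mJ
    static_mj = p_leak_mw  # mW sustained over one second = mJ
    return dynamic_mj + static_mj

def crossover_ips(sram, mram, lo=0, hi=10_000):
    """Lowest inference rate at which SRAM becomes cheaper than NVM.
    Below this rate the NVM option wins, thanks to its negligible leakage."""
    for ips in range(lo, hi):
        if energy_per_second(ips, *sram) < energy_per_second(ips, *mram):
            return ips
    return None

# (e_access_nj, p_leak_mw, accesses_per_inf) -- placeholder values
sram = (0.5, 5.0, 1_000_000)   # cheap accesses, notable leakage
mram = (1.0, 0.1, 1_000_000)   # costlier accesses, near-zero leakage
print(crossover_ips(sram, mram))  # prints 10
```

Under these made-up parameters, the MRAM configuration consumes less energy for inference rates below 10 IPS — the regime typical of intermittent, always-on XR workloads — which is the qualitative argument behind the crossover-point analysis.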
Syllabus
Introduction
Methodology
Workloads
Quantization
Benchmarking
Baseline
CMOS
Workflow
P0 P1
IPS
Crossover Point
Summary
Conclusion
Taught by
EDGE AI FOUNDATION