
Memory-Oriented Design-Space Exploration of Edge-AI Hardware for XR Applications

EDGE AI FOUNDATION via YouTube

Overview

Watch a technical research symposium presentation exploring memory-oriented design strategies for Edge-AI hardware in XR applications. Delve into an investigation of hand detection and eye segmentation workloads, examining how deep neural networks perform under various quantization levels and hardware constraints. Learn about comparative analyses between CPU and systolic inference accelerator implementations across different technology nodes, with special focus on emerging non-volatile memory technologies such as STT/SOT/VGSOT MRAM. Discover how integrating non-volatile memory into XR-AI inference pipelines can achieve energy savings of 24% or more at specific inference rates, while also reducing area requirements by 30% or greater compared to traditional SRAM solutions. Follow along through key topics including methodology, workload analysis, quantization effects, benchmarking processes, CMOS workflow considerations, and performance metrics such as inferences per second (IPS) and crossover points.
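As a rough illustration of why quantization matters for memory-oriented design, the sketch below compares the on-chip weight footprint of a small network at FP32 versus INT8 precision. The layer sizes are hypothetical placeholders, not figures from the talk:

```python
# Illustrative only: layer parameter counts for a small hand-detection-style
# CNN are made up, not taken from the presentation.
layer_params = [3 * 3 * 3 * 16, 3 * 3 * 16 * 32, 3 * 3 * 32 * 64, 64 * 2]

def weight_footprint_bytes(param_counts, bits_per_weight):
    """Memory needed to hold all weights at a given precision."""
    return sum(param_counts) * bits_per_weight / 8

fp32 = weight_footprint_bytes(layer_params, 32)
int8 = weight_footprint_bytes(layer_params, 8)
print(f"FP32 weights: {fp32 / 1024:.1f} KiB")
print(f"INT8 weights: {int8 / 1024:.1f} KiB ({fp32 / int8:.0f}x smaller)")
```

Quantizing from 32-bit floats to 8-bit integers shrinks the weight footprint 4x, which is why quantization level interacts so strongly with the SRAM-versus-MRAM trade-offs explored in the talk.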

Syllabus

Introduction
Methodology
Workloads
Quantization
Benchmarking
Baseline
CMOS
Workflow
P0 P1
IPS
Crossover Point
Summary
Conclusion

Taught by

EDGE AI FOUNDATION

