
Efficient and Robust Hardware for Neural Networks

NHR@FAU via YouTube

Overview

Explore cutting-edge approaches to hardware optimization for deep neural networks in this technical seminar talk delivered by Prof. Dr. Grace Li Zhang from Technische Universität Darmstadt. Dive into innovative solutions addressing the computational and memory challenges of modern DNNs, starting with class-aware pruning techniques that reduce the number of multiply-accumulate (MAC) operations. Learn about class-exclusion early-exit strategies, digital accelerator implementations using systolic arrays, and methods to optimize energy consumption through quantized weight selection and efficient logic design. Examine analog in-memory computing platforms based on RRAM crossbars, and gain insights into current research developments and future directions in neural network hardware implementation. The 44-minute presentation includes comprehensive slides and is part of the NHR PerfLab Seminar series, offering valuable knowledge for those interested in the intersection of hardware architecture and neural network optimization.
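To make the pruning idea concrete, here is a generic, hypothetical sketch of magnitude-based weight pruning and the MAC-operation savings it enables. This is an illustration of the general technique only, not Prof. Zhang's class-aware pruning method, which additionally takes class information into account.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest magnitude.

    Zeroed weights correspond to multiply-accumulate (MAC) operations that
    sparsity-aware hardware can skip at inference time.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))          # weight matrix of a dense layer
w_pruned = prune_by_magnitude(w, 0.5)  # prune roughly half the weights

macs_dense = w.size                    # one MAC per weight in a dense layer
macs_pruned = int(np.count_nonzero(w_pruned))
print(f"MACs per input: {macs_dense} dense -> {macs_pruned} after pruning")
```

The savings are only realized on hardware (or kernels) that can actually skip zero-weight MACs, which is one motivation for the accelerator designs discussed in the talk.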

Syllabus

Efficient and Robust Hardware for Neural Networks

Taught by

NHR@FAU

