Overview
Explore cutting-edge approaches to hardware optimization for deep neural networks in this technical seminar talk delivered by Prof. Dr. Grace Li Zhang from Technische Universität Darmstadt. Dive into innovative solutions addressing the computational and memory challenges of modern DNNs, starting with class-aware pruning techniques to reduce multiply-and-accumulate operations. Learn about class-exclusion early-exit strategies, digital accelerator implementations using systolic arrays, and methods to optimize energy consumption through quantized weight selection and efficient logic design. Examine analog In-Memory-Computing platforms based on RRAM crossbars, and gain insights into current research developments and future directions in neural network hardware implementation. The 44-minute presentation includes comprehensive slides and is part of the NHR PerfLab Seminar series, offering valuable knowledge for those interested in the intersection of hardware architecture and neural network optimization.
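To make the pruning idea concrete: the sketch below shows generic magnitude-based weight pruning, a common baseline technique. It is an illustration only, not the class-aware method presented in the talk; the function name, sparsity level, and layer size are all assumptions for the example. Weights zeroed by pruning let hardware skip the corresponding multiply-and-accumulate (MAC) operations.

```python
import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero the smallest-magnitude weights so roughly `sparsity` of them are zero.

    Generic baseline pruning, shown only to illustrate how sparsity
    translates into skipped MAC operations.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))          # hypothetical dense layer's weights
W_pruned = prune_by_magnitude(W, 0.5)  # keep ~50% of the weights

macs_dense = W.size                          # one MAC per weight per input
macs_pruned = int(np.count_nonzero(W_pruned))
print(f"MACs per input vector: {macs_dense} dense vs. {macs_pruned} pruned")
```

A hardware accelerator (e.g. a systolic array with zero-skipping support) can exploit this sparsity to reduce both computation and energy, which is the motivation the talk develops further with class-aware criteria.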
Syllabus
Efficient and Robust Hardware for Neural Networks
Taught by
NHR@FAU