Overview
Explore the mathematical foundations of sparse ReLU (Rectified Linear Unit) neural networks in this 14-minute conference talk from the IFDS Workshop, presented by Julia Blair Nakhleh of the University of Wisconsin-Madison. Learn the theoretical principles behind constructing the sparsest possible ReLU networks, the mathematical and computational techniques used to minimize network complexity while maintaining performance, and how optimization theory and neural network design intersect when ReLU activations are leveraged to achieve maximum sparsity.
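The talk's own constructions are not reproduced here. As a rough, self-contained illustration of the general idea of sparsity in ReLU networks, the NumPy sketch below trains a small one-hidden-layer ReLU network with an L1 weight penalty, a standard technique for driving most weights toward zero; this is an assumed illustration, not the speaker's method, and every name in it (the |x| target, `n_hidden`, `lam`, the 1e-2 threshold) is a hypothetical choice made for the example.

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: max(0, x), applied elementwise."""
    return np.maximum(0.0, x)

rng = np.random.default_rng(0)

# Toy 1-D regression target: |x| is exactly representable
# by a ReLU network with only two hidden units.
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.abs(X)

# Deliberately over-provisioned hidden layer; the L1 penalty
# should leave most of its units with negligible weight.
n_hidden = 32
W1 = rng.normal(scale=0.5, size=(1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))

lr, lam = 0.1, 1e-3  # learning rate and L1 penalty strength
for step in range(5000):
    H = relu(X @ W1 + b1)        # hidden activations
    pred = H @ W2
    err = pred - y
    # Gradients of 0.5 * mean squared error
    gW2 = H.T @ err / len(X)
    gZ = (err @ W2.T) * (H > 0)  # backprop through ReLU
    gW1 = X.T @ gZ / len(X)
    gb1 = gZ.mean(axis=0)
    # Gradient step plus L1 subgradient pushing weights to zero
    W2 -= lr * (gW2 + lam * np.sign(W2))
    W1 -= lr * (gW1 + lam * np.sign(W1))
    b1 -= lr * gb1

active = np.sum(np.abs(W2.ravel()) > 1e-2)
print(f"hidden units with non-negligible outgoing weight: {active}/{n_hidden}")
```

Running the sketch prints how many of the 32 hidden units still carry a meaningful outgoing weight after training, which is one crude way to see the penalty trading network complexity for sparsity; the talk addresses the sharper theoretical question of the sparsest network achievable.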
Syllabus
IFDS Workshop Short Talks–Sparsest ReLU Neural Networks
Taught by
Paul G. Allen School