VENOM: A Vectorized N:M Format for Unleashing the Power of Sparse Tensor Cores
Scalable Parallel Computing Lab, SPCL @ ETH Zurich via YouTube
Overview
Discover a groundbreaking approach to sparse tensor computation in this conference talk from the International Conference for High Performance Computing, Networking, Storage, and Analysis (SC23). Explore the V:N:M format, which enables execution of arbitrary N:M sparsity ratios on NVIDIA's Sparse Tensor Cores (SPTCs), overcoming the limitation of the hardware-supported 2:4 format. Delve into Spatha, a high-performance sparse library designed to exploit this new format efficiently, achieving up to 37x speedup over cuBLAS. Examine a second-order pruning technique that reaches high sparsity ratios in modern transformers with minimal accuracy loss. Gain insights into GPU Tensor Cores, sparse formats, sparse linear algebra, and evaluation methods as you uncover the potential of this vectorized approach to sparse tensor cores in deep learning applications.
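To make the N:M sparsity pattern discussed in the talk concrete, here is a minimal NumPy sketch of magnitude-based N:M pruning: out of every group of M consecutive weights, only the N largest-magnitude values are kept. This is the standard baseline pattern (2:4 is what current SPTCs support natively); the `prune_n_m` helper is a hypothetical illustration, not VENOM's actual V:N:M encoding or its second-order pruning method.

```python
import numpy as np

def prune_n_m(weights, n=2, m=4):
    """Zero out all but the n largest-magnitude values in each
    group of m consecutive weights (N:M structured sparsity)."""
    w = weights.reshape(-1, m).copy()
    # Indices of the (m - n) smallest-magnitude entries in each group
    drop = np.argsort(np.abs(w), axis=1)[:, : m - n]
    np.put_along_axis(w, drop, 0.0, axis=1)
    return w.reshape(weights.shape)

w = np.arange(1.0, 9.0)        # [1, 2, 3, 4, 5, 6, 7, 8]
print(prune_n_m(w, n=2, m=4))  # -> [0. 0. 3. 4. 0. 0. 7. 8.]
```

With n=2, m=4 this yields the 2:4 pattern accepted by Sparse Tensor Cores; V:N:M generalizes the idea so that other ratios (higher sparsity) can also be mapped onto the same hardware.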
Syllabus
Intro
GPU Tensor Cores
Sparse Formats
Sparse Linear Algebra
Second Order Pruning
Evaluation
Taught by
Scalable Parallel Computing Lab, SPCL @ ETH Zurich