Finch.jl - Flexible and Efficient Sparse Tensor Programming
The Julia Programming Language via YouTube
Overview
Discover Finch.jl, a state-of-the-art sparse tensor framework for flexible and efficient sparse tensor programming in Julia, in this 29-minute conference talk from JuliaCon Global 2025. Finch uses compiler technology to automatically generate customized, fused sparse kernels for each specific use case, so you can write readable, high-level sparse array programs without sacrificing performance.

The talk covers Finch's support for the major sparse formats, including CSR, CSC, DCSR, DCSC, CSF, COO, Hash, and Bytemap, and shows how its parameterized format language lets you define your own custom sparse formats. It then walks through high-level array operations such as addition, multiplication, maximum, sum, map, broadcast, and reduce, and introduces the @einsum syntax for expressing custom kernels.

A central theme is fusing multiple operations into a single kernel through Finch's simple interface, a critical optimization for sparse computing, where zeros allow whole branches of the computation to be pruned. The talk also compares different optimizers, including the state-of-the-art Galley optimizer, which adapts to the sparsity patterns of the inputs at runtime.

Finally, the talk examines Finch's underlying architecture, including Looplets: a language of basic loop building blocks that hierarchically decomposes structured sparsity to generate efficient code. It is aimed at developers working with sparse tensors in graphs, meshes, pruned neural networks, or any other domain where multidimensional programming with sparse data structures is essential.
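To give a flavor of the interface the talk describes, here is a minimal sketch of building sparse tensors in a CSC-like format and using @einsum; the sizes and density are illustrative, not taken from the talk.

```julia
using Finch

# CSC-like layout: a dense outer (column) level over a sparse inner
# (row) level storing Float64 elements, with zero as the fill value.
fmt = Dense(SparseList(Element(0.0)))

# fsprand builds a random sparse tensor; Tensor(fmt, arr) copies it
# into the requested format.
A = Tensor(fmt, fsprand(100, 100, 0.05))
B = Tensor(fmt, fsprand(100, 100, 0.05))

# High-level operations work directly on sparse tensors:
s = sum(A)        # full reduction
C = A .+ B        # elementwise broadcast

# @einsum expresses custom kernels; += marks the reduction over k.
@einsum D[i, j] += A[i, k] * B[k, j]   # sparse matrix multiply
```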
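And a minimal sketch of operator fusion with Finch's lazy/compute interface; the Galley scheduler option mentioned in the final comment is an assumption based on the talk's description of the Galley optimizer, not a confirmed call signature.

```julia
using Finch

A = fsprand(1000, 1000, 0.01)
B = fsprand(1000, 1000, 0.01)

# lazy() defers execution and records the expression; compute() then
# fuses the whole pipeline into a single generated kernel, so zeros
# in A prune work in the downstream operations as well.
plan = sum(lazy(A) .* lazy(B))
result = compute(plan)

# Assumed from the talk: the Galley optimizer, which adapts the plan
# to input sparsity at runtime, is reportedly selected via a scheduler
# option, e.g. compute(plan; ctx=galley_scheduler()).
```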
Syllabus
Finch.jl: Flexible and Efficient Sparse Tensor Programming! | Marie Ahrens | JuliaCon Global 2025
Taught by
The Julia Programming Language