MetaKernel - Enabling Efficient Encrypted Neural Network Inference Through Unified MVM and Convolution
ACM SIGPLAN via YouTube
Overview
Explore a 14-minute conference presentation from OOPSLA 2025 that introduces MKR (MetaKernel), a composition-based compiler approach for optimizing encrypted neural network inference under the CKKS fully homomorphic encryption scheme. Learn how researchers from Ant Group and UNSW Australia developed a unified framework that addresses critical inefficiencies in Matrix-Vector Multiplication (MVM) and Convolution (Conv) operations by decomposing both kernels into composable MetaKernels. Discover how this approach enhances SIMD parallelism within ciphertexts through horizontal batching and computational parallelism across ciphertexts through vertical batching, while tackling previously unaddressed challenges such as reducing rotation overhead with rotation-aware cost models for data packing. Understand the techniques that ensure high slot utilization, uniform handling of inputs of arbitrary sizes, and compatibility with output tensor layouts, yielding speedups of 10.08×–185.60× for individual kernels and 1.75×–11.84× for end-to-end inference over state-of-the-art FHE compilers. Gain insights into how MKR enables homomorphic execution of large deep neural network models where previous methods fail, significantly advancing the practical applicability of FHE compilers in real-world scenarios.
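
To make the rotation-overhead concern concrete, the sketch below is a plaintext NumPy emulation (not a real FHE library, and not code from the talk or paper) of the classic diagonal-encoding approach to packed matrix-vector multiplication that CKKS-style compilers commonly build on: each additional matrix diagonal requires another slot rotation of the packed vector, which is exactly the kind of cost a rotation-aware packing strategy tries to minimize. The function names here are illustrative assumptions.

```python
import numpy as np

def rotate(slots, k):
    # Emulate a ciphertext slot rotation by k positions to the left.
    return np.roll(slots, -k)

def diagonal_mvm(matrix, vector):
    """Plaintext emulation of diagonal-encoded MVM: the d-th generalized
    diagonal of the matrix multiplies the vector rotated by d slots, so an
    n x n product costs n - 1 rotations of the packed vector."""
    n = matrix.shape[0]
    result = np.zeros(n)
    rotations = 0
    for d in range(n):
        diag = np.array([matrix[i, (i + d) % n] for i in range(n)])
        rotated = vector if d == 0 else rotate(vector, d)
        if d > 0:
            rotations += 1
        result += diag * rotated
    return result, rotations

A = np.arange(16, dtype=float).reshape(4, 4)
x = np.array([1.0, 2.0, 3.0, 4.0])
y, rots = diagonal_mvm(A, x)
assert np.allclose(y, A @ x)
print(y, f"({rots} slot rotations for a 4x4 MVM)")
```

In an encrypted setting each of those rotations is an expensive key-switching operation, which is why a compiler that chooses data packings with a rotation-aware cost model can deliver the large kernel-level speedups described above.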
Syllabus
[OOPSLA'25] MetaKernel: Enabling Efficient Encrypted Neural Network Inference Through Unified MVM and Convolution
Taught by
ACM SIGPLAN