
Quantization with Guaranteed Floating-Point Neural Network Classifications

ACM SIGPLAN via YouTube

Overview

Learn about a novel approach to neural network quantization that keeps classifications consistent with the original floating-point network, presented in this 15-minute conference talk from OOPSLA 2025. Researchers Anan Kabaha and Dana Drachsler-Cohen of the Technion address the challenge of reducing the computational cost of neural network inference without changing its classification outcomes.

Explore the CoMPAQt algorithm, which uses mixed-integer linear programming (MILP) with custom linear relaxations to compute the maximal classification confidence and detect inconsistencies at inference time. Understand its two correction mechanisms: one that guarantees 100% consistency with the floating-point network by escalating to increasing bit precisions, and another that uses ensemble methods to mitigate classification inconsistencies.

Examine experimental results on MNIST, ACAS-Xu, and tabular datasets showing 3.8x to 4.1x reductions in computational cost while maintaining near-perfect classification consistency. Gain insight into the first approach to provide formal guarantees of classification consistency for quantized neural networks, combining verification techniques with practical quantization methods for both fully connected and convolutional architectures.
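As a rough illustration of the escalation idea described in the talk (not the paper's implementation), the sketch below runs a low-precision quantized copy of a network first and escalates to higher bit widths, and ultimately to the floating-point network itself, whenever the prediction margin is too small to certify consistency. The uniform quantizer, the function names, and the `margin_bounds` values are all hypothetical; in CoMPAQt the per-precision bounds would come from the offline MILP analysis.

```python
import numpy as np

def quantize_weights(w, bits):
    """Uniform symmetric quantization of a weight matrix to `bits` bits
    (an illustrative stand-in for the paper's quantization scheme)."""
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def forward(x, layers):
    """Feed-forward ReLU network; `layers` is a list of (W, b) pairs."""
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

def classify_with_escalation(x, float_layers, bit_widths, margin_bounds):
    """Try quantized networks at increasing bit precisions. If the top-1
    margin at some precision exceeds its bound (assumed here; MILP-derived
    in the paper), the quantized label matches the float network's, so we
    stop. Otherwise fall back to the floating-point network itself, which
    makes the returned label consistent with it by construction."""
    for bits, bound in zip(bit_widths, margin_bounds):
        q_layers = [(quantize_weights(w, bits), b) for w, b in float_layers]
        logits = forward(x, q_layers)
        top1, top2 = np.sort(logits)[::-1][:2]
        if top1 - top2 > bound:  # certified consistent at this precision
            return int(np.argmax(logits)), bits
    # No precision certified on this input: run the float network.
    return int(np.argmax(forward(x, float_layers))), None

# Toy usage: a random 2-layer network with hypothetical margin bounds.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 8)), rng.standard_normal(8)),
          (rng.standard_normal((8, 3)), rng.standard_normal(3))]
label, used_bits = classify_with_escalation(
    rng.standard_normal(4), layers,
    bit_widths=[4, 8], margin_bounds=[0.5, 0.1])
print(label, used_bits)
```

Note the design point this sketch tries to capture: consistency is guaranteed per input, since any input that cannot be certified at a reduced precision is simply evaluated at full floating-point precision, while the reported 3.8x to 4.1x savings come from most inputs passing the margin check at low bit widths.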

Syllabus

[OOPSLA'25] Quantization with Guaranteed Floating-Point Neural Network Classifications

Taught by

ACM SIGPLAN

