Overview
Learn about WraAct, a novel approach to efficiently constructing tight over-approximations of activation-function convex hulls for neural network verification, in this 14-minute conference presentation from OOPSLA 2025. Discover how researchers from the University of Queensland address a critical challenge in the formal verification of deep learning systems used in safety-critical domains: handling non-linear activation functions.

Explore the core methodology, which introduces linear constraints that smooth function fluctuations using double-linear-piece (DLP) functions, simplifying the local geometry and reducing the convex-hull computation to an efficiently manageable over-approximation problem.

Examine comprehensive evaluation results showing WraAct's superiority over the state-of-the-art SBLM+PDDM method: up to 400X faster and 150X more precise, with a roughly 50% reduction in constraint count on functions such as Sigmoid, Tanh, and MaxPool, while handling up to 8 input dimensions within 10 seconds.

Understand how the approach strengthens neural network verification, improving single-neuron verification from under 10 to over 40 verified samples and outperforming the PRIMA multi-neuron verifier by up to 20 additional verified samples. Finally, learn about practical applications to large networks such as ResNets with 22,000 neurons, where verification completes within one minute per sample, demonstrating the method's scalability and real-world applicability to the formal verification of neural network robustness.
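To make the bounding idea concrete, here is a minimal, illustrative sketch of how linear constraints can over-approximate a non-linear activation (Sigmoid) on a single input interval. The function names and the curvature-based bounding strategy are assumptions chosen for illustration; this is not WraAct's actual DLP-based multi-neuron convex-hull construction, only the general flavor of replacing a curved function with enclosing linear pieces:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dsigmoid(x):
    # Derivative of sigmoid: s(x) * (1 - s(x)).
    s = sigmoid(x)
    return s * (1.0 - s)

def linear_bounds(l, u):
    """Return (a_lo, b_lo, a_up, b_up) such that
    a_lo*x + b_lo <= sigmoid(x) <= a_up*x + b_up for all x in [l, u].

    Illustrative single-neuron sketch only: real verifiers (and WraAct's
    DLP construction) use tighter, multi-dimensional constraint sets.
    """
    k = (sigmoid(u) - sigmoid(l)) / (u - l)  # chord slope over [l, u]
    m = 0.5 * (l + u)                        # midpoint for tangent lines
    if l >= 0:
        # Sigmoid is concave on [0, inf): the chord lies below the
        # function, a tangent line lies above it.
        a_lo, b_lo = k, sigmoid(l) - k * l
        a_up, b_up = dsigmoid(m), sigmoid(m) - dsigmoid(m) * m
    elif u <= 0:
        # Sigmoid is convex on (-inf, 0]: tangent below, chord above.
        a_lo, b_lo = dsigmoid(m), sigmoid(m) - dsigmoid(m) * m
        a_up, b_up = k, sigmoid(u) - k * u
    else:
        # Mixed curvature across 0: fall back to constant bounds using
        # monotonicity (sigmoid(l) is the min, sigmoid(u) the max).
        a_lo, b_lo = 0.0, sigmoid(l)
        a_up, b_up = 0.0, sigmoid(u)
    return a_lo, b_lo, a_up, b_up
```

A verifier would propagate such linear bounds through the network instead of the exact non-linear function; tighter enclosures (like the DLP-based hulls described in the talk) directly translate into more samples verified.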
Syllabus
[OOPSLA'25] Convex Hull Approximation for Activation Functions
Taught by
ACM SIGPLAN