Random Samples: Towards Combinatorial Interpretability of Neural Computation
Overview
Join this hour-long seminar from the "Random Samples" weekly series, which explores a novel combinatorial approach to neural network interpretability based on research from MIT CSAIL and IST Austria. Discover how the Feature Channel Coding Hypothesis explains the way neural networks compute Boolean expressions: features are mapped to combinations of neurons, forming "codes" that allow network logic to be decoded without retraining. Learn about "code interference," a complexity-driven phenomenon that exposes natural limits on neural computation. Gain deeper insight into how neural networks "think," knowledge that is essential for building more interpretable, scalable, and trustworthy AI systems. The presentation draws on the paper available at https://arxiv.org/abs/2504.08842 and is complemented by a blog post on Red Hat's developer platform. This May 9, 2025 session is part of Neural Magic's weekly Friday series for AI developers, data scientists, and researchers bridging cutting-edge AI research with practical applications.
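As rough intuition for the coding idea above (a toy sketch, not the construction from the paper), the Python snippet below assigns each feature a random combination of neurons as its "code," superposes the codes of whichever features are active, and decodes by checking which codes are fully present. All names and parameters here (N_NEURONS, CODE_SIZE, the random code assignment) are illustrative assumptions; the point is only that once too many features share the same neuron pool, overlapping codes produce false positives, a simple analogue of the "code interference" the talk describes.

```python
import numpy as np

rng = np.random.default_rng(0)

N_NEURONS = 64   # shared neuron pool (illustrative choice)
CODE_SIZE = 6    # neurons per feature "code" (illustrative choice)

def make_codes(n_features):
    """Assign each feature a random combination of CODE_SIZE neurons."""
    return [rng.choice(N_NEURONS, size=CODE_SIZE, replace=False)
            for _ in range(n_features)]

def encode(active_features, codes):
    """Superpose the codes of all active features into one activation pattern."""
    activation = np.zeros(N_NEURONS, dtype=bool)
    for f in active_features:
        activation[codes[f]] = True
    return activation

def decode(activation, codes):
    """Read a feature as 'on' iff every neuron in its code fired."""
    return {f for f, code in enumerate(codes) if activation[code].all()}

# Light load: few codes overlap, so decoding recovers the active set.
codes = make_codes(10)
active = {1, 4, 7}
print("light load decodes to:", decode(encode(active, codes), codes))

# Heavy load: overlapping codes make spurious features decode as active,
# a toy analogue of "code interference."
codes = make_codes(200)
active = set(range(40))
decoded = decode(encode(active, codes), codes)
print("heavy load false positives:", len(decoded - active))
```

The trade-off this toy exposes, namely how many features a fixed neuron pool can encode before decoding breaks down, is the complexity-driven limitation the session frames as code interference.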
Syllabus
Random Samples: Towards Combinatorial Interpretability of Neural Computation [May 9, 2025]
Taught by
Neural Magic