Join this hour-long seminar from the "Random Samples" weekly series, which explores a novel combinatorial approach to neural network interpretability based on research from MIT CSAIL and IST Austria. Discover how the Feature Channel Coding Hypothesis reveals the way neural networks compute Boolean expressions by mapping features to neuron combinations, forming "codes" that allow the network's logic to be decoded without retraining.

Learn about "code interference" as a complexity-driven phenomenon that exposes natural limitations in neural computation, and gain deeper insight into how neural networks "think": essential knowledge for developing more interpretable, scalable, and trustworthy AI systems.

The presentation references research from the paper available at https://arxiv.org/abs/2504.08842 and is complemented by a blog post on Red Hat's developer platform. This May 9, 2025 session is part of Neural Magic's weekly Friday series, designed for AI developers, data scientists, and researchers bridging cutting-edge AI research with practical applications.
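To make the core idea concrete, here is a minimal toy sketch of feature channel coding and code interference. This is an illustrative reconstruction under simplifying assumptions, not the paper's actual method: the codebook, feature names, and decoding rule below are all hypothetical. Each feature is assigned a "code" (a fixed subset of neurons), and a feature is decoded as active whenever every neuron in its code fires.

```python
NUM_NEURONS = 6

# Hypothetical codebook: feature -> set of neuron indices (its "code").
# Feature "C" shares neurons with "A" and "B", setting up interference.
CODEBOOK = {
    "A": {0, 1},
    "B": {2, 3},
    "C": {1, 2},
}

def encode(active_features):
    """Fire every neuron belonging to any active feature's code."""
    fired = set()
    for feature in active_features:
        fired |= CODEBOOK[feature]
    return fired

def decode(fired):
    """Read back every feature whose full code is contained in the firing set."""
    return {f for f, code in CODEBOOK.items() if code <= fired}

# Clean case: a single feature decodes exactly.
assert decode(encode({"A"})) == {"A"}

# Interference: activating A and B fires neurons {0, 1, 2, 3}, which
# also covers C's code {1, 2}, so C is spuriously decoded as active.
print(decode(encode({"A", "B"})))  # {'A', 'B', 'C'}
```

The spurious decoding of "C" is the toy analogue of code interference: as more features share neurons, overlapping codes become harder to avoid, which mirrors the complexity-driven limitation the seminar describes.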