Overview
Explore a theoretical framework that explains the grokking phenomenon in AI models through geometric representations of knowledge in neural network weights. Examine how Transformer and Mamba architectures develop internal geometric structures during the grokking phase, when a model suddenly transitions from memorizing its training data to genuinely generalizing. Learn how spatial organization of encoded knowledge may account for this abrupt shift, and why the geometric connection between weight-space patterns and the emergence of true understanding offers a fresh lens on model behavior during critical learning transitions.
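One concrete instance of the kind of geometric structure described above, widely reported in grokking studies on modular arithmetic (and used here only as an illustration, not as material from the course), is that a grokked network embeds the residues 0..p-1 on a circle, so that modular addition becomes rotation. A minimal NumPy sketch, where `embed` and `rotate` are hypothetical helpers standing in for learned representations:

```python
import numpy as np

p = 7  # modulus for a toy modular-addition task

def embed(k):
    """Hypothetical learned embedding: residue k -> point on the unit circle."""
    theta = 2 * np.pi * (k % p) / p
    return np.array([np.cos(theta), np.sin(theta)])

def rotate(v, b):
    """Rotate a 2D point by the angle corresponding to residue b."""
    theta = 2 * np.pi * (b % p) / p
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ v

# Geometric claim: adding b rotates the embedding of a onto that of (a + b) mod p.
for a in range(p):
    for b in range(p):
        assert np.allclose(rotate(embed(a), b), embed((a + b) % p))
print("modular addition = rotation on the circle")
```

Under this picture, a memorizing network stores each (a, b) pair separately, while a grokked network discovers the single circular structure that answers every pair at once, which is one proposed reading of the memorization-to-generalization transition.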
Syllabus
Geometric GROKKING Unlocked & Explained
Taught by
Discover AI