Overview
Explore the technical architecture of a powerful Mixture-of-Experts (MoE) model with 15 billion active parameters that outperforms Opus 4.6 on reasoning tasks. In an MoE model, only a small subset of expert parameters is activated per token, so the active-parameter count is far smaller than the total parameter count. Dive into the internal workings of this advanced reasoning engine, examining the methods and optimization algorithms used to build open-source MoE models. Learn about the techniques presented in the MiMo-V2-Flash Technical Report from LLM-Core Xiaomi, and understand how modern MoE architectures achieve strong performance through sparse expert routing and activation strategies. Gain insight into the technical foundations that let the model excel at complex reasoning while staying efficient through its mixture-of-experts design.
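To make the routing idea concrete, below is a minimal sketch of top-k expert routing, the generic mechanism behind MoE layers. This is an illustrative example only, not the actual MiMo-V2-Flash implementation; all names, layer shapes, and hyperparameters here are assumptions.

```python
# Minimal top-k MoE routing sketch (illustrative; not the MiMo-V2-Flash code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    def __init__(self, d_model: int, d_ff: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Router: scores each token against every expert.
        self.router = nn.Linear(d_model, num_experts, bias=False)
        # Experts: independent feed-forward networks.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> flatten tokens for routing.
        tokens = x.reshape(-1, x.size(-1))
        logits = self.router(tokens)                      # (tokens, num_experts)
        topk_logits, topk_idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(topk_logits, dim=-1)          # normalize over chosen experts
        out = torch.zeros_like(tokens)
        # Only the selected experts run for each token, which is why the
        # "active" parameter count stays a small fraction of the total.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = topk_idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(tokens[mask])
        return out.reshape_as(x)

# Usage: 8 experts, 2 active per token.
layer = TopKMoELayer(d_model=64, d_ff=256)
y = layer(torch.randn(2, 10, 64))
print(y.shape)  # torch.Size([2, 10, 64])
```

With 8 experts and 2 active per token, roughly a quarter of the expert parameters run on any given token; production MoE models scale the same idea to many more, larger experts.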
Syllabus
15B Active MoE BEATS OPUS 4.6 in Reasoning
Taught by
Discover AI