Overview
Explore an in-depth analysis of the Mixtral of Experts paper in this video lecture. The lecture explains Sparse Mixture of Experts (SMoE) language models, compares the Mixtral 8x7B architecture to Mistral 7B, and examines its performance against Llama 2 70B and GPT-3.5. It covers expert routing, sparse expert routing, and expert parallelism, then walks through the paper's experimental results, routing analysis, and conclusions.
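To make the core idea concrete, here is a minimal sketch of a sparse Mixture-of-Experts layer with top-2 routing, loosely in the spirit of the scheme the lecture describes for Mixtral. The class name, layer sizes, and expert MLP shape are illustrative assumptions, not code from the paper or the lecture.

```python
# Illustrative sketch only: names and sizes are assumptions, not Mixtral's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoE(nn.Module):
    def __init__(self, dim: int, hidden: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # One feed-forward "expert" per slot; simplified here to a plain two-layer MLP.
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
             for _ in range(n_experts)]
        )
        # The router scores every token against every expert.
        self.router = nn.Linear(dim, n_experts, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim) -> flatten so routing decisions are made per token.
        tokens = x.reshape(-1, x.shape[-1])
        logits = self.router(tokens)                        # (tokens, n_experts)
        topk_vals, topk_idx = logits.topk(self.top_k, -1)   # keep only the top-k experts
        weights = F.softmax(topk_vals, dim=-1)              # renormalize over those k

        out = torch.zeros_like(tokens)
        # Each token is processed by only its top-k experts; the rest stay idle,
        # which is what makes the layer "sparse".
        for e, expert in enumerate(self.experts):
            mask = (topk_idx == e)
            token_ids, slot = mask.nonzero(as_tuple=True)
            if token_ids.numel() == 0:
                continue
            out[token_ids] += weights[token_ids, slot, None] * expert(tokens[token_ids])
        return out.reshape_as(x)


# Quick smoke test with toy dimensions.
layer = SparseMoE(dim=64, hidden=128)
y = layer(torch.randn(2, 16, 64))
print(y.shape)  # torch.Size([2, 16, 64])
```

Because only two experts run per token, the layer's active parameter count per token stays close to that of a dense model of one expert's size, while the total parameter count scales with the number of experts; this is the trade-off the lecture explores when comparing Mixtral 8x7B to Mistral 7B and Llama 2 70B.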
Syllabus
- Introduction
- Mixture of Experts
- Classic Transformer Blocks
- Expert Routing
- Sparse Expert Routing
- Expert Parallelism
- Experimental Results
- Routing Analysis
- Conclusion
Taught by
Yannic Kilcher