Efficiency in the Age of Large Scale Models - Designing and Optimizing Deep Learning Systems
HUJI Machine Learning Club via YouTube
Overview
Explore a comprehensive lecture on the evolution and efficiency challenges of large-scale machine learning models. Delve into both theoretical and practical aspects of model efficiency, from the rapid scaling of neural networks over the past decade to current challenges in computational cost and accessibility. Learn how architectural choices affect model expressiveness, discover domain-specific optimization strategies for NLP and quantum physics, and understand an incremental-computation approach that achieves up to a 100x reduction in the computational cost of large language model inference. The speaker, Or Sharir, brings extensive expertise from his work at AI21 Labs, including the development of a 178B-parameter language model, and from his current research at Caltech on quantum many-body problems and efficient model inference. Gain insight into the tension between model performance and resource constraints in modern AI development.
Syllabus
Presented on Thursday, February 8, 2024 (AM), in room C221
Taught by
HUJI Machine Learning Club