The First Asynchronous SGD with Optimal Time Complexity - Seminar #126
Federated Learning One World Seminar via YouTube
Overview
Watch this 59-minute presentation from the Federated Learning One World Seminar series where Arto Maranjyan from KAUST discusses the first asynchronous Stochastic Gradient Descent (SGD) algorithm with optimal time complexity. Delivered on April 16, 2025, this talk explores groundbreaking developments in asynchronous optimization methods. Learn about the theoretical foundations and practical implications of this advancement in distributed machine learning. For more information, visit the seminar website or connect with the speaker through his personal webpage. The presentation is part of the FLOW Seminar series (#126) focused on federated learning innovations.
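For background, the general idea behind asynchronous SGD is that a server applies each worker's stochastic gradient as soon as it arrives, even though that gradient was computed on a stale copy of the parameters. The sketch below is a minimal single-threaded simulation of this scheme on a toy objective f(x) = (x - 3)^2; it is illustrative background only and is not the specific optimal-time-complexity algorithm presented in the talk. All names, the worker count, delay range, and step sizes are assumptions chosen for the example.

```python
import random

def grad(x, noise):
    # Stochastic gradient of the toy objective f(x) = (x - 3)^2.
    return 2.0 * (x - 3.0) + noise

def async_sgd(steps=200, workers=4, max_delay=3, lr=0.05, seed=0):
    """Simulated asynchronous SGD (illustrative, not the talk's method):
    each worker reads the current parameter, computes a gradient with a
    random delay, and the server applies updates the moment they arrive."""
    rng = random.Random(seed)
    x = 0.0
    inflight = []  # pending jobs: (arrival_step, stale_parameter, gradient_noise)
    for t in range(steps):
        # Every idle worker grabs the current parameter and starts computing.
        while len(inflight) < workers:
            delay = rng.randint(1, max_delay)  # heterogeneous compute times
            inflight.append((t + delay, x, rng.gauss(0.0, 0.1)))
        # Apply every gradient whose computation has finished by step t.
        done = [job for job in inflight if job[0] <= t]
        inflight = [job for job in inflight if job[0] > t]
        for _, stale_x, noise in done:
            # The update uses a stale iterate: this is the source of the
            # delay/staleness that asynchronous SGD analyses must control.
            x -= lr * grad(stale_x, noise)
    return x

print(async_sgd())  # converges near the minimizer x = 3
```

Despite updates being computed on stale parameters, the iterate still approaches the minimizer here because the delays are bounded; characterizing how such delays affect the achievable time complexity is exactly the kind of question the seminar addresses.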
Syllabus
FLOW Seminar #126: Arto Maranjyan (KAUST) The First Asynchronous SGD with Optimal Time Complexity
Taught by
Federated Learning One World Seminar