Overview
Explore the challenges of training infinitely large neural networks in this distinguished seminar featuring Mikhail Belkin from UCSD. Delve into the recent successes of deep learning and the trend towards larger neural networks for improved performance. Discover why training infinitely large networks directly might be beneficial, and learn about the Neural Tangent Kernel's role in equating infinitely wide neural networks to kernel machines.

Examine the two primary challenges in training such networks: the inability to leverage feature learning and the computational difficulty of scaling kernel machines to large data sizes. Gain insights into Recursive Feature Machines (RFMs), which incorporate feature learning without backpropagation and outperform Multilayer Perceptrons. Understand the potential of RFMs to achieve state-of-the-art performance on tabular data and their efficiency on small to medium data sizes. Learn about recent efforts to scale kernel machines to larger datasets and the possibility of reaching the data sizes used in modern Large Language Models (LLMs).
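To make the RFM idea concrete, here is a minimal sketch of the core loop: alternate between kernel ridge regression with a Mahalanobis-distance Laplace kernel and updating the feature matrix from the average gradient outer product (AGOP) of the fitted predictor. This is an illustrative simplification, not the seminar's reference implementation; the kernel choice, bandwidth, regularization, and trace normalization are all assumptions made for the example.

```python
import numpy as np

def laplace_kernel(X, Z, M, bandwidth=10.0):
    """Mahalanobis Laplace kernel: exp(-||x - z||_M / bandwidth)."""
    XM, ZM = X @ M, Z @ M
    d2 = (np.sum(XM * X, axis=1)[:, None]
          + np.sum(ZM * Z, axis=1)[None, :]
          - 2.0 * XM @ Z.T)
    return np.exp(-np.sqrt(np.clip(d2, 0.0, None)) / bandwidth)

def rfm(X, y, iters=5, reg=1e-3, bandwidth=10.0):
    """Recursive Feature Machine sketch: KRR + AGOP feature-matrix updates."""
    n, d = X.shape
    M = np.eye(d)  # start from the plain (isotropic) Laplace kernel
    for _ in range(iters):
        K = laplace_kernel(X, X, M, bandwidth)
        # Kernel ridge regression solve for the predictor coefficients.
        alpha = np.linalg.solve(K + reg * np.eye(n), y)
        # AGOP: average the outer product of the predictor's input gradients.
        # grad f(x) = sum_i alpha_i * K(x, x_i) * (-1/L) * M (x - x_i) / ||x - x_i||_M
        G = np.zeros((d, d))
        for j in range(n):
            diffs = X[j] - X  # (n, d); the j == j row is zero, so it drops out
            dists = np.sqrt(np.clip(np.sum((diffs @ M) * diffs, axis=1),
                                    1e-10, None))
            w = -K[j] / (bandwidth * dists)       # per-sample gradient weights
            grad = (w * alpha) @ diffs @ M        # gradient at X[j], shape (d,)
            G += np.outer(grad, grad)
        M = G / n
        tr = np.trace(M)
        if tr > 0:
            M *= d / tr  # normalize scale (one practical convention, assumed)
    return M, alpha

# Toy data: the target depends only on the first coordinate, so the
# learned M should concentrate its weight there -- feature learning
# without any backpropagation.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0])
M, alpha = rfm(X, y)
```

After a few iterations the diagonal of `M` should be dominated by the first coordinate, showing that the AGOP update recovers the informative feature direction, which is the mechanism by which RFMs add feature learning to a kernel machine.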
Syllabus
Distinguished Seminar in Optimization and Data: Mikhail Belkin (UCSD)
Taught by
Paul G. Allen School