Does Equivariance Matter at Scale? A Study of Neural Architectures and Symmetries
Valence Labs via YouTube
Overview
Explore a comprehensive research presentation investigating whether equivariant neural networks retain their advantage over non-equivariant ones at scale in AI drug discovery. Learn how different neural architectures perform when given large datasets and substantial computational resources, with a focus on rigid-body interactions and transformer architectures. Discover key findings on data efficiency, including how non-equivariant models with data augmentation can match equivariant models given enough training epochs. Understand the power-law relationship between compute and performance, in which equivariant models consistently outperform their non-equivariant counterparts across compute budgets. Examine optimal strategies for allocating compute between model size and training duration, and how these strategies differ between equivariant and non-equivariant approaches. Access additional insights and connect with the speakers through the AI drug discovery community at Portal.
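The data-augmentation result above can be illustrated with a minimal sketch (not the authors' code): a non-equivariant model sees each training sample under a fresh random rigid rotation, so it can learn the SO(3) symmetry from data rather than having it built into the architecture. The function name, the `(N, 3)` position layout, and the centering step are illustrative assumptions; the talk's actual pipeline is not specified in this listing.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def augment_rigid_body(positions: np.ndarray, rng=None) -> np.ndarray:
    """Apply a random rigid rotation to an (N, 3) array of 3D positions.

    A hypothetical augmentation step: sample a rotation uniformly over
    SO(3) and rotate the point cloud about its centroid, so repeated
    epochs expose a non-equivariant model to the full symmetry group.
    """
    rotation = Rotation.random(random_state=rng)   # uniform over SO(3)
    center = positions.mean(axis=0)                # rotate about the centroid
    return (positions - center) @ rotation.as_matrix().T + center

# Example usage: augment a small point cloud once per training epoch.
coords = np.random.default_rng(0).normal(size=(16, 3))
augmented = augment_rigid_body(coords, rng=0)
```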
Syllabus
Does equivariance matter at scale? | Johann Brehmer
Taught by
Valence Labs