Data and Model Geometry in Deep Learning - Implications of Geometric Structure
Harvard CMSA via YouTube
Overview
Explore a 55-minute conference talk from the Big Data Conference 2024 where Harvard Mathematics professor Melanie Weber delves into the geometric structures within machine learning data and their implications for neural network design. Learn about the prevalence of geometric patterns in machine learning, particularly focusing on fundamental symmetries like permutation-invariance in graphs and translation-invariance in images. Discover a novel architecture based on unitary group convolutions that addresses stability issues in deep equivariant networks. Examine how data and model geometry influence neural network learnability, with specific attention to equivariant neural networks and the geometry of input data manifolds. Gain insights into the intersection of mathematical principles and practical machine learning applications through this comprehensive exploration of geometric structures in deep learning systems.
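The two symmetries highlighted in the overview can be sketched numerically: a circular convolution commutes with translation (translation-equivariance, the symmetry exploited by convolutional layers on images), and sum-pooling over graph nodes is unchanged by any reordering of the nodes (permutation-invariance, the symmetry exploited by graph networks). This is a minimal illustrative sketch, not code from the talk:

```python
import numpy as np

def circular_conv(x, k):
    # 1-D circular convolution: out[i] = sum_j k[j] * x[(i - j) mod n]
    n = len(x)
    return np.array([sum(k[j] * x[(i - j) % n] for j in range(len(k)))
                     for i in range(n)])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
k = np.array([0.5, 0.25, 0.25])

# Translation-equivariance: shifting the input shifts the output identically.
shift_then_conv = circular_conv(np.roll(x, 2), k)
conv_then_shift = np.roll(circular_conv(x, k), 2)
assert np.allclose(shift_then_conv, conv_then_shift)

# Permutation-invariance: sum-pooling node features ignores node ordering.
node_features = np.array([3.0, 1.0, 4.0, 1.5])
perm = np.array([2, 0, 3, 1])  # an arbitrary reordering of the nodes
assert np.isclose(node_features.sum(), node_features[perm].sum())
```

Equivariant architectures such as the unitary group convolutions mentioned above are designed so that these checks hold by construction for every layer, not just for hand-built operations like the ones here.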
Syllabus
**Note: audio begins at **
Taught by
Harvard CMSA