Explore connections between random embeddings and neural networks through the lens of convex analysis in this lecture from Stanford University's Mert Pilanci. Delve into exact convex formulations of neural network training problems and discover how rectified linear unit (ReLU) networks can be globally trained via convex programs. Learn about a randomized zonotope vertex sampling algorithm that reduces exponential dependence on feature dimension, and understand its connections to randomized embeddings, Dvoretzky's theorem, and hyperplane tessellations. Examine numerical simulations that verify the claims and demonstrate the proposed approach's superiority over standard local search heuristics like stochastic gradient descent.
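To make the sampling idea concrete: exhaustively enumerating the activation regions of a ReLU layer scales exponentially in the feature dimension, so the lecture's randomized approach draws random directions and keeps only the activation (sign) patterns they induce on the data. The sketch below is a minimal illustration of this sampling step only, not the lecture's full algorithm; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def sample_activation_patterns(X, num_samples, seed=0):
    """Sample ReLU activation patterns 1[X g >= 0] induced by random
    Gaussian directions g -- a sketch of randomized sampling that
    replaces exhaustive enumeration of the hyperplane arrangement."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    G = rng.standard_normal((d, num_samples))       # random directions
    patterns = (X @ G >= 0).astype(int)             # n x num_samples binary matrix
    # Each distinct column is one region of the arrangement induced by X.
    return np.unique(patterns, axis=1)

# Illustrative usage on synthetic data (assumed, not from the lecture):
X = np.random.default_rng(1).standard_normal((20, 3))
P = sample_activation_patterns(X, num_samples=200)
print(P.shape[0])  # 20 data points per pattern
```

Each sampled pattern corresponds to a fixed ReLU activation configuration, over which the training problem becomes convex; solving jointly over the sampled subset of patterns yields the convex program discussed in the lecture.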