Overview
Learn how to transform traditional data center networks into specialized AI-optimized infrastructures in this 48-minute conference talk from NANOG. Discover the fundamental network engineering principles required to support AI workloads as they move from experimental phases to massive production scale, bringing unprecedented demands for bandwidth, low latency, and lossless communication.

Explore five key technical pillars of AI networking:
- Architecture and topologies that support AI workload patterns
- Congestion control mechanisms for high-throughput environments
- Load balancing strategies for distributed AI training
- Network segmentation approaches for AI workflows
- Operations and visibility tools for monitoring AI network performance

Gain practical insights from Tyler Conrad, an Arista Systems Engineering Tech Lead with 15 years of experience spanning semiconductor manufacturing, private cloud, and AI network infrastructure, as he provides a foundational roadmap for network engineers making the critical transition from general-compute architectures to specialized AI networking environments.
Syllabus
From Datacenter to AI Center, building the networks that build AI
Taught by
NANOG