Driving AI at Scale with 1.6T Networking - How Open 100T Switches will Redefine Data Centers
Open Compute Project via YouTube
Overview
Learn how next-generation 1.6Tbps switching platforms are engineered to meet the demanding requirements of modern AI clusters in this 24-minute conference presentation. Explore advanced 200G SerDes technology and innovative thermal and mechanical design approaches that deliver high-performance, low-latency networking solutions for AI infrastructure. Discover the Celestica DS6000 portfolio of switches, developed through close collaboration with Broadcom, which maintains full compliance with OCP and open standards like SONiC to ensure flexibility and seamless integration into hyperscale AI fabrics. Examine the early bring-up and speed-to-market strategies that enable rapid deployment of high-bandwidth infrastructure specifically designed to support the latest GPU technologies. Gain insights into how open 100T switches are positioned to redefine data center architectures and understand the technical considerations for scaling AI workloads through advanced networking solutions.
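The relationship between the 200G SerDes, 1.6T ports, and "100T" switch capacity mentioned above can be checked with some back-of-the-envelope math. The figures below are illustrative, based on the publicly stated characteristics of 100T-class switch ASICs (512 lanes of 200G PAM4 SerDes, eight lanes per 1.6TbE port); they are not taken from the presentation itself.

```python
# Back-of-the-envelope bandwidth math for a 100T-class switch.
# Assumed figures: 512 SerDes lanes at 200 Gb/s each, 8 lanes per port.

SERDES_RATE_GBPS = 200   # per-lane PAM4 SerDes rate
SERDES_LANES = 512       # lanes on a 100T-class switch ASIC
LANES_PER_PORT = 8       # 8 x 200G lanes form one 1.6TbE port

total_tbps = SERDES_RATE_GBPS * SERDES_LANES / 1000
print(total_tbps)        # 102.4 -- aggregate switching capacity in Tb/s

port_speed_tbps = SERDES_RATE_GBPS * LANES_PER_PORT / 1000
ports = total_tbps / port_speed_tbps
print(int(ports))        # 64 -- front-panel 1.6T ports
```

Under these assumptions, a single 102.4 Tb/s ASIC yields 64 ports of 1.6TbE, which is what allows a flat, low-latency AI fabric to be built from far fewer switch tiers than previous generations.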
Syllabus
Driving AI at Scale with 1.6T Networking: How Open 100T Switches will Redefine Data Centers
Taught by
Open Compute Project