Driving AI at Scale with 1.6T Networking - How Open 100T Switches will Redefine Data Centers
Open Compute Project via YouTube
Overview
Learn how next-generation 1.6Tbps switching platforms are engineered to meet the demanding requirements of modern AI clusters in this 24-minute conference presentation. Explore advanced 200G SerDes technology and the thermal and mechanical design approaches that deliver high-performance, low-latency networking for AI infrastructure. Discover the Celestica DS6000 portfolio of switches, developed in close collaboration with Broadcom, which maintains full compliance with OCP and open standards such as SONiC to ensure flexibility and seamless integration into hyperscale AI fabrics. Examine the early bring-up and speed-to-market strategies that enable rapid deployment of high-bandwidth infrastructure designed to support the latest GPU technologies. Gain insight into how open 100T switches are positioned to redefine data center architectures, and understand the technical considerations for scaling AI workloads through advanced networking.
Syllabus
Driving AI at Scale with 1.6T Networking - How Open 100T Switches will Redefine Data Centers
Taught by
Open Compute Project