Using Modularity to Enable Hardware Reuse Across AI Platforms in a Rapidly Evolving Ecosystem
Open Compute Project via YouTube
Overview
A 15-minute conference talk from Open Compute Project explores how modular architectures can address the challenges of hardware reusability in rapidly evolving AI platforms. Discover the impact of accelerated GPU development cycles on datacenter rack design, focusing on the complexities of managing different refresh rates between network hardware and compute components. Learn about practical solutions for host interface and data ingest platform design as GPU sled configurations continue to evolve. Examine specific modular architecture examples that demonstrate flexible deployment strategies, enabling hardware reuse across multiple platform generations while addressing power and cooling challenges in both AI training and inference applications.
Syllabus
Using Modularity to Enable Hardware Reuse Across AI Platforms in a Rapidly Evolving Ecosystem
Taught by
Open Compute Project