
YouTube

Scaling AI at the Speed of Openness - From Silicon to Systems

Open Compute Project via YouTube

Overview

This keynote examines how AI deployment is evolving from centralized training models to real-time agentic systems that continuously reason, plan, and act across AI factories, enterprise data centers, and edge environments. It explains why scaling AI inference demands more than powerful chips: comprehensive hardware innovation spanning memory and logic scaling, robust manufacturing capabilities, and strategic supply chain management. The talk explores how agentic AI systems prioritize sustained throughput, low latency, memory orchestration, and system-level efficiency over peak compute performance, and it highlights the role of modular systems co-designed for performance, efficiency, flexibility, and developer continuity through deployable, rack-scale architectures optimized for inference-first applications. Finally, it shows how shared innovation, openness, hardware diversity, and system-level design principles accelerate AI adoption through collaborative partnerships within the open compute community.

Syllabus

Scaling AI at the Speed of Openness - From Silicon to Systems

Taught by

Open Compute Project

