NeoClouds vs Hyperscalers - 1MW Racks, 1% Budget, and the Battle for the AI Margin Stack
Open Compute Project via YouTube
Overview
Explore the intense competition between NeoClouds and hyperscalers in this conference talk examining how AI infrastructure demands are reshaping the cloud computing landscape. Learn about the technical and business challenges as AI workloads push power and cooling requirements beyond 1MW per rack, forcing a fundamental rethink of data center design and economics. Discover how emerging NeoClouds like CoreWeave and FulcrumCloud are challenging established hyperscaler dominance through leaner, more agile infrastructure builds while operating on significantly smaller budgets.

Analyze the strategic advantages of Google's 400V rack systems and Cerebras' vertically integrated AI cloud solutions, and understand how hardware vendors are increasingly bypassing traditional OEMs to capture more value in the AI stack. Examine the critical role of cooling systems as a strategic differentiator and competitive advantage in high-density AI deployments.

Investigate whether true market differentiation comes from chip-level innovations, rack-scale engineering, or comprehensive full-stack services, and assess how the Open Compute Project's open and interoperable design principles are reshaping ownership of the AI margin stack. Evaluate the ongoing debate over whether tightly integrated infrastructure remains a hyperscaler privilege or whether startups can compete through superior agility, openness, and sovereign-ready infrastructure solutions.
Syllabus
NeoClouds vs Hyperscalers - 1MW Racks, 1% Budget, and the Battle for the AI Margin Stack
Taught by
Open Compute Project