OpenAI is leading a $300 billion expansion in AI hardware infrastructure, creating a tightly integrated ecosystem that links chip suppliers, financiers and energy providers. The initiative includes multi-year contracts with AMD and Broadcom to deliver 16 gigawatts of new compute capacity between 2026 and 2029—comparable to the electricity use of some small nations.

AMD will provide 6 gigawatts of Instinct GPUs and has issued equity warrants to OpenAI tied to performance milestones, directly linking the supplier's share value to the pace of OpenAI's scaling. Broadcom will deliver 10 gigawatts of custom silicon and rack systems, co-designed for OpenAI's AI workloads. This collaboration builds on the Stargate infrastructure project with Oracle and SoftBank, which includes five new US data centre sites. Stargate alone accounts for 7 gigawatts of planned capacity, with total committed investment across the initiative expected to exceed $400 billion by the end of 2025.
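The headline figures are easy to sanity-check. The sketch below (a back-of-envelope calculation, not from the source) confirms that the AMD and Broadcom commitments sum to the 16-gigawatt total and converts that capacity into annual energy terms, which is where the "small nation" comparison comes from:

```python
# Back-of-envelope check on the capacity figures cited above.
amd_gw = 6        # AMD Instinct GPU commitment
broadcom_gw = 10  # Broadcom custom silicon commitment
total_gw = amd_gw + broadcom_gw
assert total_gw == 16  # matches the 16 GW headline figure

# Energy consumed if 16 GW ran continuously for a year:
hours_per_year = 8760
annual_twh = total_gw * hours_per_year / 1000  # GW x h -> TWh
print(f"{annual_twh:.0f} TWh/year")
```

Sixteen gigawatts of continuous draw works out to roughly 140 TWh per year, which is indeed on the order of a small industrialised nation's annual electricity consumption.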

The approach reflects a circular AI economy, where equity incentives, capital flows and purchase agreements bind model operators to vendors. Nvidia’s 7 percent stake in CoreWeave and the latter’s $22.4 billion contract book with OpenAI exemplify this model, as does Nvidia’s reported $100 billion in chip orders driven by OpenAI demand.

Execution risks remain. Energy access and utilisation efficiency will be key to financial returns. Goldman Sachs projects global data centre electricity demand will rise 165% by 2030, while McKinsey warns that US centres alone could consume 14% of national power by decade’s end. Long-term power deals and onsite generation will be essential for sustainable scaling. Regulatory concerns also persist—though the UK Competition and Markets Authority declined to probe the Microsoft-OpenAI partnership in March 2025, future equity-linked supply deals may attract renewed scrutiny.

On the hardware front, Broadcom's custom silicon, built with OpenAI and fabricated by TSMC, is set to boost efficiency for inference workloads as it is deployed between 2026 and 2029. OpenAI is also working with Arm to develop a server-grade CPU optimised for this ecosystem. These advances are expected to reduce compute costs and improve scalability.

AMD is progressing its Helios rack-scale AI platform, unveiled at the 2025 Open Compute Project Global Summit. Helios integrates EPYC CPUs, MI450 GPUs and high-speed networking, supporting 72 GPUs per rack and delivering 1.4 exaFLOPS of FP8 performance, with 50% more memory than comparable Nvidia systems. Oracle plans to deploy 50,000 MI450 GPUs from Q3 2026, reinforcing AMD's presence in AI-first data centres. Separately, a landmark $40 billion acquisition of Aligned Data Centers by a consortium including Microsoft, Nvidia and BlackRock further signals deepening private sector investment. Aligned controls over 5 gigawatts of capacity across the US and Latin America.
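The Helios rack figures imply a per-GPU throughput that can be checked with simple division. This sketch assumes the 1.4 exaFLOPS figure is the aggregate FP8 rating across all 72 GPUs in the rack (the source does not state this explicitly):

```python
# Implied per-GPU FP8 throughput for a Helios rack,
# assuming the 1.4 exaFLOPS figure is a whole-rack aggregate.
rack_fp8_exaflops = 1.4
gpus_per_rack = 72

per_gpu_pflops = rack_fp8_exaflops * 1000 / gpus_per_rack  # exa -> peta
print(f"{per_gpu_pflops:.1f} PFLOPS FP8 per GPU")
```

Under that assumption, each MI450 would contribute roughly 19.4 petaFLOPS of FP8 compute.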

The next two to three years will be pivotal. As deployments go live and power agreements close, OpenAI will focus on converting purchase commitments into usable compute and revenue. If enterprise usage scales in line with infrastructure buildout, the financial structure behind this AI ecosystem could evolve into a sustainable compute economy rather than a source of vendor risk.

OpenAI’s infrastructure drive blends hardware innovation with financial and energy strategy, laying a foundation for AI leadership. For the UK, this underscores the need to develop integrated frameworks that align innovation with regulation and sustainability—ensuring global competitiveness in the era of scalable AI.

Created by Amplify: AI-augmented, human-curated content.