In October 2025, AMD and OpenAI signed a blockbuster deal involving 6 gigawatts (GW) of AMD GPUs.
Under the five-year agreement, OpenAI will deploy 6 GW of AMD accelerators, beginning with an initial 1 GW of AMD Instinct MI450 Series GPUs in the second half of 2026. The deal will span multiple generations of AMD Instinct accelerators. OpenAI plans to use Instinct MI450 Series chips and rack-scale AI solutions, such as the forthcoming Helios rack, to continue developing its AI technologies.
Some of the most important players in AI, including Meta, Cohere, Oracle, and Microsoft, already run Instinct chips in their data centers. AMD and OpenAI began their hardware and software collaboration with the MI300X and continued it with the MI350X series.
The increasingly creative financial structures behind OpenAI's deals are on display here: AMD has issued the company a warrant for up to 160 million shares of AMD stock, vesting in progressive stages as purchases scale up to 6 GW. For every stage to vest, both companies must meet specific commercial and technical conditions over the next five years.
The deal marks a huge vote of confidence in AMD's roadmap and capabilities, and a massive win in its fierce battle with rival Nvidia. It follows an agreement between OpenAI and Nvidia to deploy 10 GW of AI infrastructure running on Nvidia chips; that partnership also takes a staggered approach, with Nvidia intending to invest up to $100bn in OpenAI progressively as each gigawatt is deployed.
However, it could be argued that these partnerships are creating a circular economy, with the same money changing hands among a handful of companies in overlapping, competing agreements.
More such partnerships are expected in the coming months. AI is quickly turning into a capital ecosystem: the partnerships, investments, and cloud commitments among these companies show how deeply intertwined infrastructure and intelligence have become.
Compute power has become the currency of this capital-intensive ecosystem, driven by demand for inference: the computations that allow AI applications such as chatbots to respond to user queries.
Demand for inference compute is expected to soon match that of model training, although training will remain the foundation for developing AI systems. As AI takes on complex tasks such as reasoning, driving demand for still more compute, inference could account for as much as two-thirds of the market by around 2028.