Under the seven-year agreement, OpenAI will leverage AWS's compute resources, spanning hundreds of thousands of NVIDIA GPUs with the ability to scale to tens of millions of CPUs, to power its frontier AI systems, including ChatGPT and future large language models.
AWS will deploy specialized clusters of NVIDIA GB200 and GB300 GPUs, linked through Amazon EC2 UltraServers and designed for efficient, low-latency AI processing. This infrastructure will support both model training and inference at massive scale, with full deployment expected by the end of 2026 and potential expansion into 2027.
“Scaling frontier AI requires massive, reliable compute,” said OpenAI CEO Sam Altman. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”
Matt Garman, CEO of AWS, added, “As OpenAI continues to push the boundaries of what’s possible, AWS’s best-in-class infrastructure will serve as the backbone for their AI ambitions.”
The partnership builds on ongoing collaboration between the two companies. OpenAI’s open-weight foundation models are already available on Amazon Bedrock, where they’ve become popular among enterprise users such as Thomson Reuters, Peloton, and Verana Health for scientific, analytical, and agentic workflows.