New Delhi: Intel and Google have announced a multi-year collaboration aimed at strengthening the next phase of AI and cloud infrastructure. The partnership
focuses on improving how large-scale AI systems are built and managed, with a renewed emphasis on CPUs and specialised infrastructure components alongside accelerators.
The move comes as AI workloads grow more complex, requiring balanced systems rather than reliance on a single type of processor. Both companies say the collaboration will align future generations of Intel Xeon processors with Google’s global cloud infrastructure to improve performance, efficiency and cost management.
Intel and Google expand AI infrastructure partnership
Under the agreement, Google Cloud will continue deploying Intel Xeon processors across its workload-optimised instances, including platforms such as C4 and N4. These systems support a range of use cases, from coordinating large-scale AI training to handling inference and general computing tasks.
Intel CEO Lip-Bu Tan said, “AI is reshaping how infrastructure is built and scaled. Scaling AI requires more than accelerators; it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand.”
The companies will also collaborate on improving total cost of ownership and energy efficiency across data centres, an area that has gained attention as AI deployments expand globally.
Beyond accelerators to system-level design
The partnership reflects a broader industry shift in which CPUs are being repositioned as core components in AI systems. While GPUs remain critical for model training, CPUs handle orchestration, data processing and system-level coordination.
Google’s Amin Vahdat said, “CPUs and infrastructure acceleration remain a cornerstone of AI systems, from training orchestration to inference and deployment.”
This approach highlights the need for tightly integrated systems that combine different types of compute rather than focusing on a single hardware layer.
IPUs to support networking and data centre efficiency
Alongside CPUs, Intel and Google are expanding work on custom infrastructure processing units, or IPUs. These chips are designed to manage networking, storage and security tasks, reducing the load on CPUs.
By offloading these functions, IPUs can improve utilisation and enable more predictable performance in large-scale cloud environments. This combination of CPUs and IPUs is expected to support more efficient and scalable AI deployments.
What it means for AI and the cloud ecosystem
The collaboration signals a shift towards more balanced AI infrastructure design, where general-purpose and specialised hardware work together. For cloud providers and enterprises, this could lead to improved efficiency and reduced complexity in managing AI workloads.