Google is in advanced negotiations with Marvell Technology to co-develop two new processors specifically designed for artificial intelligence inference, according to reports from The Information and Reuters on April 20, 2026. The move represents a strategic expansion of Google’s custom silicon portfolio as the company seeks to challenge Nvidia’s dominance in the rapidly growing AI accelerator market.

The proposed collaboration involves the development of a memory processing unit (MPU) and a new generation of the Tensor Processing Unit (TPU) optimized for inference workloads. The MPU is designed to work in conjunction with Google’s existing TPUs to enhance data handling and reduce latency, while the inference-specific TPU aims to run AI models more efficiently once they have been deployed. Technical sources indicate that Google intends to finalize the designs as early as 2027 to meet the surging demand for real-time AI applications.

This development follows a period of significant activity in Google’s hardware division. Earlier this month, Broadcom confirmed in a securities filing that it had secured a long-term agreement to design and supply Google’s TPUs and networking components through 2031. By engaging Marvell, Google is effectively diversifying its supply chain, adding a third major design partner alongside Broadcom and MediaTek. Marvell already collaborates with Google on the ARM-based Axion CPU and provides custom silicon services for other hyperscalers, including Amazon and Microsoft.

The industry-wide shift from training large language models to the high-volume inference phase is a primary driver for these new designs. Market data released on April 20 suggests the custom application-specific integrated circuit (ASIC) market will grow 45% in 2026 and reach an estimated $118 billion by 2033. While Nvidia’s GPUs remain the standard for training, custom ASICs like Google’s TPUs offer superior performance-per-watt and lower total cost of ownership for the ongoing process of serving AI queries at scale.

In a related infrastructure expansion, AI developer Anthropic recently signed an agreement with Google and Broadcom to access approximately 3.5 gigawatts of next-generation TPU-based compute starting in 2027. This is in addition to the 1 gigawatt of capacity scheduled to come online later this year, highlighting the massive scale of Google’s hardware roadmap.

The reports of the Marvell partnership arrive just days before the Google Cloud Next ’26 conference, scheduled for April 22-24 in Las Vegas. The event is expected to feature the official debut of Google’s next-generation TPU architecture and a roadmap for its Hypercomputer systems, which utilize optical circuit switching (OCS) to scale AI clusters. Neither Google nor Marvell has publicly confirmed the current negotiations.