Alphabet Inc. has entered into advanced negotiations with Marvell Technology Inc. to co-develop a new generation of custom silicon optimized for artificial intelligence inference. According to sources familiar with the matter, the partnership aims to design high-performance application-specific integrated circuits (ASICs) that will allow Google to scale its generative AI services while significantly reducing operational overhead. The discussions focus on creating hardware specifically tailored for the Gemini model architecture, moving beyond the general-purpose capabilities of current industry-standard processors.
The move marks a pivotal shift in Alphabet’s long-standing hardware procurement strategy. For over a decade, Google has relied heavily on Broadcom for the development of its Tensor Processing Units (TPUs). By engaging Marvell, Alphabet is diversifying its technical partnerships and seeking to leverage Marvell’s specialized expertise in high-speed data interconnects and multi-die chiplet technology. The proposed chips are intended to handle AI inference, the stage in which a trained model evaluates new data to produce answers, which has become the most significant cost driver for Google as it integrates AI into Search, Workspace, and YouTube.
Marvell Technology, under the leadership of CEO Matt Murphy, has aggressively expanded its custom compute division to cater to hyperscale cloud providers. The company’s recent financial disclosures indicate that its data center revenue has seen triple-digit year-over-year growth, largely driven by demand for custom AI accelerators and optical interconnects. Marvell’s platform allows partners like Alphabet to integrate proprietary intellectual property into a proven silicon framework, accelerating time-to-market for new hardware. The collaboration is expected to target a 3-nanometer process node (Marvell, like most chip designers, is fabless and relies on external foundries for manufacturing), offering substantial improvements in power efficiency and computational density.
The strategic rationale for the project, reportedly referred to internally as Project Solon, is to reduce Alphabet’s reliance on external GPU vendors, chiefly Nvidia Corporation. While Alphabet continues to purchase large quantities of Nvidia’s Blackwell-series chips for model training, the cost of using these high-end units for routine inference tasks has become increasingly prohibitive. Custom ASICs designed for specific neural network layers can offer a more favorable performance-per-watt ratio, which is critical for managing the thermal and energy constraints of global data centers.
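The performance-per-watt argument above can be illustrated with a back-of-envelope calculation. The figures below are entirely hypothetical, chosen only to show the arithmetic; they are not specifications for any Nvidia, Google, or Marvell product:

```python
# Illustrative performance-per-watt comparison for inference hardware.
# All throughput and power figures are made up for the sake of the example.

def perf_per_watt(throughput_tokens_per_s: float, power_w: float) -> float:
    """Tokens generated per second, per watt of board power."""
    return throughput_tokens_per_s / power_w

# Hypothetical general-purpose GPU: high raw throughput, high power draw.
gpu = perf_per_watt(throughput_tokens_per_s=10_000, power_w=1_000)

# Hypothetical inference ASIC: lower peak throughput, far lower power.
asic = perf_per_watt(throughput_tokens_per_s=6_000, power_w=300)

print(f"GPU : {gpu:.1f} tokens/s/W")          # 10.0
print(f"ASIC: {asic:.1f} tokens/s/W")         # 20.0
print(f"ASIC advantage: {asic / gpu:.1f}x")   # 2.0x
```

Even with lower absolute throughput, the ASIC in this sketch delivers twice the useful work per watt, which is the metric that dominates cost once power and cooling constraints bind at data-center scale.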
While official representatives from Alphabet and Marvell declined to comment on the specific terms of the deal, the project is expected to move into the tape-out phase by early 2027. The agreement would solidify Marvell’s position as a dominant player in the custom silicon market, alongside its existing contracts with other major cloud service providers. For Alphabet, the successful deployment of these chips would represent a major step toward full vertical integration of its AI infrastructure, from the software models down to the underlying silicon.