On April 20, 2026, Meta Platforms Inc. announced a major expansion of its strategic partnership with Broadcom Inc. to co-develop multiple future generations of Meta’s custom artificial intelligence silicon. The collaboration centers on the Meta Training and Inference Accelerator (MTIA) family of chips, designed to power the company’s increasingly complex AI workloads. The expansion marks a transition from pilot-scale deployments to a massive infrastructure rollout, with Meta confirming an initial deployment exceeding one gigawatt (GW) of power capacity dedicated to these custom processors.

The partnership aims to optimize the integration of Broadcom’s high-speed interconnect technology and custom ASIC (Application-Specific Integrated Circuit) design expertise with Meta’s proprietary software stack. According to official statements, the new generations of MTIA chips will feature enhanced memory bandwidth and energy efficiency, specifically tailored for the training and inference of Meta’s Llama large language models and its core recommendation engines. The multi-gigawatt roadmap outlined in the agreement is intended to provide the foundational compute power for Meta’s long-term AI infrastructure goals through the end of the decade.

Meta’s Vice President of Infrastructure, Santosh Janardhan, stated that the move toward deeper vertical integration in silicon is essential for managing the scaling requirements of generative AI. Broadcom CEO Hock Tan noted that the collaboration leverages Broadcom’s XPU platform to deliver high-performance, low-latency solutions that are critical for Meta’s hyperscale data centers. The deal builds upon a multi-year relationship during which Broadcom has served as a primary partner for Meta’s networking hardware and previous iterations of custom silicon.

Beyond the initial 1 GW deployment, the companies have committed to a phased rollout that will eventually reach multi-gigawatt levels across Meta’s global data center footprint. This infrastructure is designed to support the massive computational demands of real-time AI features across Facebook, Instagram, and WhatsApp, as well as the company’s ongoing research into multimodal AI. The agreement also includes provisions for Broadcom to provide advanced packaging solutions and co-packaged optics (CPO) to address the thermal and connectivity challenges inherent in high-density AI clusters.

While specific dollar values for the multi-year contract were not disclosed, the scale of the multi-gigawatt commitment represents one of the largest custom silicon engagements in the semiconductor industry to date. Meta indicated that the shift toward MTIA chips is part of a broader strategy to reduce reliance on general-purpose GPUs and lower the total cost of ownership for its AI infrastructure. The company confirmed that the first chips under this expanded agreement have already begun shipping to its data centers for integration into new server racks.