Meta Platforms Inc. and Broadcom Inc. announced on April 20, 2026, a significant expansion of their strategic partnership to co-develop multiple future generations of custom artificial intelligence silicon. The agreement extends their collaboration through 2029 and focuses on the Meta Training and Inference Accelerator (MTIA) series, Meta’s proprietary hardware designed to optimize its specific AI workloads. The expansion marks a major step in Meta’s effort to vertically integrate its hardware stack and reduce its dependence on merchant silicon providers.

The partnership includes an immediate commitment to an initial deployment exceeding one gigawatt (GW) of computing capacity, which Meta described as the first phase of a sustained, multi-gigawatt rollout. This scale of infrastructure is intended to support the massive computational requirements of Meta’s generative AI models and recommendation systems across its global platforms, including Facebook, Instagram, WhatsApp, and Threads. Notably, the companies confirmed that the upcoming iterations of MTIA silicon will be the first custom AI chips in the industry to use a 2-nanometer (2nm) manufacturing process, promising significant gains in performance and energy efficiency.

Broadcom’s role in the collaboration spans chip design, advanced packaging, and high-speed networking. The partnership leverages Broadcom’s foundational XPU platform, which integrates logic, memory, and interconnect technologies. To support the scaling of these AI clusters, Broadcom will supply advanced Ethernet switching, PCIe connectivity, and optical interconnect solutions. These technologies are critical for eliminating data bottlenecks as Meta scales its compute clusters to thousands of nodes.

As part of the expanded agreement, Broadcom CEO Hock Tan will transition from his position on Meta’s board of directors to a specialized advisor role. In this capacity, Tan will provide strategic guidance on Meta’s custom silicon roadmap and infrastructure investments. Meta CEO Mark Zuckerberg stated that the partnership is essential for building the computing foundation required to deliver personal superintelligence to billions of users. Zuckerberg noted that the custom silicon allows Meta to achieve a level of optimization between hardware and software that is unattainable with general-purpose components.

The deal aligns with Meta’s broader infrastructure strategy, which includes projected capital expenditures of up to $135 billion in 2026. This spending is directed toward building AI-ready data centers equipped with liquid cooling and high-density power systems. By developing its own accelerators with Broadcom, Meta aims to lower the total cost of ownership of its AI services while maintaining a predictable supply chain. The MTIA program currently includes the MTIA 300, which handles ranking and recommendation, with three additional generations planned through 2027 to take on increasingly complex training and inference tasks.