On April 24, 2026, Meta Platforms announced a significant expansion of its infrastructure partnership with Amazon Web Services (AWS), committing to deploy tens of millions of AWS Graviton5 cores. The cores will be integrated into Meta's global compute portfolio, serving as a foundational element of the company's artificial intelligence and machine learning operations. The agreement also includes a provision for further expansion, allowing Meta to scale its capacity in response to future demand.
The Graviton5 processor, AWS's most recent custom-designed Arm-based chip, is the centerpiece of the collaboration. According to specifications published by AWS, the Graviton5 delivers up to 30% higher performance on compute-intensive tasks and 50% more memory bandwidth than its predecessor, the Graviton4. For Meta, the primary objective of the deployment is to improve the efficiency and performance-per-watt of its data center operations, which support the computational needs of Facebook, Instagram, and WhatsApp.
Santosh Janardhan, Meta’s Vice President of Infrastructure, stated that the partnership is a key component of the company’s diversified hardware strategy. By incorporating AWS Graviton5 cores alongside Meta’s proprietary silicon, such as the Meta Training and Inference Accelerator (MTIA), the company aims to build a more resilient and flexible infrastructure. This multi-faceted approach is intended to optimize the processing of large-scale generative AI models and recommendation systems while managing the high energy and cost requirements of modern AI workloads.
Matt Garman, CEO of AWS, characterized the agreement as one of the largest deployments of Graviton-based instances to date. Garman emphasized that the collaboration focuses on reducing the total cost of ownership for Meta’s large-scale cloud operations. The partnership also includes a technical collaboration phase where engineers from both companies will work to optimize Meta’s software stack for the Graviton architecture, specifically targeting the performance of the Llama family of large language models.
The phased rollout of the Graviton5 cores begins immediately, with the initial tranche of tens of millions of cores expected to be operational by the end of the 2027 fiscal year. Although the financial terms of the contract were not disclosed, the scale of the commitment underscores Meta's strategy of using external cloud capacity to supplement its internal data center investments. The decision also reflects Meta's broader policy of maintaining hardware diversity to mitigate supply chain risk and avoid reliance on a single architecture for its AI development.