On May 5, 2026, the U.S. Department of Commerce announced that Google, Microsoft, and xAI have officially committed to providing the U.S. AI Safety Institute (US AISI) with early access to their most advanced artificial intelligence models. This expansion of the federal safety framework brings the total number of major participating labs to five, including OpenAI and Anthropic, which reached similar agreements in late 2025. The move formalizes a technical pipeline for the government to evaluate the capabilities and risks of next-generation AI systems before they are deployed to the public or integrated into critical infrastructure.

The agreements mandate that developers share frontier models, defined by the Department of Commerce as systems trained with more than 10^26 integer or floating-point operations of compute, with the US AISI for rigorous evaluation. The testing protocols are designed to identify high-consequence risks, including the potential for models to assist in the development of biological, chemical, or nuclear weapons. The institute will also conduct red-teaming exercises to assess the models' capabilities in facilitating large-scale cyberattacks, performing autonomous deception, or bypassing established safety filters.
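For a sense of what the compute threshold implies about model scale, a common rule of thumb estimates dense transformer training cost at roughly 6 FLOPs per parameter per training token. The sketch below applies that approximation; the parameter and token counts are illustrative and do not come from the agreements.

```python
# Rough check of whether a hypothetical training run crosses the 10^26
# FLOP threshold, using the common ~6 * parameters * tokens estimate of
# dense transformer training compute. All figures are illustrative.

THRESHOLD_FLOPS = 1e26  # Commerce Department frontier-model threshold

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * n_params * n_tokens

# Illustrative run: a 2-trillion-parameter model on 20 trillion tokens.
flops = estimated_training_flops(n_params=2e12, n_tokens=20e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~2.40e+26
print("Covered by the framework" if flops > THRESHOLD_FLOPS
      else "Below the threshold")
```

Under this estimate, such a run would land at about 2.4 x 10^26 operations, comfortably above the threshold.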

Under the terms of the memoranda of understanding, the US AISI will receive access to model weights or dedicated, high-compute API endpoints during the final stages of training. This allows federal researchers to conduct independent benchmarking in a secure environment. The Department of Commerce stated that these evaluations are intended to provide a technical baseline for safety without granting the government a veto over model releases. Instead, the findings will be shared with the developers to inform their internal mitigation strategies and safety fine-tuning prior to any commercial launch.
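The announcement does not describe the evaluation mechanics, but in the endpoint-access case an independent benchmarking pass might look like the minimal sketch below. The URL, credential, request schema, and refusal check are all hypothetical placeholders, not details from the agreements.

```python
# Minimal sketch of a benchmarking pass against a dedicated pre-release
# inference endpoint. The endpoint URL, token, request schema, and the
# naive refusal check are hypothetical; none are specified in the MOUs.
import json
import urllib.request

ENDPOINT = "https://eval.example.gov/v1/completions"  # hypothetical URL
API_TOKEN = "REDACTED"  # credential issued under the MOU (hypothetical)

def query_model(prompt: str) -> str:
    """Send one evaluation prompt to the pre-release endpoint."""
    payload = json.dumps({"prompt": prompt, "max_tokens": 256}).encode()
    req = urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["text"]

def refusal_rate(prompts: list[str]) -> float:
    """Fraction of harmful test prompts the model declines to answer."""
    refused = sum(
        1 for p in prompts if "cannot help" in query_model(p).lower()
    )
    return refused / len(prompts)
```

Where weights rather than endpoints are shared, the same harness would swap the network call for local inference inside the secure environment.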

Secretary of Commerce Gina Raimondo stated that the inclusion of Google, Microsoft, and xAI represents a critical milestone in establishing a unified national approach to AI safety. The Secretary noted that the framework is designed to ensure that the rapid pace of innovation does not outstrip the ability of the federal government to assess systemic risks to national security. The US AISI, which operates under the National Institute of Standards and Technology (NIST), has expanded its technical staff to approximately 175 researchers to manage the anticipated increase in model evaluations throughout the 2026-2027 period.

The agreements also establish a feedback loop between the private sector and the federal government on the state of the science in AI safety. By providing early access, the labs allow the US AISI to develop safety standards that accurately reflect the capabilities of the most advanced systems. The Department of Commerce confirmed that data shared during these evaluations will be protected under strict confidentiality protocols to safeguard the proprietary intellectual property and trade secrets of the participating companies.